Search Results

Search found 4931 results on 198 pages for 'burnt hand'.


  • Convert a PDF eBook to ePub Format

    - by Matthew Guay
    Would you like to read a PDF eBook on an eReader or mobile device, but aren’t happy with the performance? Here’s how you can convert your PDFs to the popular ePub format so you can easily read them on any device. PDFs are a popular format for eBooks since they render the same on any device and can preserve the exact layout of the print book.  However, this benefit is their major disadvantage on mobile devices, as you often have to zoom and pan back and forth to see everything on the page.  ePub files, on the other hand, are an increasingly popular option. They can reflow to fill your screen instead of sticking to a strict layout style.  With the free Calibre program, you can quickly convert your PDF eBooks to ePub format. Getting Started Download the Calibre installer (link below) for your operating system, and install as normal.  Calibre works on recent versions of Windows, OS X, and Linux.  The Calibre installer is very streamlined, so the install process was quite quick. Calibre is a great application for organizing your eBooks.  It can automatically sort your books by their metadata, and even display their covers in a Coverflow-style viewer. To add an eBook to your library, simply drag-and-drop the file into the Calibre window, or click Add books at the top.  Here you can choose to add all the books from a folder and more. Calibre will then add the book(s) to your library, import the associated metadata, and organize them in the catalog. Convert your Books Once you’ve imported your books into Calibre, it’s time to convert them to the format you want.  Select the book or books you want to convert, and click Convert E-books.  Select whether you want to convert them individually or bulk convert them. The converter window has lots of options, so you can get your ePub book exactly the way you want it.  You can simply click OK and go with the defaults, or you can tweak the settings. Do note that the conversion will only work successfully with PDFs that contain actual text.  Some PDFs are actually images scanned in from the original books; these will appear just like the PDF after the conversion, and won’t be any easier to read. On the first tab, you’ll notice that Calibre will repopulate most of the metadata fields with info from your PDF.  It will also use the first page of the PDF as the cover.  Edit any of the information that may be incorrect, and add any additional information you want associated with the book. If you want to convert your eBook to a format other than ePub, Calibre’s got you covered, too.  On the top right, you can choose to output the converted eBook into many different file formats, including the Kindle-friendly MOBI format. One other important settings page is the Structure Detection tab.  Here you can choose to have it remove headers and footers in the converted book, as well as automatically detect chapter breaks. Click OK when you’ve finished choosing your settings and Calibre will convert the book.  This may take a few minutes, depending on the size of the PDF.  If the conversion seems to be taking too long, you can click Show job details for more information on the progress.  The conversion usually works well, but we did have one job freeze on us.  When we checked the job details, it indicated that the PDF was copy-protected.  Most PDF eBooks, however, worked fine. Now, back in the main Calibre window, select your book and save it to disk.
You can choose to save only the EPUB format, or you can select Save to disk to save all formats of the book to your computer. You can also view the ePub file directly in Calibre’s built-in eBook viewer.  This is the PDF book we converted, and it looks fairly good in the converted format.  It does have some odd line breaks and some misplaced numbers, but on the whole, the converted book is much easier to read, especially on small mobile devices.  Even images get included inline, so you shouldn’t be missing anything from the original eBook. Conclusion Calibre makes it simple to read your eBooks in any format you need. It is a project in constant development, and regular updates add better stability and features.  Whether you want to read your PDF eBooks on a Sony Reader, Kindle, netbook or smartphone, your books will now be more accessible than ever.  And with thousands of free PDF eBooks out there, you’ll be sure to always have something to read. If you’d like some geeky PDF eBooks, Microsoft Press is offering a number of free PDF eBooks right now.  Check them out at this link (Account Required). Download the Calibre eBook program
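    If you prefer the command line, Calibre also installs a command-line converter, ebook-convert, alongside the GUI. Here is a minimal sketch of the same PDF-to-ePub conversion; the file names are placeholders, and the options available depend on your Calibre version.

        ebook-convert MyBook.pdf MyBook.epub
        ebook-convert MyBook.pdf MyBook.epub --title "My Book" --authors "Jane Doe"

    The first form accepts Calibre’s defaults, much like clicking OK in the conversion dialog; the second also overrides the title and author metadata during the conversion.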

    Read the article

  • An MCM exam, Rob? Really?

    - by Rob Farley
    I took the SQL 2008 MCM Knowledge exam while in Seattle for the PASS Summit ten days ago. I wasn’t planning to do it, but I got persuaded to try. I was meaning to write this post to explain myself before the result came out, but it seems I didn’t get typing quickly enough. Those of you who know me will know I’m a big fan of certification, to a point. I’ve been involved with Microsoft Learning to help create exams. I’ve kept my certifications current since I first took an exam back in 1998, sitting many in beta, across quite a variety of topics. I’ve probably become quite good at them – I know I’ve definitely passed some that I really should’ve failed. I’ve also written that I don’t think exams are worth studying for. (That’s probably not entirely true, but it depends on your motivation. If you’re learning, I would encourage you to focus on what you need to know to do your job better. That will help you pass an exam – but the two skills are very different. I can coach someone on how to pass an exam, but that’s a different kind of teaching when compared to coaching someone about how to do a job. For example, the real world includes a lot of “it depends”, where you develop a feel for what the influencing factors might be. In an exam, it’s better to know the “Don’t use this technology if XYZ is true” concepts.) As for the Microsoft Certified Master certification… I’m not opposed to the idea of having the MCM (or in the future, MCSM) cert. But the barrier to entry feels quite high for me. When it was first introduced, the nearest testing centres to me were in Kuala Lumpur and Manila. Now there’s one in Perth, but that’s still a big effort. I know there are options in the US – such as one about an hour’s drive away from downtown Seattle, but it all just seems too hard. Plus, these exams are more expensive, and all up – I wasn’t sure I wanted to try them, particularly with the fact that I don’t like to study. I used to study for exams. It would drive my wife crazy. I’d have some exam scheduled for some time in the future (like the time I had two booked for two consecutive days at TechEd Australia 2005), and I’d make sure I was ready. Every waking moment would be spent poring over exam material, and it wasn’t healthy. I got shaken out of that, though, when I ended up taking four exams in those two days in 2005 and passed them all. I also worked out that if I had a Second Shot available, then failing wasn’t a bad thing at all. Even without Second Shot, I’m much more okay about failing. But even just trying an MCM exam is a big effort. I wouldn’t want to fail one of them. Plus there’s the illusion to maintain. People have told me for a long time that I should just take the MCM exams – that I’d pass no problem. I’ve never been so sure. It was almost becoming a pride-point. Perhaps I should fail just to demonstrate that I can fail these things. Anyway – boB Taylor (@sqlboBT) persuaded me to try the SQL 2008 MCM Knowledge exam at the PASS Summit. They set up a testing centre in one of the rooms there, so it wasn’t out of my way at all. I had to squeeze it in between other commitments, and I certainly didn’t have time to even see what was on the syllabus, let alone study. In fact, I was so exhausted from the week that I fell asleep at least once (just for a moment though) during the actual exam. Perhaps the questions need more jokes, I’m not sure.
I knew if I failed, then I might disappoint some people, but that I wouldn’t’ve spent a great deal of effort in trying to pass. On the other hand, if I did pass I’d then be under pressure to investigate the MCM Lab exam, which can be taken remotely (therefore, a much smaller amount of effort to make happen). In some ways, passing could end up just putting a bunch more pressure on me. Oh, and I did.

    Read the article

  • Analysing and measuring the performance of a .NET application (survey results)

    - by Laila
    Back in December last year, I asked myself: could it be that .NET developers think that you need three days and a PhD to do performance profiling on their code? What if developers are shunning profilers because they perceive them as too complex to use? If so, then what method do they use to measure and analyse the performance of their .NET applications? Do they even care about performance? So, a few weeks ago, I decided to get a 1-minute survey up and running in the hopes that some good, hard data would clear the matter up once and for all. I posted the survey on Simple Talk and got help from a few people to promote it. The survey consisted of 3 simple questions: Amazingly, 533 developers took the time to respond - which means I had enough data to get representative results! So before I go any further, I would like to thank all of you who contributed, because I now have some pretty good answers to the troubling questions I was asking myself. To thank you properly, I thought I would share some of the results with you. First of all, application performance is indeed important to most of you. In fact, performance is an intrinsic part of the development cycle for a good 40% of you, which is much higher than I had anticipated, I have to admit. (I know, "Have a little faith Laila!") When asked what tool you use to measure and analyse application performance, I found that nearly half of the respondents use logging statements, a third use performance counters, and 70% of respondents use a profiler of some sort (a third-party performance profiler, the CLR profiler, or the Visual Studio profiler). The importance attributed to logging statements did surprise me a little. I am still not sure why somebody would go to the trouble of manually instrumenting code in order to measure its performance, instead of just using a profiler. I personally find the process of annotating code, calculating times from log files, and relating it all back to your source terrifyingly laborious. Not to mention that you then need to remember to turn it all off later! Even when you already have logging in place throughout your code, you still have a fair amount of potentially error-prone calculation to do in order to sift through the results; in addition, you'll only get method-level rather than line-level timings, and you won't get timings from any framework or library methods you don't have source for. To top it all, we all know that bottlenecks are rarely where you would expect them to be, so you could be wasting time looking for a performance problem in the wrong place. On the other hand, profilers do all the work for you: they automatically collect the CPU and wall-clock timings, and present the results from method timing all the way down to individual lines of code. Maybe I'm missing a trick. I would love to know about the types of scenarios where you actively prefer to use logging statements. Finally, while a third of the respondents didn't have a strong opinion about code performance profilers, those who had an opinion thought that they were mainly complex to use and time-consuming. Three respondents in particular summarised this perfectly: "Sometimes, they are rather complex to use, adding an additional time-sink to the process of trying to resolve the existing problem." "They are simple to use, but the results are hard to understand." "Complex to find the more advanced things, easy to find some low-hanging fruit."
These results confirmed my suspicions: profilers are seen as designed for more advanced users who can use them effectively and make sense of the results. I found yet more interesting information when I started comparing samples of developers for whom performance is an important part of the dev cycle, those to whom performance is important only in times of crisis, and those to whom performance is not important, as long as the app works. See the three graphs below. Sample of developers to whom performance is an important part of the dev cycle: Sample of developers to whom performance is important only in times of crisis: Sample of developers to whom performance is not important, as long as the app works: As you can see, there is a strong correlation between the usage of a profiler and the importance attributed to performance: indeed, the more important performance is to a development team, the more likely they are to use a profiler. In addition, developers to whom performance is an important part of the dev cycle have a higher tendency to use a much wider range of methods for performance measurement and analysis. And, unsurprisingly, the less important performance is, the less varied the methods of measurement are. So all in all, to come back to my random questions: .NET developers do care about performance. Those who care the most use a wider range of performance measurement methods than those who care less. But overall, logging statements, performance counters and third-party performance profilers are the performance measurement methods of choice for most developers. Finally, although most of you find code profilers complex to use, those of you who care the most about performance tend to use profilers more than those of you to whom performance is not so important.
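    To make the "logging statements" option concrete, here is a minimal C# sketch of the kind of manual instrumentation respondents described. The method names and log message are hypothetical; a profiler would gather the same timings without touching the code at all.

        using System.Diagnostics;

        public static class OrderProcessor
        {
            public static void ProcessOrders()
            {
                // Manual instrumentation: start a timer, do the work, log the elapsed time.
                Stopwatch stopwatch = Stopwatch.StartNew();

                DoTheActualWork(); // hypothetical method whose cost we want to measure

                stopwatch.Stop();
                Trace.WriteLine(string.Format("ProcessOrders took {0} ms", stopwatch.ElapsedMilliseconds));
            }

            private static void DoTheActualWork()
            {
                // placeholder for the real workload being measured
            }
        }

    Every timing like this has to be added by hand, collected from the log output, and removed or switched off later - exactly the overhead respondents were weighing against a profiler.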

    Read the article

  • Recover that Photo, Picture or File You Deleted Accidentally

    - by The Geek
    Have you ever accidentally deleted a photo on your camera, computer, USB drive, or anywhere else? What you might not know is that you can usually restore those pictures—even from your camera’s memory stick. Windows tries to prevent you from making a big mistake by providing the Recycle Bin, where deleted files hang around for a while—but unfortunately it doesn’t work for external USB drives, USB flash drives, memory sticks, or mapped drives. The great news is that this technique also works if you accidentally deleted the photo… from the camera itself. That’s what happened to me, and prompted writing this article. Restore that File or Photo using Recuva The first piece of software that you’ll want to try is called Recuva, and it’s extremely easy to use—just make sure when you are installing it, that you don’t accidentally install that stupid Yahoo! toolbar that nobody wants. Now that you’ve installed the software, and avoided an awful toolbar installation, launch the Recuva wizard and let’s start through the process of recovering those pictures you shouldn’t have deleted. The first step on the wizard page will let you tell Recuva to only search for a specific type of file, which can save a lot of time while searching, and make it easier to find what you are looking for. Next you’ll need to specify where the file was, which will obviously be up to wherever you deleted it from. Since I deleted mine from my camera’s SD card, that’s where I’m looking for it. The next page will ask you whether you want to do a Deep Scan. My recommendation is to not select this for the first scan, because usually the quick scan can find it. You can always go back and run a deep scan a second time. And now, you’ll see all of the pictures deleted from your drive, memory stick, SD card, or wherever you searched. Looks like what happened in Vegas didn’t stay in Vegas after all… If there are a really large number of results, and you know exactly when the file was created or modified, you can switch to the advanced view, where you can sort by the last modified time. This can help speed up the process quite a bit, so you don’t have to look through quite as many files. At this point, you can right-click on any filename, and choose to Recover it, and then save the files elsewhere on your drive. Awesome! Restore that File or Photo using DiskDigger If you don’t have any luck with Recuva, you can always try out DiskDigger, another excellent piece of software. I’ve tested both of these applications very thoroughly, and found that neither of them will always find the same files, so it’s best to have both of them in your toolkit. Note that DiskDigger doesn’t require installation, making it a really great tool to throw on your PC repair Flash drive. Start off by choosing the drive you want to recover from…   Now you can choose whether to do a deep scan, or a really deep scan. Just like with Recuva, you’ll probably want to select the first one first. I’ve also had much better luck with the regular scan, rather than the “dig deeper” one. If you do choose the “dig deeper” one, you’ll be able to select exactly which types of files you are looking for, though again, you should use the regular scan first. Once you’ve come up with the results, you can click on the items on the left-hand side, and see a preview on the right.  You can select one or more files, and choose to restore them. It’s pretty simple! Download DiskDigger from dmitrybrant.com Download Recuva from piriform.com Good luck recovering your deleted files! 
And keep in mind, DiskDigger is totally free donationware from a single, helpful guy… so if his software helps you recover a photo you never thought you’d see again, you might want to think about throwing him a dollar or two.

    Read the article

  • OpenWorld: Spotlight on Fusion CRM

    - by Tony Berk
    Oracle OpenWorld is less than 2 weeks away, so you need to start figuring out how you are going to maximize your week. I don't want to discourage you, but I'm pretty sure it is impossible to attend all 2000+ sessions. So you need to focus on what's important to you. Many of our CRM customers will be interested in Fusion CRM, since they have already started Fusion implementations or are determining when to start. If that's you, or you are just looking for an overview of Fusion CRM, we've got you covered! Let's start at the top! For an overview of what is in Fusion CRM and where it is going, you should attend the general session and roadmap session: General Session: Oracle Fusion CRM—Improving Sales Effectiveness, Efficiency, and Ease of Use (Session ID: GEN9674) - Oct 2, 11:45 AM. Anthony Lye, Senior VP, Oracle, leads this general session focused on Oracle Fusion CRM. Oracle Fusion CRM optimizes territories, combines quota management and incentive compensation, integrates sales and marketing, and cleanses and enriches data—all within a single application platform. Oracle Fusion can be configured, changed, and extended at runtime by end users, business managers, IT, and developers. Oracle Fusion CRM can be used from the Web, from a smartphone, from Microsoft Outlook, or from an iPad. Deloitte, sponsor of the CRM Track, will also present key concepts on CRM implementations. Oracle Fusion Customer Relationship Management: Overview/Strategy/Customer Experiences/Roadmap (CON9407) - Oct 1, 3:15PM. In this session, learn how Oracle Fusion CRM enables companies to create better sales plans, generate more quality leads, and achieve higher win rates, and find out why customers are adopting Oracle Fusion CRM. Gain a deeper understanding of the unique capabilities only Oracle Fusion CRM provides, and learn how Oracle’s commitment to CRM innovation is driving a wide range of future enhancements. There is also a General Session for all Fusion Applications providing insight into the current strategy of the full product line and a high-level roadmap for each product area: Oracle Fusion Applications—Overview, Strategy, and Roadmap (GEN9433) - Oct 1, 10:45AM. This session will be repeated on Oct 3, 10:15AM. Now, if you want to drill down into more detail, there are a lot more sessions with Oracle product management and customers. I'll highlight a few, but suggest you review the Fusion CRM Focus On document, or search the Content Catalog or Session Builder.  Driving Sales Performance with Oracle Fusion CRM (CON9744) - Oct 3, 10:15AM. Demonstrates how sales executives can gain instant visibility into their business, deliver pervasive coaching to their reps, maximize their sales pipeline, and drive team alignment. The result is increased sales performance that enables sales executives to deliver more revenue without increasing their resources or expenses. Maximize Your Revenue Potential with Oracle Fusion CRM Sales Planning (CON9751) - Oct 2, 1:15PM. Learn how Oracle Fusion CRM helps companies intelligently optimize sales planning and manage sales performance, including the ability to predict their future sales opportunities and use those predictions in conjunction with past sales data to optimally define their sales territories, sales quotas, and incentive compensation plans. Boost Marketing’s Contribution to Revenue with Oracle Fusion CRM Marketing (CON9746) - Oct 3, 11:45AM. Learn how Oracle Fusion CRM can help your organization integrate sales and marketing, using one CRM platform.
See how Oracle Fusion CRM can help your organization learn where to invest its precious marketing dollars; drive more revenue with cross-channel marketing and prospecting capabilities, including but not limited to e-mail, Web, and social media; improve lead conversion with integrated lead management functionality; and do more with less by automating many manual tasks. Oracle Fusion CRM: Social Marketing (CON11559) - Oct 1, 3:15PM. Learn how Oracle’s acquisition of Collective Intellect, Vitrue, and Involver extends Oracle Fusion Marketing as a world-class social marketing solution. Oracle Fusion Social CRM Strategy and Roadmap: Future of Collaboration and Social Engagement (CON9750) - Oct 4, 11:15AM. Hear how Oracle can help you know your customers better, encourage brand affinity, and improve collaboration within your ecosystem. This session reviews Oracle's social media solution and shows how you can discover hidden insights buried in your enterprise and social data. Also learn how Oracle Social Network revolutionizes how enterprise users work, collaborate, and share to achieve successful outcomes. Of course, we recommend you hear from the current Fusion CRM customers too. So, don't miss Oracle Fusion Customer Relationship Management: Customer Adoption and Experiences (CON9415) on Oct 3 at 10:15AM for a panel of customers discussing implementation experiences, best practices and benefits.  After listening to all of this great information, you are probably going to have questions. Well, the experts will be on hand to help answer your questions and plan how your organization can get going with Fusion CRM. Be sure to head down to the DEMOgrounds and CRM Pavilion in the Moscone West Exhibit Hall. And finally, there is the always popular Meet the Experts session focused on Fusion CRM (MTE9658) on Oct 2 at 5PM (pre-registration via Schedule Builder is recommended). In addition, there are more sessions on Mobility, Extensibility, Incentive Compensation, Fusion Customer Hub and other key components of the Fusion Applications infrastructure, Oracle Cloud and much, much more! For a full list, utilize the Fusion CRM Focus On document and Content Catalog. Enjoy!

    Read the article

  • Playing with aspx page cycle using JustMock

    - by mehfuzh
    In this post, I will cover test code that mocks the various elements needed to complete an HTTP page request and asserts the expected page cycle steps. To begin, I have a simple enumeration that holds my predefined page steps: public enum PageStep { PreInit, Load, PreRender, UnLoad } Once that is done, I first created the page object [not mocked]: Page page = new Page(); Here, our target is to fire up the page processing through the ProcessRequest call. If we take a look inside the method with Reflector.NET, the call trace goes like this: ProcessRequest –> ProcessRequestWithNoAssert –> SetIntrinsics –> and finally ProcessRequest. Inside SetIntrinsics, it requires calls on HttpRequest, HttpResponse and HttpBrowserCapabilities. With this clue at hand, we can easily identify the classes/calls we need to mock in order to get through the expected call. Accordingly, for HttpBrowserCapabilities our required mock code will look like: var browser = Mock.Create<HttpBrowserCapabilities>(); // Arrange Mock.Arrange(() => browser.PreferredRenderingMime).Returns("text/html"); Mock.Arrange(() => browser.PreferredResponseEncoding).Returns("UTF-8"); Mock.Arrange(() => browser.PreferredRequestEncoding).Returns("UTF-8"); Now, HttpBrowserCapabilities is retrieved through [instance]HttpRequest.Browser. Therefore, we create the HttpRequest mock: var request = Mock.Create<HttpRequest>(); Then, add the required get call: Mock.Arrange(() => request.Browser).Returns(browser); As [instance]Browser.PreferredRequestEncoding and [instance]Browser.PreferredResponseEncoding are also set on the request object, and to make sure that they are set properly, we can add the following lines as well [not required, though]: bool requestContentEncodingSet = false; Mock.ArrangeSet(() => request.ContentEncoding = Encoding.GetEncoding("UTF-8")).DoInstead(() => requestContentEncodingSet = true); Similarly, for the response we can write: var response = Mock.Create<HttpResponse>(); bool responseContentEncodingSet = false; Mock.ArrangeSet(() => response.ContentEncoding = Encoding.GetEncoding("UTF-8")).DoInstead(() => responseContentEncodingSet = true); Finally, I created a mock of HttpContext and set the Request and Response properties to return the mocked versions.
var context = Mock.Create<HttpContext>();   Mock.Arrange(() => context.Request).Returns(request); Mock.Arrange(() => context.Response).Returns(response); As Page internally calls the RenderControl method, we just need to replace that with our own, and optionally we can check that it is invoked properly: bool rendered = false; Mock.Arrange(() => page.RenderControl(Arg.Any<HtmlTextWriter>())).DoInstead(() => rendered = true); That’s it. The rest of the code is simple, where I asserted the page cycle with the PageSteps that I defined earlier: var pageSteps = new Queue<PageStep>();   page.PreInit += delegate { pageSteps.Enqueue(PageStep.PreInit); }; page.Load += delegate { pageSteps.Enqueue(PageStep.Load); }; page.PreRender += delegate { pageSteps.Enqueue(PageStep.PreRender); }; page.Unload += delegate { pageSteps.Enqueue(PageStep.UnLoad); };   page.ProcessRequest(context);   Assert.True(requestContentEncodingSet); Assert.True(responseContentEncodingSet); Assert.True(rendered);   Assert.Equal(pageSteps.Dequeue(), PageStep.PreInit); Assert.Equal(pageSteps.Dequeue(), PageStep.Load); Assert.Equal(pageSteps.Dequeue(), PageStep.PreRender); Assert.Equal(pageSteps.Dequeue(), PageStep.UnLoad);   Mock.Assert(request); Mock.Assert(response); You can get the test class shown in this post here to try for yourself with, of course, JustMock :-). Enjoy!!

    Read the article

  • User Experience Fundamentals

    - by ultan o'broin
    Understanding what user experience means in the modern work environment is central to building great-looking, usable applications on the desktop or mobile devices. What better place to start a series of blog posts on Applications User Experience team enablement for customers and partners than by sharing what the term really means, writes team member Karen Scipi. Applications UX has gained valuable insights into developing a user experience that reflects the experience of today’s worker. We have observed real workers performing real tasks in real work environments, and we have developed a set of new standards of application design that have been scientifically proven to be beneficial in enabling today’s workers. We share such expertise to enable our customers and partners to benefit from our insights and to further their return on investment when building Oracle applications. So, What is User Experience? The user interface (UI) is about the on-screen user context provided by the layout of widgets (such as icons, fields, buttons and more) and the visual impact of colors, typographic choices, and so on. The UI comprises the “look and feel” of the application that users interact with, and reflects, in essence, the most immediate aspects of usability we can now all relate to.  User experience, on the other hand, is about understanding the whole context of the world of work, how workers go about completing tasks, crossing all sorts of boundaries along the way. It is a study of how business processes and workers’ goals coincide, how users work with technology or other tools to get their jobs done, their interactions with other users, and their response to the technical, physical, and cultural environment around them. User experience is all about how users work—their work environments, office layouts, desk tools, types of devices, their working day, and more. Even their job aids, such as sticky notes, offer insight for UX innovation. User experience matters because businesses need to be efficient, work must be productive, and users now demand to be satisfied by the applications they work with. In simple terms, tasks finished quickly and accurately evoke organization and worker satisfaction, which in turn makes workers feel good and more than willing to use the application again tomorrow. Design Principles for the Enterprise Worker The consumerization of information technology has raised the bar for enterprise applications. Applications must be consistent, simple, intuitive, but above all contextual, reflecting how and when workers work, in the office or on the go. For example, the Google search experience with its type-ahead keyword-prompting feature is how workers expect to be able to discover enterprise information, too. Type-ahead in PeopleSoft 9.1 To build software that enables workers to be productive, our design principles meet modern work requirements for consistency, with well-organized, context-driven information, geared for a working world of discovery and collaboration. Our applications must also behave in a simple, web-like way, just like the Amazon, Google, and Apple products that workers use at home or on the go. Our user experience must also reflect workers’ needs for flexibility and well-loved enterprise practices such as using popular desktop tools like Microsoft Excel or Outlook as required. Building User Experience Productively The building blocks of Oracle Fusion Applications are the user experience design patterns.
Based on the Oracle Fusion Middleware technology used to build Oracle Fusion Applications, the patterns are reusable solutions to common usability challenges that ADF developers typically face as they build applications, extensions, and integrations. Developers use the patterns as part of their Oracle toolkits to realize great usability consistently and in a productive way. Our design pattern creation process is informed by user experience research and science, an understanding of our technology’s capabilities, the demands for simplification and intuitiveness from users, and the best of Oracle’s acquisitions strategy (an injection of smart people and smart innovation). The patterns are supported by usage guidelines, are tested in our labs, and are assembled into a library of proven resources we use to build our own Oracle Fusion Applications and other Oracle applications user experiences. The design patterns library is now available to the ADF community and to our partners and customers, for free. Developers with ADF skills and other technology skills can now offer more than just coding and functionality and still use the best in enterprise methodologies to ensure that a great user experience is easily applied, scaled, and maintained, whether it be for SaaS or on-premise deployments of Oracle Fusion Applications, for applications coexistence, or for partner integration scenarios.  Oracle partners and customers already using our design patterns to build solutions and win business in smart and productive ways are now sharing their experiences and insights on pattern use to benefit your entire business. Applications UX is going global with the message and the means. Our hands-on user experience enablement through ADF is expanding. So, stay tuned to Misha Vaughan's Voice of User Experience (VOX) blog and follow along on Twitter at @usableapps for news of outreach events and other learning opportunities. Interested in Learning More? Oracle Fusion Applications User Experience Patterns and Guidelines Library Shout-outs for Oracle UX Design Patterns Oracle Fusion Applications User Experience Design Patterns: Productivity Realized

    Read the article

  • ASP.NET WebAPI Security 2: Identity Architecture

    - by Your DisplayName here!
    Pedro has beaten me to the punch with a detailed post (and diagram) about the WebAPI hosting architecture. So go read his post first, then come back so we can have a closer look at what that means for security. The first important takeaway is that WebAPI is hosting independent – currently it ships with two host integration implementations: one for ASP.NET (aka web host) and one for WCF (aka self host). Pedro nicely shows the integration into the web host. Self hosting is not done yet, so we will mainly focus on the web hosting case, and I will point out security-related differences where they exist. The interesting part for security (amongst other things of course) is the HttpControllerHandler (see Pedro’s diagram) – this is where the host-specific representation of an HTTP request gets converted to the WebAPI abstraction (called HttpRequestMessage). The ConvertRequest method does the following: Create a new HttpRequestMessage. Copy the URI, method and headers from the HttpContext. Copy HttpContext.User to the Properties<string, object> dictionary on the HttpRequestMessage. The key used for that can be found on HttpPropertyKeys.UserPrincipalKey (which resolves to “MS_UserPrincipal”). So the consequence is that WebAPI receives whatever IPrincipal has been set by the ASP.NET pipeline (in the web hosting case). Common questions are: Are there situations where this property does not get set? Not in ASP.NET – the DefaultAuthenticationModule in the HTTP pipeline makes sure HttpContext.User (and Thread.CurrentPrincipal – more on that later) are always set. Either to some authenticated user – or to an anonymous principal. This may be different in other hosting environments (again, more on that later). Why so generic? Keep in mind that WebAPI is hosting independent and may run on a host that materializes identity completely differently compared to ASP.NET (or .NET in general). This gives them a way to evolve the system in the future. How does WebAPI code retrieve the current client identity? HttpRequestMessage has an extension method called GetUserPrincipal() which returns the property as an IPrincipal. A quick look at self hosting shows that the moral equivalent of HttpControllerHandler.ConvertRequest() is HttpSelfHostServer.ProcessRequestContext(). Here the principal property only gets set when the host is configured for Windows authentication (an inconsistency). Do I like that? Well – yes and no. Here are my thoughts: I like that it is very straightforward to let WebAPI inherit the client identity context of the host. This might not always be what you want – think of an ASP.NET app that consists of UI and APIs – the UI might use Forms authentication, the APIs token-based authentication. So it would be good if the two parts lived in separate security worlds. It makes total sense to have this generic hand-off point for identity between the host and WebAPI. It also makes total sense for WebAPI plumbing code (especially handlers) to use the WebAPI-specific identity abstraction. But – c’mon, we are running on .NET. And the way .NET represents identity is via IPrincipal/IIdentity. That’s what every .NET developer on this planet is used to. So I would like to see a User property of type IPrincipal on ApiController. I don’t like the fact that Thread.CurrentPrincipal is not populated. T.CP is a well-established pattern as a one-stop shop to retrieve client identity on .NET.  That makes a lot of sense – even if the name is misleading at best.
There might be existing library code you want to call from WebAPI that makes use of T.CP (e.g. PrincipalPermission, or a simple .Name or .IsInRole()). Having the client identity as an ambient property is useful for code that does not have access to the current HTTP request (for calling GetUserPrincipal()). I don’t like the fact that the client identity conversion from host to WebAPI is inconsistent. This makes writing security plumbing code harder. I think the logic should always be: if the host has a client identity representation, copy it; if not, set an anonymous principal on the request message. Btw – please don’t annoy me with the “but T.CP is static, and static is bad for testing” chant. T.CP is a getter/setter and, in fact, I find it beneficial to be able to set different security contexts in unit tests before calling in some logic. And, in case you have wondered – T.CP is indeed thread static (and the name comes from a time when a logical operation was bound to a thread – which is not true anymore). But all thread creation APIs in .NET actually copy T.CP to the new thread they create. This has been the case since .NET 2.0 and is certainly an improvement compared to how Win32 does things. So to sum it up: the host plumbing copies the host client identity to WebAPI (this is not perfect yet, but will surely be improved). Or in other words: the current WebAPI bits don’t ship with any authentication plumbing, but solely use whatever authentication (and thus client identity) is set up by the host. WebAPI developers can retrieve the client identity from the HttpRequestMessage. Hopefully my proposed changes around T.CP and the User property on ApiController will be added. In the next post, I will detail how to add WebAPI-specific authentication support, e.g. for Basic Authentication and tokens. This includes integrating the notion of claims-based identity. After that we will look at the built-in authorization bits and how to improve them as well. Stay tuned.
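    To make the hand-off concrete, here is a minimal C# sketch of a message handler that inspects the client identity WebAPI received from its host. It is written against the pre-release bits described above, so the property key and available helper methods may differ in later builds; the handler name is my own.

        using System.Diagnostics;
        using System.Net.Http;
        using System.Security.Principal;
        using System.Threading;
        using System.Threading.Tasks;

        public class ClientIdentityLoggingHandler : DelegatingHandler
        {
            protected override Task<HttpResponseMessage> SendAsync(
                HttpRequestMessage request, CancellationToken cancellationToken)
            {
                // The web host plumbing copies HttpContext.User into the request properties
                // under HttpPropertyKeys.UserPrincipalKey ("MS_UserPrincipal").
                object value;
                IPrincipal principal = null;
                if (request.Properties.TryGetValue("MS_UserPrincipal", out value))
                {
                    principal = value as IPrincipal;
                }

                string name = (principal != null && principal.Identity.IsAuthenticated)
                    ? principal.Identity.Name
                    : "anonymous";
                Trace.WriteLine("Client identity: " + name);

                return base.SendAsync(request, cancellationToken);
            }
        }

    Controller or handler code that just needs the principal can call the GetUserPrincipal() extension method mentioned above instead of reading the property dictionary directly.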

    Read the article

  • What Will Happen to Real Estate Leases when Operating Leases are Gone?

    - by Theresa Hickman
    Many people are concerned about what will happen to real estate leases when FASB and IASB abolish operating leases. They plan to unveil the proposed standards on treating leases this summer as part of the convergence project, but no "finalized ruling" is expected for at least a year because it will need to get formal consensus from many players, such as the SEC, the American Association of Investors, Congress, the Big Four, the American Association of Realtors, the international equivalents of these, etc. If your accounting is a bit rusty, an Operating Lease is where you lease equipment or some asset for a shorter period than the actual (expected) life of the asset and then give the asset back while it still has some useful life in it. (Think leasing a car). Because an Operating Lease does not contain any of the provisions that would qualify it as a Capital Lease, the lease is not treated as a sale or purchase and hits the lessee's rental expense and the lessor's revenue. So it all stays on the P&L (assuming no prepayments are made). Capital Leases, on the other hand, hit the lessee's and lessor's balance sheets because the asset is treated as a sale. (I'm ignoring interest and depreciation here to emphasize my point). Question: What will happen to real estate leases when Operating Leases go away and how will Oracle Financials address these changes? Before I attempt to address these questions, here's a real-life example to expound on some of the issues: Let's say a U.S. retailer leases a store in a mall for 15 years. Under U.S. GAAP, the lease is considered an operating or expense lease. Will that same lease be considered a capital lease under IFRS? Real estate leases are supposedly going to be capitalized under IFRS. If so, will everyone need to change all leases from operating to capital? Or, could we make some adjustments so we report the lease as an expense for operations reporting but capitalize it for SEC reporting? Would all aspects of the lease be capitalized, or would some line items still be expensed? For example, many retail store leases are defined to include (1) the agreed-to rent amount; (2) a negotiated increase in base rent, e.g., maybe a 5% increase in Year 5; (3) a sales rent component whereby the retailer pays a variable additional amount based on the sales generated in the prior month; (4) parking lot maintenance fees. Would the entire lease be capitalized, or would some portions still be expensed? To help answer these questions, I met up with our resident accounting expert and walking encyclopedia, Seamus Moran. Here's what he had to say: Oracle is aware of the potential changes specific to reporting/capitalization of real estate leases; i.e., we are aware that FASB and IASB have identified real estate leases as one of the areas for standards convergence. Oracle stays apprised of the on-going convergence through our domain expertise staff, our relationship with customers, our market awareness, and, of course, our relationships with the Big 4. This is part of our normal process with respect to regulatory compliance worldwide. At this time, Oracle expects that the standards convergence committee will make a recommendation about reporting standards for real estate leases in about a year. Following typical procedures, we also expect that the recommendation will be up for review for a year, and customers will then need to start reporting to the new standard about a year after that. So that means we would expect the first customer to report under the new standard in maybe 3 years.
Typically, after the new standard is finalized and distributed, we find that our customers then begin to evaluate how they plan to meet the new standard. And through groups like the Customer Advisory Boards (CABs), our customers tell us what kind of product changes are needed in order to satisfy their new reporting requirements. Of course, Oracle is also working with the Big 4 and Accenture and other implementers in order to ascertain that these recommended changes will indeed meet new reporting standards. So the best advice we can offer right now is: stay apprised of the standards convergence committee; know that Oracle is also staying abreast of developments; get involved with your CAB so your voice is heard; know that Oracle products continue to be GAAP compliant, and we will continue to maintain that as our standard. But exactly what is that "standard"--we need to wait on the standards convergence committee. In a nutshell, operating leases will become either capital leases or month-to-month rentals, but it is still too early, too political and too uncertain to call out at this point.
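    As a purely illustrative example of what capitalization would mean in practice (the figures and the 6% discount rate are hypothetical, and the final standard may prescribe a different measurement), consider the 15-year mall lease above with a fixed rent of $100,000 per year. Instead of recognizing each payment as rent expense, capitalizing the lease would put roughly the present value of the payment stream on the balance sheet as an asset with a matching lease liability:

        PV = 100,000 x (1 - 1.06^-15) / 0.06 ≈ 100,000 x 9.71 ≈ $971,000

    The open questions in the post--which components of the lease get capitalized, and how variable items like the sales rent are treated--determine how much of a figure like that actually lands on the balance sheet.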

    Read the article

  • Creating a Reverse Proxy with URL Rewrite for IIS

    - by OWScott
    There are times when you need to reverse proxy through a server. The most common example is when you have an internal web server that isn’t exposed to the internet, and you have a public web server accessible to the internet. If you want to serve up traffic from the internal web server, you can do this through the public web server by creating a tunnel (aka reverse proxy). Essentially, you can front the internal web server with a friendly URL, even hiding custom ports. For example, consider an internal web server with a URL of http://10.10.0.50:8111. You can make that available through a public URL like http://tools.mysite.com/ as seen in the following image. The URL can be made public, or it can be used by your internal staff and be password protected and/or locked down by IP address. This is easy to do with URL Rewrite and IIS. You will also need Application Request Routing (ARR) installed even though for a simple reverse proxy you won’t use most of ARR’s functionality. If you don’t already have URL Rewrite and ARR installed you can do so easily with the Web Platform Installer. A lot can be said about reverse proxies and the many different situations and ways to route the traffic and handle different URL patterns. However, my goal here is to get you up and going in the easiest way possible. Then you can dig in deeper after you get the base configuration in place. URL Rewrite makes a reverse proxy very easy to set up. Note that the URL Rewrite Add Rules template doesn’t include Reverse Proxy at the server level. That’s not to say that you can’t create a server-level reverse proxy, but the URL Rewrite rules template doesn’t help you with that. Getting Started First you must create a website on your public web server that has the public bindings that you need. Alternatively, you can use an existing site and route using conditions for certain traffic. After you’ve created your site then open up URL Rewrite at the site level. Using the “Add Rule(s)…” template that is opened from the right-hand actions pane, create a new Reverse Proxy rule. If you receive a prompt (the first time) that the proxy functionality needs to be enabled, select OK. This is telling you that a proxy can route traffic outside of your web server, which happens to be our goal in this case. Be aware that reverse proxy rules can be dangerous if you open sites from inside your network to the world, so just be aware of what you’re doing and why. The next and final step of the template asks a few questions. The first textbox asks for the name of the internal web server. In our example, it’s 10.10.0.50:8111. This can be any URL, including a subfolder like internal.mysite.com/blog. Don’t include the http or https here. The template assumes that it’s not entered. You can choose whether to perform SSL Offloading or not. If you leave this checked then all requests to the internal server will be over HTTP regardless of the original web request. This can help with performance and SSL bindings if all requests are within a trusted network. If the network path between the two web servers is not completely trusted and safe then uncheck this. Next, the template enables you to create an outbound rule. This is used to rewrite links in the page to look like your public domain name rather than the internal domain name. Outbound rules have a lot of CPU overhead because the entire web content needs to be parsed and updated. However, if you need it, then it’s well worth the extra CPU hit on the web server.
If you check the “Rewrite the domain names of the links in HTTP responses” checkbox then the From textbox will be filled in with what you entered for the inbound rule. You can enter your friendly public URL for the outbound rule. This will essentially replace any reference to 10.10.0.50:8111 (or whatever you enter) with tools.mysite.com in all <a>, <form>, and <img> tags on your site. That’s it! Well, there is a lot more that you can do, but this will give you the base configuration. You can now visit tools.mysite.com on your public web server and it will serve up the site from your internal web server. You should see two rules show up; one inbound and one outbound. You can edit these, add conditions, and tweak them further as needed. One common issue that can occur with outbound rules has to do with compression. If you run into errors with the new proxied site, try turning off compression to confirm if that’s the issue. Here’s a link with details on how to deal with compression and outbound rules. I hope this was helpful to get started and to see how easy it is to create a simple reverse proxy using URL Rewrite for IIS.
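    For reference, the rules the template writes into the site’s web.config look roughly like the following sketch. The rule names, the internal address, the public host name, and the precondition are illustrative and will vary with what you enter in the wizard.

        <system.webServer>
          <rewrite>
            <rules>
              <rule name="ReverseProxyInboundRule1" stopProcessing="true">
                <match url="(.*)" />
                <action type="Rewrite" url="http://10.10.0.50:8111/{R:1}" />
              </rule>
            </rules>
            <outboundRules>
              <rule name="ReverseProxyOutboundRule1" preCondition="ResponseIsHtml1">
                <match filterByTags="A, Form, Img" pattern="^http(s)?://10.10.0.50:8111/(.*)" />
                <action type="Rewrite" value="http{R:1}://tools.mysite.com/{R:2}" />
              </rule>
              <preConditions>
                <preCondition name="ResponseIsHtml1">
                  <add input="{RESPONSE_CONTENT_TYPE}" pattern="^text/html" />
                </preCondition>
              </preConditions>
            </outboundRules>
          </rewrite>
        </system.webServer>

    The outboundRules section is the part that carries the CPU overhead described above; leave it out if you do not need link rewriting.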

    Read the article

  • Five Reasons to Attend PLM Summit 2013: The Conference Formerly Known as AGILITY

    - by Terri Hiskey
    As we approach the end of 2012, we are also closing in on the last couple of weeks that Agile customers and prospects can register for the upcoming PLM Summit 2013 for the bargain early bird rate of $195. Register now to secure your spot! The Conference Formerly Known as AGILITY... Long-time Agile customers may remember AGILITY, which was Agile's PLM customer conference that was held on an annual basis prior to Oracle's acquisition of Agile in 2007. In February 2012, due to feedback we received from our Agile PLM community, we successfully resurrected the AGILITY conference and renamed it the PLM Summit. The PLM Summit was so well received and well-attended, that we are doing it again in 2013. This upcoming PLM Summit is being co-located in San Francisco under the overarching banner of the Oracle Value Chain Summit, and will be held alongside several other Oracle customer conferences that cover a range of value chain solutions, including Value Chain Planning, Value Chain Execution, Procurement, Maintenance and Manufacturing. This setup offers PLM attendees the best of all worlds--the opportunity to participate and learn about PLM in smaller, focused sessions by product and by industry, while also giving attendees the chance to see how PLM works together with other critical enterprise applications that address other important aspects of the value chain. Top Five Reasons to Attend the PLM Summit 2013 In the spirit of all of the end-of-the-year lists that are currently popping up, here is a list of the top five reasons to attend the PLM Summit for anyone out there who needs a little extra encouragement to register: 1. The Best Opportunities for Customer Networking   The PLM Summit offers attendees numerous opportunities to learn and network with fellow Agile users. Customer stories are featured in keynote and breakout presentations and the schedule allows for plenty of networking time during breakfasts, lunches, breaks and dinners. Customer networking is the number one reason that Agile users attend the PLM Summit. Read what attendees thought of the most recent PLM Summit: "Hearing about the implementation of Agile products from a customers’ perspective is invaluable." - Director of Quality Assurance & Regulatory Affairs, leading medical device manufacturer "Understanding the scope of other companies’ projects and the lessons learned made attending this event well worth my time." - Director of Test Engineering, global industrial manufacturer "The most beneficial thing about attending this event is the opportunity to network with other customers with similar experiences." - Director of Business Process Improvement, leading high technology company Come to the PLM Summit and play an active role within the PLM community: swap war stories and business cards, connect on LinkedIn and Facebook, share your stories and discuss the sessions from each day. Register now! 2. It's Educational! The PLM Summit is the premier educational event for anyone in the Agile PLM community. There are nearly 40 PLM-focused in-depth educational sessions led by Agile PLM experts, customers and partners that will cover a range of specific product and industry-focused topics. Keynotes will give attendees a broad overview of the entire Agile PLM footprint, while sessions will delve deeply into specific product functionality and customer case studies. There is truly something for everyone. Check out the latest agenda for a view of all the sessions. 3.
Visit with the PLM Partner Community Our partners play a significant and important role within the Agile PLM community. At the PLM Summit, attendees will be able to meet and mingle with several of the top Oracle Agile PLM partners including: Deloitte, Domain, GoEngineer, Hitachi Consulting, IBM, Kalypso, KPIT Cummins (CPG Solutions), Perception Software, Verdant, Xavor and ZeroWaitState. Go here for a complete list of all the Value Chain Summit sponsors. 4. See Agile PLM in Action at our Dedicated PLM Demo Pods At the PLM Summit, attendees will have the chance to see Agile PLM in action at dedicated PLM demo pods, manned by expert members of our Agile PLM team. If you would like to see specific Agile PLM functionality up close, or if you have a question on how to extend the scope of your current implementation, or if you want a better understanding of how to leverage Agile PLM to address specific use cases, stop by one of the Agile PLM demo pods and engage the Agile PLM experts on hand at the PLM Summit. 5. Spend Some Time in Lovely San Francisco Still on the fence about the upcoming PLM Summit? Remember that it is being held in San Francisco, which is a fantastic city for a getaway. After spending time learning and networking about PLM, take an extra day or two to escape the dreary winter and enjoy the beautiful scenery and the unique activities offered only by the City by the Bay. You will walk away from the conference not only with renewed excitement about Agile PLM, but feeling rejuvenated in general.

    Read the article

  • Maintaining packages with code - Adding a property expression programmatically

    Every now and then I've come across scenarios where I need to update a lot of packages all in the same way. The usual scenario revolves around a group of packages all having been built off the same package template, and something needs to be updated to keep up with new requirements, a new logging standard for example. You'd probably start by updating your template package, but then you need to address all your existing packages. Often this can run into the hundreds of packages, and clearly that's not a job anyone wants to do by hand. I normally solve the problem by writing a simple console application that looks for files and patches any package it finds, and it is an example of this that I thought I'd tidy up a bit and publish here. This sample will look at the package and find any top-level Execute SQL Tasks, and change the SQL Statement property to use an expression. It is very simplistic, working on top-level tasks only, so nothing inside a Sequence Container or Loop will be checked, but obviously the code could be extended for this if required. The code that actually sets the expression is shown below; the rest is just wrapper code to find the package and to find the task. /// <summary> /// The CreationName of the Tasks to target, e.g. Execute SQL Task /// </summary> private const string TargetTaskCreationName = "Microsoft.SqlServer.Dts.Tasks.ExecuteSQLTask.ExecuteSQLTask, Microsoft.SqlServer.SQLTask, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91"; /// <summary> /// The name of the task property to target. /// </summary> private const string TargetPropertyName = "SqlStatementSource"; /// <summary> /// The property expression to set. /// </summary> private const string ExpressionToSet = "@[User::SQLQueryVariable]"; .... // Check if the task matches our target task type if (taskHost.CreationName == TargetTaskCreationName) { // Check for the target property if (taskHost.Properties.Contains(TargetPropertyName)) { // Get the property, check for an expression and set expression if not found DtsProperty property = taskHost.Properties[TargetPropertyName]; if (string.IsNullOrEmpty(property.GetExpression(taskHost))) { property.SetExpression(taskHost, ExpressionToSet); changeCount++; } } } This is a console application, so to specify which packages you want to target you have three options: find all packages in the current folder, the default behaviour if no arguments are specified (TaskExpressionPatcher.exe); find all packages in a specified folder, passing the folder as the argument (TaskExpressionPatcher.exe C:\Projects\Alpha\Packages\); or find a specific package, passing the file path as the argument (TaskExpressionPatcher.exe C:\Projects\Alpha\Packages\Package.dtsx). The code was written against SQL Server 2005, but just change the reference to Microsoft.SQLServer.ManagedDTS to the SQL Server 2008 version and it will work fine. If you get an error Microsoft.SqlServer.Dts.Runtime.DtsRuntimeException: The package failed to load due to error 0xC0011008… then check that the package is from the correct version of SSIS compared to the referenced assemblies, 2005 vs 2008 in other words. Download Sample Project TaskExpressionPatcher.zip (6 KB)
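    The wrapper code itself is not reproduced above, so purely as a rough guide, here is a minimal sketch of what the find-and-patch loop might look like (assuming a reference to Microsoft.SQLServer.ManagedDTS; the folder handling and the save call are illustrative rather than the actual code in the TaskExpressionPatcher.zip download).

        // Minimal sketch: load each .dtsx file, walk the top-level executables, apply the
        // CreationName / property / expression check shown above, then save the package back.
        using System.IO;
        using Microsoft.SqlServer.Dts.Runtime;

        internal static class PatcherSketch
        {
            internal static void PatchFolder(string folder)
            {
                Application application = new Application();
                foreach (string packagePath in Directory.GetFiles(folder, "*.dtsx"))
                {
                    Package package = application.LoadPackage(packagePath, null);

                    // Top-level tasks only, as described above; containers are not walked.
                    foreach (Executable executable in package.Executables)
                    {
                        TaskHost taskHost = executable as TaskHost;
                        if (taskHost == null)
                        {
                            continue;
                        }

                        // ... the expression check from the snippet above goes here ...
                    }

                    application.SaveToXml(packagePath, package, null);
                }
            }
        }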

    Read the article

  • Talend Enterprise Data Integration overperforms on Oracle SPARC T4

    - by Amir Javanshir
    The SPARC T microprocessor, released in 2005 by Sun Microsystems, and now continued at Oracle, has a good track record in parallel execution and multi-threaded performance. However, it was less suited for pure single-threaded workloads. The new SPARC T4 processor is now filling that gap by offering 5x better single-thread performance over previous generations. Following our long-term relationship with Talend, a fast-growing ISV positioned by Gartner in the “Visionaries” quadrant of the “Magic Quadrant for Data Integration Tools”, we decided to test some of their integration components with the T4 chip, more precisely on a T4-1 system, in order to verify first-hand whether this new processor lives up to its promises. Several tests were performed, mainly focused on: single-thread performance of the new SPARC T4 processor compared to an older SPARC T2+ processor, and overall throughput of the SPARC T4-1 server using multiple threads. The tests consisted of reading large amounts of data --tens of gigabytes--, processing it and writing it back to a file or an Oracle 11gR2 database table. They are CPU, memory and IO bound tests. Given the main focus of this project --CPU performance--, bottlenecks were removed as much as possible on the memory and IO sub-systems. When possible, the data to process was put into the ZFS filesystem cache, for instance. Also, two external storage devices were directly attached to the servers under test, each one divided into two ZFS pools for read and write operations.
    Multi-thread: Testing throughput on the Oracle T4-1. The tests were performed with different numbers of simultaneous threads (1, 2, 4, 8, 12, 16, 32, 48 and 64) and using different storage devices: Flash, Fibre Channel storage, two striped internal disks and a single internal disk. All storage devices used ZFS for filesystem and volume management. Each thread read a dedicated 1 GB file containing 12.5M lines with the following structure:
    customerID;FirstName;LastName;StreetAddress;City;State;Zip;Cust_Status;Since_DT;Status_DT
    1;Ronald;Reagan;South Highway;Santa Fe;Montana;98756;A;04-06-2006;09-08-2008
    2;Theodore;Roosevelt;Timberlane Drive;Columbus;Louisiana;75677;A;10-05-2009;27-05-2008
    3;Andrew;Madison;S Rustle St;Santa Fe;Arkansas;75677;A;29-04-2005;09-02-2008
    4;Dwight;Adams;South Roosevelt Drive;Baton Rouge;Vermont;75677;A;15-02-2004;26-01-2007
    […]
    The following graphs present the results of our tests. Unsurprisingly, up to 16 threads all files fit in the ZFS cache (a.k.a. L2ARC): once the cache is hot there is no performance difference depending on the underlying storage. From 16 threads upwards, however, it is clear that IO becomes a bottleneck; having a good IO subsystem is thus key. Single-disk performance collapses, whereas the Sun F5100 and ST6180 arrays allow the T4-1 to scale quite seamlessly. From 32 to 64 threads, the performance is almost constant with just a slow decline. For the database load tests, only the best IO configuration --using external storage devices-- was used, hosting the Oracle tablespaces and redo log files. Using the Sun Storage F5100 array allows the T4-1 server to scale up to 48 parallel JVM processes before saturating the CPU. The final result is a staggering 646K lines per second inserted into an Oracle table using 48 parallel threads.
    Single-thread: Testing single-thread performance. Seven different tests were performed on both servers.
    Since only one thread, and thus only one file, was read, no IO bottleneck was involved, all data being served from the ZFS cache. The tests were as follows:
    Read File → Filter → Write File: Read file, filter data, write the filtered data to a new file. The filter is set on the “Status” column: only lines with status set to “A” are selected. This limits each output file to about 500 MB.
    Read File → Load Database Table: Read file, insert into a single Oracle table.
    Average: Read file, compute the average of a numeric column, write the result to a new file.
    Division & Square Root: Read file, perform a division and square root on a numeric column, write the result data to a new file.
    Oracle DB Dump: Dump the content of an Oracle table (12.5M rows) into a CSV file.
    Transform: Read file, transform, write the result data to a new file. The transformations applied are: set the address column to upper case and add an extra column at the end, which is the concatenation of two columns.
    Sort: Read file, sort on a numeric and an alphanumeric column, write the result data to a new file.
    The following table and graph present the final results of the tests. The throughput unit is thousand lines per second processed (K lines/second); Improvement is the percentage improvement of the T4-1 over the T5140.
    Test                      T4-1 (Time s.)   T5140 (Time s.)   Improvement   T4-1 (Throughput)   T5140 (Throughput)
    Read/Filter/Write               125              806             645%             100                  16
    Read/Load Database              195             1111             570%              64                  11
    Average                          96              557             580%             130                  22
    Division & Square Root          161             1054             655%              78                  12
    Oracle DB Dump                  164              945             576%              76                  13
    Transform                       159             1124             707%              79                  11
    Sort                            251             1336             532%              50                   9
    The improvement in single-thread performance is quite dramatic: depending on the test, the T4 is between 5.4 and 7 times faster than the T2+. It seems clear that the SPARC T4 processor has gone a long way towards filling the gap in single-thread performance, without sacrificing multi-threaded capability, as it still shows very impressive scaling on heavy-duty multi-threaded jobs. Finally, as always at Oracle ISV Engineering, we are happy to help our ISV partners test their own applications on our platforms, so don't hesitate to contact us and let's see what the SPARC T4-based systems can do for your application! "As described in this benchmark, Talend Enterprise Data Integration has overperformed on T4. I was generally happy to see that the T4 gave scaling opportunities for many scenarios like complex aggregations. Row by row insertion in Oracle DB is faster with more than 650,000 rows per second without using any bulk Oracle capabilities!" Cedric Carbone, Talend CTO.
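    To give a feel for the workload itself (rather than the tooling), here is a hypothetical C# sketch of what the Read File → Filter → Write File test does; this is not Talend code, and the paths and the .NET File.ReadLines/File.WriteAllLines calls are purely illustrative.

        // Read the semicolon-delimited customer file, keep only rows whose Cust_Status column
        // (the 8th field) is "A", and write the surviving rows to a new file.
        using System.IO;
        using System.Linq;

        internal static class ReadFilterWriteSketch
        {
            internal static void Run(string inputPath, string outputPath)
            {
                var filteredRows = File.ReadLines(inputPath)
                                       .Where(line =>
                                       {
                                           string[] fields = line.Split(';');
                                           return fields.Length > 7 && fields[7] == "A";
                                       });
                File.WriteAllLines(outputPath, filteredRows);
            }
        }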

    Read the article

  • Windows 8 Live Accounts and the actual Windows Account

    - by Rick Strahl
    As if Windows Security wasn't confusing enough, in Windows 8 we get thrown yet another curve ball with Windows Live accounts for logon. When I set up my Windows 8 machine I originally set it up with a 'real', non-Live account that I always use on my Windows machines. I did this mainly so I have a matching account for resources around my home and intranet network so I could log on to network resources properly. At some point later I decided to set up Windows Live security just to see how it changes things. Windows wants you to use Windows Live. Windows 8 logins are required in order for the Windows RT account info to work. Not that I care - since installing Windows 8 I've maybe spent 10 minutes with Windows RT because - well it's pretty freaking sucky on the desktop. From shitty apps to mis-managed screen real estate I can't say that there's anything compelling there to date, but then I haven't looked that hard either. Anyway… I set up the Windows Live account to see if that changes things. It does - I do get all my Live logins to work from the Live account so that Twitter and Facebook posts and pictures and calendars all show up on live tiles on the start screen and in the actual apps. That's nice-ish, but hardly that exciting given that all of the apps tied to those live tiles are average at best. And it would have been nice if all of this could be done without being forced into running with a Windows Live user account - this all feels like strong-arming you into moving into Microsoft's walled garden… and that's probably what it's meant to do. Who am I? The real problem to me though is that these Windows Live and raw Windows user accounts are a bit unpredictable, especially when it comes to developer information about the account and which credentials to use. So for example Windows reports folder security like this: Notice it's showing my Windows Live account. Now if I go to Edit and try to add my Windows user account (rstrahl) it'll just automatically show up as the Live account. On the other hand, though, the underlying system sees everything as my real Windows account. After I switched to a Windows Live login account, and have to log in to Windows with my Live account, what do you suppose this returns? Console.WriteLine(Environment.UserName); It returns my raw Windows user account (rstrahl). All my permissions, all my actual settings and the desktop console altogether run under that account. If I look in Task Manager (or Process Explorer for me) I see: Everything running on the desktop shell with my login running under my Windows user account. I suppose it makes sense, but where is that association happening? When I switched to a Windows Live account, nowhere did I associate my real account with the Live account - it just happened. And looking through the account configuration dialogs I can't find any reference to the raw Windows account. Other than switching back I see no mention anywhere of the raw Windows account - everything refers to the Live account. Right then, clear as potato soup! So this is who you really are! The problem is that in some situations this schizophrenic account behavior gets a bit weird. Today I was running a local Web application in IIS that uses Windows Authentication - I tried to log in with my real Windows account login because that's what I'm used to using with WINDOWS freaking Authentication through IIS. But… it failed. I checked my IIS settings, my app's login settings and I just could not for the life of me get into the site with my Windows username.
    That is, until I finally realized that I should try using my Windows Live credentials instead. And that worked. So now in this Windows Authentication dialog I had to type in my Live ID and password, which is - just weird. Then in IIS if I look at a Trace page (or in my case my app's Status page) I see that the logged-on account is - my Windows user account. What's really annoying about this is that in some places it uses the Live account, in other places it uses my Windows account. If I remote desktop into my Web server online - I have to use the local authentication dialog, but I have to put in my real Windows credentials, not the Live account. Oh yes, it's all so terribly intuitive and logical… So in summary, when you log on with a Live account you are actually mapped to an underlying Windows user. In any application, if you check the user name it'll be the underlying user account (not sure what happens in a Windows RT app or even what mechanism is used there to get the user name info). When logging on to local machine resources with user name and password you have to use your Live ID, even if the permissions on the resources are mapped to your underlying Windows account. Easy enough I suppose, but still not exactly intuitive behavior… © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Windows.
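    If you want to reproduce this check on your own machine, a minimal sketch using nothing beyond the standard .NET identity APIs looks like this.

        // Prints the account the desktop process actually runs under. On a machine switched to a
        // Live login this still shows the underlying Windows user (e.g. rstrahl), not the Live ID.
        using System;
        using System.Security.Principal;

        internal static class WhoAmI
        {
            private static void Main()
            {
                Console.WriteLine(Environment.UserName);
                Console.WriteLine(WindowsIdentity.GetCurrent().Name);
            }
        }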

    Read the article

  • Kill a tree, save your website? Content strategy in action, part III

    - by Roger Hart
    A lot has been written about how driving content strategy from within an organisation is hard. And that's true. Red Gate is pretty receptive to new ideas, so although I've not had a total walk in the park, it's been a hike with charming scenery. But I'm one of the lucky ones. Lots of people are involved in content, and depending on your organisation some of those people might be the kind who'll gleefully call themselves "stakeholders". People holding a stake generally want to stick it through something's heart and bury it at a crossroads. Winning them over is not always easy. (Richard Ingram has made a nice visual summary of how this can feel - Content strategy Snakes & ladders - pdf.) So yes, a lot of content strategy advocates are having a hard time. And sure, we've got a nice opportunity to get together and have a hug and a cry, but in the interim we could use a hand. What to do? My preferred approach is, I'll confess, brutal. I'd like nothing so much as to take a scorched earth approach to our website. Burn it, salt the ground, and build the new one right: focusing on clearly delineated business and user content goals, and instrumented so we can tell if we're doing it right. I'm never getting buy-in for that, but a boy can dream. So how about just getting buy-in for some small, tenable improvements? Easier, but still non-trivial. I sat down for a chat with our marketing and design guys. It seemed like a good place to start, even if they weren't up for my "Ctrl-A + Delete" solution. We talked through some of this stuff, and we pretty much agreed that our content is a bit more broken than we'd ideally like. But to get everybody on board, the problems needed visibility. Doing a visual content inventory: print out the internet. Make a Wall Of Content. Seriously. If you've already done a content inventory, you know your architecture, and you know the scale of the problem. But it's quite likely that very few other people do. So make it big and visual. I'm going to carbon hell, but it seems to be working. This morning, I printed out a tiny, tiny part of our website: the non-support content pertaining to SQL Compare. I made big, visual, A3 blowups of each page, and covered a wall with them. A page per web page, spread over something like 6M x 2M, with metrics, right in front of people. Even if nobody reads it (and they are doing) the sheer scale is shocking. 53 pages, all told. Some are redundant, some outdated, some trivial, a few fantastic, and frighteningly many that are great ideas delivered not-quite-right. You have to stand quite far away to get it all in your field of vision. For a lot of today, a whole bunch of folks have been gawping in amazement, talking each other through it, peering at the details, and generally getting excited about content. Developers, sales guys, our CEO, the marketing folks - they're engaged. Will it last? I make no promises. But this sort of wave of interest is vital to getting a content strategy project kicked off. While the content strategist is a saucer-eyed orphan in the cupboard under the stairs, they're not getting a whole lot done. Of course, just printing the site won't necessarily cut it. You have to know your content, and be able to talk about it. Ideally, you'll also have page view and time-on-page metrics. One of the most powerful things you can do is, when people are staring at your wall of content, ask them what they think half of it is for. Pretty soon, you've made a case for content strategy.
We're also going to get folks to mark it up - cover it with notes and post-its, let us know how they feel about our content. I'll be blogging about how that goes, but it's exciting. Different business functions have different needs from content, so the more exposure the content gets, and the more feedback, the more you know about those needs. Fingers crossed for awesome.

    Read the article

  • Ingame menu is not working correctly

    - by Johnny
    The ingame menu opens when the player presses Escape during the main game. If the player presses Y in the ingame menu, the game switches to the main menu. Up to here, everything works. But: On the other hand, if the player presses N in the ingame menu, the game should switch back to the main game(should resume the main game). But that doesn't work. The game just rests in the ingame menu if the player presses N. I set a breakpoint in this line of the Ingamemenu class: KeyboardState kbState = Keyboard.GetState(); CurrentSate/currentGameState and LastState/lastGameState have the same state: IngamemenuState. But LastState/lastGameState should not have the same state than CurrentSate/currentGameState. What is wrong? Why is the ingame menu not working correctly? public class Game1 : Microsoft.Xna.Framework.Game { GraphicsDeviceManager graphics; SpriteBatch spriteBatch; IState lastState, currentState; public enum GameStates { IntroState = 0, MenuState = 1, MaingameState = 2, IngamemenuState = 3 } public void ChangeGameState(GameStates newState) { lastGameState = currentGameState; lastState = currentState; switch (newState) { case GameStates.IntroState: currentState = new Intro(this); currentGameState = GameStates.IntroState; break; case GameStates.MenuState: currentState = new Menu(this); currentGameState = GameStates.MenuState; break; case GameStates.MaingameState: currentState = new Maingame(this); currentGameState = GameStates.MaingameState; break; case GameStates.IngamemenuState: currentState = new Ingamemenu(this); currentGameState = GameStates.IngamemenuState; break; } currentState.Load(Content); } public void ChangeCurrentToLastGameState() { currentGameState = lastGameState; currentState = lastState; } public GameStates CurrentState { get { return currentGameState; } set { currentGameState = value; } } public GameStates LastState { get { return lastGameState; } set { lastGameState = value; } } private GameStates currentGameState = GameStates.IntroState; private GameStates lastGameState; public Game1() { graphics = new GraphicsDeviceManager(this); Content.RootDirectory = "Content"; } protected override void Initialize() { ChangeGameState(GameStates.IntroState); base.Initialize(); } protected override void LoadContent() { spriteBatch = new SpriteBatch(GraphicsDevice); currentState.Load(Content); } protected override void Update(GameTime gameTime) { currentState.Update(gameTime); if ((lastGameState == GameStates.MaingameState) && (currentGameState == GameStates.IngamemenuState)) { lastState.Update(gameTime); } base.Update(gameTime); } protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.CornflowerBlue); spriteBatch.Begin(); if ((lastGameState == GameStates.MaingameState) && (currentGameState == GameStates.IngamemenuState)) { lastState.Render(spriteBatch); } currentState.Render(spriteBatch); spriteBatch.End(); base.Draw(gameTime); } } public interface IState { void Load(ContentManager content); void Update(GameTime gametime); void Render(SpriteBatch batch); } public class Intro : IState { Texture2D Titelbildschirm; private Game1 game1; public Intro(Game1 game) { game1 = game; } public void Load(ContentManager content) { Titelbildschirm = content.Load<Texture2D>("gruft"); } public void Update(GameTime gametime) { KeyboardState kbState = Keyboard.GetState(); if (kbState.IsKeyDown(Keys.Space)) game1.ChangeGameState(Game1.GameStates.MenuState); } public void Render(SpriteBatch batch) { batch.Draw(Titelbildschirm, new Rectangle(0, 0, 1280, 720), Color.White); } } public 
class Menu:IState { Texture2D Choosescreen; private Game1 game1; public Menu(Game1 game) { game1 = game; } public void Load(ContentManager content) { Choosescreen = content.Load<Texture2D>("menubild"); } public void Update(GameTime gametime) { KeyboardState kbState = Keyboard.GetState(); if (kbState.IsKeyDown(Keys.Enter)) game1.ChangeGameState(Game1.GameStates.MaingameState); if (kbState.IsKeyDown(Keys.Escape)) game1.Exit(); } public void Render(SpriteBatch batch) { batch.Draw(Choosescreen, new Rectangle(0, 0, 1280, 720), Color.White); } } public class Maingame : IState { Texture2D Spielbildschirm, axe; Vector2 position = new Vector2(100,100); private Game1 game1; public Maingame(Game1 game) { game1 = game; } public void Load(ContentManager content) { Spielbildschirm = content.Load<Texture2D>("hauszombie"); axe = content.Load<Texture2D>("axxx"); } public void Update(GameTime gametime) { KeyboardState keyboardState = Keyboard.GetState(); float delta = (float)gametime.ElapsedGameTime.TotalSeconds; position.X += 5 * delta; position.Y += 3 * delta; if (keyboardState.IsKeyDown(Keys.Escape)) game1.ChangeGameState(Game1.GameStates.IngamemenuState); } public void Render(SpriteBatch batch) { batch.Draw(Spielbildschirm, new Rectangle(0, 0, 1280, 720), Color.White); batch.Draw(axe, position, Color.White); } } public class Ingamemenu : IState { Texture2D Quitscreen; private Game1 game1; public Ingamemenu(Game1 game) { game1 = game; } public void Load(ContentManager content) { Quitscreen = content.Load<Texture2D>("quit"); } public void Update(GameTime gametime) { KeyboardState kbState = Keyboard.GetState(); if (kbState.IsKeyDown(Keys.Y)) game1.ChangeGameState(Game1.GameStates.MenuState); if (kbState.IsKeyDown(Keys.N)) game1.ChangeCurrentToLastGameState(); } public void Render(SpriteBatch batch) { batch.Draw(Quitscreen, new Rectangle(200, 200, 200, 200), Color.White); } }
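    Not part of the original question, but a pattern worth having to hand when chasing this kind of symptom is edge-triggered key detection: compare the current KeyboardState with the previous frame's state so a key that is held down cannot re-trigger a state change on consecutive frames. A minimal sketch, assuming the same XNA input types used above:

        // Hypothetical helper (not from the question): a key only counts as pressed on the frame
        // it goes from up to down, so a held Escape or N key cannot fire ChangeGameState again
        // on the very next Update.
        using Microsoft.Xna.Framework.Input;

        public class KeyEdgeDetector
        {
            private KeyboardState previousState;

            public bool IsNewKeyPress(Keys key, KeyboardState currentState)
            {
                return currentState.IsKeyDown(key) && previousState.IsKeyUp(key);
            }

            // Call once at the end of each Update with the state read that frame.
            public void EndFrame(KeyboardState currentState)
            {
                previousState = currentState;
            }
        }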

    Read the article

  • Welcome to Jackstown

    - by fatherjack
    I live in a small town; the population count isn't that great, but let me introduce you to some of the population. We'll start with Martin the Doc, he fixes up anything that gets poorly, so much so that he could be classed as the doctor, the vet and even the garage mechanic. He's got a reputation that he can fix anything and that hasn't been proved wrong yet. He's great friends with Brian (who gets called "Brains") the teacher, who seems to have a sound understanding of any topic you care to pass his way. If he isn't sure he tells you, and then goes to find out and comes back with a full answer real quick. It's good to have that sort of research capability close at hand. Brains is also great at encouraging anyone who needs a bit of support to get them up to speed and working on their jobs. Steve sees Brains regularly, that's because he is the librarian; he keeps all sorts of reading material and nowadays there's even video to watch about any topic you like. Steve keeps scouring all sorts of places to get the content that's needed and he keeps it in good order so that whatever is needed can be found quickly. He also has to make sure that old stuff gets marked as probably out of date so that anyone reading it won't get misled. Over the road from him is Greg, he's the town crier. We don't have a newspaper here so Greg keeps us all informed of what's going on "out of town" - what new stuff we might make use of and what won't work in a small place like this. If we are interested he goes ahead and gets people in to demonstrate their products and tell us about the details. Greg is pretty good at getting us discounts too. Now Greg's brother Ian works for the mayor's office in the "waste management department"; nowadays it's all about the recycling, but he still has to make sure that the stuff that can't be used any more gets disposed of properly. The type of waste he's dealing with decides how it needs to be treated, and he has to know a lot about the different methods and when to use which ones. There are two people that keep the peace in town: Brent is the detective, investigating wrongdoings and applying justice where necessary, and Bart is the diplomat who smooths things over when any people have a dispute or disagreement. Brent is meticulous in his investigations and fair in the way he handles any situation he finds. Discretion is his byword. There's a rumour that Bart used to work for the United Nations, but whatever his history there is no denying his ability to get apparently irreconcilable parties working together to their combined benefit. Someone who works closely with Bart is Brad, he is the translator in town. He has several languages that he can converse in, but he can also explain things from someone's point of view and make it understandable to someone else. To keep things on the straight and narrow from a legal perspective is Ben the solicitor, making sure we all abide by the rules. Two people who make for an interesting evening's conversation if you get them together are Aaron and Grant; Aaron is the local planning inspector and Grant is an inventor of some reputation. Anything being constructed around here needs Aaron's agreement. He's quite flexible in his rules, though, if you can justify what you want to do with solid logic, but he won't stand for any development going on without his inclusion. That gets a demolition notice and there's no argument. Grant, as I mentioned, is the inventor in town; if something can be improved or created then Grant is your man.
    He mainly works on his own but isn't averse to getting specific advice and assistance from specialists from out of town if they can help him finish his creations. There aren't too many people left for you to meet in the town. There's Rob, he's an ex-professional sportsman. He played Hockey, Football, Cricket, you name it. He was in his element as goalkeeper / wicket keeper and that shows in his personal life. He just goes about his business and people often don't even know that he's helped them. Really low profile, doesn't get any glory, but saves people from lots of problems, even disasters on occasion. There goes Neil, he's a bit of an odd person; some people say he's gifted with special clairvoyant powers, personally I think he's got his ear to the ground and knows where to find out the important news as soon as it's made public. Anyone getting a visit from Neil is best off following his advice though; he's usually spot on, and you won't be caught by surprise if you follow his recommendations – wherever they come from. Poor old Andrew is the last person to introduce you to. Andrew doesn't show himself too often, but when he does it seems that people find a reason to blame him for their problems, whether he had anything to do with their predicament or not. In all honesty, without fail, and to his great credit, he takes it in good grace and never retaliates or gets annoyed when he's out and about. It pays off too, as it's very often the case that those who were blaming him recently suddenly find they need his help, and they readily forget the issues pretty rapidly. And then there's me, what do I do in town? Well, I'm just a DBA with a lot of hats. (Jackstown Pop. 1)

    Read the article

  • ORACLE RIGHTNOW DYNAMIC AGENT DESKTOP CLOUD SERVICE - Putting the Dynamite into Dynamic Agent Desktop

    - by Andreea Vaduva
    There’s a mountain of evidence to prove that a great contact centre experience results in happy, profitable and loyal customers. The very best Contact Centres are those with high first contact resolution, customer satisfaction and agent productivity. But how many companies really believe they are the best? And how many believe that they can be? We know that with the right tools, companies can aspire to greatness – and achieve it. Core to this is ensuring their agents have the best tools that give them the right information at the right time, so they can focus on the customer and provide a personalised, professional and efficient service. Today there are multiple channels through which customers can communicate with you: phone, web, chat and social, to name a few. But regardless of how they communicate, customers expect a seamless, quality experience. Most contact centre agents need to switch between lots of different systems to locate the right information. This hampers their productivity, frustrates both the agent and the customer and increases call handling times. With this in mind, Oracle RightNow has designed and refined a suite of add-ins to optimize the Agent Desktop. Each is designed to simplify and adapt the agent experience for any given situation and unify the customer experience across your media channels. Let’s take a brief look at some of the most useful tools available and see how they make a difference. Contextual Workspaces: The screen where agents do their job. Agents don’t want to be slowed down by busy screens, scrolling through endless tabs or links to find what they’re looking for. They want quick, accurate and easy. Contextual Workspaces are fully configurable and, through workspace rules, apply if, then, else logic to display only the information the agent needs for the issue at hand. Assigned at the Profile level, different levels of agent, from a novice to the most experienced, get a screen that is relevant to their role and responsibilities and ensures their job is done quickly and efficiently the first time round. Agent Scripting: Sometimes, agents need to deliver difficult or sensitive messages while maximising the opportunity to cross-sell and up-sell. After all, contact centres are now increasingly viewed as revenue generators. Containing sophisticated branching logic, scripting helps agents to capture the right level of information and guides the agent step by step, ensuring no mistakes, inconsistencies or missed opportunities. Guided Assistance: This is typically used to solve common troubleshooting issues, displaying a series of question and answer sets in a decision-tree structure. This means agents avoid having to bookmark favourites or rely on written notes. Agents find particular value in these guides - to quickly craft chat and email responses. What’s more, by publishing guides in answers on support pages, customers can resolve issues themselves, without needing to contact your agents. And because it can also accelerate agent ramp-up time, it ensures that even novice agents can solve customer problems like an expert. Desktop Workflow: Take a step back and look at the full customer interaction of your agents. It probably spans multiple systems and multiple tasks. With Desktop Workflows you control the design of workflows that span the full customer interaction from start to finish. As sequences of decisions and actions, workflows are unique in that they can create or modify different records and provide automation behind the scenes.
    This means your agents can save time and provide better quality of service by having the tools they need and the relevant information as required. And doing this boosts satisfaction among your customers, your agents and you – so win, win, win! I have highlighted above some of the tools which can be used to optimise the desktop; however, this is by no means an exhaustive list. In approaching your design, it’s important to understand why and how your customers contact you in the first place. Once you have this list of “whys” and “hows”, you can design effective policies and procedures to handle each category of problem, and then implement the right agent desktop user interface to support them. This will avoid duplication and wasted effort. Five Top Tips to take away: Start by working out “why” and “how” customers are contacting you. Implement a clean and relevant agent desktop to support your agents. If your workspaces are getting complicated, consider using Desktop Workflow to streamline the interaction. Enhance your Knowledgebase with Guides; agents can access them proactively, and they can be published on your web pages for customers to help themselves. Script any complex, critical or sensitive interactions to ensure consistency and accuracy. Desktop optimization is an ongoing process, so continue to monitor and incorporate feedback from your agents and your customers to keep your Contact Centre successful. Want to learn more? Having attended the 3-day Oracle RightNow Customer Service Administration class, your next step is to attend the 2-day Oracle RightNow Customer Portal Design and Dynamic Agent Desktop Administration class. Here you’ll learn not only how to leverage the Agent Desktop tools but also how to optimise your self-service pages to enhance your customers’ web experience. Useful resources: Review the Best Practice Guide. Review the tune-up guide. About the Author: Angela Chandler joined Oracle University as a Senior Instructor through the RightNow Customer Experience Acquisition. Her other areas of expertise include Business Intelligence and Knowledge Management. She currently delivers the following Oracle RightNow courses in the classroom and as a Live Virtual Class: RightNow Customer Service Administration (3 days), RightNow Customer Portal Design and Dynamic Agent Desktop Administration (2 days), RightNow Analytics (2 days) and RightNow Chat Cloud Service Administration (2 days).

    Read the article

  • Refactoring an immediate drawing function into VBO, access violation error

    - by Alex
    I have a MD2 model loader, I am trying to substitute its immediate drawing function with a Vertex Buffer Object one.... I am getting a really annoying access violation reading error and I can't figure out why, but mostly I'd like an opinion as to whether this looks correct (never used VBOs before). This is the original function (that compiles ok) which calculates the keyframe and draws at the same time: glBegin(GL_TRIANGLES); for(int i = 0; i < numTriangles; i++) { MD2Triangle* triangle = triangles + i; for(int j = 0; j < 3; j++) { MD2Vertex* v1 = frame1->vertices + triangle->vertices[j]; MD2Vertex* v2 = frame2->vertices + triangle->vertices[j]; Vec3f pos = v1->pos * (1 - frac) + v2->pos * frac; Vec3f normal = v1->normal * (1 - frac) + v2->normal * frac; if (normal[0] == 0 && normal[1] == 0 && normal[2] == 0) { normal = Vec3f(0, 0, 1); } glNormal3f(normal[0], normal[1], normal[2]); MD2TexCoord* texCoord = texCoords + triangle->texCoords[j]; glTexCoord2f(texCoord->texCoordX, texCoord->texCoordY); glVertex3f(pos[0], pos[1], pos[2]); } } glEnd(); What I'd like to do is to calculate all positions before hand, store them in a Vertex array and then draw them. This is what I am trying to replace it with (in the exact same part of the program) int vCount = 0; for(int i = 0; i < numTriangles; i++) { MD2Triangle* triangle = triangles + i; for(int j = 0; j < 3; j++) { MD2Vertex* v1 = frame1->vertices + triangle->vertices[j]; MD2Vertex* v2 = frame2->vertices + triangle->vertices[j]; Vec3f pos = v1->pos * (1 - frac) + v2->pos * frac; Vec3f normal = v1->normal * (1 - frac) + v2->normal * frac; if (normal[0] == 0 && normal[1] == 0 && normal[2] == 0) { normal = Vec3f(0, 0, 1); } indices[vCount] = normal[0]; vCount++; indices[vCount] = normal[1]; vCount++; indices[vCount] = normal[2]; vCount++; MD2TexCoord* texCoord = texCoords + triangle->texCoords[j]; indices[vCount] = texCoord->texCoordX; vCount++; indices[vCount] = texCoord->texCoordY; vCount++; indices[vCount] = pos[0]; vCount++; indices[vCount] = pos[1]; vCount++; indices[vCount] = pos[2]; vCount++; } } totalVertices = vCount; glEnableClientState(GL_NORMAL_ARRAY); glEnableClientState(GL_TEXTURE_COORD_ARRAY); glEnableClientState(GL_VERTEX_ARRAY); glNormalPointer(GL_FLOAT, 0, indices); glTexCoordPointer(2, GL_FLOAT, sizeof(float)*3, indices); glVertexPointer(3, GL_FLOAT, sizeof(float)*5, indices); glDrawElements(GL_TRIANGLES, totalVertices, GL_UNSIGNED_BYTE, indices); glDisableClientState(GL_VERTEX_ARRAY); // disable vertex arrays glEnableClientState(GL_TEXTURE_COORD_ARRAY); glDisableClientState(GL_NORMAL_ARRAY); First of all, does it look right? Second, I get access violation error "Unhandled exception at 0x01455626 in Graphics_template_1.exe: 0xC0000005: Access violation reading location 0xed5243c0" pointing at line 7 Vec3f pos = v1->pos * (1 - frac) + v2->pos * frac; where the two Vs seems to have no value in the debugger.... Till this point the function behaves in exactly the same way as the one above, I don't understand why this happens? Thanks for any help you may be able to provide!

    Read the article

  • Lessons learned from Word 2007 automation with C# 2008

    - by robertphyatt
    My organization has an ongoing project to take documents produced for internal regulations and such, change some of the formatting and then export it as PDF. Our requirements were that only one person would be doing this, but it has been painfully tedious and sometimes error-prone to do by hand. Enter the fearless developer to automate the situation! Since I am one of those guys that just plain does not like VB, I wanted to do the automation in the ever-so-much-more-familiar C#. While Microsoft had made a dll that makes such a task easier, documentation on MSDN is pretty lame and most of the forumns and posts on the internet had little to do with my task. So, I feel like I can give back to the community and make a post here of the things I have learned so far. I hope this is helpful to whoever stumbles upon it. Steps to do this: 1) First of all, make some sort of a project and use some sort of a means to get the filename of the word document you are trying to open. I got the filename the user wanted with an openFileDialog tied to a button that I labeled 'Browse':        private void btnBrowse_Click(object sender, EventArgs e)        {            try            {                DialogResult myResult = openFileDialog1.ShowDialog();                if (myResult.Equals(DialogResult.OK))                {                    if (openFileDialog1.SafeFileName.EndsWith(".doc"))                    {                        txtFileName.Text = openFileDialog1.SafeFileName;                        paramSourceDocPath = openFileDialog1.FileName;                        paramExportFilePath = openFileDialog1.FileName.Replace(".doc", ".pdf");                    }                    else                    {                        txtFileName.Text = "only something that end with .doc, please";                    }                }            }            catch (Exception err)            {                lblError.Text = err.Message;            }        }   2) Add in "using Microsoft.Office.Interop.Word;" after setting your project to reference Microsoft.Office.Core and Microsoft.Office.Interop.Word so that you don't have to add "Microsoft.Office.Interop.Word" to the front of everything. 3) Now you are ready to play. You will need to have a copy of word open and a copy of your word document that you want to modify open to be able to make the changes that are needed. The word interop dll likes using ref on all the parameters passed in, and likes to have them as objects. If you don't want to specify the parameter, you have to give it a "Type.Missing". I suggest creating some objects that you reuse all over the place to maintain sanity. object paramMissing = Type.Missing; ApplicationClass wordApplication = new ApplicationClass(); Document wordDocument = wordApplication.Documents.Open(                ref paramSourceDocPath, ref paramMissing, ref paramMissing,                ref paramMissing, ref paramMissing, ref paramMissing,                ref paramMissing, ref paramMissing, ref paramMissing,                ref paramMissing, ref paramMissing, ref paramMissing,                ref paramMissing, ref paramMissing, ref paramMissing,                ref paramMissing); 4) There are many ways to modify the text of the inside of the word document. One of the ways that was most effective for me was to break it down by paragraph and then do things on each paragraph by what style the particular paragraph had.            
foreach (Paragraph thisParagraph in wordDocument.Content.Paragraphs)            {                string strStyleName = ((Style)thisParagraph.get_Style()).NameLocal;                string strText = thisParagraph.Range.Text;                //Do whatever you need to do            } 5) Sometimes you want to insert a new line character somewhere in the text or insert text into the document, etc.  There are a few ways you can do this: you can either modify the text of a paragraph by doing something like this ('\r' makes a new paragraph, '\v' will make a newline without making a new paragraph. If you remove a '\r' from the text, it will eliminate the paragraph you removed it from): thisParagraph.Range.Text = "A\vNew Paragraph!\r" + thisParagraph.Range.Text; OR you could select where you want to insert it and have it act like you were typing in Word like any normal user (note: if you do not collapse the range first, you will overwrite the thing you got the range from) object oCollapseDirectionEnd = WdCollapseDirection.wdCollapseEnd; object oCollapseDirectionStart = WdCollapseDirection.wdCollapseStart; Range rangeInsertAtBeginning = thisParagraph.Range; Range rangeInsertAtEnd = thisParagraph.Range; rangeInsertAtBeginning.Collapse(ref oCollapseDirectionStart); rangeInsertAtEnd.Collapse(ref oCollapseDirectionEnd); rangeInsertAtBeginning.Select(); wordApplication.Selection.TypeText("Blah Blah Blah"); rangeInsertAtEnd.Select(); wordApplication.Selection.TypeParagraph(); 6) If you want to make text columns, like a newspaper or newsletter, you have to modify the page layout of the document or a section of the document to make it happen. In my case, I only wanted a particular section to have that, and I wanted to have a black line before and after the newspaper-like text columns. First you need to do a section break on either side of what you wanted, then you take the section and modify the page layout. Then you can modify the borders of the section (or another object in the word document). I also show here how to modify the alignment of a paragraph.            object oSectionBreak = WdBreakType.wdSectionBreakContinuous;            //These ranges were set while I was going through the paragraphs of my document, like I was showing earlier            rangeHeaderStart.InsertBreak(ref oSectionBreak);            rangeHeaderEnd.InsertBreak(ref oSectionBreak);            //change the alignment to justify            object oRangeHeaderStart = rangeStartJustifiedAlignment.Start;            object oRangeHeaderEnd = rangeHeaderEnd.End;            Range rangeHeader = wordDocument.Range(ref oRangeHeaderStart, ref oRangeHeaderEnd);            rangeHeader.Paragraphs.Alignment = WdParagraphAlignment.wdAlignParagraphJustify;            //find the section break and make it into triple text columns            foreach (Section mySection in wordDocument.Sections)            {                if (mySection.Range.Start == rangeHeaderStart.Start)                {                    mySection.PageSetup.TextColumns.Add(ref paramMissing, ref paramMissing, ref paramMissing);                    mySection.PageSetup.TextColumns.Add(ref paramMissing, ref paramMissing, ref paramMissing);                    //I didn't like the default spacing and column widths. This is how I adjusted them.                    
foreach (TextColumn txtc in mySection.PageSetup.TextColumns)                    {                        try                        {                            txtc.SpaceAfter = 151.6f;                            txtc.Width = 7;                        }                        catch (Exception)                        {                            txtc.Width = 151.6f;                        }                    }                }            } That is all  I have time for today! I hope this was helpful to someone!
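    One thing the write-up stops short of is the PDF export mentioned at the start. As a hedged sketch only (assuming Word 2007 SP2 or the Save as PDF add-in so that WdSaveFormat.wdFormatPDF is available, and reusing wordDocument, wordApplication, paramExportFilePath and paramMissing from the earlier steps), the final step might look something like this:

        // Save the modified document as PDF, then close the document and quit Word so no
        // orphaned WINWORD.EXE process is left behind.
        object paramExportPath = paramExportFilePath;
        object paramExportFormat = WdSaveFormat.wdFormatPDF;
        wordDocument.SaveAs(ref paramExportPath, ref paramExportFormat,
            ref paramMissing, ref paramMissing, ref paramMissing, ref paramMissing, ref paramMissing,
            ref paramMissing, ref paramMissing, ref paramMissing, ref paramMissing, ref paramMissing,
            ref paramMissing, ref paramMissing, ref paramMissing, ref paramMissing);

        // The casts avoid the method/event name clash on Close and Quit in the interop assembly.
        object paramDoNotSaveChanges = WdSaveOptions.wdDoNotSaveChanges;
        ((_Document)wordDocument).Close(ref paramDoNotSaveChanges, ref paramMissing, ref paramMissing);
        ((_Application)wordApplication).Quit(ref paramDoNotSaveChanges, ref paramMissing, ref paramMissing);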

    Read the article

  • Open World Day 3

    - by Antony Reynolds
    A Day in the Life of an Oracle OpenWorld Attendee, Part IV. My third day was exhibition day for me!  I took the opportunity to wander around the JavaOne and OpenWorld exhibitions to see what might be useful for me when selling WebLogic, Coherence & SOA Suite.  I found a number of interesting vendors and thought I would share what I found here.  These are not necessarily endorsements, but observations on companies that I thought had interesting-looking products that fill a need I have seen at customers. Highly Available EBS Upgrades: A few years ago I worked with a customer that was a port authority.  They wanted to tie E-Business Suite into their operations to provide faster processing of cargo and passengers.  However they only had a 2-hour downtime window to perform upgrades.  This was not a problem for core database and middleware technology, which could accommodate those upgrade timescales easily.  It was a problem for EBS, however, so I was intrigued to find Rapid E-Suite Inc offering an 11i to 12i upgrade service that claims to require no outage.  This could be a real boon to EBS customers like my port friends that need to upgrade without disruption to their business. Mobile on WebLogic: I have come across a number of customers who want a comprehensive mobile solution, connected and disconnected operation and so forth.  ADF only addresses part of these requirements currently, so I was excited to discover mFrontiers Inc offering an apparently comprehensive solution that should integrate easily with Oracle SOA Suite to mobile-enable a SOA infrastructure.  The ability to operate without a network is important for many applications, particularly in industries that require their engineers to enter buildings to perform maintenance or repairs, because network access is not always available – many of my colleagues don’t have mobile access from their homes because they live in the middle of nowhere – and disconnected support is crucial in these situations. Sharepoint Connector for WebCenter Content: Obviously Sharepoint is an evil pernicious intrusion into a company’s IT estate, but it is widely deployed and many people like it, yet they would also like to take advantage of Oracle products such as WebCenter Content.  So I was encouraged to see that Fishbowl Solutions have created a connector for Sharepoint that allows it to bring in content from WebCenter; it looks like a valuable way to maintain the Sharepoint interface end users are used to but extend the range of content by pulling stuff (technical term for content) from WebCenter. Load Balancing: The Enterprise Deployment Guides are Oracle’s bible on building highly available FMW environments, and each of them requires a front-end load balancer.  I have been asked to help configure F5 Load Balancers on a number of occasions over my time at Oracle, and each time I come back to it I find more useful features have been added to the BigIP line of load balancers that F5 sell; many of their documents are tailored to FMW.  I like F5, they provide (relatively) easy-to-use products that do what they say on the side of the box.  They may not have all the bells and whistles of some of their more expensive competitors but they do the job and do it well!  Besides which I like their logo! Other Stuff: I saw lots of other interesting products and services, such as a lightweight monitoring tool for Coherence, Forms migration services, JCAPS migration services and lots of cool freebies to take home to the children!
    A Quiet Night: Wednesday night was the partner appreciation event and I had decided to go back to the hotel and have an early night.  I decided to attend the last session of the day – a Maven/Hudson/WebLogic tutorial.  I got the wrong hotel for the session and snuck in 20 minutes late at the back and started working on the hands-on workshop.  One of my co-attendees raised his hand for help and as the presenter came over to help he suddenly stopped and yelled – “Is that Antony!”  It was my old friend Steve Button, who used to be based in Redwood Shores but is now a WebLogic guru PM in Australia.  It was good to catch up with him.  As he yelled out, a guy with really bad posture turned around to see who he was talking to; this turned out to be my friend Simon Haslan, Oracle ACE from the UK.  After the tutorial Simon and I retired to the coffee shop to catch up and share stories.  Two and a half hours later we decided it was time to retire – so much for an early night, but it was great to renew old friendships and find out what real customers are worrying about.

    Read the article

  • Red Gate in the Community

    - by Nick Harrison
    Much has been said recently about Red Gate's community involvement and commitment to the DotNet community. Much of this has been unduly negative. Before you start throwing stones and spewing obscenities, consider some additional facts: Red Gate's software is actually very good. I have worked on many projects where Red Gate's software was instrumental in finishing successfully. Red Gate is VERY good to the community. I have spoken at many user groups and code camps where Red Gate has been a sponsor. Red Gate consistently offers up money to pay for the venue or food, and they will often give away licenses as door prizes. There are many such community events that would not take place without Red Gate's support. All I have ever seen them ask for is to have their products mentioned or be listed as a sponsor. They don't insist on anyone following a specific script. They don't monitor how their products are showcased. They let their products speak for themselves. Red Gate sponsors the Simple Talk web site. I publish there regularly. Red Gate has never exerted editorial pressure on me. No one has ever told me we can't publish this unless you mention Red Gate products. No one has ever said, you need to say nice things about Red Gate products in order to be published. They have told me, "you need to make this less academic, so you don't alienate too many readers. "You need to actually write an introduction so people will know what you are talking about". "You need to write this so that someone who isn't a reflection nut will follow what you are trying to say." In short, they have been good editors worried about the quality of the content and what the readers are likely to be interested in. For me personally, Red Gate and Simple Talk have both been excellent to work with. As for the developer outrage… I am a little embarrassed by so much of the response that I am seeing. So much of the complaints remind me of little children whining "but you promised" Semantics aside. A promise is just a promise. It's not like they "pinky sweared". Sadly no amount name calling or "double dog daring" will change the economics of the situation. Red Gate is not a multibillion dollar corporation. They are a mid size company doing the best they can. Without a doubt, their pockets are not as deep as Microsoft's. I honestly believe that they did try to make the "freemium" model work. Sadly it did not. I have no doubt that they intended for it to work and that they tried to make it work. I also have no doubt that they labored over making this decision. This could not have been an easy decision to make. Many people are gleefully proclaiming a massive backlash against Red Gate swearing off their wonderful products and promising to bash them at every opportunity from now on. This is childish behavior that does not represent professionals. This type of behavior is more in line with bullies in the school yard than professionals in a professional community. Now for my own prediction… This back lash against Red Gate is not likely to last very long. We will all realize that we still need their products. We may look around for alternatives, but realize that they really do have the best in class for every product that they produce, and that they really are not exorbitantly priced. We will see them sponsoring Code Camps and User Groups and be reminded, "hey this isn't such a bad company". On the other hand, software shops like Red Gate, will remember this back lash and give a second thought to supporting open source projects. 
They will worry about getting involved when an individual wants to turn over control for a product that they developed but can no longer support alone. Who wants to run the risk of not being able to follow through on their best intentions. In the end we may all suffer, even the toddlers among us throwing the temper tantrum, "BUT YOU PROMISED!" Disclaimer Before anyone asks or jumps to conclusions, I do not get paid by Red Gate to say any of this. I have often written about their products, and I have long thought that they are a wonderful company with amazing products. If they ever open an office in the SE United States, I will be one of the first to apply.

    Read the article

  • SOA Suite 11g Dynamic Payload Testing with soapUI Free Edition

    - by Greg Mally
    Overview

Many web service developers use soapUI for tests like smoke tests, unit tests, and load tests, because the free edition is fairly robust. However, if you need to venture into more complex testing that requires a dynamic payload, the free edition doesn't necessarily make it easy. The feature does exist in soapUI, but for obvious reasons it is in the Pro version. In this blog I will show you how to use the soapUI free edition for dynamic payloads in a simplified example. Hopefully this will open the doors for you to expand into more complex scenarios. The following assumes that you have a working knowledge of soapUI and will not go into concepts like setting up a project. For the basics, please review the documentation for soapUI: http://www.soapui.org/Getting-Started/. Additionally, we will be using asynchronous web services; you can review the setup for this in my blog: SOA Suite 11g Asynchronous Testing with soapUI.

Features in soapUI Free Edition Relating to this Topic

The soapUI test tool provides a very feature-rich environment that can do many things, provided you are willing to go beyond point and click. For this example, we will be leveraging just a couple of features: Test Case Properties and scripting with Groovy. Basically, we will be using a property as a global variable and we will manipulate that property using a Groovy script.

Setting Up Our Property

Properties are available throughout soapUI; here is a snippet from the soapUI website defining the locations:

Projects : for handling Project scope values, for example a subscription ID
TestSuite : for handling TestSuite scoped values, can be seen as "arguments" to a TestSuite
TestCases : for handling TestCase scoped values, can be seen as "arguments" to a TestCase
Properties TestStep : for providing local values/state within a TestCase
Local TestStep properties : several TestStep types maintain their own list of properties specific to their functionality: DataSource, DataSink, Run TestCase
MockServices : for handling MockService scoped values/arguments
MockResponses : for handling MockResponse scoped values
Global Properties : for handling Global properties, optionally from an external source

For our example, we will define a custom property in a TestCase called SimpleAsyncPayload. The property can be created either in the Custom Properties tab located at the bottom of the Navigator panel (when the TestCase is selected in the Navigator) or under the Properties label in the TestCase editor (screenshots: Navigator Panel; TestCase Editor). You will notice that I set a value of "0" for the custom property. For this simplified example, we will need to retrieve that value and manipulate it prior to making the web service request invocation. In order to accomplish this, we will need to get Groovy ;)

Let's Get Groovy

We will now add a new Groovy Script step to the TestCase called Manipulate Payload (TestCase Editor > Append Step > Groovy Script). Once we have added the Groovy Script step to our TestCase, we can open the Groovy Script editor and add code that will do the following (a sketch appears after this list):

1. Get the current value of the property we created called SimpleAsyncPayload.
2. Convert the value of the property to an integer.
3. Increment the value.
4. Store the incremented value back into the TestCase property called SimpleAsyncPayload.
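The original post shows the finished script only as a screenshot, so here is a minimal sketch of what the Manipulate Payload step might contain. It assumes the testRunner and log variables that soapUI exposes to Groovy Script steps, and the SimpleAsyncPayload TestCase property created above:

    // Manipulate Payload - Groovy Script step (sketch)
    // 1. Read the current value of the TestCase property.
    def current = testRunner.testCase.getPropertyValue("SimpleAsyncPayload")
    // 2. Convert it to an integer, treating a missing or empty value as 0.
    int value = (current == null || current.trim().isEmpty()) ? 0 : current.toInteger()
    // 3. Increment the value.
    value++
    // 4. Store the incremented value back into the TestCase property.
    testRunner.testCase.setPropertyValue("SimpleAsyncPayload", value.toString())
    log.info("SimpleAsyncPayload is now " + value)

Running the TestCase once with a script along these lines should bump the property from 0 to 1, which matches the result described next.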
(Screenshot: Groovy Script Editor – Manipulate Payload.) At this point we can test the script by simply running the TestCase (left-click on the green triangle in the upper left-hand corner of the TestCase editor). To verify that it ran correctly, we can look at the value of the SimpleAsyncPayload property, which should now be 1 (screenshot: TestCase Editor – Run Results).

All that is left to complete the TestCase is to append another step, of type Test Request. The information required to append the request is a name and an operation to invoke. In this example we will use the default name and select SimpleAsyncBPELProcessBinding -> process as the operation; for any other information requested, simply use the defaults (unless you are calling an asynchronous operation, in which case do not add any assertions). We are now on familiar ground with the Test Request editor. Depending upon the type of operation you are invoking (synchronous or asynchronous), update the request with the necessary information (e.g., callback information for asynchronous operations).

We will now tweak the Test Request payload to retrieve the value of the SimpleAsyncPayload property. The soapUI editor makes this very simple: right-click in the payload and navigate to the property (e.g., right-click > Get Data.. > TestCase: [Groovy TestCase] > Property [SimpleAsyncPayload]) (screenshot: Test Request Editor – Insert Property Value). Your payload should now contain a property expansion in place of a literal value (screenshot: Test Request Editor – Inserted Property Value); a rough sketch of the resulting request body follows at the end of this walkthrough.

Just like before, we are now ready to run the TestCase. If everything goes as expected, we should see a successful response (screenshot: Message Viewer – Results of TestCase Run). We are now set up to run a stress test where the payload changes for each request. This simple example can be expanded to include multiple payload values, complex calculations in the scripts, or whatever else can be done via soapUI scripting. Hopefully you have found this useful. Happy testing to you :)
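For reference, here is a rough sketch of the kind of request body the Get Data insertion produces. The element names and namespace are illustrative only and will differ for your WSDL; the ${#TestCase#SimpleAsyncPayload} token is the soapUI property expansion that gets substituted with the property's current value at run time:

    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                      xmlns:proc="http://example.com/SimpleAsyncBPELProcess"> <!-- illustrative namespace -->
       <soapenv:Header/>
       <soapenv:Body>
          <proc:process>
             <!-- soapUI replaces the token below with the current value of the TestCase property -->
             <proc:input>${#TestCase#SimpleAsyncPayload}</proc:input>
          </proc:process>
       </soapenv:Body>
    </soapenv:Envelope>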

    Read the article

  • 7 Reasons for Abandonment in eCommerce and the need for Contextual Support by JP Saunders

    - by Tuula Fai
    Shopper confidence, or more accurately the lack thereof, is the bane of the online retailer. A number of questions influence whether a shopper completes a transaction, and all of them revolve around knowledge. What products are available? What products are on offer? What would be the cost of the transaction? What are my options for delivery? In general, most online businesses do a good job of answering basic questions about the products as the shopper navigates the product catalog and works through the checkout process. The needs that are harder to address are those that are less concerned with product specifics and more concerned with deciding whether the transaction meets the shopper's needs and delivers value.

A recent study by the Baymard Institute [1] finds that more than 60% of ecommerce site visitors will abandon their shopping cart. The study also identifies seven reasons shoppers abandon the commerce process [2]. Most of those reasons come down to poor usability within the commerce experience.

1. Distractions. Distractions in the shopper's environment (TV, children, pets, etc.) or on the eCommerce page itself can drive abandonment. Ideally, the selection and checkout process should be straightforward. One common distraction is driving the shopper away from the task at hand through pop-ups or redirects. A shopper engaging with support information in the checkout process should not be directed away from the page to consume that support. Though confidence may improve, the distraction also means abandonment may increase.

2. Poor Usability. When the experience gets more complicated, buyer's remorse can set in. While knowledge drives confidence, a lack of understanding erodes it, so it is important that the commerce process is streamlined. In some cases, the number of clicks to complete a purchase is lengthy and unavoidable. In these situations, it is vital to ensure that the complexity of your experience can be explained with contextual support to avoid abandonment. If you can illustrate the solution to a complex action while the user is engaged in that action, and address customer frustrations with your checkout process before they arise, you can decrease abandonment.

3. Fraud. The perception of potential fraud can be enough to deter a buyer. Does your site look credible? Can shoppers trust your brand? Providing answers on the security of your experience and the levels of protection applied to profile information may play as big a role in ensuring the sale as the support you provide on the product offerings and purchasing process.

4. Does it fit? If it is a clothing item or an oversized furniture item, another common cause of abandonment is the shopper questioning whether the item can be worn by, or will fit, the intended user. Providing information on the sizing applied to clothing, physical dimensions, and limitations on delivery/returns of oversized items will also assist the sale. A photo alone helps, since it answers some of those questions, but it won't assuage all customer concerns about sizing and fit.

5. Sometimes the customer doesn't want to buy. Prospective buyers might be browsing through your catalog to kill time, or might simply not have the money to purchase the item! Contextual support is unlikely to increase the likelihood to buy if the shopper has no intention of doing so, and the customer will still likely abandon. Ensuring that any questions are proactively answered as they browse through your site can only increase their likelihood to return and buy at a future date.

6. Can't Buy. Errors or complexity at checkout can be another major cause of abandonment. Good contextual support is unlikely to help with severe errors caused by technical issues on your site, but it will have a big impact on customers struggling with complexity in the checkout process who need a question answered prior to completing the sale. Embedded support within the checkout process that patiently explains how to complete a task will help increase conversion rates.

7. Additional Costs. Tax, shipping, and other costs or duties can dramatically increase the cost of the purchase and, when unexpected, can increase abandonment, particularly if they can't be adequately explained. Again, a lack of knowledge erodes confidence in the purchase, and cost concerns in particular erode the perception of your brand's trustworthiness. Providing information on what costs are additive and why they are being levied can decrease the likelihood that the customer will abandon.

Knowledge drives confidence and confidence drives conversion. If you'd like to understand best practices in providing contextual customer support in eCommerce to give your shoppers confidence, download the Oracle Cloud Service and Oracle Commerce - Contextual Support in Commerce White Paper. It discusses the process of adding customer support, including a suggested process for finding where knowledge has the most influence on your shoppers and practical step-by-step illustrations of how contextual self-service can be added to your online commerce experience.

Resources:
[1] http://baymard.com/checkout-usability
[2] http://baymard.com/blog/cart-abandonment

    Read the article

< Previous Page | 168 169 170 171 172 173 174 175 176 177 178 179  | Next Page >