Search Results

Search found 780 results on 32 pages for 'desu never lies'.


  • Community Conversation

    - by ultan o'broin
    Applications User Experience members (Erika Webb, Laurie Pattison, and I) attended the User Assistance Europe Conference in Stockholm, Sweden. We were impressed with the thought leadership and practical application of ideas in Anne Gentle's keynote address "Social Web Strategies for Documentation". After the conference, we spoke with Anne to explore the ideas further. Applications User Experience Senior Director Laurie Pattison (left) with Anne Gentle at the User Assistance Europe Conference In Anne's book called Conversation and Community: The Social Web for Documentation, she explains how user assistance is undergoing a seismic shift. The direction is away from the old print manuals and online help concept towards a web-based, user community-driven solution using social media tools. User experience professionals now have a vast range of such tools to start and nurture this "conversation": blogs, wikis, forums, social networking sites, microblogging systems, image and video sharing sites, virtual worlds, podcasts, instant messaging, mashups, and so on. That user communities are a rich source of user assistance is not a surprise, but the extent of available assistance is. For example, we know from the Consortium for Service Innovation that there has been an 'explosion' of user-generated content on the web. User-initiated community conversations provide as much as 30 times the number of official help desk solutions for consortium members! The growing reliance on user community solutions is clearly a user experience issue. Anne says that user assistance as conversation "means getting closer to users and helping them perform well. User-centered design has been touted as one of the most important ideas developed in the last 20 years of workplace writing. Now writers can take the idea of user-centered design a step further by starting conversations with users and enabling user assistance in interactions." Some of Anne's favorite examples of this paradigm shift from the world of traditional documentation to community conversation include: * Writer Bob Bringhurst's blog about Adobe InDesign and InCopy products and Adobe's community help * The Microsoft Development Network Community Center * ·The former Sun (now Oracle) OpenDS wiki, NetBeans Ruby and other community approaches to engage diverse audiences using screencasts, wikis, and blogs. * Cisco's customer support wiki, EMC's community, as well as Symantec and Intuit's approaches * The efforts of Ubuntu, Mozilla, and the FLOSS community generally Adobe Writer Bob Bringhurst's Blog Oracle is not without a user community conversation too. Besides the community discussions and blogs around documentation offerings, we have the My Oracle Support Community forums, Oracle Technology Network (OTN) communities, wiki, blogs, and so on. We have the great work done by our user groups and customer councils. Employees like David Haimes are reaching out, and enthusiastic non-employee gurus like Chet Justice (OracleNerd), Floyd Teter and Eddie Awad provide great "how-to" information too. But what does this paradigm shift mean for existing technical writers as users turn away from the traditional printable PDF manual deliverables? We asked Anne after the conference. The writer role becomes one of conversation initiator or enabler. The role evolves, along with the process, as the users define their concept of user assistance and terms of engagement with the product instead of having it pre-determined. 
It is largely a case now of "inventing the job while you're doing it, instead of being hired for it," Anne said. There is less emphasis on formal titles. Anne mentions that her own title is "Content Stacker" at OpenStack; others use titles such as "Content Curator" or "Community Lead". However, the role remains one essentially about communications, "but of a new type--interacting with users, moderating, curating content, instead of sitting down to write a manual from start to finish." Clearly then, this role is open to more than professional technical writers. Product managers who write blogs, developers who moderate forums, support professionals who update wikis, and rock star programmers with a penchant for YouTube are all ideal. Anyone with the product knowledge, empathy for the user, and a flair for relationships on the social web can join in. Some even perform these roles already but do not realize it. Anne feels the technical communicator space will move from hiring new community conversation professionals (who are already active in the space through blogging, tweets, wikis, and so on) to retraining some existing writers over time. Our own research reveals that the established proponents of community user assistance even set employee performance objectives for internal content curators based on the amount of community content delivered by people outside the organization!

To take advantage of the conversations on the web as user assistance, enterprises must first establish where on the spectrum their community lies. "What is the line between community willingness to contribute and the enterprise objectives?" Anne asked. "The relationship with users must be managed and also measured." Anne believes that the process can start with a "just do it" approach. Begin by reaching out to existing user groups, individual bloggers and tweeters, forum posters, early adopter program participants, conference attendees, customer advisory board members, and so on. Use analytical tools to measure the level of conversation about your products and services to show a return on investment (ROI), winning management support.

Anne emphasized that success with the community model is dependent on lowering the technical and motivational barriers so that users can readily contribute to the conversation. Simple tools must be provided, and guidelines, if any, must be straightforward but not mandatory. The conversational approach is one where traditional style and branding guides do not necessarily apply. Tools and infrastructure help users to create content easily, to search and find the information online, read it, rate it, translate it, and participate further in the content's evolution. Recognizing contributors by using ratings on forums, giving out Twitter kudos, conference invitations, visits to headquarters, free products, preview releases, and so on, also encourages the adoption of the conversation model.

The move to conversation as user assistance is not free, but there is a business ROI. The conversational model means that customer service is enhanced, as user experience moves from a functional to a valued, emotional level. Studies show a positive correlation between loyalty and financial performance (Consortium for Service Innovation, 2010), and as customer experience and loyalty become key differentiators, user experience professionals cannot afford to ignore the model's possibilities. 
The digital universe (measured at 1.2 million petabytes in 2010) is doubling every 12 to 18 months, and 70 percent of that universe consists of user-generated content (IDC, 2010). Conversation as user assistance cannot be ignored but must be embraced. It is a time to manage for abundance, not scarcity. Besides, the conversation approach certainly sounds more interesting, rewarding, and fun than the traditional model! I would like to thank Anne for her time and thoughts, and recommend that all user assistance professionals read her book. You can follow Anne on Twitter at: http://www.twitter.com/annegentle. Oracle's Acrolinx IQ deployment was used to author this article.

    Read the article

  • PeopleSoft Upgrades, Fusion, & BI for Leading European PeopleSoft Applications Customers

    - by Mark Rosenberg
    With so many industry-leading services firms around the globe managing their businesses with PeopleSoft, it’s always an adventure setting up times and meetings for us to keep in touch with them, especially those outside of North America who often do not get to join us at Oracle OpenWorld. Fortunately, during the first two weeks of May, Nigel Woodland (Oracle’s Service Industries Director for the EMEA region) and I successfully blocked off our calendars to visit seven different customers spanning four countries in Western Europe. We met executives and leaders at four Staffing industry firms, two Professional Services firms that engage in consulting and auditing, and a Financial Services firm. As we shared the latest information regarding product capabilities and plans, we also gained valuable insight into the hot technology topics facing these businesses. What we heard was both informative and inspiring, and I suspect other Oracle PeopleSoft applications customers can benefit from one or more of the following observations from our trip. Great IT Plans Get Executed When You Respect the Users Each of our visits followed roughly the same pattern. After introductions, Nigel outlined Oracle’s product and technology strategy, including a discussion of how we at Oracle invest in each layer of the “technology stack” to provide customers with unprecedented business management capabilities and choice. Then, I provided the specifics of the PeopleSoft product line’s investment strategy, detailing the dramatic number of rich usability and functionality enhancements added to release 9.1 since its general availability in 2009 and the game-changing capabilities slated for 9.2. What was most exciting about each of these discussions was that shortly after my talking about what customers can do with release 9.1 right now to drive up user productivity and satisfaction, I saw the wheels turning in the minds of our audiences. Business analyst and end user-configurable tools and technologies, such as WorkCenters and the Related Action Framework, that provide the ability to tailor a “central command center” to the exact needs of each recruiter, biller, and every other role in the organization were exactly what each of our customers had been looking for. Every one of our audiences agreed that these tools which demonstrate a respect for the user would finally help IT pole vault over the wall of resistance that users had often raised in the past. With these new user-focused capabilities, IT is positioned to definitively partner with the business, instead of drag the business along, to unlock the value of their investment in PeopleSoft. This topic of respecting the user emerged during our very first visit, which was at Vital Services Group at their Head Office “The Mill” in Manchester, England. (If you are a student of architecture and are ever in Manchester, you should stop in to see this amazingly renovated old mill building.) I had just finished explaining our PeopleSoft 9.2 roadmap, and Mike Code, PeopleSoft Systems Manager for this innovative staffing company, said, “Mark, the new features you’ve shown us in 9.1/9.2 are very relevant to our business. As we forge ahead with the 9.1 upgrade, the ability to configure a targeted user interface with WorkCenters, Related Actions, Pivot Grids, and Alerts will enable us to satisfy the business that this upgrade is for them and will deliver tangible benefits. 
In fact, you’ve highlighted that we need to start talking to the business to keep up the momentum to start reviewing the 9.2 upgrade after we get to 9.1, because as much as 9.1 and PeopleTools 8.52 offers, what you’ve shown us for 9.2 is what we’ve envisioned was ultimately possible with our investment in PeopleSoft applications.” We also received valuable feedback about our investment for the Staffing industry when we visited with Hans Wanders, CIO of Randstad (the second largest Staffing company in the world) in the Netherlands. After our visit, Hans noted, “It was very interesting to see how the PeopleSoft applications have developed. I was truly impressed by many of the new developments.” Hans and Mike, sincere thanks for the validation that our team’s hard work and dedication to “respecting the users” is worth the effort! Co-existence of PeopleSoft and Fusion Applications Just Makes Sense As a “product person,” one of the most rewarding things about visiting customers is that they actually want to talk to me. Sometimes, they want to discuss a product area that we need to enhance; other times, they are interested in learning how to extract more value from their applications; and still others, they want to tell me how they are using the applications to drive real value for the business. During this trip, I was very pleased to hear that several of our customers not only thought the co-existence of Fusion applications alongside PeopleSoft applications made sense in theory, but also that they were aggressively looking at how to deploy one or more Fusion applications alongside their PeopleSoft HCM and FSCM applications. The most common deployment plan in the works by three of the organizations is to upgrade to PeopleSoft 9.1 or 9.2, and then adopt one of the new Fusion HCM applications, such as Fusion Performance Management or the full suite of  Fusion Talent Management. For example, during an applications upgrade planning discussion with the staffing company Hays plc., Mark Thomas, who is Hays’ UK IT Director, commented, “We are very excited about where we can go with the latest versions of the PeopleSoft applications in conjunction with Fusion Talent Management.” Needless to say, this news was very encouraging, because it reiterated that our applications investment strategy makes good business sense for our customers. Next Generation Business Intelligence Is the Key to the Future The third, and perhaps most exciting, lesson I learned during this journey is that our audiences already know that the latest generation of Business Intelligence technologies will be the “secret sauce” for organizations to transform business in radical ways. While a number of the organizations we visited on the trip have deployed or are deploying Oracle Business Intelligence Enterprise Edition and the associated analytics applications to provide dashboards of easy-to-understand, user-configurable metrics that help optimize business performance according to current operating procedures, what’s most exciting to them is being able to use Business Intelligence to change the way an organization does business, grows revenue, and makes a profit. In particular, several executives we met asked whether we can help them minimize the need to have perfectly structured data and at the same time generate analytics that improve order fulfillment decision-making. 
To them, the path to future growth lies in having the ability to analyze unstructured data rapidly and intuitively and leveraging technology’s ability to detect patterns that a human cannot reasonably be expected to see. For illustrative purposes, here is a good example of a business problem where analyzing a combination of structured and unstructured data can produce better results. If you have a resource manager trying to decide which person would be the best fit for an assignment in terms of ensuring (a) client satisfaction, (b) the individual’s satisfaction with the work, (c) least travel distance, and (d) highest margin, you traditionally compare resource qualifications to assignment needs, calculate margins on past work with the client, and measure distances. To perform these comparisons, you are likely to need the organization to have profiles setup, people ranked against profiles, margin targets setup, margins measured, distances setup, distances measured, and more. As you can imagine, this requires organizations to plan and implement data setup, capture, and quality management initiatives to ensure that dependable information is available to support resourcing analysis and decisions. In the fast-paced, tight-budget world in which most organizations operate today, the effort and discipline required to maintain high-quality, structured data like those described in the above example are certainly not desirable and in some cases are not feasible. You can imagine how intrigued our audiences were when I informed them that we are ready to help them analyze volumes of unstructured data, detect trends, and produce recommendations. Our discussions delved into examples of how the firms could leverage Oracle’s Secure Enterprise Search and Endeca technologies to keyword search against, compare, and learn from unstructured resource and assignment data. We also considered examples of how they could employ Oracle Real-Time Decisions to generate statistically significant recommendations based on similar resourcing scenarios that have produced the desired satisfaction and profit margin results. --- Although I had almost no time for sight-seeing during this trip to Europe, I have to say that it may have been one of the most energizing and engaging trips of my career. Showing these dedicated customers how they can give every user a uniquely tailored set of tools and address business problems in ways that have to date been impossible made the journey across the Atlantic more than worth it. If any of these three topics intrigue you, I’d recommend you contact your Oracle applications representative to arrange for more detailed discussions with the appropriate members of our organization.
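To make the resource-assignment example above a little more concrete, here is a minimal, hypothetical sketch of the kind of weighted scoring the traditional structured-data approach implies. The class, field names, and weights are illustrative assumptions only; they are not part of any Oracle product or API.

    import java.util.Comparator;
    import java.util.List;

    // Hypothetical structured data for one candidate resource (illustrative only).
    record Candidate(String name, double skillFit, double workPreferenceFit,
                     double travelKm, double expectedMarginPct) {}

    public class ResourceRanker {

        // Combine the four criteria from the example into a single weighted score.
        // Distance is penalised; the other three criteria are rewarded. The weights
        // are arbitrary assumptions a resource planner would tune for the business.
        static double score(Candidate c) {
            return 0.35 * c.skillFit()                          // proxy for client satisfaction
                 + 0.25 * c.workPreferenceFit()                 // individual's satisfaction
                 + 0.25 * (c.expectedMarginPct() / 100.0)       // highest margin
                 - 0.15 * Math.min(c.travelKm() / 1000.0, 1.0); // least travel, capped penalty
        }

        public static void main(String[] args) {
            List<Candidate> candidates = List.of(
                new Candidate("Alice", 0.9, 0.7, 120, 32),
                new Candidate("Bob",   0.8, 0.9, 450, 40));

            // Rank candidates from best to worst fit for the assignment.
            candidates.stream()
                      .sorted(Comparator.comparingDouble(ResourceRanker::score).reversed())
                      .forEach(c -> System.out.printf("%s -> %.3f%n", c.name(), score(c)));
        }
    }

Every input to that score presupposes the profiles, margin targets, and distance data the article says must be set up and maintained, which is exactly the burden the unstructured-data approach is meant to reduce.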

    Read the article

  • Observations in Migrating from JavaFX Script to JavaFX 2.0

    - by user12608080
Introduction

Having been available for a few years now, there is a decent body of work written for JavaFX using the JavaFX Script language. With the general availability announcement of JavaFX 2.0 Beta, the natural question arises about converting the legacy code over to the new JavaFX 2.0 platform. This article reflects on some of the observations encountered while porting source code over from JavaFX Script to the new JavaFX API paradigm.

The Application

The program chosen for migration is an implementation of the Sudoku game and serves as a reference application for the book JavaFX – Developing Rich Internet Applications. The design of the program can be divided into two major components: (1) a user interface (ideally suited for JavaFX design) and (2) the puzzle generator. For the context of this article, our primary interest lies in the user interface. The puzzle generator code was lifted from a sourceforge.net project and is written entirely in Java. Regardless of which version of the UI we choose (JavaFX Script vs. JavaFX 2.0), no code changes were required for the puzzle generator code.

The original user interface for the JavaFX Sudoku application was written exclusively in JavaFX Script, and as such is a suitable candidate to convert over to the new JavaFX 2.0 model. However, a few notable points are worth mentioning about this program. First off, it was written in the JavaFX 1.1 timeframe, when certain capabilities of the JavaFX framework were as yet unavailable. Citing two examples, this program creates many of its own UI controls from scratch because the built-in controls were yet to be introduced. In addition, layout of graphical nodes is done in a very manual manner, again because much of the automatic layout capability was in flux at the time. It is worth considering that this program was written at a time when most of us were just coming up to speed on this technology. One would think that, given the opportunity to recreate this application anew, it would look a lot different from the current version.

Comparing the Size of the Source Code

An attempt was made to convert each of the original UI JavaFX Script source files (suffixed with .fx) over to a Java counterpart. Due to language feature differences, there are a small number of source files which only exist in one version or the other. The table below summarizes the size of each of the source files, pairing each JavaFX Script file with its JavaFX 2.0 Java counterpart where one exists.

JavaFX Script source file | Number of Lines | Number of Characters | JavaFX 2.0 Java source file | Number of Lines | Number of Characters
| | | ArrowKey.java | 6 | 72
Board.fx | 221 | 6831 | Board.java | 205 | 6508
BoardNode.fx | 446 | 16054 | BoardNode.java | 723 | 29356
ChooseNumberNode.fx | 168 | 5267 | ChooseNumberNode.java | 302 | 10235
CloseButtonNode.fx | 115 | 3408 | CloseButton.java | 99 | 2883
| | | ParentWithKeyTraversal.java | 111 | 3276
| | | FunctionPtr.java | 6 | 80
| | | Globals.java | 20 | 554
Grouping.fx | 8 | 140 | | |
HowToPlayNode.fx | 121 | 3632 | HowToPlayNode.java | 136 | 4849
IconButtonNode.fx | 196 | 5748 | IconButtonNode.java | 183 | 5865
Main.fx | 98 | 3466 | Main.java | 64 | 2118
SliderNode.fx | 288 | 10349 | SliderNode.java | 350 | 13048
Space.fx | 78 | 1696 | Space.java | 106 | 2095
SpaceNode.fx | 227 | 6703 | SpaceNode.java | 220 | 6861
TraversalHelper.fx | 111 | 3095 | | |
Total | 2,077 | 79,127 | Total | 2,531 | 87,800

A few notes about this table are in order:
* The number of lines in each file was determined by running the Unix 'wc -l' command over each file.
* The number of characters in each file was determined by running the Unix 'ls -l' command over each file.
* The examination of the code could certainly be much more rigorous.
* No standard formatting was performed on these files. All comments, however, were deleted.

There was a certain expectation that the new Java version would require more lines of code than the original JavaFX Script version. As evidenced by a count of the total number of lines, the Java version has about 22% more lines than its FX Script counterpart. Furthermore, there was an additional expectation that the Java version would be more verbose in terms of the total number of characters. In fact the preceding data shows that on average the Java source files contain fewer characters per line than the FX files. But that's not the whole story. Upon further examination, the FX Script source files had a disproportionate number of blank characters. Why? Because of the nature of how one develops JavaFX Script code. The object literal dominates FX Script code. It's not uncommon to see object literals indented halfway across the page, consuming lots of meaningless space characters.

RAM Consumption

Not the most scientific analysis, but memory usage for the application was examined on a Windows Vista system by running the Windows Task Manager and viewing how much memory was being consumed by the Sudoku version in question. Roughly speaking, the FX Script version, after startup, had a RAM footprint of about 90MB and remained pretty much the same size. The Java version started out at about 55MB and maintained that size throughout its execution.

What About Binding?

Arguably, the most striking observation about the conversion from JavaFX Script to JavaFX 2.0 concerned the need for data synchronization, or lack thereof. In JavaFX Script, the primary means to synchronize data is via the bind expression (using the "bind" keyword), and perhaps to a lesser extent its "on replace" cousin. The bind keyword does not exist in Java, so for JavaFX 2.0 a Data Binding API has been introduced as a replacement. To give a feel for the difference between the two versions of the Sudoku program, the table that follows indicates how many binds were required for each source file. For the JavaFX Script files, this was ascertained by simply counting the number of occurrences of the bind keyword. As can be seen, binding had been used frequently in the JavaFX Script version (and this does not take into consideration an additional half dozen or so "on replace" triggers). The JavaFX 2.0 program achieves the same functionality as the original JavaFX Script version, yet the equivalent of binding was only needed twice throughout the Java version of the source code.

JavaFX Script source file | Number of Binds | JavaFX 2.0 Java source file | Number of "Binds"
| | ArrowKey.java | 0
Board.fx | 1 | Board.java | 0
BoardNode.fx | 7 | BoardNode.java | 0
ChooseNumberNode.fx | 11 | ChooseNumberNode.java | 0
CloseButtonNode.fx | 6 | CloseButton.java | 0
| | CustomNodeWithKeyTraversal.java | 0
| | FunctionPtr.java | 0
| | Globals.java | 0
Grouping.fx | 0 | |
HowToPlayNode.fx | 7 | HowToPlayNode.java | 0
IconButtonNode.fx | 9 | IconButtonNode.java | 0
Main.fx | 1 | Main.java | 0
Main_Mobile.fx | 1 | |
SliderNode.fx | 6 | SliderNode.java | 1
Space.fx | 0 | Space.java | 0
SpaceNode.fx | 9 | SpaceNode.java | 1
TraversalHelper.fx | 0 | |
Total | 58 | Total | 2

Conclusions

As the JavaFX 2.0 technology is so new, and experience with the platform is equally new, it is possible and indeed probable that some of the observations noted in the preceding article may not apply across other attempts at migrating applications. 
That being said, this first experience indicates that the migrated Java code will likely be larger, though not dramatically so, than the original JavaFX Script source. Furthermore, it appears that the need for data synchronization via binding, although very important, may be significantly less with the new platform.
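As a rough illustration of the binding difference discussed above, here is a minimal sketch of my own (not taken from the Sudoku sources) showing a JavaFX Script bind expression next to its JavaFX 2.0 Data Binding API equivalent; it assumes the JavaFX 2.0 (or later) runtime libraries are on the classpath.

    import javafx.beans.binding.NumberBinding;
    import javafx.beans.property.IntegerProperty;
    import javafx.beans.property.SimpleIntegerProperty;

    public class BindingSketch {
        public static void main(String[] args) {
            // JavaFX Script (old):  var total = bind rows * columns;
            // JavaFX 2.0 replaces the bind keyword with an API:
            IntegerProperty rows = new SimpleIntegerProperty(9);
            IntegerProperty columns = new SimpleIntegerProperty(9);
            NumberBinding total = rows.multiply(columns);

            columns.set(6);                        // dependents update automatically
            System.out.println(total.getValue());  // prints 54
        }
    }

Dependent values still update automatically when either property changes, which is the behavior the bind keyword provided in JavaFX Script; only the notation has moved from a language feature to an API.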

    Read the article

  • Using Lightbox with _Screen

Although I have to admit that I discovered Bernard Bout's ideas and concepts about implementing a lightbox in Visual FoxPro quite a while ago, there was no "spare" time in active projects that allowed me to have a closer look into his solution(s). Luckily, these days I received a demand to focus a little bit more on this. This article describes how to integrate and make use of Bernard's lightbox class in combination with _Screen in Visual FoxPro. The requirement in this project was to be able to visually lock the whole application (_Screen area) and guide the user to a notification that should not be ignored easily. Depending on its importance, any current user activity should be interrupted and focus put onto the notification.

Getting the "meat", eh, source code

Please check out Bernard's blog on Foxite directly in order to get the latest and greatest version. At the time of writing this article I use version 6.0 as described in this blog entry: The Fastest Lightbox Ever. The Lightbox class is sub-classed from the imgCanvas class from the GdiPlusX project on VFPx, and therefore you need to have the source code of GdiPlusX as well and integrate it into your development environment. The version I use is available here: Release GDIPlusX 1.20. As soon as you open the bbGdiLightbox class for the first time, VFP might ask you to update the reference to the gdiplusx.vcx. As we have the sources, that is no problem, and you have access to Bernard's code. The class itself is pretty easy to understand: some properties that you do not need to change and three methods: Setup(), ShowLightbox() and BeforeDraw().

The challenge - _Screen or not?

In his article about the fastest lightbox ever, Bernard states the following: "The class will only work on a form. It will not support any other containers." Really? And what about _Screen? Isn't that a form class, too? Yes, of course it is, but nonetheless trying to use _Screen directly will fail. Well, let's have a look at the code to see why:

    WITH This
        .Left = 0
        .Top = 0
        .Height = ThisForm.Height
        .Width = ThisForm.Width
        .ZOrder(0)
        .Visible = .F.
    ENDWITH

During the setup of the lightbox, as well as while capturing the image as a replacement for your forms and controls, the object reference Thisform is used, which is a little bit restrictive in my opinion, but let's continue. The second issue lies in the method ShowLightbox() and is introduced by the call to .Bitmap.FromScreen():

    Lparameters tlVisiblilty
    * tlVisiblilty - show or hide (T/F)
    * grab a screen dump with controls
    IF tlVisiblilty
        Local loCaptureBmp As xfcBitmap
        Local lnTitleHeight, lnLeftBorder, lnTopBorder, lcImage, loImage
        lnTitleHeight = IIF(ThisForm.TitleBar = 1,Sysmetric(9),0)
        lnLeftBorder = IIF(ThisForm.BorderStyle < 2,0,Sysmetric(3))
        lnTopBorder = IIF(ThisForm.BorderStyle < 2,0,Sysmetric(4))
        With _Screen.System.Drawing
            loCaptureBmp = .Bitmap.FromScreen(ThisForm.HWnd,;
                lnLeftBorder,;
                lnTopBorder+lnTitleHeight,;
                ThisForm.Width,;
                ThisForm.Height)
        ENDWITH
        * save it to a property
        This.CaptureBmp = loCaptureBmp
        ThisForm.SetAll("Visible",.F.)
        This.Draw()
        This.Visible = .T.
    ELSE
        ThisForm.SetAll("Visible",.T.)
        This.Visible = .F.
    ENDIF

My first trials in using the class ended in an exception - GdiPlusError:OutOfMemory - thrown by the Bitmap object. Frankly speaking, this happened mainly because of my lack of knowledge about GdiPlusX. After reading some documentation, especially about the FromScreen() method, I experimented a little bit. Capturing the visible area of _Screen was actually not the real problem; the dimensions I specified for the bitmap were.

The modifications - step by step

First of all, get rid of the restrictive object references to Thisform and change them into either This.Parent or, more generically, This.oForm (even better: This.oControl). The Lightbox.Setup() method now sets the necessary object reference like so:

    *====================================================================
    * Initial setup
    * Default value: This.oControl = "This.Parent"
    * Alternative:   This.oControl = "_Screen"
    *====================================================================
    With This
        .oControl = Evaluate(.oControl)
        If Vartype(.oControl) == T_OBJECT
            .Anchor = 0
            .Left = 0
            .Top = 0
            .Width = .oControl.Width
            .Height = .oControl.Height
            .Anchor = 15
            .ZOrder(0)
            .Visible = .F.
        EndIf
    Endwith

Also, based on other developers' comments on Bernard's articles about his lightbox concept and its evolution, I found the source code to handle the differences between a form and _Screen; it goes into Lightbox.ShowLightbox() like this:

    *====================================================================
    * tlVisibility - show or hide (T/F)
    * grab a screen dump with controls
    *====================================================================
    Lparameters tlVisibility
    Local loControl
    m.loControl = This.oControl
    If m.tlVisibility
        Local loCaptureBmp As xfcBitmap
        Local lnTitleHeight, lnLeftBorder, lnTopBorder, lcImage, loImage
        lnTitleHeight = Iif(m.loControl.TitleBar = 1,Sysmetric(9),0)
        lnLeftBorder = Iif(m.loControl.BorderStyle < 2,0,Sysmetric(3))
        lnTopBorder = Iif(m.loControl.BorderStyle < 2,0,Sysmetric(4))
        With _Screen.System.Drawing
            If Upper(m.loControl.Name) == Upper("Screen")
                loCaptureBmp = .Bitmap.FromScreen(m.loControl.HWnd)
            Else
                loCaptureBmp = .Bitmap.FromScreen(m.loControl.HWnd,;
                    lnLeftBorder,;
                    lnTopBorder+lnTitleHeight,;
                    m.loControl.Width,;
                    m.loControl.Height)
            EndIf
        Endwith
        * save it to a property
        This.CaptureBmp = loCaptureBmp
        m.loControl.SetAll("Visible",.F.)
        This.Draw()
        This.Visible = .T.
    Else
        This.CaptureBmp = .Null.
        m.loControl.SetAll("Visible",.T.)
        This.Visible = .F.
    Endif

Are we done? Almost... Although Bernard says it clearly in his article: "Just drop the class on a form and call it as shown," it did not become clear to me at first with _Screen, but, yeah, he is right. Dropping the class on a form provides a permanent link between those two classes: it creates a valid This.Parent object reference. Bearing in mind that the lightbox class cannot be "dropped" on _Screen, we have to create the same type of binding during runtime execution like so:

    *====================================================================
    * Create global lightbox component
    *====================================================================
    Local llOk, loException As Exception
    m.llOk = .F.
    m.loException = .Null.
    If Not Vartype(_Screen.Lightbox) == "O"
        Try
            _Screen.AddObject("Lightbox", "bbGdiLightbox")
        Catch To m.loException
            Assert .F. Message m.loException.Message
        EndTry
    EndIf
    m.llOk = (Vartype(_Screen.Lightbox) == "O")
    Return m.llOk

Through runtime instantiation we create a valid binding to This.Parent in the lightbox object, and the code works as expected with _Screen.

Ease your life: Use properties instead of constants

Having a closer look at the BeforeDraw() method might whet your appetite to simplify the code a little bit. Looking at the sample screenshots in Bernard's article you see several forms in different colors. This got me to modify the code like so:

    *====================================================================
    * Apply the actual lightbox effect on the captured bitmap.
    *====================================================================
    If Vartype(This.CaptureBmp) == T_OBJECT
        Local loGfx As xfcGraphics
        loGfx = This.oGfx
        With _Screen.System.Drawing
            loGfx.DrawImage(This.CaptureBmp,This.Rectangle,This.Rectangle,.GraphicsUnit.Pixel)
            * change the colours as needed here
            * possible colours are (220,128,0,0), (220,0,0,128) etc.
            loBrush = .SolidBrush.New(.Color.FromArgb( ;
                This.Opacity, .Color.FromRGB(This.BorderColor)))
            loGfx.FillRectangle(loBrush,This.Rectangle)
        Endwith
    Endif

Create an additional property Opacity to specify the degree of translucency you would like to have, without the need to change the code in each instance of the class. This way you only need to change the values of Opacity and BorderColor to tweak the appearance of your lightbox. This could be quite helpful to signal different levels of importance (i.e. green, yellow, orange, red, etc.) of notifications to the users of the application.

Final thoughts

Using the lightbox concept in combination with _Screen instead of forms is possible. Jim Wiggins already suggests in the comments on Bernard's article looping through the _Screen.Forms collection in order to cascade the lightbox visibility to all active forms. Good idea. But honestly, I believe that instead of looping over all forms one could use _Screen.SetAll("ShowLightbox", .T./.F., "Form") with a Form.ShowLightbox_Access method to gain more speed. The modifications described above might provide even more features to your applications while consuming fewer resources. Additionally, the restriction to capture only forms no longer exists. Using _Screen you are able to capture and cover anything. The captured area of _Screen does not include any toolbars, docked windows, or menus. Therefore, it is advisable to take this concept to a higher level and combine it with additional classes that handle the state of toolbars, docked windows, and menus. Which I did for the customer's project.
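To round this off, here is a short usage sketch of my own (it is not part of Bernard's article) showing how the runtime-instantiated lightbox might be toggled; it only relies on the members introduced above.

    * Hypothetical usage once _Screen.Lightbox has been created at runtime
    If Vartype(_Screen.Lightbox) == "O"
        _Screen.Lightbox.Opacity = 220        && degree of translucency (new property)
        _Screen.Lightbox.ShowLightbox(.T.)    && dim the whole _Screen
        * ... display the modal notification form here ...
        _Screen.Lightbox.ShowLightbox(.F.)    && restore the screen
    EndIf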

    Read the article

  • How To Remove People and Objects From Photographs In Photoshop

    - by Eric Z Goodnight
You might think that it’s a complicated process to remove objects from photographs. But really, Photoshop makes it quite simple, even when removing all traces of a person from digital photographs. Read on to see just how easy it is. Photoshop was originally created to be an image editing program, and it excels at it. With hardly any Photoshop experience, any beginner can begin removing objects or people from their photos. Have some friends that photobombed an otherwise great pic? Tell them to say their farewells, because here’s how to get rid of them with Photoshop!

Tools for Removing Objects

Removing an object is not really “magical” work. Your goal is basically to cover up the information you don’t want in an image with information you do want. In this sample image, we want to remove the cigar-smoking man and leave the geisha. Here are a couple of the tools that can be useful to work with when attempting this kind of task.

* Clone Stamp and Pattern Stamp Tool: Samples parts of your image from your background, and allows you to paint into your image with your mouse or stylus.
* Eraser and Brush Tools: Paint flat colors and shapes, and erase cloned layers of image information. Basic, down and dirty photo editing tools.
* Pen, Quick Selection, Lasso, and Crop tools: Select, isolate, and remove parts of your image with these selection tools. All useful in their own way. Some, like the Pen tool, are nightmarishly tough on beginners.

Remove a Person with the Clone Stamp Tool (Video)

The video above uses the Clone Stamp tool to sample and paint with the background texture. It’s a simple tool to use, although it can be confusing, possibly counter-intuitive. Here are some pointers, in addition to the video above.

* Press the shortcut key S to choose the Clone Stamp tool from the Tools Panel.
* Always create a copy of your background layer before doing heavy edits by right-clicking on the background in your Layers Panel and selecting “Duplicate.”
* Hold Alt with the Clone Stamp tool selected, and click anywhere in your image to sample that area.
* When you’re sampling an area, your cursor is “Aligned” with your sample area. When you paint, your sample area moves. You can turn the “Aligned” setting off in the Options Panel at the top of your screen if you want.
* Change your brush size and hardness as shown in the video by right-clicking in your image.
* Use your Lasso to copy and paste pieces of your image in order to cover up any parts where that seems appropriate.

Photoshop Magic with the “Content-Aware Fill”

One of the hallmark features of CS5 is the “Content-Aware Fill.” Content-Aware Fill can be an excellent shortcut to removing objects and even people in Photoshop, but it is somewhat limited and can get confused. Here’s a basic rundown on how it works.

* Select an object using your Lasso tool, shortcut key L. The Lasso works fine, as this selection can be rough.
* Navigate to Edit > Fill, and select “Content-Aware,” as illustrated above, from the pull-down menu.
* It’s surprisingly simple. After some processing, Photoshop has done the work of removing the object for you. It takes a few moments, and it is not perfect, so be prepared to touch it up with some copy-paste or some Clone Stamp action.

Content-Aware Fill Has Its Limits

Keep in mind that Content-Aware Fill is meant to be used alongside other techniques. It doesn’t always perform perfectly, but can give you a great starting point. Take this image, for instance. 
It is actually plausible to hide this figure and make this image look like he was never there at all. With a selection made with the Lasso tool, navigate to Edit > Fill and select “Content-Aware” again. The result is surprisingly good but, as you can see, worthy of some touch-up. With a result like this one, you’ll have to get your hands dirty with copy-paste to create believable lines in the background. With many photographs, Content-Aware Fill will simply get confused and give you results you won’t be happy with.

Additional Touch-Up for Bad Background Textures with the Pattern Stamp Tool

For the perfectionist, cleaning up the lumpy-looking textures that the Clone Stamp can leave is fairly simple using the Pattern Stamp Tool.

* Sample a piece of your image with your Marquee Tool, shortcut key M.
* Navigate to Edit > Define Pattern to create a new Pattern from your selection. Click OK to continue.
* Click and hold down on the Clone Stamp tool in your Tools Panel until you can select the Pattern Stamp Tool.
* Pick your new pattern from the Options at the top of your screen, in the Options Panel.
* Then simply right-click in your image in order to pick as soft a brush as possible to paint with.
* Paint into your image until your background is as smooth as you want it to be, making your painted-out object more and more invisible.
* If you get lines from your repeated texture, experiment with turning the “Aligned” setting on and off and paint over them.

In addition to this, simple use of the Crop Tool, shortcut C, can recompose an image, making it look as if it never had another object in it at all. Combine these techniques to find a method that works best for your images.

Have questions or comments concerning Graphics, Photos, Filetypes, or Photoshop? Send your questions to [email protected], and they may be featured in a future How-To Geek Graphics article.

Image Credits: Geisha Kyoto Gion by Todd Laracuenta via Wikipedia, used under Creative Commons. Moai Rano raraku by Aurbina, in Public Domain. Chris Young visits Wrigley by TonyTheTiger, via Wikipedia, used under Creative Commons.

    Read the article

  • Charms and the App Bar

    - by Dennis Vroegop
    Ok. I admit. I made a mistake in the last post about our planespotter app. I have dedicated a full part of the hub to Social. I also had a section called Friends but that made sense since I said that “Friends” is a special group of people that connect to each other through our app and only our app. Social however is sharing our spots with Twitter, Facebook and so on. Now, we could write that functionality in our app in a different section but there is one small problem with that: users don’t expect that. Ok, I admit. The mistake was quite deliberate to give me an excuse to write this part. But still: the mistake is one I see a lot. People are trying to do stuff in their application that they shouldn’t be doing. This always strike me as slightly odd: why do some work when others have already done it for you and you can just use it? After all: good developers are lazy (lazy people will always try to find the easiest way to do something and in development land this usually means the cleanest and best to support way…) So. What is that part that Microsoft has done for us and we don’t have to do ourselves? The answer lies on the right hand of your Win8 screen: This is a screenshot of my tablet (as you can see I am writing this right now….) When I swipe my finger from out of the screen on the right inside the screen (or move the mouse to the upper right corner) this menu will appear. Next to settings and the start menu button we’ll find the Search and the Share charms. These are two ways that your app can share the information it contains with the rest of the world, or at least: the rest of your system. So don’t write a Search feature in your app. Don’t write a Share feature in your app. It’s here already. Users, once they are used to Windows 8, will use that feature and expect it to work. If it doesn’t, they won’t like your app and you can kiss you dreams of everlasting fame goodbye. So use these two. What are they? Well, simply they are parts of a contract. In your app you say somewhere in code that you are supporting Search and Share. So when the user selects Share the system will interrogate the current app in the foreground if it supports this feature. Your app will say “But why, yes, I do!” Then the system will ask the app “Ok then, wisecrack, then share!” and you will have to provide the system with some information about the format. Other applications have subscribed to be at the receiving end of the Share contract. They have told the system that they support Sharing (receiving) and which formats they understand. If one or more of them support the formats you specify, the user will see them. The user clicks / taps on the app of their choice and data is moved from your app to the new one. So if you say you support Facebook and Twitter users can post data from your app to these networks by selecting Share. The same applies to Search. Don’t make a “search” button in your app but use the contract to tell the system that you support search and use that instead. Users will be grateful (remember that bar with men/women/creatures that are waiting for you?) The more and more people get to know Windows 8, the more they will use this. And if you are one of the people who wrote an app that helped them learn the system, well, that’s even better. So. We don’t have a Share or a Search button. We do have other buttons. Most important: we probably need a “New Spot” button. And a “Filter” might be useful. Or someway to open the camera so you can add a picture to the spot. Where will be put those? 
The answer is the “Appbar” . This is a application / context aware menu that slides up from the bottom of the screen when you move your finger / mouse from below the screen into it. From above downwards works just as well. Here you see an example of the appbar from the People app. (click on it for a larger version). This appears whenever you slide your finger up from below of down from above. This is where you put your commands. Remember, this is context aware so this menu will change when you are in different parts of your app or when you have selected different items. There are a few conventions when you create this appbar. First, the items on the right are “General” items, meaning they have little to do with what is on the screen right now. I think this would be a great place to add our “New Spot” icon. On the far left are items associated with the current selected item or screen. So if you have a spot selected, the button for Add Photo should be visible here and on the left hand side. Not everything is as clear as this, but this is what you should strive for. Group items together. And please note: this is the only place in Metro design where we are allowed to use lines as separators. So when you want to separate a group of icons from another group, add a line. Also note the simplicity of the buttons. No colors, no lights or shadows, no 3D. After a couple of years of fancy almost realistic looking icons people have finally decided that hey, this is a virtual world: it’s ok to look virtual as well. So make things as readable and clear as possible and don’t try to duplicate nature. It’s all about the information, remember? (If you don’t remember I’d like to point you to a older blog post of mine about the what and why of Metro). So.. think about the buttons a bit and think about Share and Search. What will you put there? Remember: this is the way the users interact with your apps and while you shouldn’t judge a book by its covers when it comes to people, this isn’t entirely so when it comes to apps. People DO judge an app by its looks and the way it feels. Take advantage of that. History has learned that a crappy app with a GREAT user interface gets better reviews than a GREAT app with a lousy UI… I know: developers will find this extremely unfair but that’s the world we live in (No, I am not saying you should deliver rubbish apps). Next time: we’ll start by building the darn thing!

    Read the article

  • Personal Development : Time, Planning , Repairs & Maintenance

    - by Rajesh Pillai
    Personal Development : Time, Planning, Repairs & Maintenance These are just my thoughts, but some you may find something interesting in it. Please think over it. We may know many things, but still we always keeps procrastinating it. I have written this as I have heard many people coming back and saying they don’t have time to do things they like. These are my thoughts buy may be useful to someone else too. Certain things in life needs periodic repairs and maintenance. To cite some examples , your CAR, your HOUSE, your personal laptop/desktop, your health etc. Likewise there are certain other things in professional life that requires repair/ maintenance /or some kind of polishing, so that you always stay on top of it. But they are not always obvious. Some of them are - Improving your communication skills - Increasing your vocabulary - Upgrading your technical skills - Pursuing your hobby - Increasing your knowledge/awareness etc… etc… And then there are certain things that we are always short of…. one is TIME. We all know TIME is one of the most precious things in life and yet we all are very miserable at managing it. Remember you can only manage it and not control it. You can only control which you own or which you create. In theory time is infinite. So, there should be abundant of it. But remember one thing, you know this, it’s not reversible. Once it has elapsed you cannot live it again. Think over it. So, how do find that golden 25th hour every day. To find the 25th hour you need to reflect back on your current daily activities. Analyze them and see where you are spending most of your time and is it really important. Even the 8 hours that you spent in the office, is it spent fruitfully. At the end of the day is the 8 precious hour that you spent was worth it. Just reflect back on your activities. Did you learn something? If yes did you make a point to NOTE IT. If you didn’t NOTED it then was the time you spent really worth it. Just ponder over it. Some calculations of your daily activities where most of the time is spent. Let’s start (in no particular order though) - Sleep (6.5 hours) [Remember you only require 6 good hours of sleep every day]. Some may thing it is 8, but it’s a myth.   o To achive 6 hours of sleep and be in good health you can practice 15 minutes of daily meditation. So effectively you can    round it to 6.5 hours. - Morning chores(2 hours) : Some may need to prepare breakfast and all other things. - Office commuting (avg. to and fro 3 hours) - Office Work (avg 9.5 hours) Total Hours: 21 hours effective time which is spent irrespective of what you do. There may be some variations here and there. Still you have 3 hours EXTRA. Where do these 3 hours go? If you can find it, then you may get that golden 25th hour out of these 3 hours. Let’s discount 2 hours for contingencies, still you have 1 hour with you. If you can’t find it then you are living a direction less life. As you can see, the 25th Hour lies within the 24 hours of the day. It’s upto each one of us to find and make use of it. Now what can you do with that 25th hour i.e. 1 hour extra of your life. Imagine the possibility. Again some calculations 1 hour daily * 30 days = 30 hours every month 30 hours pm * 12 month = 360 hours every year. 360 hours every year seems very promising. Let’s add some contingencies, say, let’s be optimistic and say 50 % contingency. Still you have 180 hours every year. That leaves with 30 minutes every day of extra time. That’s hell a lot of time, if you could manage it. 
These may sound like a high talk [yes, it is, unless you apply these simple rules and rationalize your everyday living and stop procrastinating]. NOTE: I haven’t taken weekend, holidays and leaves into account. So, that leaves us with a lot of buffer time. You can meet family friends, relatives, other tasks, and yet have these 180 pure hours of joy every year. Do whatever you want to do with it. So, how important is this 180 hours per year to you? Just think over it. You may use it the way you like - 50 hours [pursue your hobby like drawing, crafting, learn dance, learn juggling, learn swimming, travelling hmm.. anything you like doing and you didn’t had time to do it.] - 30 hours you can learn a new programming language or technology (i.e. you can get comfortable with it) - 50 hours [improve existing skills] - 20 hours [improve you communication skill]. Do some light reading. - 30 hours [YOU DECIDE WHAT TO DO]? So, if you had done this for one year you would have learnt a new programming language, upgraded existing skills, improved you communication etc.. If you had done this for two years.. imagine the level of personal development or growth which you may have attained….. If you had done this for three years….. NOW I think I don’t need to mention this… So, you still have TIME, as they say TIME is infinite. So, make judicious use of this precious thing. And never ever comeback saying “I don’t have time”. So, if you are RICH in TIME, everything else will be automatically taken care of, as those things may just be a byproduct of how you spend your time… So, happy TIMING your TIME everyday.

    Read the article

  • FOUR questions to ask if you are implementing DATABASE-AS-A-SERVICE

    - by Sudip Datta
    During my ongoing tenure at Oracle, I have met all types of DBAs. Happy DBAs, unhappy DBAs, proud DBAs, risk-loving DBAs, cautious DBAs. These days, as Database-as-a-Service (DBaaS) becomes more mainstream, I find some complacent DBAs who are basking in their achievement of having implemented DBaaS. Some others, however, are not that happy. They grudgingly complain that they did not have much of a say in the implementation, they simply had to follow what their cloud architects (mostly infrastructure admins) offered them. In most cases it would be a database wrapped inside a VM that would be labeled as “Database as a Service”. In other cases, it would be existing brute-force automation simply exposed in a portal. As much as I think that there is more to DBaaS than those approaches and often get tempted to propose Enterprise Manager 12c, I try to be objective. Neither do I want to dampen the spirit of the happy ones, nor do I want to stoke the pain of the unhappy ones. As I mentioned in my previous post, I don’t deny vanilla automation could be useful. I like virtualization too for what it has helped us accomplish in terms of resource management, but we need to scrutinize its merit on a case-by-case basis and apply it meaningfully. For DBAs who either claim to have implemented DBaaS or are planning to do so, I simply want to provide four key questions to ponder about: 1. Does it make life easier for your end users? Database-as-a-Service can have several types of end users. Junior DBAs, QA Engineers, Developers- each having their own skillset. The objective of DBaaS is to make their life simple, so that they can focus on their core responsibilities without having to worry about additional stuff. For example, if you are a Developer using Oracle Application Express (APEX), you want to deal with schema, objects and PL/SQL code and not with datafiles or listener configuration. If you are a QA Engineer needing database copies for functional testing, you do not want to deal with underlying operating system patching and compliance issues. The question to ask, therefore, is, whether DBaaS makes life easier for those users. It is often convenient to give them VM shells to deal with a la Amazon EC2 IaaS, but is that what they really want? Is it a productive use of a developer's time if he needs to apply RPM errata to his Linux operating system. Asking him to keep the underlying operating system current is like making a guest responsible for a restaurant's decor. 2. Does it make life easier for your administrators? Cloud, in general, is supposed to free administrators from attending to mundane tasks like provisioning services for every single end user request. It is supposed to enable a readily consumable platform and enforce standardization in the process. For example, if a Service Catalog exposes DBaaS of specific database versions and configurations, it, by its very nature, enforces certain discipline and standardization within the IT environment. What if, instead of specific database configurations, cloud allowed each end user to create databases of their liking resulting in hundreds of version and patch levels and thousands of individual databases. Therefore the right question to ask is whether the unwanted consequence of DBaaS is OS and database sprawl. And if so, who is responsible for tracking them, backing them up, administering them? Studies have shown that these administrative overheads increase exponentially with new targets, and it could result in a management nightmare. 
That leads us to our next question. 3. Does it satisfy your Security Officers and Compliance Auditors? Compliance Auditors need to know who did what and when. They also want the cloud platform to be secure, so that end users have little freedom in tampering with it. Dealing with VM sprawl is not the easiest of challenges, let alone dealing with them as they keep getting reconfigured and moved around. This leads to the proverbial needle in the haystack problem, and all it needs is one needle to cause a serious compliance issue in the enterprise. Bottomline is, flexibility and agility should not come at the expense of compliance and it is very important to get the balance right. Can we have security and isolation without creating compliance challenges? Instead of a ‘one size fits all approach’ i.e. OS level isolation, can we think smartly about database isolation or schema based isolation? This is where the appropriate resource modeling needs to be applied. The usual systems management vendors out there with heterogeneous common-denominator approach have compromised on these semantics. If you follow Enterprise Manager’s DBaaS solution, you will see that we have considered different models, not precluding virtualization, for different customer use cases. The judgment to use virtual assemblies versus databases on physical RAC versus Schema-as-a-Service in a single database, should be governed by the need of the applications and not by putting compliance considerations in the backburner. 4. Does it satisfy your CIO? Finally, does it satisfy your higher ups? As the sponsor of cloud initiative, the CIO is expected to lead an IT transformation project, not merely a run-of-the-mill IT operations. Simply virtualizing server resources and delivering them through self-service is a good start, but hardly transformational. CIOs may appreciate the instant benefit from server consolidation, but studies have revealed that the ROI from consolidation would flatten out at 20-25%. The question would be: what next? As we go higher up in the stack, the need to virtualize, segregate and optimize shifts to those layers that are more palpable to the business users. As Sushil Kumar noted in his blog post, " the most important thing to note here is the enterprise private cloud is not just an IT project, rather it is a business initiative to create an IT setup that is more aligned with the needs of today's dynamic and highly competitive business environment." Business users could not care less about infrastructure consolidation or virtualization - they care about business agility and service level assurance. Last but not the least, lot of CIOs get miffed if we ask them to throw away their existing hardware investments for implementing DBaaS. In Oracle, we always emphasize on freedom of choosing a platform; hence Enterprise Manager’s DBaaS solution is platform neutral. It can work on any Operating System (that the agent is certified on) Oracle’s hardware as well as 3rd party hardware. As a parting note, I urge you to remember these 4 questions. Remember that your satisfaction as an implementer lies in the satisfaction of others.

    Read the article

  • Real Excel Templates I

    - by Tim Dexter
As promised, I'm starting to document the new Excel templates that I teased you all with a few weeks back. Leslie is buried in 11g documentation and will not get to officially documenting the templates for a while. I'll do my best to be professional and not ramble on about this and that, although the weather here has finally turned and it's 'scorchio' here in Colorado today. Maybe our stand of Aspen will finally come into leaf ... but I digress. Preamble These templates are not actually that new; I helped in a small way to develop them a few years back with Excel 'meistress' Shirley for a company that was trying to use the Report Manager (RR) Excel FSG outputs under EBS 12. The functionality they needed was just not there in the RR FSG templates; those templates are actually XSL that is created from the RR Excel template builder and fed to BIP for processing. Think of Excel from our RTF templates and you'll be there, i.e. not really Excel but HTML masquerading as Excel. Although still under controlled release in EBS, they have now made their way to the standalone release and are willing to share their Excel goodness. You get everything you have with the Excel Analyzer templates plus so much more. Therein lies a question: what will happen to the Analyzer templates? My understanding is that both will come together into a single Excel template format some time in the post-11g release world. The new XLSX format for Excel 2007/10 is also in the mix, so watch this space. What more do these templates offer? Well, you can structure data in the Excel output. Similar to RTF templates, you can create sheets of data that have master-detail relationships. Although the Analyzer templates can do this, you have to get into macros, whereas BIP will do this all for you. You can also use native XSL functions in your data to manipulate it prior to rendering. BIP functions are not currently supported. The most impressive feature, for me at least, is the sheet 'bursting'. You can split your hierarchical data across multiple sheets and dynamically name those sheets. Finally, you, of course, still get all the native Excel functionality. Pre-reqs You must be on 10.1.3.4.1 plus the latest rollup patch, 9546699. You can patch up a BIP instance running with OBIEE, no problem. You need Excel 2000 or above to build the templates. Some patience - there is no Excel template builder for these new templates, so it's all going to have to be done by hand. It's not that tough but can get a little 'fiddly'. You cannot test the template from Excel; it has to be deployed and then run. Limitations The new templates are definitely superior to the Analyzer templates but there are a few limitations. Re-grouping is not supported. You can only follow a data hierarchy, not bend it to your will, unless you want to get into macros. No support for BIP functions. The templates support native XSL functions only. No template builder. Getting Started The templates make use of named cells and groups of cells to allow BIP to find the insertion point for data points. They also use a hidden sheet to store calculation mappings from named cells to XML data elements. To start with, in the great BIP tradition, we need some sample XML data. Because I wanted to show the master-detail output we need some hierarchical data. If you have not yet gotten into the data templates, now is a good time; I wrote a post a while back starting from the simple to the more complex. They generate ideal data sets for these templates. 
Im working with the following data set: <EMPLOYEES> <LIST_G_DEPT> <G_DEPT> <DEPARTMENT_ID>10</DEPARTMENT_ID> <DEPARTMENT_NAME>Administration</DEPARTMENT_NAME> <LIST_G_EMP> <G_EMP> <EMPLOYEE_ID>200</EMPLOYEE_ID> <EMP_NAME>Jennifer Whalen</EMP_NAME> <EMAIL>JWHALEN</EMAIL> <PHONE_NUMBER>515.123.4444</PHONE_NUMBER> <HIRE_DATE>1987-09-17T00:00:00.000-06:00</HIRE_DATE> <SALARY>4400</SALARY> </G_EMP> </LIST_G_EMP> <TOTAL_EMPS>1</TOTAL_EMPS> <TOTAL_SALARY>4400</TOTAL_SALARY> <AVG_SALARY>4400</AVG_SALARY> <MAX_SALARY>4400</MAX_SALARY> <MIN_SALARY>4400</MIN_SALARY> </G_DEPT> ... <LIST_G_DEPT> <EMPLOYEES> Simple enough to follow and bread and butter stuff for an RTF template. Building the Template For an Excel template we need to start by thinking about how we want to render the data. Come up with a sample output in Excel. Its all dummy data, nothing marked up yet with one row of data for each level. I have the department name and then a repeating row for the employees. You can apply Excel formatting to the layout. The total is going to be derived from a data element. We'll get to Excel functions later. Marking Up Cells Next we need to start marking up the cells with custom names to map them to data elements. The cell names need to follow a specific format: For data grouping, XDO_GROUP_?group_name? For data elements, XDO_?element_name? Notice the question mark delimter, the group_name and element_name are case sensitive. The next step is to find how to name cells; the easiest method is to highlight the cell and then type in the name. You can also find the Name Manager dialog. I use 2007 and its available on the ribbon under the Formulas section Go thorugh the process of naming all the cells for the element values you have. Using my data set from above.You should end up with something like this in your 'Name Manager' dialog. You can update any mistakes you might have made through this dialog. Creating Groups In the image above you can see there are a couple of named group cells. To create these its a simple case of highlighting the cells that make up the group and then naming them. For the EMP group, highlight the employee row and then type in the name, XDO_GROUP?G_EMP? Notice the 10,000 total is outside of the G_EMP group. Its actually named, XDO_?TOTAL_SALARY?, a query calculated value. For the department group, we need to include the department name cell and the sub EMP grouping and name it, XDO_GROUP?G_DEPT? Notice, the 10,000 total is included in the G_DEPT group. This will ensure it repeats at the department level. Lastly, we do need to include a special sheet in the workbook. We will not have anything meaningful in there for now, but it needs to be present. Create a new sheet and name it XDO_METADATA. The name is important as the BIP rendering engine will looking for it. For our current example we do not need anything other than the required stuff in our XDO_METADATA sheet but, it must be present. Easy enough to hide it. Here's what I have: The only cell that is important is the 'Data Constraints:' cell. The rest is optional. To save curious users getting distracted, hide the metadata sheet. Deploying & Running Templates We should now have a usable Excel template. Loading it into a report is easy enough using the browser UI, just like an RTF template. Set the template type to Excel. You will now be able to run the report and hopefully get something like this. You will not get the red highlighting, thats just some conditional formatting I added to the template using Excel functionality. 
Your dates are probably going to look raw too. I got around this for now using an Excel function on the cell: =--REPLACE(SUBSTITUTE(E8,"T"," "),LEN(E8)-6,6,"") Google to the rescue on that one. Try some other stuff out. To avoid constantly loading the template through the UI: if you have BIP running locally, or you can access the reports repository, then once you have loaded the template the first time just save the template directly into the report folder. I have put together a sample report using a sample data set, available here. Just drop the XML data file, EmpbyDeptExcelData.xml, into the 'demo files' folder and you should be good to go. That's the basics; next we'll start using some XSL functions in the template and move on to the 'bursting' across sheets.
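One footnote of my own (not from Tim's post): if you control the data template query, you can sidestep the Excel date gymnastics entirely by formatting the date as text in the SQL itself, so it arrives ready to display. A minimal sketch, assuming an Oracle HR-style DEPARTMENTS/EMPLOYEES source; the table names, column aliases and date mask are my own illustrative choices, so adjust them to your data template:

-- Hypothetical data template query: hand HIRE_DATE to the template as a
-- pre-formatted string so no cell formula is needed in the workbook.
SELECT d.department_id,
       d.department_name,
       e.employee_id,
       e.first_name || ' ' || e.last_name       AS emp_name,
       e.email,
       e.phone_number,
       TO_CHAR(e.hire_date, 'DD-MON-YYYY')      AS hire_date_text,
       e.salary
FROM   departments d
JOIN   employees   e ON e.department_id = d.department_id
ORDER  BY d.department_id, e.employee_id;

The cell would then be named following the convention above, e.g. XDO_?HIRE_DATE_TEXT? (HIRE_DATE_TEXT being my hypothetical element name), and the value renders as-is.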

    Read the article

  • SQL SERVER – Guest Post – Jonathan Kehayias – Wait Type – Day 16 of 28

    - by pinaldave
    Jonathan Kehayias (Blog | Twitter) is a MCITP Database Administrator and Developer, who got started in SQL Server in 2004 as a database developer and report writer in the natural gas industry. After spending two and a half years working in TSQL, in late 2006, he transitioned to the role of SQL Database Administrator. His primary passion is performance tuning, where he frequently rewrites queries for better performance and performs in depth analysis of index implementation and usage. Jonathan blogs regularly on SQLBlog, and was a coauthor of Professional SQL Server 2008 Internals and Troubleshooting. On a personal note, I think Jonathan is extremely positive person. In every conversation with him I have found that he is always eager to help and encourage. Every time he finds something needs to be approved, he has contacted me without hesitation and guided me to improve, change and learn. During all the time, he has not lost his focus to help larger community. I am honored that he has accepted to provide his views on complex subject of Wait Types and Queues. Currently I am reading his series on Extended Events. Here is the guest blog post by Jonathan: SQL Server troubleshooting is all about correlating related pieces of information together to indentify where exactly the root cause of a problem lies. In my daily work as a DBA, I generally get phone calls like, “So and so application is slow, what’s wrong with the SQL Server.” One of the funny things about the letters DBA is that they go so well with Default Blame Acceptor, and I really wish that I knew exactly who the first person was that pointed that out to me, because it really fits at times. A lot of times when I get this call, the problem isn’t related to SQL Server at all, but every now and then in my initial quick checks, something pops up that makes me start looking at things further. The SQL Server is slow, we see a number of tasks waiting on ASYNC_IO_COMPLETION, IO_COMPLETION, or PAGEIOLATCH_* waits in sys.dm_exec_requests and sys.dm_exec_waiting_tasks. These are also some of the highest wait types in sys.dm_os_wait_stats for the server, so it would appear that we have a disk I/O bottleneck on the machine. A quick check of sys.dm_io_virtual_file_stats() and tempdb shows a high write stall rate, while our user databases show high read stall rates on the data files. A quick check of some performance counters and Page Life Expectancy on the server is bouncing up and down in the 50-150 range, the Free Page counter consistently hits zero, and the Free List Stalls/sec counter keeps jumping over 10, but Buffer Cache Hit Ratio is 98-99%. Where exactly is the problem? In this case, which happens to be based on a real scenario I faced a few years back, the problem may not be a disk bottleneck at all; it may very well be a memory pressure issue on the server. A quick check of the system spec’s and it is a dual duo core server with 8GB RAM running SQL Server 2005 SP1 x64 on Windows Server 2003 R2 x64. Max Server memory is configured at 6GB and we think that this should be enough to handle the workload; or is it? This is a unique scenario because there are a couple of things happening inside of this system, and they all relate to what the root cause of the performance problem is on the system. 
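For readers who want to reproduce those quick checks, here is a minimal T-SQL sketch (my addition, not taken from Jonathan's post; it uses only standard DMV columns) that surfaces the top waits and the per-file read/write stall rates described above:

-- Top waits by total wait time (filter out benign waits as needed)
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM   sys.dm_os_wait_stats
ORDER  BY wait_time_ms DESC;

-- Average read/write stall per I/O for every database file
SELECT DB_NAME(vfs.database_id)                              AS database_name,
       vfs.file_id,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)   AS avg_read_stall_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0)  AS avg_write_stall_ms
FROM   sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
ORDER  BY avg_read_stall_ms DESC;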
If we were to query sys.dm_exec_query_stats for the TOP 10 queries, by max_physical_reads, max_logical_reads, and max_worker_time, we may be able to find some queries that were using excessive I/O and possibly CPU against the system in their worst single execution. We can also CROSS APPLY to sys.dm_exec_sql_text() and see the statement text, and also CROSS APPLY sys.dm_exec_query_plan() to get the execution plan stored in cache. Ok, quick check, the plans are pretty big, I see some large index seeks, that estimate 2.8GB of data movement between operators, but everything looks like it is optimized the best it can be. Nothing really stands out in the code, and the indexing looks correct, and I should have enough memory to handle this in cache, so it must be a disk I/O problem right? Not exactly! If we were to look at how much memory the plan cache is taking by querying sys.dm_os_memory_clerks for the CACHESTORE_SQLCP and CACHESTORE_OBJCP clerks we might be surprised at what we find. In SQL Server 2005 RTM and SP1, the plan cache was allowed to take up to 75% of the memory under 8GB. I’ll give you a second to go back and read that again. Yes, you read it correctly, it says 75% of the memory under 8GB, but you don’t have to take my word for it, you can validate this by reading Changes in Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2. In this scenario the application uses an entirely adhoc workload against SQL Server and this leads to plan cache bloat, and up to 4.5GB of our 6GB of memory for SQL can be consumed by the plan cache in SQL Server 2005 SP1. This in turn reduces the size of the buffer cache to just 1.5GB, causing our 2.8GB of data movement in this expensive plan to cause complete flushing of the buffer cache, not just once initially, but then another time during the queries execution, resulting in excessive physical I/O from disk. Keep in mind that this is not the only query executing at the time this occurs. Remember the output of sys.dm_io_virtual_file_stats() showed high read stalls on the data files for our user databases versus higher write stalls for tempdb? The memory pressure is also forcing heavier use of tempdb to handle sorting and hashing in the environment as well. The real clue here is the Memory counters for the instance; Page Life Expectancy, Free List Pages, and Free List Stalls/sec. The fact that Page Life Expectancy is fluctuating between 50 and 150 constantly is a sign that the buffer cache is experiencing constant churn of data, once every minute to two and a half minutes. If you add to the Page Life Expectancy counter, the consistent bottoming out of Free List Pages along with Free List Stalls/sec consistently spiking over 10, and you have the perfect memory pressure scenario. All of sudden it may not be that our disk subsystem is the problem, but is instead an innocent bystander and victim. Side Note: The Page Life Expectancy counter dropping briefly and then returning to normal operating values intermittently is not necessarily a sign that the server is under memory pressure. The Books Online and a number of other references will tell you that this counter should remain on average above 300 which is the time in seconds a page will remain in cache before being flushed or aged out. This number, which equates to just five minutes, is incredibly low for modern systems and most published documents pre-date the predominance of 64 bit computing and easy availability to larger amounts of memory in SQL Servers. 
As food for thought, consider that my personal laptop has more memory in it than most SQL Servers did at the time those numbers were posted. I would argue that today, a system churning the buffer cache every five minutes is in need of some serious tuning or a hardware upgrade. Back to our problem and its investigation: There are two things really wrong with this server; first the plan cache is excessively consuming memory and bloated in size and we need to look at that and second we need to evaluate upgrading the memory to accommodate the workload being performed. In the case of the server I was working on there were a lot of single use plans found in sys.dm_exec_cached_plans (where usecounts=1). Single use plans waste space in the plan cache, especially when they are adhoc plans for statements that had concatenated filter criteria that is not likely to reoccur with any frequency.  SQL Server 2005 doesn’t natively have a way to evict a single plan from cache like SQL Server 2008 does, but MVP Kalen Delaney, showed a hack to evict a single plan by creating a plan guide for the statement and then dropping that plan guide in her blog post Geek City: Clearing a Single Plan from Cache. We could put that hack in place in a job to automate cleaning out all the single use plans periodically, minimizing the size of the plan cache, but a better solution would be to fix the application so that it uses proper parameterized calls to the database. You didn’t write the app, and you can’t change its design? Ok, well you could try to force parameterization to occur by creating and keeping plan guides in place, or we can try forcing parameterization at the database level by using ALTER DATABASE <dbname> SET PARAMETERIZATION FORCED and that might help. If neither of these help, we could periodically dump the plan cache for that database, as discussed as being a problem in Kalen’s blog post referenced above; not an ideal scenario. The other option is to increase the memory on the server to 16GB or 32GB, if the hardware allows it, which will increase the size of the plan cache as well as the buffer cache. In SQL Server 2005 SP1, on a system with 16GB of memory, if we set max server memory to 14GB the plan cache could use at most 9GB  [(8GB*.75)+(6GB*.5)=(6+3)=9GB], leaving 5GB for the buffer cache.  If we went to 32GB of memory and set max server memory to 28GB, the plan cache could use at most 16GB [(8*.75)+(20*.5)=(6+10)=16GB], leaving 12GB for the buffer cache. Thankfully we have SQL Server 2005 Service Pack 2, 3, and 4 these days which include the changes in plan cache sizing discussed in the Changes to Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2 blog post. In real life, when I was troubleshooting this problem, I spent a week trying to chase down the cause of the disk I/O bottleneck with our Server Admin and SAN Admin, and there wasn’t much that could be done immediately there, so I finally asked if we could increase the memory on the server to 16GB, which did fix the problem. It wasn’t until I had this same problem occur on another system that I actually figured out how to really troubleshoot this down to the root cause.  I couldn’t believe the size of the plan cache on the server with 16GB of memory when I actually learned about this and went back to look at it. SQL Server is constantly telling a story to anyone that will listen. 
As the DBA, you have to sit back and listen to all that it’s telling you and then evaluate the big picture and how all the data you can gather from SQL about performance relate to each other. One of the greatest tools out there is actually a free in the form of Diagnostic Scripts for SQL Server 2005 and 2008, created by MVP Glenn Alan Berry. Glenn’s scripts collect a majority of the information that SQL has to offer for rapid troubleshooting of problems, and he includes a lot of notes about what the outputs of each individual query might be telling you. When I read Pinal’s blog post SQL SERVER – ASYNC_IO_COMPLETION – Wait Type – Day 11 of 28, I noticed that he referenced Checking Memory Related Performance Counters in his post, but there was no real explanation about why checking memory counters is so important when looking at an I/O related wait type. I thought I’d chat with him briefly on Google Talk/Twitter DM and point this out, and offer a couple of other points I noted, so that he could add the information to his blog post if he found it useful.  Instead he asked that I write a guest blog for this. I am honored to be a guest blogger, and to be able to share this kind of information with the community. The information contained in this blog post is a glimpse at how I do troubleshooting almost every day of the week in my own environment. SQL Server provides us with a lot of information about how it is running, and where it may be having problems, it is up to us to play detective and find out how all that information comes together to tell us what’s really the problem. This blog post is written by Jonathan Kehayias (Blog | Twitter). Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: MVP, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
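As a companion to the plan-cache discussion above, here is a minimal T-SQL sketch (mine, not from the guest post; it uses the SQL Server 2005/2008 memory clerk column names that match the scenario - on SQL Server 2012 and later use pages_kb instead of the two *_pages_kb columns) for sizing the plan cache stores and spotting single-use adhoc plans:

-- Rough size of the plan cache held by single-use adhoc plans
SELECT COUNT(*)                                          AS single_use_plans,
       SUM(CAST(size_in_bytes AS BIGINT)) / 1024 / 1024  AS single_use_mb
FROM   sys.dm_exec_cached_plans
WHERE  usecounts = 1
  AND  objtype   = N'Adhoc';

-- Memory held by the plan cache stores mentioned in the post
SELECT [type],
       SUM(single_pages_kb + multi_pages_kb) / 1024      AS cache_store_mb
FROM   sys.dm_os_memory_clerks
WHERE  [type] IN (N'CACHESTORE_SQLCP', N'CACHESTORE_OBJCP')
GROUP  BY [type];

If the single-use figure is a large slice of the cache store totals, the adhoc-workload and forced parameterization discussion above applies.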

    Read the article

  • So…is it a Seek or a Scan?

    - by Paul White
    You’re probably most familiar with the terms ‘Seek’ and ‘Scan’ from the graphical plans produced by SQL Server Management Studio (SSMS).  The image to the left shows the most common ones, with the three types of scan at the top, followed by four types of seek.  You might look to the SSMS tool-tip descriptions to explain the differences between them: Not hugely helpful are they?  Both mention scans and ranges (nothing about seeks) and the Index Seek description implies that it will not scan the index entirely (which isn’t necessarily true). Recall also yesterday’s post where we saw two Clustered Index Seek operations doing very different things.  The first Seek performed 63 single-row seeking operations; and the second performed a ‘Range Scan’ (more on those later in this post).  I hope you agree that those were two very different operations, and perhaps you are wondering why there aren’t different graphical plan icons for Range Scans and Seeks?  I have often wondered about that, and the first person to mention it after yesterday’s post was Erin Stellato (twitter | blog): Before we go on to make sense of all this, let’s look at another example of how SQL Server confusingly mixes the terms ‘Scan’ and ‘Seek’ in different contexts.  The diagram below shows a very simple heap table with two columns, one of which is the non-clustered Primary Key, and the other has a non-unique non-clustered index defined on it.  The right hand side of the diagram shows a simple query, it’s associated query plan, and a couple of extracts from the SSMS tool-tip and Properties windows. Notice the ‘scan direction’ entry in the Properties window snippet.  Is this a seek or a scan?  The different references to Scans and Seeks are even more pronounced in the XML plan output that the graphical plan is based on.  This fragment is what lies behind the single Index Seek icon shown above: You’ll find the same confusing references to Seeks and Scans throughout the product and its documentation. Making Sense of Seeks Let’s forget all about scans for a moment, and think purely about seeks.  Loosely speaking, a seek is the process of navigating an index B-tree to find a particular index record, most often at the leaf level.  A seek starts at the root and navigates down through the levels of the index to find the point of interest: Singleton Lookups The simplest sort of seek predicate performs this traversal to find (at most) a single record.  This is the case when we search for a single value using a unique index and an equality predicate.  It should be readily apparent that this type of search will either find one record, or none at all.  This operation is known as a singleton lookup.  Given the example table from before, the following query is an example of a singleton lookup seek: Sadly, there’s nothing in the graphical plan or XML output to show that this is a singleton lookup – you have to infer it from the fact that this is a single-value equality seek on a unique index.  The other common examples of a singleton lookup are bookmark lookups – both the RID and Key Lookup forms are singleton lookups (an RID lookup finds a single record in a heap from the unique row locator, and a Key Lookup does much the same thing on a clustered table).  If you happen to run your query with STATISTICS IO ON, you will notice that ‘Scan Count’ is always zero for a singleton lookup. Range Scans The other type of seek predicate is a ‘seek plus range scan’, which I will refer to simply as a range scan.  
The seek operation makes an initial descent into the index structure to find the first leaf row that qualifies, and then performs a range scan (either backwards or forwards in the index) until it reaches the end of the scan range. The ability of a range scan to proceed in either direction comes about because index pages at the same level are connected by a doubly-linked list – each page has a pointer to the previous page (in logical key order) as well as a pointer to the following page.  The doubly-linked list is represented by the green and red dotted arrows in the index diagram presented earlier.  One subtle (but important) point is that the notion of a ‘forward’ or ‘backward’ scan applies to the logical key order defined when the index was built.  In the present case, the non-clustered primary key index was created as follows: CREATE TABLE dbo.Example ( key_col INTEGER NOT NULL, data INTEGER NOT NULL, CONSTRAINT [PK dbo.Example key_col] PRIMARY KEY NONCLUSTERED (key_col ASC) ) ; Notice that the primary key index specifies an ascending sort order for the single key column.  This means that a forward scan of the index will retrieve keys in ascending order, while a backward scan would retrieve keys in descending key order.  If the index had been created instead on key_col DESC, a forward scan would retrieve keys in descending order, and a backward scan would return keys in ascending order. A range scan seek predicate may have a Start condition, an End condition, or both.  Where one is missing, the scan starts (or ends) at one extreme end of the index, depending on the scan direction.  Some examples might help clarify that: the following diagram shows four queries, each of which performs a single seek against a column holding every integer from 1 to 100 inclusive.  The results from each query are shown in the blue columns, and relevant attributes from the Properties window appear on the right: Query 1 specifies that all key_col values less than 5 should be returned in ascending order.  The query plan achieves this by seeking to the start of the index leaf (there is no explicit starting value) and scanning forward until the End condition (key_col < 5) is no longer satisfied (SQL Server knows it can stop looking as soon as it finds a key_col value that isn’t less than 5 because all later index entries are guaranteed to sort higher). Query 2 asks for key_col values greater than 95, in descending order.  SQL Server returns these results by seeking to the end of the index, and scanning backwards (in descending key order) until it comes across a row that isn’t greater than 95.  Sharp-eyed readers may notice that the end-of-scan condition is shown as a Start range value.  This is a bug in the XML show plan which bubbles up to the Properties window – when a backward scan is performed, the roles of the Start and End values are reversed, but the plan does not reflect that.  Oh well. Query 3 looks for key_col values that are greater than or equal to 10, and less than 15, in ascending order.  This time, SQL Server seeks to the first index record that matches the Start condition (key_col >= 10) and then scans forward through the leaf pages until the End condition (key_col < 15) is no longer met. Query 4 performs much the same sort of operation as Query 3, but requests the output in descending order.  
Again, we have to mentally reverse the Start and End conditions because of the bug, but otherwise the process is the same as always: SQL Server finds the highest-sorting record that meets the condition ‘key_col < 25’ and scans backward until ‘key_col >= 20’ is no longer true. One final point to note: seek operations always have the Ordered: True attribute.  This means that the operator always produces rows in a sorted order, either ascending or descending depending on how the index was defined, and whether the scan part of the operation is forward or backward.  You cannot rely on this sort order in your queries of course (you must always specify an ORDER BY clause if order is important) but SQL Server can make use of the sort order internally.  In the four queries above, the query optimizer was able to avoid an explicit Sort operator to honour the ORDER BY clause, for example. Multiple Seek Predicates As we saw yesterday, a single index seek plan operator can contain one or more seek predicates.  These seek predicates can either be all singleton seeks or all range scans – SQL Server does not mix them.  For example, you might expect the following query to contain two seek predicates, a singleton seek to find the single record in the unique index where key_col = 10, and a range scan to find the key_col values between 15 and 20: SELECT key_col FROM dbo.Example WHERE key_col = 10 OR key_col BETWEEN 15 AND 20 ORDER BY key_col ASC ; In fact, SQL Server transforms the singleton seek (key_col = 10) to the equivalent range scan, Start:[key_col >= 10], End:[key_col <= 10].  This allows both range scans to be evaluated by a single seek operator.  To be clear, this query results in two range scans: one from 10 to 10, and one from 15 to 20. Final Thoughts That’s it for today – tomorrow we’ll look at monitoring singleton lookups and range scans, and I’ll show you a seek on a heap table. Yes, a seek.  On a heap.  Not an index! If you would like to run the queries in this post for yourself, there’s a script below.  Thanks for reading! 
IF OBJECT_ID(N'dbo.Example', N'U') IS NOT NULL
BEGIN
    DROP TABLE dbo.Example;
END
;
-- Test table is a heap
-- Non-clustered primary key on 'key_col'
CREATE TABLE dbo.Example
(
    key_col INTEGER NOT NULL,
    data    INTEGER NOT NULL,
    CONSTRAINT [PK dbo.Example key_col]
        PRIMARY KEY NONCLUSTERED (key_col)
)
;
-- Non-unique non-clustered index on the 'data' column
CREATE NONCLUSTERED INDEX [IX dbo.Example data]
ON dbo.Example (data)
;
-- Add 100 rows
INSERT dbo.Example WITH (TABLOCKX)
(
    key_col,
    data
)
SELECT
    key_col = V.number,
    data    = V.number
FROM master.dbo.spt_values AS V
WHERE V.[type] = N'P'
AND V.number BETWEEN 1 AND 100
;
-- ================
-- Singleton lookup
-- ================
;
-- Single value equality seek in a unique index
-- Scan count = 0 when STATISTICS IO is ON
-- Check the XML SHOWPLAN
SELECT E.key_col
FROM dbo.Example AS E
WHERE E.key_col = 32
;
-- ===========
-- Range Scans
-- ===========
;
-- Query 1
SELECT E.key_col
FROM dbo.Example AS E
WHERE E.key_col <= 5
ORDER BY E.key_col ASC
;
-- Query 2
SELECT E.key_col
FROM dbo.Example AS E
WHERE E.key_col > 95
ORDER BY E.key_col DESC
;
-- Query 3
SELECT E.key_col
FROM dbo.Example AS E
WHERE E.key_col >= 10
AND E.key_col < 15
ORDER BY E.key_col ASC
;
-- Query 4
SELECT E.key_col
FROM dbo.Example AS E
WHERE E.key_col >= 20
AND E.key_col < 25
ORDER BY E.key_col DESC
;
-- Final query (singleton + range = 2 range scans)
SELECT E.key_col
FROM dbo.Example AS E
WHERE E.key_col = 10
OR E.key_col BETWEEN 15 AND 20
ORDER BY E.key_col ASC
;
-- === TIDY UP ===
DROP TABLE dbo.Example;

© 2011 Paul White
email: [email protected]
twitter: @SQL_Kiwi
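A quick usage note on the script above (my addition, not part of Paul's post): wrapping the singleton-lookup and range-scan queries in STATISTICS IO makes the Scan Count difference described earlier easy to see for yourself - expect 0 for the singleton lookup and a non-zero count for the range scan:

SET STATISTICS IO ON;

-- Singleton lookup from the script: Scan Count should report 0
SELECT E.key_col
FROM dbo.Example AS E
WHERE E.key_col = 32;

-- Range scan (Query 3 from the script): Scan Count should be non-zero
SELECT E.key_col
FROM dbo.Example AS E
WHERE E.key_col >= 10
AND E.key_col < 15
ORDER BY E.key_col ASC;

SET STATISTICS IO OFF;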

    Read the article

  • How to Avoid Your Next 12-Month Science Project

    - by constant
    While most customers immediately understand how the magic of Oracle's Hybrid Columnar Compression, intelligent storage servers and flash memory make Exadata uniquely powerful against home-grown database systems, some people think that Exalogic is nothing more than a bunch of x86 servers, a storage appliance and an InfiniBand (IB) network, built into a single rack. After all, isn't this exactly what the High Performance Computing (HPC) world has been doing for decades? On the surface, this may be true. And some people tried exactly that: They tried to put together their own version of Exalogic, but then they discover there's a lot more to building a system than buying hardware and assembling it together. IT is not Ikea. Why is that so? Could it be there's more going on behind the scenes than merely putting together a bunch of servers, a storage array and an InfiniBand network into a rack? Let's explore some of the special sauce that makes Exalogic unique and un-copyable, so you can save yourself from your next 6- to 12-month science project that distracts you from doing real work that adds value to your company. Engineering Systems is Hard Work! The backbone of Exalogic is its InfiniBand network: 4 times better bandwidth than even 10 Gigabit Ethernet, and only about a tenth of its latency. What a potential for increased scalability and throughput across the middleware and database layers! But InfiniBand is a beast that needs to be tamed: It is true that Exalogic uses a standard, open-source Open Fabrics Enterprise Distribution (OFED) InfiniBand driver stack. Unfortunately, this software has been developed by the HPC community with fastest speed in mind (which is good) but, despite the name, not many other enterprise-class requirements are included (which is less good). Here are some of the improvements that Oracle's InfiniBand development team had to add to the OFED stack to make it enterprise-ready, simply because typical HPC users didn't have the need to implement them: More than 100 bug fixes in the pieces that were not related to the Message Passing Interface Protocol (MPI), which is the protocol that HPC users use most of the time, but which is less useful in the enterprise. Performance optimizations and tuning across the whole IB stack: From Switches, Host Channel Adapters (HCAs) and drivers to low-level protocols, middleware and applications. Yes, even the standard HPC IB stack could be improved in terms of performance. Ethernet over IB (EoIB): Exalogic uses InfiniBand internally to reach high performance, but it needs to play nicely with datacenters around it. That's why Oracle added Ethernet over InfiniBand technology to it that allows for creating many virtual 10GBE adapters inside Exalogic's nodes that are aggregated and connected to Exalogic's IB gateway switches. While this is an open standard, it's up to the vendor to implement it. In this case, Oracle integrated the EoIB stack with Oracle's own IB to 10GBE gateway switches, and made it fully virtualized from the beginning. This means that Exalogic customers can completely rewire their server infrastructure inside the rack without having to physically pull or plug a single cable - a must-have for every cloud deployment. Anybody who wants to match this level of integration would need to add an InfiniBand switch development team to their project. Or just buy Oracle's gateway switches, which are conveniently shipped with a whole server infrastructure attached! 
IPv6 support for InfiniBand's Sockets Direct Protocol (SDP), Reliable Datagram Sockets (RDS), TCP/IP over IB (IPoIB) and EoIB protocols. Because no IPv6 = not very enterprise-class. HA capability for SDP. High Availability is not a big requirement for HPC, but for enterprise-class application servers it is. Every node in Exalogic's InfiniBand network is connected twice for redundancy. If any cable or port or HCA fails, there's always a replacement link ready to take over. This requires extra magic at the protocol level to work. So in addition to Weblogic's failover capabilities, Oracle implemented IB automatic path migration at the SDP level to avoid unnecessary failover operations at the middleware level. Security, for example spoof-protection. Another feature that is less important for traditional users of InfiniBand, but very important for enterprise customers. InfiniBand Partitioning and Quality-of-Service (QoS): One of the first questions we get from customers about Exalogic is: “How can we implement multi-tenancy?” The answer is to partition your IB network, which effectively creates many networks that work independently and that are protected at the lowest networking layer possible. In addition to that, QoS allows administrators to prioritize traffic flow in multi-tenancy environments so they can keep their service levels where it matters most. Resilient IB Fabric Management: InfiniBand is a self-managing network, so a lot of the magic lies in coming up with the right topology and in teaching the subnet manager how to properly discover and manage the network. Oracle's Infiniband switches come with pre-integrated, highly available fabric management with seamless integration into Oracle Enterprise Manager Ops Center. In short: Oracle elevated the OFED InfiniBand stack into an enterprise-class networking infrastructure. Many years and multiple teams of manpower went into the above improvements - this is something you can only get from Oracle, because no other InfiniBand vendor can give you these features across the whole stack! Exabus: Because it's not About the Size of Your Network, it's How You Use it! So let's assume that you somehow were able to get your hands on an enterprise-class IB driver stack. Or maybe you don't care and are just happy with the standard OFED one? Anyway, the next step is to actually leverage that InfiniBand performance. Here are the choices: Use traditional TCP/IP on top of the InfiniBand stack, Develop your own integration between your middleware and the lower-level (but faster) InfiniBand protocols. While more bandwidth is always a good thing, it's actually the low latency that enables superior performance for your applications when running on any networking infrastructure: The lower the latency, the faster the response travels through the network and the more transactions you can close per second. The reason why InfiniBand is such a low latency technology is that it gets rid of most if not all of your traditional networking protocol stack: Data is literally beamed from one region of RAM in one server into another region of RAM in another server with no kernel/drivers/UDP/TCP or other networking stack overhead involved! Which makes option 1 a no-go: Adding TCP/IP on top of InfiniBand is like adding training wheels to your racing bike. It may be ok in the beginning and for development, but it's not quite the performance IB was meant to deliver. Which only leaves option 2: Integrating your middleware with fast, low-level InfiniBand protocols. 
And this is what Exalogic's "Exabus" technology is all about. Here are a few Exabus features that help applications leverage the performance of InfiniBand in Exalogic: RDMA and SDP integration at the JDBC driver level (SDP), for Oracle Weblogic (SDP), Oracle Coherence (RDMA), Oracle Tuxedo (RDMA) and the new Oracle Traffic Director (RDMA) on Exalogic. Using these protocols, middleware can communicate a lot faster with each other and the Oracle database than by using standard networking protocols, Seamless Integration of Ethernet over InfiniBand from Exalogic's Gateway switches into the OS, Oracle Weblogic optimizations for handling massive amounts of parallel transactions. Because if you have an 8-lane Autobahn, you also need to improve your ramps so you can feed it with many cars in parallel. Integration of Weblogic with Oracle Exadata for faster performance, optimized session management and failover. As you see, “Exabus” is Oracle's word for describing all the InfiniBand enhancements Oracle put into Exalogic: OFED stack enhancements, protocols for faster IB access, and InfiniBand support and optimizations at the virtualization and middleware level. All working together to deliver the full potential of InfiniBand performance. Who else has 100% control over their middleware so they can develop their own low-level protocol integration with InfiniBand? Even if you take an open source approach, you're looking at years of development work to create, test and support a whole new networking technology in your middleware! The Extras: Less Hassle, More Productivity, Faster Time to Market And then there are the other advantages of Engineered Systems that are true for Exalogic the same as they are for every other Engineered System: One simple purchasing process: No headaches due to endless RFPs and no “Will X work with Y?” uncertainties. Everything has been engineered together: All kinds of bugs and problems have been already fixed at the design level that would have only manifested themselves after you have built the system from scratch. Everything is built, tested and integrated at the factory level . Less integration pain for you, faster time to market. Every Exalogic machine world-wide is identical to Oracle's own machines in the lab: Instant replication of any problems you may encounter, faster time to resolution. Simplified patching, management and operations. One throat to choke: Imagine finger-pointing hell for systems that have been put together using several different vendors. Oracle's Engineered Systems have a single phone number that customers can call to get their problems solved. For more business-centric values, read The Business Value of Engineered Systems. Conclusion: Buy Exalogic, or get ready for a 6-12 Month Science Project And here's the reason why it's not easy to "build your own Exalogic": There's a lot of work required to make such a system fly. In fact, anybody who is starting to "just put together a bunch of servers and an InfiniBand network" is really looking at a 6-12 month science project. And the outcome is likely to not be very enterprise-class. And it won't have Exalogic's performance either. Because building an Engineered System is literally rocket science: It takes a lot of time, effort, resources and many iterations of design/test/analyze/fix to build such a system. That's why InfiniBand has been reserved for HPC scientists for such a long time. 
And only Oracle can bring the power of InfiniBand in an enterprise-class, ready-to-use, pre-integrated version to customers, without the develop/integrate/support pain. For more details, check the new Exalogic overview white paper, which was updated only recently. P.S.: Thanks to my colleagues Ola, Paul, Don and Andy for helping me put together this article!

    Read the article

  • The future for Microsoft

    - by Scott Dorman
    Originally posted on: http://geekswithblogs.net/sdorman/archive/2013/10/16/the-future-for-microsoft.aspxMicrosoft is in the process of reinventing itself. While some may argue that it’s “too little, too late” or that their growing consumer-focused strategy is wrong, the truth of the situation is that Microsoft is reinventing itself into a new company. While Microsoft is now calling themselves a “devices and services” company, that’s not entirely accurate. Let’s look at some facts: Microsoft will always (for the long-term foreseeable future) be financially split into the following divisions: Windows/Operating Systems, which for FY13 made up approximately 24% of overall revenue. Server and Tools, which for FY13 made up approximately 26% of overall revenue. Enterprise/Business Products, which for FY13 made up approximately 32% of overall revenue. Entertainment and Devices, which for FY13 made up approximately 13% of overall revenue. Online Services, which for FY13 made up approximately 4% of overall revenue. It is important to realize that hardware products like the Surface fall under the Windows/Operating Systems division while products like the Xbox 360 fall under the Entertainment and Devices division. (Presumably other hardware, such as mice, keyboards, and cameras, also fall under the Entertainment and Devices division.) It’s also unclear where Microsoft’s recent acquisition of Nokia’s handset division will fall, but let’s assume that it will be under Entertainment and Devices as well. Now, for the sake of argument, let’s assume a slightly different structure that I think is more in line with how Microsoft presents itself and how the general public sees it: Consumer Products and Devices, which would probably make up approximately 9% of overall revenue. Developer Tools, which would probably make up approximately 13% of overall revenue. Enterprise Products and Devices, which would probably make up approximately 47% of overall revenue. Entertainment, which would probably make up approximately 13% of overall revenue. Online Services, which would probably make up approximately 17% of overall revenue. (Just so we’re clear, in this structure hardware products like the Surface, a portion of Windows sales, and other hardware fall under the Consumer Products and Devices division. I’m assuming that more of the income for the Windows division is coming from enterprise/volume licenses so 15% of that income went to the Enterprise Products and Devices division. Most of the enterprise services, like Azure, fall under the Online Services division so half of the Server and Tools income went there as well.) No matter how you look at it, the bulk of Microsoft’s income still comes from not just the enterprise but also software sales, and this really shouldn’t surprise anyone. So, now that the stage is set…what’s the future for Microsoft? The future I see for Microsoft (again, this is just my prediction based on my own instinct, gut-feel and publicly available information) is this: Microsoft is becoming a consumer-focused enterprise company. Let’s look at it a different way. Microsoft is an enterprise-focused company trying to create a larger consumer presence.  To a large extent, this is the exact opposite of Apple, who is really a consumer-focused company trying to create a larger enterprise presence. The major reason consumer-focused companies (like Apple) have started making in-roads into the enterprise is the “bring your own device” phenomenon. 
Yes, Apple has created some “game-changing” products but their enterprise influence is still relatively small. Unfortunately (for this blog post at least), Apple provides revenue in terms of hardware products rather than business divisions, so it’s not possible to do a direct comparison. However, in the interest of transparency, from Apple’s Quarterly Report (filed 24 July 2013), their revenue breakdown is: iPhone, which for the 3 months ending 29 June 2013 made up approximately 51% of revenue. iPad, which for the 3 months ending 29 June 2013 made up approximately 18% of revenue. Mac, which for the 3 months ending 29 June 2013 made up approximately 14% of revenue. iPod, which for the 3 months ending 29 June 2013 made up approximately 2% of revenue. iTunes, Software, and Services, which for the 3 months ending 29 June 2013 made up approximately 11% of revenue. Accessories, which for the 3 months ending 29 July 2013 made up approximately 3% of revenue. From this, it’s pretty clear that Apple is a consumer-and-hardware-focused company. At this point, you may be asking yourself “Where is all of this going?” The answer to that lies in Microsoft’s shift in company focus. They are becoming more consumer focused, but what exactly does that mean? The biggest change (at least that’s been in the news lately) is the pending purchase of Nokia’s handset division. This, in combination with their Surface line of tablets and the Xbox, will put Microsoft squarely in the realm of a hardware-focused company in addition to being a software-focused company. That can (and most likely will) shift the revenue split to looking at revenue based on software sales (both consumer and enterprise) and also hardware sales (mostly on the consumer side). If we look at things strictly from a Windows perspective, Microsoft clearly has a lot of irons in the fire at the moment. Discounting the various product SKUs available and painting the picture with broader strokes, there are currently 5 different Windows-based operating systems: Windows Phone Windows Phone 7.x, which runs on top of the Windows CE kernel Windows Phone 8.x+, which runs on top of the Windows 8 kernel Windows RT The ARM-based version of Windows 8, which runs on top of the Windows 8 kernel Windows (Pro) The Intel-based version of Windows 8, which runs on top of the Windows 8 kernel Xbox The Xbox 360, which runs it’s own proprietary OS. The Xbox One, which runs it’s own proprietary OS, a version of Windows running on top of the Windows 8 kernel and a proprietary “manager” OS which manages the other two. Over time, Windows Phone 7.x devices will fade so that really leaves 4 different versions. Looking at Windows RT and Windows Phone 8.x paints an interesting story. Right now, all mobile phone devices run on some sort of ARM chip and that doesn’t look like it will change any time soon. That means Microsoft has two different Windows based operating systems for the ARM platform. Long term, it doesn’t make sense for Microsoft to continue supporting that arrangement. I have long suspected (since the Surface was first announced) that Microsoft will unify these two variants of Windows and recent speculation from some of the leading Microsoft watchers lends credence to this suspicion. It is rumored that upcoming Windows Phone releases will include support for larger screen sizes, relax the requirement to have a hardware-based back button and will continue to improve API parity between Windows Phone and Windows RT. 
At the same time, Windows RT will include support for smaller screen sizes. Since both of these operating systems are based on the same core Windows kernel, it makes sense (both from a financial and development resource perspective) for Microsoft to unify them. The user interfaces are already very similar. So similar in fact, that visually it’s difficult to tell them apart. To illustrate this, here are two screen captures: Other than a few variations (the Bing News app, the picture shown in the Pictures tile and the spacing between the tiles) these are identical. The one on the left is from my Windows 8.1 laptop (which looks the same as on my Surface RT) and the one on the right is from my Windows Phone 8 Lumia 925. This pretty clearly shows that from a consumer perspective, there really is no practical difference between how these two operating systems look and how you interact with them. For the consumer, your entertainment device (Xbox One), phone (Windows Phone) and mobile computing device (Surface [or some other vendors tablet], laptop, netbook or ultrabook) and your desktop computing device (desktop) will all look and feel the same. While many people will denounce this consistency of user experience, I think this will be a good thing in the long term, especially for the upcoming generations. For example, my 5-year old son knows how to use my tablet, phone and Xbox because they all feature nearly identical user experiences. When Windows 8 was released, Microsoft allowed a Windows Store app to be purchased once and installed on as many as 5 devices. With Windows 8.1, this limit has been increased to over 50. Why is that important? If you consider that your phone, computing devices, and entertainment device will be running the same operating system (with minor differences related to physical hardware chipset), that means that I could potentially purchase my sons favorite Angry Birds game once and be able to install it on all of the devices I own. (And for those of you wondering, it’s only 7 [at the moment].) From an app developer perspective, the story becomes even more compelling. Right now there are differences between the different operating systems, but those differences are shrinking. The user interface technology for both is XAML but there are different controls available and different user experience concepts. Some of the APIs available are the same while some are not. You can’t develop a Windows Phone app that can also run on Windows (either Windows Pro or RT). With each release of Windows Phone and Windows RT, those difference become smaller and smaller. Add to this mix the Xbox One, which will also feature a Windows-based operating system and the same “modern” (tile-based) user interface and the visible distinctions between the operating systems will become even smaller. Unifying the operating systems means one set of APIs and one code base to maintain for an app that can run on multiple devices. One code base means it’s easier to add features and fix bugs and that those changes become available on all devices at the same time. It also means a single app store, which will increase the discoverability and reach of your app and consolidate revenue and app profile management. Now, the choice of what devices an app is available on becomes a simple checkbox decision rather than a technical limitation. Ultimately, this means more apps available to consumers, which is always good for the app ecosystem. Is all of this just rumor, speculation and conjecture? 
Of course, but it’s not unfounded. As I mentioned earlier, some of the prominent Microsoft watchers are also reporting similar rumors. However, Microsoft itself has even hinted at this future with their recent organizational changes and by telling developers “if you want to develop for Xbox One, start developing for Windows 8 now.” I think this pretty clearly paints the following picture: Microsoft is committed to the “modern” user interface paradigm. Microsoft is changing their release cadence (for all products, not just operating systems) to be faster and more modular. Microsoft is going to continue to unify their OS platforms both from a consumer perspective and a developer perspective. While this direction will certainly concern some people it will excite many others. Microsoft’s biggest failing has always been following through with a strong and sustained marketing strategy that presents a consistent view point and highlights what this unified and connected experience looks like and how it benefits consumers and enterprises. We’ve started to see some of this over the last few years, but it needs to continue and become more aggressive and consistent. In the long run, I think Microsoft will be able to pull all of these technologies and devices together into one seamless ecosystem. It isn’t going to happen overnight, but my prediction is that we will be there by the end of 2016. As both a consumer and a developer, I, for one, am excited about the future of Microsoft.

    Read the article

  • Monitoring SQL Server Agent job run times

    - by okeofs
Introduction A few months back, I was asked how long a particular nightly process took to run. It was a super question, and the one thing that struck me was that there was a plethora of factors affecting the processing time. This said, I developed a query to ascertain process run times and the average nightly run times, and applied some KPIs to the end query. The end goal was to enable me to quickly detect anomalies and processes that are running beyond their normal times. As many of you are aware, most of the necessary data for this type of query lies within the MSDB database. The core portion of the query is shown below.

select sj.name, sh.run_date, sh.run_duration,
       case when len(sh.run_duration) = 6 then convert(varchar(8), sh.run_duration)
            when len(sh.run_duration) = 5 then '0' + convert(varchar(8), sh.run_duration)
            when len(sh.run_duration) = 4 then '00' + convert(varchar(8), sh.run_duration)
            when len(sh.run_duration) = 3 then '000' + convert(varchar(8), sh.run_duration)
            when len(sh.run_duration) = 2 then '0000' + convert(varchar(8), sh.run_duration)
            when len(sh.run_duration) = 1 then '00000' + convert(varchar(8), sh.run_duration)
       end as tt
from dbo.sysjobs sj with (nolock)
inner join dbo.sysjobHistory sh with (nolock)
        on sj.job_id = sh.job_id
where sj.name = 'My Agent Job'
  and sh.[Message] like '%The job%'

Run_date and run_duration are obvious fields. The field 'Name' is the name of the job that we wish to follow. The only major challenge was that the format of the run duration was not as 'user friendly' as I would have liked. As an example, a run duration of 1 hour, 10 minutes and 3 seconds would be displayed as 11003, whereas I wanted to display it in a more user-friendly manner as 01:10:03. In order to achieve this effect, we need to add leading zeros to the run_duration based upon the case logic shown above. At this point we need to add colons: one between the hours and minutes, and one between the minutes and seconds. To achieve this I nested the query shown above within a 'super' query. Thus the run time ([run_time]) is constructed by concatenating a series of substrings, as shown below.

select run_date,
       substring(convert(varchar(20), tt), 1, 2) + ':' +
       substring(convert(varchar(20), tt), 3, 2) + ':' +
       substring(convert(varchar(20), tt), 5, 2) as [run_time]
from (select sj.name, sh.run_date, sh.run_duration,
             case when len(sh.run_duration) = 6 then convert(varchar(8), sh.run_duration)
                  when len(sh.run_duration) = 5 then '0' + convert(varchar(8), sh.run_duration)
                  when len(sh.run_duration) = 4 then '00' + convert(varchar(8), sh.run_duration)
                  when len(sh.run_duration) = 3 then '000' + convert(varchar(8), sh.run_duration)
                  when len(sh.run_duration) = 2 then '0000' + convert(varchar(8), sh.run_duration)
                  when len(sh.run_duration) = 1 then '00000' + convert(varchar(8), sh.run_duration)
             end as tt
      from dbo.sysjobs sj with (nolock)
      inner join dbo.sysjobHistory sh with (nolock)
              on sj.job_id = sh.job_id
      where sj.name = 'My Agent Job'
        and sh.[Message] like '%The job%') a

Now that I had each nightly run time in hours, minutes and seconds (01:10:03), I decided that it would be very productive to calculate a rolling run time average. To do this, I decided to do the calculations in base units of seconds. This said, I encapsulated the query shown above within a further 'super' query. This encapsulation is shown below. The astute reader will note that I used implied casting from integer to string, which is not the best method to use; however, it works. 
This said and if I were constructing the query again I would definitely do an explicit convert. To Recap: I now have a key field of ‘1’, each and every applicable run date and the total number of SECONDS that the process ran for each run date, all of this data within the #rawdata1 temporary table. Select 1 as keyy,run_date,(substring(b.run_time,1,2)*3600) + (substring(b.run_time,4,2)*60) + (substring(b.run_time,7,2)) as run_time_in_Seconds,run_time into #rawdata1 from ( select run_date,substring(convert(varchar(20),tt),1,2) + ':' + substring(convert(varchar(20),tt),3,2) + ':' +substring(convert(varchar(20),tt),5,2) as [run_time] from (select sj.name,sh.run_date, sh.run_duration, case when len(sh.run_duration) = 6 then convert(varchar(8),sh.run_duration)when len(sh.run_duration) = 5 then '0' + convert(varchar(8),sh.run_duration)when len(sh.run_duration) = 4 then '00' + convert(varchar(8),sh.run_duration)when len(sh.run_duration)    = 3 then '000' + convert(varchar(8),sh.run_duration)when len(sh.run_duration)    = 2 then '0000' + convert(varchar(8),sh.run_duration)when len(sh.run_duration) = 1 then '00000' + convert(varchar(8),sh.run_duration)end as ttfrom dbo.sysjobs sj with (nolock)inner join dbo.sysjobHistory sh with (nolock)on sj.job_id = sh.job_id where sj.name = 'My Agent Job'and [sh.Message] like '%The job%') a )b   Calculating the average run time We now select each run time in seconds from #rawdata1 and place the values into another temporary table called #rawdata2. Once again we create a ‘key’, a hardwired ‘1’. select 1 as Keyy, run_time_in_Seconds into #rawdata2 from #rawdata1The purpose of doing so is to make the average time AVG() available to the query immediately without having to do adverse grouping. Applying KPI Logic At this point, we shall apply some logic to determine whether processing times are within the norms. We do this by applying colour names. Obviously, this example is a super one for SSRS and traffic light icons.select rd1.run_date, rd1.run_time, rd1.run_time_in_Seconds ,Avg(rd2.run_time_in_Seconds) as Average_run_time_in_seconds,casewhenConvert(decimal(10,1),rd1.run_time_in_Seconds)/Avg(rd2.run_time_in_Seconds)<= 1.2 then 'Green' when Convert(decimal(10,1),rd1.run_time_in_Seconds)/Avg(rd2.run_time_in_Seconds)< 1.4 then 'Yellow' else 'Red'end as [color], Calculating the Average Run Time in Hours Minutes and Seconds and the end of the query. 
casewhen len(convert(varchar(2),Avg(rd2.run_time_in_Seconds)/(3600))) = 1 then '0' + convert(varchar(2),Avg(rd2.run_time_in_Seconds)/(3600))else convert(varchar(2),Avg(rd2.run_time_in_Seconds)/(3600))end + ':' + case when len(convert(varchar(2),Avg(rd2.run_time_in_Seconds)%(3600)/60)) = 1 then '0' + convert(varchar(2),Avg(rd2.run_time_in_Seconds)%(3600)/60)else convert(varchar(2),Avg(rd2.run_time_in_Seconds)%(3600)/60)end + ':' + case when len(convert(varchar(2),Avg(rd2.run_time_in_Seconds)%60)) = 1 then '0' + convert(varchar(2),Avg(rd2.run_time_in_Seconds)%60)else convert(varchar(2),Avg(rd2.run_time_in_Seconds)%60)end as [Average Run Time HH:MM:SS] from #rawdata2 rd2 innerjoin #rawdata1 rd1on rd1.keyy = rd2.keyygroup by run_date,rd1.run_time ,rd1.run_time_in_Seconds order by run_date descThe complete code example use msdbgo/*drop table #rawdata1drop table #rawdata2go*/select 1 as keyy,run_date,(substring(b.run_time,1,2)*3600) + (substring(b.run_time,4,2)*60) + (substring(b.run_time,7,2)) as run_time_in_Seconds,run_time into #rawdata1 from (select run_date,substring(convert(varchar(20),tt),1,2) + ':' +substring(convert(varchar(20),tt),3,2) + ':' +substring(convert(varchar(20),tt),5,2) as [run_time] from (select name,run_date, run_duration, casewhenlen(run_duration) = 6 then convert(varchar(8),run_duration)whenlen(run_duration) = 5 then '0' + convert(varchar(8),run_duration)whenlen(run_duration) = 4 then '00' + convert(varchar(8),run_duration)whenlen(run_duration) = 3 then '000' + convert(varchar(8),run_duration)whenlen(run_duration) = 2 then '0000' + convert(varchar(8),run_duration)whenlen(run_duration) = 1 then '00000' + convert(varchar(8),run_duration)end as ttfrom dbo.sysjobs sj with (nolock)innerjoin dbo.sysjobHistory sh with (nolock) on sj.job_id = sh.job_id where name = 'My Agent Job'and [Message] like '%The job%') a ) bselect 1 as Keyy, run_time_in_Seconds into #rawdata2 from #rawdata1select rd1.run_date, rd1.run_time, rd1.run_time_in_Seconds ,Avg(rd2.run_time_in_Seconds) as Average_run_time_in_seconds,casewhenConvert(decimal(10,1),rd1.run_time_in_Seconds)/Avg(rd2.run_time_in_Seconds)<= 1.2 then 'Green' when Convert(decimal(10,1),rd1.run_time_in_Seconds)/Avg(rd2.run_time_in_Seconds)< 1.4 then 'Yellow' else 'Red'end as [color],Case when len(convert(varchar(2),Avg(rd2.run_time_in_Seconds)/(3600))) = 1 then '0' + convert(varchar(2),Avg(rd2.run_time_in_Seconds)/(3600))else convert(varchar(2),Avg(rd2.run_time_in_Seconds)/(3600))end + ':' + case when len(convert(varchar(2),Avg(rd2.run_time_in_Seconds)%(3600)/60)) = 1 then '0' + convert(varchar(2),Avg(rd2.run_time_in_Seconds)%(3600)/60)else convert(varchar(2),Avg(rd2.run_time_in_Seconds)%(3600)/60)end + ':' + case when len(convert(varchar(2),Avg(rd2.run_time_in_Seconds)%60)) = 1 then '0' + convert(varchar(2),Avg(rd2.run_time_in_Seconds)%60)else convert(varchar(2),Avg(rd2.run_time_in_Seconds)%60)end as [Average Run Time HH:MM:SS] from #rawdata2 rd2 innerjoin #rawdata1 rd1on rd1.keyy = rd2.keyygroup by run_date,rd1.run_time ,rd1.run_time_in_Seconds order by run_date desc  
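    As an aside, a more compact route to the run time in seconds is to treat run_duration as the HHMMSS integer it is and use integer arithmetic instead of string padding. The following is only a sketch of that idea (the job name is a placeholder, and a step_id filter stands in for the '%The job%' message filter used above); its output could feed the same #rawdata1 logic without the nested string manipulation:

    select sj.name,
           sh.run_date,
           (sh.run_duration / 10000) * 3600          -- hours
         + (sh.run_duration / 100) % 100 * 60        -- minutes
         + (sh.run_duration % 100)                   -- seconds
           as run_time_in_seconds
    from dbo.sysjobs sj with (nolock)
    inner join dbo.sysjobHistory sh with (nolock) on sj.job_id = sh.job_id
    where sj.name = 'My Agent Job'                   -- placeholder job name
      and sh.step_id = 0                             -- step 0 is the overall job outcome row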

    Read the article

  • Set-Cookie Headers getting stripped in ASP.NET HttpHandlers

    - by Rick Strahl
    Yikes, I ran into a real bummer of an edge case yesterday in one of my older low level handler implementations (for West Wind Web Connection in this case). Basically this handler is a connector for a backend Web framework that creates self contained HTTP output. An ASP.NET Handler captures the full output, and then shoves the result down the ASP.NET Response object pipeline writing out the content into the Response.OutputStream and seperately sending the HttpHeaders in the Response.Headers collection. The headers turned out to be the problem and specifically Http Cookies, which for some reason ended up getting stripped out in some scenarios. My handler works like this: Basically the HTTP response from the backend app would return a full set of HTTP headers plus the content. The ASP.NET handler would read the headers one at a time and then dump them out via Response.AppendHeader(). But I found that in some situations Set-Cookie headers sent along were simply stripped inside of the Http Handler. After a bunch of back and forth with some folks from Microsoft (thanks Damien and Levi!) I managed to pin this down to a very narrow edge scenario. It's easiest to demonstrate the problem with a simple example HttpHandler implementation. The following simulates the very much simplified output generation process that fails in my handler. Specifically I have a couple of headers including a Set-Cookie header and some output that gets written into the Response object.using System.Web; namespace wwThreads { public class Handler : IHttpHandler { /* NOTE: * * Run as a web.config set handler (see entry below) * * Best way is to look at the HTTP Headers in Fiddler * or Chrome/FireBug/IE tools and look for the * WWHTREADSID cookie in the outgoing Response headers * ( If the cookie is not there you see the problem! ) */ public void ProcessRequest(HttpContext context) { HttpRequest request = context.Request; HttpResponse response = context.Response; // If ClearHeaders is used Set-Cookie header gets removed! // if commented header is sent... response.ClearHeaders(); response.ClearContent(); // Demonstrate that other headers make it response.AppendHeader("RequestId", "asdasdasd"); // This cookie gets removed when ClearHeaders above is called // When ClearHEaders is omitted above the cookie renders response.AppendHeader("Set-Cookie", "WWTHREADSID=ThisIsThEValue; path=/"); // *** This always works, even when explicit // Set-Cookie above fails and ClearHeaders is called //response.Cookies.Add(new HttpCookie("WWTHREADSID", "ThisIsTheValue")); response.Write(@"Output was created.<hr/> Check output with Fiddler or HTTP Proxy to see whether cookie was sent."); } public bool IsReusable { get { return false; } } } } In order to see the problem behavior this code has to be inside of an HttpHandler, and specifically in a handler defined in web.config with: <add name=".ck_handler" path="handler.ck" verb="*" type="wwThreads.Handler" preCondition="integratedMode" /> Note: Oddly enough this problem manifests only when configured through web.config, not in an ASHX handler, nor if you paste that same code into an ASPX page or MVC controller. What's the problem exactly? The code above simulates the more complex code in my live handler that picks up the HTTP response from the backend application and then peels out the headers and sends them one at a time via Response.AppendHeader. One of the headers in my app can be one or more Set-Cookie. I found that the Set-Cookie headers were not making it into the Response headers output. 
Here's the Chrome Http Inspector trace: Notice, no Set-Cookie header in the Response headers! Now, running the very same request after removing the call to Response.ClearHeaders() command, the cookie header shows up just fine: As you might expect it took a while to track this down. At first I thought my backend was not sending the headers but after closer checks I found that indeed the headers were set in the backend HTTP response, and they were indeed getting set via Response.AppendHeader() in the handler code. Yet, no cookie in the output. In the simulated example the problem is this line:response.AppendHeader("Set-Cookie", "WWTHREADSID=ThisIsThEValue; path=/"); which in my live code is more dynamic ( ie. AppendHeader(token[0],token[1[]) )as it parses through the headers. Bizzaro Land: Response.ClearHeaders() causes Cookie to get stripped Now, here is where it really gets bizarre: The problem occurs only if: Response.ClearHeaders() was called before headers are added It only occurs in Http Handlers declared in web.config Clearly this is an edge of an edge case but of course - knowing my relationship with Mr. Murphy - I ended up running smack into this problem. So in the code above if you remove the call to ClearHeaders(), the cookie gets set!  Add it back in and the cookie is not there. If I run the above code in an ASHX handler it works. If I paste the same code (with a Response.End()) into an ASPX page, or MVC controller it all works. Only in the HttpHandler configured through Web.config does it fail! Cue the Twilight Zone Music. Workarounds As is often the case the fix for this once you know the problem is not too difficult. The difficulty lies in tracking inconsistencies like this down. Luckily there are a few simple workarounds for the Cookie issue. Don't use AppendHeader for Cookies The easiest and obvious solution to this problem is simply not use Response.AppendHeader() to set Cookies. Duh! Under normal circumstances in application level code there's rarely a reason to write out a cookie like this:response.AppendHeader("Set-Cookie", "WWTHREADSID=ThisIsThEValue; path=/"); but rather create the cookie using the Response.Cookies collection:response.Cookies.Add(new HttpCookie("WWTHREADSID", "ThisIsTheValue")); Unfortunately, in my case where I dynamically read headers from the original output and then dynamically  write header key value pairs back  programmatically into the Response.Headers collection, I actually don't look at each header specifically so in my case the cookie is just another header. My first thought was to simply trap for the Set-Cookie header and then parse out the cookie and create a Cookie object instead. But given that cookies can have a lot of different options this is not exactly trivial, plus I don't really want to fuck around with cookie values which can be notoriously brittle. Don't use Response.ClearHeaders() The real mystery in all this is why calling Response.ClearHeaders() prevents a cookie value later written with Response.AppendHeader() to fail. I fired up Reflector and took a quick look at System.Web and HttpResponse.ClearHeaders. There's all sorts of resetting going on but nothing that seems to indicate that headers should be removed later on in the request. The code in ClearHeaders() does access the HttpWorkerRequest, which is the low level interface directly into IIS, and so I suspect it's actually IIS that's stripping the headers and not ASP.NET, but it's hard to know. Somebody from Microsoft and the IIS team would have to comment on that. 
    In my application it's probably safe to simply skip ClearHeaders() in my handler. The ClearHeaders/ClearContent was mainly for safety, but after reviewing my code there really should never be a reason that headers would be set prior to this method firing. However, if for whatever reason headers do need to be cleared, it's easy enough to manually clear the headers out:

    private void RemoveHeaders(HttpResponse response)
    {
        List<string> headers = new List<string>();
        foreach (string header in response.Headers)
        {
            headers.Add(header);
        }
        foreach (string header in headers)
        {
            response.Headers.Remove(header);
        }
        response.Cookies.Clear();
    }

    Now I can replace the call to Response.ClearHeaders() and I don't get the funky side effects from Response.ClearHeaders(). Summary: I realize this is a total edge case, as it occurs only in HttpHandlers that are manually configured. It looks like you'll never run into this in any of the higher-level ASP.NET frameworks or even in ASHX handlers - only web.config-defined handlers - which is really, really odd. After all, those frameworks use the same underlying ASP.NET architecture. Hopefully somebody from Microsoft has an idea what crazy dependency was triggered here to make this fail. IAC, there are workarounds to this should you run into it, although I bet when you do run into it, it'll likely take a bit of time to find the problem or even this post in a search, because it's not easy to correlate the problem to the solution. It's quite possible that more than cookies are affected by this behavior. Searching for a solution I read a few other accounts where headers like Referer were mysteriously disappearing, and it's possible that something similar is happening in those cases. Again, extreme edge case, but I'm writing this up here as documentation for myself and possibly some others that might have run into this. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET, IIS7.
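    For completeness, here is a rough sketch of what the adjusted header-forwarding loop in a handler like this could look like. The names (ForwardBackendHeaders, backendHeaders) and the line parsing are illustrative assumptions, not the actual West Wind Web Connection code, and the method expects the usual System.Web and System.Collections.Generic namespaces:

    // Sketch only: forward raw "Name: value" header lines from the backend response,
    // using the manual RemoveHeaders() clear above instead of Response.ClearHeaders().
    private void ForwardBackendHeaders(HttpResponse response, IEnumerable<string> backendHeaders)
    {
        RemoveHeaders(response);
        foreach (string rawHeader in backendHeaders)
        {
            int separator = rawHeader.IndexOf(':');
            if (separator < 1)
                continue;                                    // skip malformed lines
            string key = rawHeader.Substring(0, separator).Trim();
            string value = rawHeader.Substring(separator + 1).Trim();
            response.AppendHeader(key, value);               // Set-Cookie passes through too
        }
    }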

    Read the article

  • Movement and Collision with AABB

    - by Jeremy Giberson
    I'm having a little difficulty figuring out the following scenarios. http://i.stack.imgur.com/8lM6i.png In scenario A, the moving entity has fallen to (and slightly into the floor). The current position represents the projected position that will occur if I apply the acceleration & velocity as usual without worrying about collision. The Next position, represents the corrected projection position after collision check. The resulting end position is the falling entity now rests ON the floor--that is, in a consistent state of collision by sharing it's bottom X axis with the floor's top X axis. My current update loop looks like the following: // figure out forces & accelerations and project an objects next position // check collision occurrence from current position -> projected position // if a collision occurs, adjust projection position Which seems to be working for the scenario of my object falling to the floor. However, the situation becomes sticky when trying to figure out scenario's B & C. In scenario B, I'm attempt to move along the floor on the X axis (player is pressing right direction button) additionally, gravity is pulling the object into the floor. The problem is, when the object attempts to move the collision detection code is going to recognize that the object is already colliding with the floor to begin with, and auto correct any movement back to where it was before. In scenario C, I'm attempting to jump off the floor. Again, because the object is already in a constant collision with the floor, when the collision routine checks to make sure moving from current position to projected position doesn't result in a collision, it will fail because at the beginning of the motion, the object is already colliding. How do you allow movement along the edge of an object? How do you allow movement away from an object you are already colliding with. Extra Info My collision routine is based on AABB sweeping test from an old gamasutra article, http://www.gamasutra.com/view/feature/3383/simple_intersection_tests_for_games.php?page=3 My bounding box implementation is based on top left/bottom right instead of midpoint/extents, so my min/max functions are adjusted. Otherwise, here is my bounding box class with collision routines: public class BoundingBox { public XYZ topLeft; public XYZ bottomRight; public BoundingBox(float x, float y, float z, float w, float h, float d) { topLeft = new XYZ(); bottomRight = new XYZ(); topLeft.x = x; topLeft.y = y; topLeft.z = z; bottomRight.x = x+w; bottomRight.y = y+h; bottomRight.z = z+d; } public BoundingBox(XYZ position, XYZ dimensions, boolean centered) { topLeft = new XYZ(); bottomRight = new XYZ(); topLeft.x = position.x; topLeft.y = position.y; topLeft.z = position.z; bottomRight.x = position.x + (centered ? dimensions.x/2 : dimensions.x); bottomRight.y = position.y + (centered ? dimensions.y/2 : dimensions.y); bottomRight.z = position.z + (centered ? 
dimensions.z/2 : dimensions.z); } /** * Check if a point lies inside a bounding box * @param box * @param point * @return */ public static boolean isPointInside(BoundingBox box, XYZ point) { if(box.topLeft.x <= point.x && point.x <= box.bottomRight.x && box.topLeft.y <= point.y && point.y <= box.bottomRight.y && box.topLeft.z <= point.z && point.z <= box.bottomRight.z) return true; return false; } /** * Check for overlap between two bounding boxes using separating axis theorem * if two boxes are separated on any axis, they cannot be overlapping * @param a * @param b * @return */ public static boolean isOverlapping(BoundingBox a, BoundingBox b) { XYZ dxyz = new XYZ(b.topLeft.x - a.topLeft.x, b.topLeft.y - a.topLeft.y, b.topLeft.z - a.topLeft.z); // if b - a is positive, a is first on the axis and we should use its extent // if b -a is negative, b is first on the axis and we should use its extent // check for x axis separation if ((dxyz.x >= 0 && a.bottomRight.x-a.topLeft.x < dxyz.x) // negative scale, reverse extent sum, flip equality ||(dxyz.x < 0 && b.topLeft.x-b.bottomRight.x > dxyz.x)) return false; // check for y axis separation if ((dxyz.y >= 0 && a.bottomRight.y-a.topLeft.y < dxyz.y) // negative scale, reverse extent sum, flip equality ||(dxyz.y < 0 && b.topLeft.y-b.bottomRight.y > dxyz.y)) return false; // check for z axis separation if ((dxyz.z >= 0 && a.bottomRight.z-a.topLeft.z < dxyz.z) // negative scale, reverse extent sum, flip equality ||(dxyz.z < 0 && b.topLeft.z-b.bottomRight.z > dxyz.z)) return false; // not separated on any axis, overlapping return true; } public static boolean isContactEdge(int xyzAxis, BoundingBox a, BoundingBox b) { switch(xyzAxis) { case XYZ.XCOORD: if(a.topLeft.x == b.bottomRight.x || a.bottomRight.x == b.topLeft.x) return true; return false; case XYZ.YCOORD: if(a.topLeft.y == b.bottomRight.y || a.bottomRight.y == b.topLeft.y) return true; return false; case XYZ.ZCOORD: if(a.topLeft.z == b.bottomRight.z || a.bottomRight.z == b.topLeft.z) return true; return false; } return false; } /** * Sweep test min extent value * @param box * @param xyzCoord * @return */ public static float min(BoundingBox box, int xyzCoord) { switch(xyzCoord) { case XYZ.XCOORD: return box.topLeft.x; case XYZ.YCOORD: return box.topLeft.y; case XYZ.ZCOORD: return box.topLeft.z; default: return 0f; } } /** * Sweep test max extent value * @param box * @param xyzCoord * @return */ public static float max(BoundingBox box, int xyzCoord) { switch(xyzCoord) { case XYZ.XCOORD: return box.bottomRight.x; case XYZ.YCOORD: return box.bottomRight.y; case XYZ.ZCOORD: return box.bottomRight.z; default: return 0f; } } /** * Test if bounding box A will overlap bounding box B at any point * when box A moves from position 0 to position 1 and box B moves from position 0 to position 1 * Note, sweep test assumes bounding boxes A and B's dimensions do not change * * @param a0 box a starting position * @param a1 box a ending position * @param b0 box b starting position * @param b1 box b ending position * @param aCollisionOut xyz of box a's position when/if a collision occurs * @param bCollisionOut xyz of box b's position when/if a collision occurs * @return */ public static boolean sweepTest(BoundingBox a0, BoundingBox a1, BoundingBox b0, BoundingBox b1, XYZ aCollisionOut, XYZ bCollisionOut) { // solve in reference to A XYZ va = new XYZ(a1.topLeft.x-a0.topLeft.x, a1.topLeft.y-a0.topLeft.y, a1.topLeft.z-a0.topLeft.z); XYZ vb = new XYZ(b1.topLeft.x-b0.topLeft.x, b1.topLeft.y-b0.topLeft.y, 
b1.topLeft.z-b0.topLeft.z); XYZ v = new XYZ(vb.x-va.x, vb.y-va.y, vb.z-va.z); // check for initial overlap if(BoundingBox.isOverlapping(a0, b0)) { // java pass by ref/value gotcha, have to modify value can't reassign it aCollisionOut.x = a0.topLeft.x; aCollisionOut.y = a0.topLeft.y; aCollisionOut.z = a0.topLeft.z; bCollisionOut.x = b0.topLeft.x; bCollisionOut.y = b0.topLeft.y; bCollisionOut.z = b0.topLeft.z; return true; } // overlap min/maxs XYZ u0 = new XYZ(); XYZ u1 = new XYZ(1,1,1); float t0, t1; // iterate axis and find overlaps times (x=0, y=1, z=2) for(int i = 0; i < 3; i++) { float aMax = max(a0, i); float aMin = min(a0, i); float bMax = max(b0, i); float bMin = min(b0, i); float vi = XYZ.getCoord(v, i); if(aMax < bMax && vi < 0) XYZ.setCoord(u0, i, (aMax-bMin)/vi); else if(bMax < aMin && vi > 0) XYZ.setCoord(u0, i, (aMin-bMax)/vi); if(bMax > aMin && vi < 0) XYZ.setCoord(u1, i, (aMin-bMax)/vi); else if(aMax > bMin && vi > 0) XYZ.setCoord(u1, i, (aMax-bMin)/vi); } // get times of collision t0 = Math.max(u0.x, Math.max(u0.y, u0.z)); t1 = Math.min(u1.x, Math.min(u1.y, u1.z)); // collision only occurs if t0 < t1 if(t0 <= t1 && t0 != 0) // not t0 because we already tested it! { // t0 is the normalized time of the collision // then the position of the bounding boxes would // be their original position + velocity*time aCollisionOut.x = a0.topLeft.x + va.x*t0; aCollisionOut.y = a0.topLeft.y + va.y*t0; aCollisionOut.z = a0.topLeft.z + va.z*t0; bCollisionOut.x = b0.topLeft.x + vb.x*t0; bCollisionOut.y = b0.topLeft.y + vb.y*t0; bCollisionOut.z = b0.topLeft.z + vb.z*t0; return true; } else return false; } }
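    One way out of scenarios B and C, sketched below under stated assumptions, is to resolve the move one axis at a time, so that resting contact on the Y axis does not veto sliding on the X axis or a jump away from the floor. Note that the isOverlapping() above counts exactly-touching edges as an overlap, so those comparisons would need to become strict (or a tiny epsilon gap kept when snapping to the floor) for this to work. The entityBox, floorBox and velocity names are illustrative, not from the code above:

    // Sketch: per-axis movement resolution for an entity resting on a floor.
    // Assumed fields on the surrounding class: BoundingBox entityBox, floorBox; XYZ velocity;
    void moveEntity(float dt) {
        // Horizontal part first: the floor only constrains Y, so sliding along it
        // succeeds as long as touching edges are not treated as overlap.
        BoundingBox xTarget = translated(entityBox, velocity.x * dt, 0, 0);
        if (!BoundingBox.isOverlapping(xTarget, floorBox))
            entityBox = xTarget;

        // Vertical part second: a jump moves the box away from the floor and passes;
        // falling into the floor fails, so zero the velocity instead of embedding.
        BoundingBox yTarget = translated(entityBox, 0, velocity.y * dt, 0);
        if (!BoundingBox.isOverlapping(yTarget, floorBox))
            entityBox = yTarget;
        else
            velocity.y = 0;
    }

    // Helper assumed for the sketch: a copy of a box shifted by (dx, dy, dz).
    BoundingBox translated(BoundingBox box, float dx, float dy, float dz) {
        return new BoundingBox(box.topLeft.x + dx, box.topLeft.y + dy, box.topLeft.z + dz,
                               box.bottomRight.x - box.topLeft.x,
                               box.bottomRight.y - box.topLeft.y,
                               box.bottomRight.z - box.topLeft.z);
    }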

    Read the article

  • Combined Likelihood Models

    - by Lukas Vermeer
    In a series of posts on this blog we have already described a flexible approach to recording events, a technique to create analytical models for reporting, a method that uses the same principles to generate extremely powerful facet based predictions and a waterfall strategy that can be used to blend multiple (possibly facet based) models for increased accuracy. This latest, and also last, addition to this sequence of increasing modeling complexity will illustrate an advanced approach to amalgamate models, taking us to a whole new level of predictive modeling and analytical insights; combination models predicting likelihoods using multiple child models. The method described here is far from trivial. We therefore would not recommend you apply these techniques in an initial implementation of Oracle Real-Time Decisions. In most cases, basic RTD models or the approaches described before will provide more than enough predictive accuracy and analytical insight. The following is intended as an example of how more advanced models could be constructed if implementation results warrant the increased implementation and design effort. Keep implemented statistics simple! Combining likelihoods Because facet based predictions are based on metadata attributes of the choices selected, it is possible to generate such predictions for more than one attribute of a choice. We can predict the likelihood of acceptance for a particular product based on the product category (e.g. ‘toys’), as well as based on the color of the product (e.g. ‘pink’). Of course, these two predictions may be completely different (the customer may well prefer toys, but dislike pink products) and we will have to somehow combine these two separate predictions to determine an overall likelihood of acceptance for the choice. Perhaps the simplest way to combine multiple predicted likelihoods into one is to calculate the average (or perhaps maximum or minimum) likelihood. However, this would completely forgo the fact that some facets may have a far more pronounced effect on the overall likelihood than others (e.g. customers may consider the product category more important than its color). We could opt for calculating some sort of weighted average, but this would require us to specify up front the relative importance of the different facets involved. This approach would also be unresponsive to changing consumer behavior in these preferences (e.g. product price bracket may become more important to consumers as a result of economic shifts). Preferably, we would want Oracle Real-Time Decisions to learn, act upon and tell us about, the correlations between the different facet models and the overall likelihood of acceptance. This additional level of predictive modeling, where a single supermodel (no pun intended) combines the output of several (facet based) models into a single prediction, is what we call a combined likelihood model. Facet Based Scores As an example, we have implemented three different facet based models (as described earlier) in a simple RTD inline service. These models will allow us to generate predictions for likelihood of acceptance for each product based on three different metadata fields: Category, Price Bracket and Product Color. We will use an Analytical Scores entity to store these different scores so we can easily pass them between different functions. 
A simple function, creatively named Compute Analytical Scores, will compute for each choice the different facet scores and return an Analytical Scores entity that is stored on the choice itself. For each score, a choice attribute referring to this entity is also added to be returned to the client to facilitate testing. One Offer To Predict Them All In order to combine the different facet based predictions into one single likelihood for each product, we will need a supermodel which can predict the likelihood of acceptance, based on the outcomes of the facet models. This model will not need to consider any of the attributes of the session, because they are already represented in the outcomes of the underlying facet models. For the same reason, the supermodel will not need to learn separately for each product, because the specific combination of facets for this product are also already represented in the output of the underlying models. In other words, instead of learning how session attributes influence acceptance of a particular product, we will learn how the outcomes of facet based models for a particular product influence acceptance at a higher level. We will therefore be using a single All Offers choice to represent all offers in our combined likelihood predictions. This choice has no attribute values configured, no scores and not a single eligibility rule; nor is it ever intended to be returned to a client. The All Offers choice is to be used exclusively by the Combined Likelihood Acceptance model to predict the likelihood of acceptance for all choices; based solely on the output of the facet based models defined earlier. The Switcheroo In Oracle Real-Time Decisions, models can only learn based on attributes stored on the session. Therefore, just before generating a combined prediction for a given choice, we will temporarily copy the facet based scores—stored on the choice earlier as an Analytical Scores entity—to the session. The code for the Predict Combined Likelihood Event function is outlined below. // set session attribute to contain facet based scores. // (this is the only input for the combined model) session().setAnalyticalScores(choice.getAnalyticalScores); // predict likelihood of acceptance for All Offers choice. CombinedLikelihoodChoice c = CombinedLikelihood.getChoice("AllOffers"); Double la = CombinedLikelihoodAcceptance.getChoiceEventLikelihoods(c, "Accepted"); // clear session attribute of facet based scores. session().setAnalyticalScores(null); // return likelihood. return la; This sleight of hand will allow the Combined Likelihood Acceptance model to predict the likelihood of acceptance for the All Offers choice using these choice specific scores. After the prediction is made, we will clear the Analytical Scores session attribute to ensure it does not pollute any of the other (facet) models. To guarantee our combined likelihood model will learn based on the facet based scores—and is not distracted by the other session attributes—we will configure the model to exclude any other inputs, save for the instance of the Analytical Scores session attribute, on the model attributes tab. Recording Events In order for the combined likelihood model to learn correctly, we must ensure that the Analytical Scores session attribute is set correctly at the moment RTD records any events related to a particular choice. We apply essentially the same switching technique as before in a Record Combined Likelihood Event function. 
// set session attribute to contain facet based scores // (this is the only input for the combined model). session().setAnalyticalScores(choice.getAnalyticalScores); // record input event against All Offers choice. CombinedLikelihood.getChoice("AllOffers").recordEvent(event); // force learn at this moment using the Internal Dock entry point. Application.getPredictor().learn(InternalLearn.modelArray, session(), session(), Application.currentTimeMillis()); // clear session attribute of facet based scores. session().setAnalyticalScores(null); In this example, Internal Learn is a special informant configured as the learn location for the combined likelihood model. The informant itself has no particular configuration and does nothing in itself; it is used only to force the model to learn at the exact instant we have set the Analytical Scores session attribute to the correct values. Reporting Results After running a few thousand (artificially skewed) simulated sessions on our ILS, the Decision Center reporting shows some interesting results. In this case, these results reflect perfectly the bias we ourselves had introduced in our tests. In practice, we would obviously use a wider range of customer attributes and expect to see some more unexpected outcomes. The facetted model for categories has clearly picked up on the that fact our simulated youngsters have little interest in purchasing the one red-hot vehicle our ILS had on offer. Also, it would seem that customer age is an excellent predictor for the acceptance of pink products. Looking at the key drivers for the All Offers choice we can see the relative importance of the different facets to the prediction of overall likelihood. The comparative importance of the category facet for overall prediction might, in part, be explained by the clear preference of younger customers for toys over other product types; as evident from the report on the predictiveness of customer age for offer category acceptance. Conclusion Oracle Real-Time Decisions' flexible decisioning framework allows for the construction of exceptionally elaborate prediction models that facilitate powerful targeting, but nonetheless provide insightful reporting. Although few customers will have a direct need for such a sophisticated solution architecture, it is encouraging to see that this lies within the realm of the possible with RTD; and this with limited configuration and customization required. There are obviously numerous other ways in which the predictive and reporting capabilities of Oracle Real-Time Decisions can be expanded upon to tailor to individual customers needs. We will not be able to elaborate on them all on this blog; and finding the right approach for any given problem is often more difficult than implementing the solution. Nevertheless, we hope that these last few posts have given you enough of an understanding of the power of the RTD framework and its models; so that you can take some of these ideas and improve upon your own strategy. As always, if you have any questions about the above—or any Oracle Real-Time Decisions design challenges you might face—please do not hesitate to contact us; via the comments below, social media or directly at Oracle. We are completely multi-channel and would be more than glad to help. :-)

    Read the article

  • How to find source of 301/302 redirect loop? Heroku GoDaddy Zerigo

    - by user179288
    this should be a relatively simple problem but I'm having trouble.I hope this is the right forum to post on as I've seen people get booted off stack-overflow for this sort of thing. I've setup a web app on heroku (cedar stack) at my-web-app.herokuapp.com and I'm trying to direct my-domain.com and www.my-domain.com to it. As per instructions on the heroku documentation, I've set my-domain.com to redirect (forwarding) to www.my-domain.com and then set a C-Name from www.my-domain.com to my-web-app.herokuapp.com. But the C-Name doesn't seem to be working right and is sending back to my-domain.com, causing a loop and I can't work out why. I first configured these setting at GoDaddy.com where I registered the domain but then tried to avoid the problem by using Heroku's Zerigo DNS add-on, setting the nameservers on GoDaddy to the ones given for Zerigo. However the problem remains. Here is the output from dig for my-domain.com ("drop-circles.com"): ; <<>> DiG 9.3.2 <<>> any drop-circles.com ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 671 ;; flags: qr rd ra; QUERY: 1, ANSWER: 8, AUTHORITY: 0, ADDITIONAL: 5 ;; QUESTION SECTION: ;drop-circles.com. IN ANY ;; ANSWER SECTION: drop-circles.com. 433 IN NS b.ns.zerigo.net. drop-circles.com. 433 IN NS d.ns.zerigo.net. drop-circles.com. 433 IN NS e.ns.zerigo.net. drop-circles.com. 433 IN NS a.ns.zerigo.net. drop-circles.com. 433 IN NS c.ns.zerigo.net. drop-circles.com. 433 IN SOA a.ns.zerigo.net. hostmaster.zerigo.com. 1372250760 10800 3600 604800 900 drop-circles.com. 433 IN A 64.27.57.29 drop-circles.com. 433 IN A 64.27.57.24 ;; ADDITIONAL SECTION: d.ns.zerigo.net. 68935 IN A 174.36.24.250 e.ns.zerigo.net. 69015 IN A 72.26.219.150 a.ns.zerigo.net. 72602 IN A 64.27.57.11 c.ns.zerigo.net. 69204 IN A 109.74.192.232 b.ns.zerigo.net. 70549 IN A 174.37.229.229 ;; Query time: 15 msec ;; SERVER: 194.168.4.100#53(194.168.4.100) ;; WHEN: Wed Jun 26 14:29:07 2013 ;; MSG SIZE rcvd: 293 Here is the output from dig for www.my-domain.com ("www.drop-circles.com"): ; <<>> DiG 9.3.2 <<>> any www.drop-circles.com ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 1608 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;www.drop-circles.com. IN ANY ;; ANSWER SECTION: www.drop-circles.com. 407 IN CNAME drop-circles-website.herokuapp.com. 
;; Query time: 19 msec ;; SERVER: 194.168.4.100#53(194.168.4.100) ;; WHEN: Wed Jun 26 14:29:15 2013 ;; MSG SIZE rcvd: 83 And from Fiddler if I use the inspector when I try either address I get a series of requests, with the my-domain.com ("drop-circles.com") looking like this: Request: GET http://drop-circles.com/ HTTP/1.1 Accept: text/html, application/xhtml+xml, */* Accept-Language: en-gb User-Agent: Opera/9.80 (Windows NT 5.1; U; Edition IBIS; Trident/5.0) Accept-Encoding: gzip, deflate Connection: Keep-Alive Host: drop-circles.com Response: HTTP/1.1 302 Found Server: nginx/0.8.54 Date: Wed, 26 Jun 2013 13:26:55 GMT Content-Type: text/html;charset=utf-8 Connection: keep-alive Status: 302 Found Location: http://www.drop-circles.com/ Content-Length: 113 <html><body>Redirecting to <a href="http://www.drop-circles.com/">http://www.drop-circles.com/</a></body></html> And the www.my-domain.com ("www.drop-circles.com") looking like this: Request: GET http://www.drop-circles.com/ HTTP/1.1 Accept: text/html, application/xhtml+xml, */* Accept-Language: en-gb User-Agent: Opera/9.80 (Windows NT 5.1; U; Edition IBIS; Trident/5.0) Accept-Encoding: gzip, deflate Connection: Keep-Alive Host: www.drop-circles.com Response: HTTP/1.1 301 Moved Permanently Content-Type: text/html Date: Wed, 26 Jun 2013 13:26:56 GMT Location: http://drop-circles.com/ Vary: Accept X-Powered-By: Express Content-Length: 104 Connection: keep-alive <p>Moved Permanently. Redirecting to <a href="http://drop-circles.com/">http://drop-circles.com/</a></p> Any and all help would be greatly appreciated. If it is not at all obvious from these readouts what it might be could someone at least tell me which company GoDaddy, Zerigo or Heroku should I go to for support since I don't really know enough to be able to say where the problem lies. Thank you.
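    Looking at the two traces, the 302 from the bare domain is GoDaddy's forwarding to www, while the 301 back again is served by the Heroku app itself (note the X-Powered-By: Express header), so the second half of the loop most likely lives in the application rather than in DNS. Purely as an illustration of the kind of code that would produce that 301 (an assumption, not the actual app), an Express canonical-host middleware looks like this; removing either it or the GoDaddy apex-to-www forwarding would break the cycle:

    var express = require('express');
    var app = express();

    // A middleware like this would emit exactly the 301 seen above, bouncing
    // www.drop-circles.com back to drop-circles.com and completing the loop.
    app.use(function (req, res, next) {
      if (req.headers.host === 'www.drop-circles.com') {
        return res.redirect(301, 'http://drop-circles.com' + req.url);
      }
      next();
    });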

    Read the article

  • Navigation in Win8 Metro Style applications

    - by Dennis Vroegop
    In Windows 8, Touch is, as they say, a first class citizen. Now, to be honest: they also said that in Windows 7. However in Win8 this is actually true. Applications are meant to be used by touch. Yes, you can still use mouse, keyboard and pen and your apps should take that into account but touch is where you should focus on initially. Will all users have touch enabled devices? No, not in the first place. I don’t think touchscreens will be on every device sold next year. But in 5 years? Who knows? Don’t forget: if your app is successful it will be around for a long time and by that time touchscreens will be everywhere. Another reason to embrace touch is that it’s easier to develop a touch-oriented app and then to make sure that keyboard, nouse and pen work as doing it the other way around. Porting a mouse-based application to a touch based application almost never works. The reverse gives you much more chances for success. That being said, there are some things that you need to think about. Most people have more than one finger, while most users only use one mouse at the time. Still, most touch-developers translate their mouse-knowledge to the touch and think they did a good job. Martin Tirion from Microsoft said that since Touch is a new language people face the same challenges they do when learning a new real spoken language. The first thing people try when learning a new language is simply replace the words in their native language to the newly learned words. At first they don’t care about grammar. To a native speaker of that other language this sounds all wrong but they still will be able to understand what the intention was. If you don’t believe me: try Google translate to translate something for you from your language to another and then back and see what happens. The same thing happens with Touch. Most developers translate a mouse-click into a tap-event and think they’re done. Well matey, you’re not done. Not by far. There are things you can do with a mouse that you cannot do with touch. Think hover. A mouse has the ability to ‘slide’ over UI elements. Touch doesn’t (I know: with Pen you can do this but I’m talking about actual fingers here). A touch is either there or it isn’t. And right-click? Forget about it. A click is a click.  Yes, you have more than one finger but the machine doesn’t know which finger you use… The other way around is also true. Like I said: most users only have one mouse but they are likely to have more than one finger. So how do we take that into account? Thinking about this is really worth the time: you might come up with some surprisingly good ideas! Still: don’t forget that not every user has touch-enabled hardware so make sure your app is useable for both groups. Keep this in mind: we’re going to need it later on! Now. Apps should be easy to use. You don’t want your user to read through pages and pages of documentation before they can use the app. Imagine that spotter next to an airfield suddenly seeing a prototype of a Concorde 2 landing on the nearby runway. He probably wants to enter that information in our app NOW and not after he’s taken a 3 day course. Even if he still has to download the app, install it for the first time and then run it he should be on his way immediately. At least, fast enough to note down the details of that unique, rare and possibly exciting sighting he just did. So.. How do we do this? Well, I am not talking about games here. Games are in a league of their own. They fall outside the scope of the apps I am describing. 
But all the others can roughly be characterized as being one of two flavors: the navigation is either flat or hierarchical. That’s it. And if it’s hierarchical it’s no more than three levels deep. Not more. Your users will get lost otherwise and we don’t want that. Flat is simple. Just imagine we have one screen that is as high as our physical screen is and as wide as you need it to be. Don’t worry if it doesn’t fit on the screen: people can scroll to the right and left. Don’t combine up/down and left/right scrolling: it’s confusing. Next to that, since most users will hold their device in landscape mode it’s very natural to scroll horizontal. So let’s use that when we have a flat model. The same applies to the hierarchical model. Try to have at most three levels. If you need more space, find a way to group the items in such a way that you can fit it in three, very wide lanes. At the highest level we have the so called hub level. This is the entry point of the app and as such it should give the user an immediate feeling of what the app is all about. If your app has categories if items then you might show these categories here. And while you’re at it: also show 2 or 3 of the items itself here to give the user a taste of what lies beneath. If the user selects a category you go to the section part. Here you show several sections (again, go as wide as you need) with again some detail examples. After that: the details layer shows each item. By giving some samples of the underlaying layer you achieve several things: you make the layer attractive by showing several different things, you show some highlights so the user sees actual content and you provide a shortcut to the layers underneath. The image below is borrowed from the http://design.windows.com website which has tons and tons of examples: For our app we’ll use this layout. So what will we show? Well, let’s see what sorts of features our app has to offer. I’ll repeat them here: Note planes Add pictures of that plane Notify friends of new spots Share new spots on social media Write down arrival times Write down departure times Write down the runway they take I am sure you can think of some more items but for now we'll use these. In the hub we’ll show something that represents “Spots”, “Friends”, “Social”. Apparently we have an inner list of spotter-friends that are in the app, while we also have to whole world in social. In the layer below we show something else, depending on what the user choose. When they choose “Spots” we’ll display the last spots, last spots by our friends (so we can actually jump from this category to the one next to it) and so on. When they choose a “spot” (or press the + icon in the App bar, which I’ll talk about next time) they go to the lowest and final level that shows details about that spot, including a picture, date and time and the notes belonging to that entry. You’d be amazed at how easy it is to organize your app this way. If you don’t have enough room in these three layers you probably could easily get away with grouping items. Take a look at our hub: we have three completely different things in one place. If you still can’t fit it all in in a logical and consistent way, chances are you are trying to do too much in this app. Go back to your mission statement, determine if it is specific enough and if your feature list helps that statement or makes it unclear. Go ahead. Give it a go! Next time we’ll talk about the look and feel, the charms and the app-bar….

    Read the article

  • What needs updating when moving a bootable Windows 7 (or Vista) partition?

    - by SuperTempel
    When I move a bootable NTFS partition with Windows on it to a different block offset, what needs updating to make it bootable again? In particular, here's what I tried: I have a disk with several partitions, one of which is the NTFS partition with Windows on it, and the disk uses the plain old MBR block 0 for the partitions layout (no more than 4 partitions). Now I format and partition a new, larger, disk. There I make room for the NTFS partition and copy the contents from the old disk's NTFS Windows partition into. And I make the partition "active". However, when I try to boot from this disk, I get a "read error" message immediately and the booting stops, the exact text is: A disk read error occurred Press Ctrl+Alt+Del to restart I verified that both disks have the same boot sector code in block 0. It seems to me that something else might need updating. I guess that somewhere there's a absolute block reference that I need to update, probably pointing to the next level loader or to the NT kernel. Update: I found this article going quite into the depth of what I want to know. However, it says to modify boot.ini, but I have Windows 7 installed here, where such things appear to have changed: No boot.ini but a folder called System Volume Information with GUID and other data in it that sounds related to my problem. Going to keep digging... Update 2: Thanks to the terrible looking but very informative website by starman, I was able to figure out the first step: The NTFS boot sector has a field for "hidden" sectors. This feld has to contain the sector number of the boot sector. This solves the "read error" message. Now, however, I get a "BOOTMGR is missing" error instead. Looks like there's another place where a block number has to be adjusted, but I can't find anything in the code listing about this. I do find a lot of help sites suggesting Windows tools for fixing this "BOOTMGR is missing" problem, but none seem to know what goes on behind the scenes. Kind of like suggesting to re-install Windows when there's a little problem with it. At least, those fixes seem to work, mostly involving the Bcdedit and Bootrec tools. Now, who knows what they do, especially the latter, in regards to a moved partition? Update 3: After lots of trial-and-error attempts, I believe now that the solution lies in the BCD-Template registry file, residing usually inside \Windows\System32\config. If I get this updated using the "bcdboot" command, Windows starts up from it. I am now in the middle of figuring out what information this registry contains relevant to the above question. Any pointers to the contents of this registry are welcome. Update 4: Turns out that while the BCD-Template file gets rewritten and has different binary contents than its predecessor, the values inside do not change. So it must be something else that bcdboot.exe writes. I had previously already checked if it changes the first 32 boot blocks of the partition, but they appear to remain unchanged. Parititon map doesn't get changed, either. So what is it that bcdboot modifies besides the BCD registry? Any tips on how I can trace that? Are there low level tools that show me what files a program writes to? Update 5: The answer seems to be: c:\Boot\BCD is also changed, and that appears to be the key file for the boot manager's process. I'll investigate this later... 
Update 6: It seems to be an important detail that I had originally two partitions created when I installed Windows 7: A small partition of 204800 sectors which appears to be a bootstrap partition, followed by the actual, large, partition containing the Windows system (drive C:). When I tried to transfer this installation to a new, larger, disk, I had kept the same two partitions intact on the new drive, although they ended up at a different offset. This alone led to the "BOOTMGR is missing" message. Since then, I've used bcdboot.exe only on the Windows partition, which added the \Boot\BCD file on that partition. That file (and folder) did originally only exist on the smaller partition. Hence, this problem may be more complicated in my case as one partition (the boot strapper) referred to another partition (the one containing the OS), whereas other people may only have to deal with one partition containing both, and maybe there the solution is simpler. Update 7: Found one more detail: The \Boot\BCD file records the MBR's serial number. If that number doesn't match, the system won't boot. Next I'll test if there's also an absolute block reference stored in there.
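    For reference, a typical repair sequence with the tools mentioned above might look like the lines below when run from the Windows recovery environment. The drive letters are only an assumption for this layout (C: = the moved Windows partition, S: = the small boot partition):

    rem Recreate the boot files and the \Boot\BCD store on the boot partition.
    bcdboot C:\Windows /s S:
    rem Rewrite the MBR boot code and the partition boot sector, then rescan for installations.
    bootrec /fixmbr
    bootrec /fixboot
    bootrec /rebuildbcd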

    Read the article

  • Android-Java: Constructing a triangle based on Coordinates on a map and your bearing

    - by Aidan
    Hi Guys, I'm constructing a geolocation based application and I'm trying to figure out a way to make my application realise when a user is facing the direction of the given location (a particular long / lat co-ord). I've got the math figured, I just have the triangle to construct. //UPDATE So I've figured out a good bit of this... Below is a method which takes in a long / lat value and attempts to compute a triangle finding a point 700 meters away and one to its left + right. It'd then use these to construct the triangle. It computes the correct longitude but the latitude ends up somewhere off the coast of east Africa. (I'm in Ireland!). public void drawtri(double currlng,double currlat, double bearing){ bearing = (bearing < 0 ? -bearing : bearing); System.out.println("RUNNING THE DRAW TRIANGLE METHOD!!!!!"); System.out.println("CURRENT LNG" + currlng); System.out.println("CURRENT LAT" + currlat); System.out.println("CURRENT BEARING" + bearing); //Find point X(x,y) double distance = 0.7; //700 meters. double R = 6371.0; //The radius of the earth. //Finding X's y value. Math.toRadians(currlng); Math.toRadians(currlat); Math.toRadians(bearing); distance = distance/R; Global.Alat = Math.asin(Math.sin(currlat)*Math.cos(distance)+ Math.cos(currlat)*Math.sin(distance)*Math.cos(bearing)); System.out.println("CURRENT ALAT!!: " + Global.Alat); //Finding X's x value. Global.Alng = currlng + Math.atan2(Math.sin(bearing)*Math.sin(distance) *Math.cos(currlat), Math.cos(distance)-Math.sin(currlat)*Math.sin(Global.Alat)); Math.toDegrees(Global.Alat); Math.toDegrees(Global.Alng); //Co-ord of Point B(x,y) // Note: Lng = X axis, Lat = Y axis. Global.Blat = Global.Alat+ 00.007931; Global.Blng = Global.Alng; //Co-ord of Point C(x,y) Global.Clat = Global.Alat - 00.007931; Global.Clng = Global.Alng; } From debugging I've determined the problem lies with the computation of the latitude done here.. Global.Alat = Math.asin(Math.sin(currlat)*Math.cos(distance)+ Math.cos(currlat)*Math.sin(distance)*Math.cos(bearing)); I have no idea why though and don't know how to fix it. I got the formula from this site.. http://www.movable-type.co.uk/scripts/latlong.html It appears correct and I've tested multiple things... I've tried converting to Radians then post computations back to degrees, etc. etc. Anyone got any ideas how to fix this method so that it will map the triangle ONLY 700 meters in from my current location in the direction that I am facing? Thanks, EDIT/// Converting the outcome to radians gives me a lat of 5.6xxxxxxxxxxxxxx .I have a feeling this bug has something to do with conversions but its not THAT simple. The equation is correct, it just.. outputs wrong..
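    For what it's worth, the likely culprit is that Math.toRadians() and Math.toDegrees() return the converted value rather than modifying their argument, so the bare calls above have no effect. A corrected sketch of the same destination-point formula (with local variable names chosen here for clarity) would be:

    double lat1 = Math.toRadians(currlat);
    double lon1 = Math.toRadians(currlng);
    double brng = Math.toRadians(bearing);
    double d    = 0.7 / 6371.0;   // 700 m expressed as an angular distance

    double lat2 = Math.asin(Math.sin(lat1) * Math.cos(d)
                + Math.cos(lat1) * Math.sin(d) * Math.cos(brng));
    double lon2 = lon1 + Math.atan2(Math.sin(brng) * Math.sin(d) * Math.cos(lat1),
                                    Math.cos(d) - Math.sin(lat1) * Math.sin(lat2));

    Global.Alat = Math.toDegrees(lat2);   // point X, 700 m ahead on the current bearing
    Global.Alng = Math.toDegrees(lon2);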

    Read the article

  • Android 1.5 - 2.1 Search Activity affects Parent Lifecycle

    - by pacoder
    Behavior seems consistent in Android 1.5 to 2.1 Short version is this, it appears that when my (android search facility) search activity is fired from the android QSR due to either a suggestion or search, UNLESS my search activity in turn fires off a VISIBLE activity that is not the parent of the search, the search parents life cycle changes. It will NOT fire onDestroy until I launch a visible activity from it. If I do, onDestroy will fire fine. I need a way to get around this behavior... The long version: We have implemented a SearchSuggestion provider and a Search activity in our application. The one thing about it that is very odd is that if the SearchManager passes control to our custom Search activity, AND that activity does not create a visible Activity the Activity which parented the search does not destroy (onDestroy doesn't run) and it will not until we call a visible Activity from the parent activity. As long as our Search Activity fires off another Activity that gets focus the parent activity will fire onDestroy when I back out of it. The trick is that Activity must have a visual component. I tried to fake it out with a 'pass through' Activity so that my Search Activity could fire off another Intent and bail out but that didn't work either. I have tried setting our SearchActivity to launch singleTop and I also tried setting its noHistory attribute to true, tried setResult(RESULT_OK) in SearchACtivity prior to finish, bunch of other things, nothing is working. This is the chunk of code in our Search Activity onCreate. Couple of notes about it: If Intent is Action_Search (user typed in their own search and didn't pick a suggestion), we display a list of results as our Search Activity is a ListActivity. In this case when the item is picked, the Search Activity closes and our parent Activity does fire onDestroy() when we back out. If Intent is Action_View (user picked a suggestion) when type is "action" we fire off an Intent that creates a new visible Activity. In this case same thing, when we leave that new activity and return to the parent activity, the back key does cause the parent activity to fire onDestroy when leaving. If Intent is Action_View (user picked a suggestion) when type is "pitem" is where the problem lies. It works fine (the method call focuses an item on the parent activity), but when the back button is hit on the parent activity onDestroy is NOT called. IF after this executes I pick an option in the parent activity that fires off another activity and return to the parent then back out it will fire onDestroy() in the parent activity. Note that the "action" intent ends up running the exact same method call as "pitem", it just bring up a new visual Activity first. Also I can take out the method call from "pitem" and just finish() and the behavior is the same, the parent activity doesn't fire onDestroy() when backed out of. 
if (Intent.ACTION_SEARCH.equals(queryAction)) { this.setContentView(_layoutId); String searchKeywords = queryIntent.getStringExtra(SearchManager.QUERY); init(searchKeywords); } else if(Intent.ACTION_VIEW.equals(queryAction)){ Bundle bundle = queryIntent.getExtras(); String key = queryIntent.getDataString(); String userQuery = bundle.getString(SearchManager.USER_QUERY); String[] keyValues = key.split("-"); if(keyValues.length == 2) { String type = keyValues[0]; String value = keyValues[1]; if(type.equals("action")) { Intent intent = new Intent(this, EventInfoActivity.class); Long longKey = Long.parseLong(value); intent.putExtra("vo_id", longKey); startActivity(intent); finish(); } else if(type.equals("pitem")) { Integer id = Integer.parseInt(value); _application._servicesManager._mapHandlerSelector.selectInfoItem(id); finish(); } } } It just seems like something is being held onto and I can't figure out what it is, in all cases the Search Activity fires onDestroy() when finish() is called so it is definitely going away. If anyone has any suggestions I'd be most appreciative. Thanks, Sean Overby

    Read the article

  • "Access is denied" JavaScript error when trying to access the document object of a programmatically-created <iframe>

    - by Bungle
    I have project in which I need to create an <iframe> element using JavaScript and append it to the DOM. After that, I need to insert some content into the <iframe>. It's a widget that will be embedded in third-party websites. I don't set the "src" attribute of the <iframe> since I don't want to load a page; rather, it is used to isolate/sandbox the content that I insert into it so that I don't run into CSS or JavaScript conflicts with the parent page. I'm using JSONP to load some HTML content from a server and insert it in this <iframe>. I have this working fine, with one serious exception - if the document.domain property is set in the parent page (which it may be in certain environments in which this widget is deployed), Internet Explorer (probably all versions, but I've confirmed in 6, 7, and 8) gives me an "Access is denied" error when I try to access the document object of this <iframe> I've created. It doesn't happen in any other browsers I've tested in (all major modern ones). This makes some sense, since I'm aware that Internet Explorer requires you to set the document.domain of all windows/frames that will communicate with each other to the same value. However, I'm not aware of any way to set this value on a document that I can't access. Is anyone aware of a way to do this - somehow set the document.domain property of this dynamically created <iframe>? Or am I not looking at it from the right angle - is there another way to achieve what I'm going for without running into this problem? I do need to use an <iframe> in any case, as the isolated/sandboxed window is crucial to the functionality of this widget. Here's my test code: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/> <title>Document.domain Test</title> <script type="text/javascript"> document.domain = 'onespot.com'; // set the page's document.domain </script> </head> <body> <p>This is a paragraph above the &lt;iframe&gt;.</p> <div id="placeholder"></div> <p>This is a paragraph below the &lt;iframe&gt;.</p> <script type="text/javascript"> var iframe = document.createElement('iframe'), doc; // create <iframe> element document.getElementById('placeholder').appendChild(iframe); // append <iframe> element to the placeholder element setTimeout(function() { // set a timeout to give browsers a chance to recognize the <iframe> doc = iframe.contentWindow || iframe.contentDocument; // get a handle on the <iframe> document alert(doc); if (doc.document) { // HEREIN LIES THE PROBLEM doc = doc.document; } doc.body.innerHTML = '<h1>Hello!</h1>'; // add an element }, 10); </script> </body> </html> I've hosted it at: http://troy.onespot.com/static/access_denied.html As you'll see if you load this page in IE, at the point that I call alert(), I do have a handle on the window object of the <iframe>; I just can't get any deeper, into its document object. Thanks very much for any help or suggestions! I'll be indebted to whomever can help me find a solution to this.
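    One workaround that has been reported for exactly this IE behaviour, offered here as a sketch rather than a guaranteed fix, is to give the dynamically created <iframe> a javascript: src that sets its document.domain to match the parent before any script touches its document:

    var iframe = document.createElement('iframe');

    // Only needed when the parent page has set document.domain (as the test page does).
    iframe.src = "javascript:void((function() {" +
                 "  document.open();" +
                 "  document.domain = '" + document.domain + "';" +
                 "  document.close();" +
                 "})())";

    document.getElementById('placeholder').appendChild(iframe);
    // After this, iframe.contentWindow.document should be reachable in IE as well.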

  • Help with SVN+SSH permissions with CentOS/WHM setup

    - by Furiam
    Hi Folks, I'll try my best to explain how I'm trying to set up this system. Imagine a production server running WHM with various sites. We'll call these sites site1, site2, site3. Now, with the WHM setup, each site has a user/group defined for it; we'll keep these users/groups called site1, site2 for simplicity.

    Updating these sites is accomplished using SVN, with a post-commit script to auto-update them (and .svn directories blocked through the Apache configuration). There are two regular maintainers of these sites; we'll call them Joe and Bob. Joe and Bob both have command-line access to the server through their respective limited accounts.

    So I've done the easy bit: I've managed to get SVN working with these maintainers so that when an SVN commit occurs, the changes are checked out and go live perfectly. Here's the caveat, and ultimately my problem: user permissions. Through my testing of this setup, I've only managed to get it working by giving whatever is being updated permissions of 777, so that Joe and Bob both have read and write access to the webfront directories for each of the sites.

    So, an example of how it's set up now: Joe and Bob both belong to a group called "dev". I have the master /svn folders set up with read and write access for this group, and it works great. The post-commit trigger updates the site and then sets 777 on each file within the webfront.

    I then changed this to try to use group permissions instead of straight 777. Each file in /home/site1/public_html initially gets a chmod of 664, and each folder 775, which looks a little something like this:

        drwxrwxr-x  .
        drwxrwxr-x  ..
        drwxrwxr-x  site1 site1  my_test_folder
        -rw-rw-r--  site1 site1  my_test_file

    So site1 is the owner and group owner of those files and folders. I then added site1 to Joe's and Bob's secondary groups so that the SVN update will correctly allow access to these files.

    Herein lies the problem. When I add a file or folder to /home/site1, say bobs_file, it then looks like this:

        drwxrwxr-x  .
        drwxrwxr-x  ..
        drwxr-xr-x  Bob   dev    bobs_folder
        drwxrwxr-x  site1 site1  my_test_folder
        -rw-rw-r--  Bob   dev    bobs_file
        -rw-rw-r--  site1 site1  my_test_file

    How can I get it so that, with the set of permissions Bob has available, the owner and group owner of that file are changed to reflect site1/site1? As Bob belongs to dev I can set the permissions correctly with chmod, but chgrp is throwing back operation errors.

    Now, this was long-winded, but it gives an overview of exactly what I'm trying to accomplish, just in case I'm going about this arse-over-tit and there's a far easier solution. Here are my goals:

    - Two people able to update sites owned by multiple user accounts, given the structure of WHM.
    - Maintain the original user/group ownership of files and folders, rather than the account of the person doing the update.
    - I like the security of SVN+SSH over just SVN.
    - Don't want to run all this as root.

    I hope this made sense, and thanks in advance :)

  • Understanding the memory consumption on iPhone

    - by zoul
    Hello! I am working on a 2D iPhone game using OpenGL ES and I keep hitting the 24 MB memory limit – my application keeps crashing with error code 101. I tried really hard to find where the memory goes, but the numbers in Instruments are still much bigger than what I would expect.

    I ran the application with the Memory Monitor, Object Alloc, Leaks, and OpenGL ES instruments. When the application gets loaded, free physical memory drops from 37 MB to 23 MB, Object Alloc settles around 7 MB, Leaks shows two or three leaks a few bytes in size, the Gart Object Size is about 5 MB, and Memory Monitor says the application takes up about 14 MB of real memory. I am perplexed as to where the memory went – when I dig into the object allocations, most of the memory is in the textures, exactly as I would expect. But both my own texture allocation counter and the Gart Object Size agree that the textures should take up somewhere around 5 MB. I am not aware of allocating anything else that would be worth mentioning, and Object Alloc agrees. Where does the memory go? (I would be glad to supply more details if this is not enough.)

    Update: I really tried to find where I could be allocating so much memory, but with no results. What drives me wild is the difference between the Object Allocations (~7 MB) and the real memory usage as shown by Memory Monitor (~14 MB). Even if there were huge leaks or huge chunks of memory I forgot about, they should still show up in the Object Allocations, shouldn't they? I've already tried the usual suspects, i.e. UIImage with its caching, but that did not help. Is there a way to track memory usage "debugger-style", line by line, watching each statement's impact on memory usage?

    What I have found so far:

    - I really am using that much memory. It is not easy to measure the real memory consumption, but after a lot of counting I think the memory consumption really is that high. My fault.
    - I found no easy way to measure the memory used. The Memory Monitor numbers are accurate (these are the numbers that really matter), but Memory Monitor can't tell you where exactly the memory goes. The Object Alloc tool is almost useless for tracking the real memory usage. When I create a texture, the allocated memory counter goes up for a while (reading the texture into memory), then drops (passing the texture data to OpenGL, freeing). This is OK, but does not always happen – sometimes the memory usage stays high even after the texture has been passed on to OpenGL and freed from "my" memory. This means that the total amount of memory allocated as shown by the Object Alloc tool is smaller than the real total memory consumption, but bigger than the real consumption minus textures (real – textures < object alloc < real). Go figure.
    - I misread the Programming Guide. The memory limit of 24 MB applies to textures and surfaces, not the whole application. The actual red line lies a bit further out, but I could not find any hard numbers. The consensus is that 25–30 MB is the ceiling.
    - When the system gets short on memory, it starts sending the memory warning. I have almost nothing to free, but other applications do release some memory back to the system, especially Safari (which seems to be caching websites). When the free memory as shown in Memory Monitor goes to zero, the system starts killing.

    I had to bite the bullet and rewrite some parts of the code to be more efficient on memory, but I am probably still pushing it. I
