Search Results

Search found 6023 results on 241 pages for 'grid computing'.


  • 5 Android Keyboard Replacements to Help You Type Faster

    - by Chris Hoffman
    Android allows developers to replace its keyboard with their own keyboard apps. This has led to experimentation and great new features, like the gesture-typing feature that’s made its way into Android’s official keyboard after proving itself in third-party keyboards. This sort of customization isn’t possible on Apple’s iOS or even Microsoft’s modern Windows environments. Installing a third-party keyboard is easy — install it from Google Play, launch it like any other app, and it will explain how to enable it. Google Keyboard Google Keyboard is Android’s official keyboard, as seen on Google’s Nexus devices. However, there’s a good chance your Android smartphone or tablet comes with a keyboard designed by its manufacturer instead. You can install the Google Keyboard from Google Play, even if your device doesn’t come with it. This keyboard offers a wide variety of features, including a built-in gesture-typing feature, as popularized by Swype. It also offers prediction, including full next-word prediction based on your previous word, and includes voice recognition that works offline on modern versions of Android. Google’s keyboard may not offer the most accurate swiping feature or the best autocorrection, but it’s a great keyboard that feels like it belongs in Android. SwiftKey SwiftKey costs $4, although you can try it free for one month. In spite of its price, many people who rarely buy apps have been sold on SwiftKey. It offers amazing auto-correction and word-prediction features. Just mash away on your touch-screen keyboard, typing as fast as possible, and SwiftKey will notice your mistakes and type what you actually meant to type. SwiftKey also now has built-in support for gesture-typing via SwiftKey Flow, so you get a lot of flexibility. At $4, SwiftKey may seem a bit pricey, but give the month-long trial a try. A great keyboard makes all the typing you do everywhere on your phone better. SwiftKey is an amazing keyboard if you tap-to-type rather than swipe-to-type. Swype While other keyboards have copied Swype’s swipe-to-type feature, none have completely matched its accuracy. Swype has been designing a gesture-typing keyboard for longer than anyone else and its gesture feature still seems more accurate than its competitors’ gesture support. If you use gesture-typing all the time, you’ll probably want to use Swype. Swype can now be installed directly from Google Play without the old, tedious process of registering a beta account and sideloading the Swype app. Swype offers a month-long free trial and the full version is available for $1 afterwards. Minuum Minuum is a crowdfunded keyboard that is currently still in beta and only supports English. We include it here because it’s so interesting — it’s a great example of the kind of creativity and experimentation that happens when you allow developers to experiment with their own forms of keyboard. Minuum uses a tiny, minimal keyboard that frees up your screen space, so your touch-screen keyboard doesn’t hog your device’s screen. Rather than displaying a full keyboard on your screen, Minuum displays a single row of letters. Each letter is small and may be difficult to hit, but that doesn’t matter — Minuum’s smart autocorrection algorithms interpret what you intended to type rather than typing the exact letters you press. Just swipe to the right to type a space and accept Minuum’s suggestion. At $4 for a beta version with no trial, Minuum may seem a bit pricey. But it’s a great example of the flexibility Android allows.
If there’s a problem with this keyboard, it’s that it’s a bit late — in an age of 5″ smartphones with 1080p screens, full-size keyboards no longer feel as cramped. MessagEase MessagEase is another example of a new take on text input. Thankfully, this keyboard is available for free. MessagEase presents all letters in a nine-button grid. To type a common letter, you’d tap the button. To type an uncommon letter, you’d tap the button, hold down, and swipe in the appropriate direction. This gives you large buttons that can work well as touch targets, especially when typing with one hand. Like any other unique twist on a traditional keyboard, you’d have to give it a few minutes to get used to where the letters are and the new way it works. After giving it some practice, you may find this is a faster way to type on a touch-screen — especially with one hand, as the targets are so large. Google Play is full of replacement keyboards for Android phones and tablets. Keyboards are just another type of app that you can swap in. Leave a comment if you’ve found another great keyboard that you prefer using. Image Credit: Cheon Fong Liew on Flickr     

    Read the article

  • DBA Best Practices - A Blog Series: Episode 1 - Backups

    - by Argenis
      This blog post is part of the DBA Best Practices series, in which various topics of concern for daily database operations are discussed. Your feedback and comments are very much welcome, so please drop by the comments section and be sure to leave your thoughts on the subject. Morning Coffee When I was a DBA, the first thing I did when I sat down at my desk at work was to check that all backups had completed successfully. It really was more of a ritual, since I had a dual system in place to check for backup completion: 1) the scheduled agent jobs to back up the databases were set to alert the NOC on failure, and 2) I had a script run from a central server every so often to check for any backup failures. Why the redundancy, you might ask. Well, for one, I was once bitten by the fact that database mail doesn't work 100% of the time. Potential causes for failure include issues on the SMTP box that relays your server email, firewall problems, DNS issues, etc. And so, to be sure that my backups completed fine, I needed to rely on a mechanism other than having the servers do the talking - I needed to interrogate the servers and ask each one if an issue had occurred. This is why I had a script run every so often. Some of you might have monitoring tools in place, like Microsoft System Center Operations Manager (SCOM) or similar 3rd party products, that would track all these things for you. But at that moment, we had no recourse but to write our own PowerShell scripts to do it. Now it goes without saying that if you don't have backups in place, you might as well find another career. Your most sacred job as a DBA is to protect the data from a disaster, and only properly safeguarded backups can offer you peace of mind here. "But, we have a cluster...we don't need backups" Sadly, I've heard this line more often than I would have liked. You need to understand that a cluster is comprised of shared storage, and that is precisely your single point of failure. A cluster will protect you from an issue at the Operating System level, and also during an outage of any SQL-related service or dependent devices. But it will most definitely NOT protect you against corruption, nor will it protect you against somebody deleting data from a table - accidentally or otherwise. Backup, fine. How often do I take a backup? The answer to this is something you will hear frequently when working with databases: it depends. What does it depend on? For one, you need to understand how much data your business is willing to lose. This is what's called the Recovery Point Objective, or RPO. If you don't know how much data your business is willing to lose, you need to have an honest and realistic conversation about data loss expectations with your customers, internal or external. From my experience, their first answer to the question "how much data loss can you withstand?" will be "zero". In that case, you will need to explain how zero data loss is very difficult and very costly to achieve, even in today's computing environments. Do you want to go ahead and take full backups of all your databases every hour, or even every day? Probably not, because of the impact that taking a full backup can have on a system. That's what differential and transaction log backups are for. Have I answered the question of how often to take a backup? No, and I did that on purpose. You need to think about how much time you have to recover from any event that requires you to restore your databases. This is what's called the Recovery Time Objective, or RTO.
Again, if you go ask your customer how long of an outage they can withstand, at first you will get a completely unrealistic number - and that will be your starting point for discussing a solution that is cost effective. The point that I'm trying to get across is that you need to have a plan. This plan needs to be practiced, and tested. Like a football playbook, you need to rehearse the moves you'll perform when the time comes. How often is up to you, and the objective is that you feel better about yourself and the steps you need to follow when emergency strikes. A backup is nothing more than an untested restore Backups are files. Files are prone to corruption. Put those two together and realize how you feel about those backups sitting on that network drive. When was the last time you restored any of those? Restoring your backups on another box - which, by the way, doesn't have to match the specs of your production server - will give you two things: 1) peace of mind, because now you know that your backups are good, and 2) a place to offload your consistency checks with DBCC CHECKDB or any of the other DBCC commands like CHECKTABLE or CHECKCATALOG. This is a great strategy for VLDBs that cannot withstand the additional load created by the consistency checks. If you choose to offload your consistency checks to another server, though, be sure to run DBCC CHECKDB WITH PHYSICAL_ONLY on the production server, and if you're using SQL Server 2008 R2 SP1 CU4 or above, be sure to enable trace flags 2562 and/or 2549, which will speed up the PHYSICAL_ONLY checks further - you can read more about this enhancement here. Back to the "How Often" question for a second. If you have the disk space, the network bandwidth, and the system resources to do so, why not back up the transaction log often? As in, every 5 minutes, or even less than that? There's not much downside to doing it, as you will have to clear the log with a backup sooner rather than later, lest you risk running out of space on your tlog, or even your drive. The one drawback to this approach is that you will have more files to deal with at restore time, and processing each file will add a bit of extra time to the entire process. But it might be worth that time knowing that you minimized the amount of data lost. Again, test your plan to make sure that it matches your particular needs. Where to back up to? Network share? Locally? SAN volume? This is another topic where everybody has a favorite choice. So, I'll stick to mentioning what I like to do and what I consider to be the best practice in this regard. I like to back up to a SAN volume, i.e., a drive that actually lives in the SAN and can be easily attached to another server in a pinch, saving you valuable time - you wouldn't need to restore files over the network (slow) or pull drives out of a dead server (been there, done that; it's also slow!). The key is to have a copy of those backup files made quickly and, if at all possible, to a remote target in a different datacenter - or even the cloud. There are plenty of solutions out there that can help you put such a solution together. That right there is the first step towards a practical Disaster Recovery plan. But there's much more to DR, and that's material for a different blog post in this series.
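    To make the frequent-log-backup and offloaded-check advice concrete, here is a minimal T-SQL sketch. Every name and path below is a hypothetical placeholder, and the commented trace flag commands assume the build mentioned above; treat this as a starting point, not the author's actual script.
      -- Hypothetical names and paths; schedule the log backup from an agent job
      -- at whatever interval your RPO dictates.
      -- On production: a cheap, frequent transaction log backup.
      BACKUP LOG [SalesDB]
          TO DISK = N'\\backupserver\sql\SalesDB_log.trn'
          WITH CHECKSUM;
      -- On production: only the lightweight physical-only consistency check.
      DBCC CHECKDB (N'SalesDB') WITH PHYSICAL_ONLY;
      -- Optionally (SQL Server 2008 R2 SP1 CU4+), enable the trace flags the post
      -- mentions to speed up PHYSICAL_ONLY checks:
      -- DBCC TRACEON (2549, -1); DBCC TRACEON (2562, -1);
      -- On a test box: prove the backup is good by actually restoring it, then
      -- run the full, expensive check there instead of on production.
      RESTORE DATABASE [SalesDB_Verify]
          FROM DISK = N'\\backupserver\sql\SalesDB_full.bak'
          WITH MOVE N'SalesDB' TO N'D:\Data\SalesDB_Verify.mdf',
               MOVE N'SalesDB_log' TO N'E:\Logs\SalesDB_Verify.ldf',
               REPLACE;
      DBCC CHECKDB (N'SalesDB_Verify') WITH NO_INFOMSGS, ALL_ERRORMSGS;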

    Read the article

  • FAQ – Highlight GridView Row on Click and Retain Selected Row on Postback

    - by Vincent Maverick Durano
    A couple of months ago I wrote a simple demo about “Highlighting GridView Row on MouseOver”. I’ve noticed many members in the forums (http://forums.asp.net) are asking how to highlight a row in a GridView and retain the selected row across postbacks, so I’ve decided to write this post to demonstrate how to implement it, as a reference for others who might need it. In this demo I am going to use a combination of plain JavaScript and jQuery to do the client-side manipulation. I presume that you already know how to bind the grid with data, because I will not include the code for populating the GridView here. For binding the GridView you can refer to this post: Binding GridView with Data the ADO.Net way or this one: GridView Custom Paging with LINQ. To get started, let’s implement the highlighting of a GridView row on row click and retain the selected row on postback. For simplicity I set up the page like this: <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> <h2>You have selected Row: (<asp:Label ID="Label1" runat="server" />)</h2> <asp:HiddenField ID="hfCurrentRowIndex" runat="server"></asp:HiddenField> <asp:HiddenField ID="hfParentContainer" runat="server"></asp:HiddenField> <asp:Button ID="Button1" runat="server" onclick="Button1_Click" Text="Trigger Postback" /> <asp:GridView ID="grdCustomer" runat="server" AutoGenerateColumns="false" onrowdatabound="grdCustomer_RowDataBound"> <Columns> <asp:BoundField DataField="Company" HeaderText="Company" /> <asp:BoundField DataField="Name" HeaderText="Name" /> <asp:BoundField DataField="Title" HeaderText="Title" /> <asp:BoundField DataField="Address" HeaderText="Address" /> </Columns> </asp:GridView> </asp:Content>   Note: Since the action is done at the client side, when we do a postback (like clicking on a button) the page will be re-created and you will lose the highlighted row. This is normal because the server doesn't know anything about the client/browser unless you do something to notify it that something has changed. To persist the selection we will use HiddenField controls to store the data, so that on postback we can reference the values from there.
Now here are the JavaScript functions: <asp:content id="Content1" runat="server" contentplaceholderid="HeadContent"> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js" type="text/javascript"></script> <script type="text/javascript"> var prevRowIndex; function ChangeRowColor(row, rowIndex) { var parent = document.getElementById(row); var currentRowIndex = parseInt(rowIndex) + 1; if (prevRowIndex == currentRowIndex) { return; } else if (prevRowIndex != null) { parent.rows[prevRowIndex].style.backgroundColor = "#FFFFFF"; } parent.rows[currentRowIndex].style.backgroundColor = "#FFFFD6"; prevRowIndex = currentRowIndex; $('#<%= Label1.ClientID %>').text(currentRowIndex); $('#<%= hfParentContainer.ClientID %>').val(row); $('#<%= hfCurrentRowIndex.ClientID %>').val(rowIndex); } $(function () { RetainSelectedRow(); }); function RetainSelectedRow() { var parent = $('#<%= hfParentContainer.ClientID %>').val(); var currentIndex = $('#<%= hfCurrentRowIndex.ClientID %>').val(); if (parent != null) { ChangeRowColor(parent, currentIndex); } } </script> </asp:content>   The ChangeRowColor() function sets the background color of the selected row. It is also where we store the row and rowIndex values in the HiddenFields. The $(function(){}); block is shorthand for the jQuery document.ready function; it fires again after each postback, which is why we call RetainSelectedRow() from there. The RetainSelectedRow() function reads the currently selected values from the HiddenFields and passes them to the ChangeRowColor() function to restore the highlighted row. Finally, here’s the code-behind part: protected void grdCustomer_RowDataBound(object sender, GridViewRowEventArgs e) { if (e.Row.RowType == DataControlRowType.DataRow) { e.Row.Attributes.Add("onclick", string.Format("ChangeRowColor('{0}','{1}');", e.Row.ClientID, e.Row.RowIndex)); } } The code above attaches the JavaScript onclick event to each row, calling the ChangeRowColor() function and passing e.Row.ClientID and e.Row.RowIndex to it. That’s it! I hope someone finds this post useful! Technorati Tags: jQuery,GridView,JavaScript,TipTricks
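    One thing the post doesn't show is the server side of the round trip. As a minimal sketch (the post leaves the Button1_Click body out, so this implementation is an assumption), the handler can read the persisted row index straight from the hidden field:
      protected void Button1_Click(object sender, EventArgs e)
      {
          // Hypothetical sketch: the selected row index survives the postback
          // because the client script stored it in hfCurrentRowIndex.
          int rowIndex;
          if (int.TryParse(hfCurrentRowIndex.Value, out rowIndex))
          {
              // Re-display the selection after the round trip; the client-side
              // RetainSelectedRow() re-applies the highlight on document.ready.
              Label1.Text = (rowIndex + 1).ToString();
          }
      }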

    Read the article

  • Training a 'replacement', how to enforce standards?

    - by Mohgeroth
    Not sure that this is the right Stack Exchange site to ask this on, but here goes... Scope I work for a small company that employs a few hundred people. The development team for the company is small and works out of Visual FoxPro. A specific department in the company hired me in as a 'lone gunman' to fix and enhance a pre-existing invoicing system. I've successfully taken an Access application that suffered from a lot of risks and limitations and converted it into a C# application driven off of a SQL Server backend. I have recently obtained my undergraduate degree and am no expert by any means. To help make up for that, I've felt that earning Microsoft certifications will force me to understand more about .NET and how it functions. So, after giving my notice 9 months in advance, 3 months ago a replacement finally showed up. Their role is to learn what I have been designing, in an attempt to support the applications designed in C#. The Replacement Fresh out of college with no real-world work experience, the replacement's first instinct for anything involving data was and still is listboxes... any time data is mentioned, the listbox is the control of choice. This has gotten to the point, no matter how many times I discuss other controls, where I've seen 5 listboxes on a single form. Classroom experience was almost all C++ console development. So, an example of where I have concern is in a WinForms application: users need to key Reasons into a table to select from later. Given that I know that a strongly typed dataset exists, I can just drag the data source from the toolbox and it would create all of this for me. I realize this is a simple example, but using databinding is the key (see the sketch below). For the past few months now we have been talking about the strongly typed dataset, how to use it and where it interacts with other controls; datasets, and how they work in relation to binding sources, adapters and data grid views. After handing this project off, I expected questions about how to implement these, since for me this is the way to do it. What happened next simply floors me: an instance of an adapter from the strongly typed dataset was created in the activate event of the form, a table was created and filled with data. Then, a loop was made to manually add rows to a listbox from this table. Finally, a variable was kept to do lookups to figure out what ID the record was, for updates if required. How do they modify records, you ask? That was my first question too. You won't believe how simple it is: all you do is double click, and they type the new value into a pop-up prompt. If I were a data entry operator, all the modal popups would drive me absolutely insane. The final solution exceeds 100 lines of code that must be maintained. So my concern is that none of this is sinking in... the department is only allowed 20 hours a week of their time. Up until last week, we've only been given 4-5 hours a week if I'm lucky. The past week or so, I've been lucky to get 10. Question WHAT DO I DO?! I have 4 weeks left until I leave and they fully 'support' this application. I love this job and the opportunity it has given me, but it's time for me to spread my wings and find something new. I am in no way, shape or form convinced that they are ready to take over. I do feel that the replacement has the technical ability to 'figure it out', but instead of learning they just write code to do all of this stuff manually.
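    To make the contrast concrete, here is a short, hedged WinForms sketch of the databinding approach being argued for. All of the names (reasonsDataSet, reasonsTableAdapter, reasonsBindingSource, and the Reasons schema) are hypothetical stand-ins for what the typed-dataset designer generates; the actual schema isn't shown in the question:
      // Sketch only - designer-generated names are invented for illustration.
      private void Form1_Load(object sender, EventArgs e)
      {
          // Fill the typed table and point a BindingSource at it. Edits flow
          // back through the binding; no manual loops or ID-tracking variables.
          reasonsTableAdapter.Fill(reasonsDataSet.Reasons);
          reasonsBindingSource.DataSource = reasonsDataSet.Reasons;

          reasonsListBox.DataSource = reasonsBindingSource;
          reasonsListBox.DisplayMember = "Description";
          reasonsListBox.ValueMember = "Id";
      }

      private void saveButton_Click(object sender, EventArgs e)
      {
          // Persist any edits made through the binding in one call.
          reasonsBindingSource.EndEdit();
          reasonsTableAdapter.Update(reasonsDataSet.Reasons);
      }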
If the replacement wants to code differently in the end, as long as it works I'm fine with that, as horrifying as it looks. However, to support what I have designed they MUST understand how it works and how I have used controls and the framework to make 'magic' happen. This project has about 40 forms, a database with over 30-some-odd tables, triggers and stored procedures. It relates labor to invoices to contracts to projections... it's not as simple as it was three years ago when I began this project, and the department is now in a position where they cannot survive without it. How in the world can I accomplish any of the following? Enforce standards or an understanding of consistent design when the department manager keeps telling them they can do it however they want to; find a way to engage the replacement in active learning of the framework and system design that support must be given for; gracefully inform sr. management that 5-9 hours a week is simply not enough time to learn about the department, pre-existing processes, applications that need to be supported AND determine where potential enhancements to the system go... Yes, I know this is a wall of text; thanks for reading through it, but I simply don't know what I should be doing. For me, this job is a monster of a reference and things would look extremely bad if I left and things fell apart. How do I handle this?

    Read the article

  • jqGrid - dynamically load different drop down values for different rows depending on another column value

    - by Renso
    Goal: As we all know, the jqGrid examples in the demo and the Wiki always refer to static values for drop down boxes. This is of course a personal preference, but in a dynamic design these values should be populated from the database/xml file, etc., ideally JSON formatted. Can you do this in jqGrid? Yes, but with some custom coding, which we will briefly show below (refer to some of my other blog entries for a more detailed discussion on this topic). What you CANNOT do in jqGrid - referring here to versions up to and including 3.8.x - is load different drop down values for different rows in the grid. Well, not without some trickery, which is what this discussion is about. Issue: Of course the issue is that jqGrid has been designed for high performance, and thus I have no issue with it loading a reference to a single drop down values list for every column. This way, whether you have 500 rows or one, each row only refers to a single list for that particular column. Nice! So how easy would it be to simply traverse the grid once loaded, on gridComplete or loadComplete, and load the select tag's options from scratch - via ajax, from a memory variable, hard coded, etc.? Impossible! Since there is no embedded SELECT tag within each cell containing the drop down values (remember, the cell only has a reference to that list in memory), all you will see when you inspect the cell prior to clicking on it, or even before and on beforeEditCell, is an empty <TD></TD>. Trying to load that list via a click event on that cell will temporarily load the list, but jqGrid's last internal callback event will remove it and replace it with the old one, and you are back to square one. Solution: Yes, after spending a few hours on this I found a solution that does not require any updates to the jqGrid source code, thank GOD! Before we get into the coding details: the solution here can of course be customized to suit your specific needs; this one loads the entire drop down list that would be needed across all rows once, into a global variable. I then parse this object, which contains all the properties I need, to filter the options depending on which ones I want the user to see, based off of another cell value in that row. This only happens when clicking the cell, so there is no performance penalty. You may of course load the list via ajax when the user clicks the cell, but I found it more efficient to load the entire list as part of jqGrid's normal editoptions: { multiple: false, value: listingStatus } colModel options, which again keeps only a reference to the single list - no duplication. Let's get into the meat and potatoes of it.
var acctId = $('#Id').val(); var data = $.ajax({ url: $('#ajaxGetAllMaterialsTrackingLookupDataUrl').val(), data: { accountId: acctId }, dataType: 'json', async: false, success: function(data, result) { if (!result) alert('Failure to retrieve the Alert related lookup data.'); } }).responseText; var lookupData = eval('(' + data + ')'); var listingCategory = lookupData.ListingCategory; var listingStatus = lookupData.ListingStatus; var catList = '{'; $(lookupData.ListingCategory).each(function() { catList += this.Id + ':"' + this.Name + '",'; }); catList += '}'; var lastsel; var ignoreAlert = true; $(item).jqGrid({ url: listURL, postData: '', datatype: "local", colNames: ['Id', 'Name', 'Commission<br />Rep', 'Business<br />Group', 'Order<br />Date', 'Edit', 'TBD', 'Month', 'Year', 'Week', 'Product', 'Product<br />Type', 'Online/<br />Magazine', 'Materials', 'Special<br />Placement', 'Logo', 'Image', 'Text', 'Contact<br />Info', 'Everything<br />In', 'Category', 'Status'], colModel: [ { name: 'Id', index: 'Id', hidden: true, hidedlg: true }, { name: 'AccountName', index: 'AccountName', align: "left", resizable: true, search: true, width: 100 }, { name: 'OnlineName', index: 'OnlineName', align: 'left', sortable: false, width: 80 }, { name: 'ListingCategoryName', index: 'ListingCategoryName', width: 85, editable: true, hidden: false, edittype: "select", editoptions: { multiple: false, value: eval('(' + catList + ')') }, editrules: { required: false }, formatoptions: { disabled: false} } ], jsonReader: { root: "List", page: "CurrentPage", total: "TotalPages", records: "TotalRecords", userdata: "Errors", repeatitems: false, id: "0" }, rowNum: $rows, rowList: [10, 20, 50, 200, 500, 1000, 2000], imgpath: jQueryImageRoot, pager: $(item + 'Pager'), shrinkToFit: true, width: 1455, recordtext: 'Traffic lines', sortname: 'OrderDate', viewrecords: true, sortorder: "asc", altRows: true, cellEdit: true, cellsubmit: "remote", cellurl: editURL + '?rows=' + $rows + '&page=1', loadComplete: function() { }, gridComplete: function() { }, loadError: function(xhr, st, err) { }, afterEditCell: function(rowid, cellname, value, iRow, iCol) { var select = $(item).find('td.edit-cell select'); $(item).find('td.edit-cell select option').each(function() { var option = $(this); var optionId = $(this).val(); $(lookupData.ListingCategory).each(function() { if (this.Id == optionId) { if (this.OnlineName != $(item).getCell(rowid, 'OnlineName')) { option.remove(); return false; } } }); }); }, search: true, searchdata: {}, caption: "List of all Traffic lines", editurl: editURL + '?rows=' + $rows + '&page=1', hiddengrid: hideGrid });
Here is the JSON data returned via the ajax call during the jqGrid function call above (NOTE: it must be { async: false }): {"ListingCategory":[{"Id":29,"Name":"Document Imaging & Management","OnlineName":"RF Globalnet"} ,{"Id":1,"Name":"Ancillary Department Hardware","OnlineName":"Healthcare Technology Online"} ,{"Id":2,"Name":"Asset Tracking","OnlineName":"Healthcare Technology Online"} ,{"Id":3,"Name":"Asset Tracking","OnlineName":"Healthcare Technology Online"} ,{"Id":4,"Name":"Asset Tracking","OnlineName":"Healthcare Technology Online"} ,{"Id":5,"Name":"Document Imaging & Management","OnlineName":"Healthcare Technology Online"} ,{"Id":6,"Name":"Document Imaging & Management","OnlineName":"Healthcare Technology Online"} ,{"Id":7,"Name":"EMR/EHR Software","OnlineName":"Healthcare Technology Online"}]} I only need the Id and Name for the drop down list, but the third property in the JSON object is important: OnlineName is the one that I match up with the OnlineName column in the jqGrid, and then, in the loop during afterEditCell, I simply remove the options I don't want the user to see. That's it!
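For completeness, here is a rough C# sketch of what the server side of that lookup call might look like. The post never shows it, so the controller, action, repository and class names below are all hypothetical; only the accountId parameter and the JSON shape are taken from the post.
      // Hypothetical reconstruction, assuming ASP.NET MVC (System.Web.Mvc).
      // LookupRepository is an invented stand-in for the real data access code.
      public class LookupItem
      {
          public int Id { get; set; }
          public string Name { get; set; }
          public string OnlineName { get; set; }
      }

      public class MaterialsTrackingController : Controller
      {
          public JsonResult GetAllMaterialsTrackingLookupData(int accountId)
          {
              // Pull both lists for the account from wherever they live (DB, cache...).
              var categories = LookupRepository.GetListingCategories(accountId);
              var statuses = LookupRepository.GetListingStatuses(accountId);

              // Property names must match what the client script reads:
              // lookupData.ListingCategory and lookupData.ListingStatus.
              return Json(new { ListingCategory = categories, ListingStatus = statuses },
                          JsonRequestBehavior.AllowGet);
          }
      }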

    Read the article

  • Projected Results

    - by Sylvie MacKenzie, PMP
    Excerpt from PROFIT - ORACLE - by Monica Mehta. Yasser Mahmud has seen a revolution in project management over the past decade. During that time, the former Primavera product strategist (who joined Oracle when his company was acquired in 2008) has not only observed a transformation in the way IT systems support corporate projects but also in the role project portfolio management (PPM) plays in the enterprise. “Fifteen years ago project management was the domain of the project management office (PMO),” Mahmud recalls of earlier days. “But over the course of the past decade, we've seen it transform into a mission-critical enterprise discipline that has made Primavera indispensable in the board room. Now, as a senior manager, a board member, or a C-level executive, you have direct and complete visibility into what's going on in the organization—at a level of detail at which you can actually consume that information.” Now serving as Oracle's vice president of product strategy and industry marketing, Mahmud shares his thoughts on how Oracle's Primavera solutions have evolved and how best-in-class project portfolio management systems can help businesses stay competitive. Profit: What do you feel are the market dynamics that are changing project management today? Mahmud: First, the data explosion. We're generating data at twice the rate at which we can actually store it. The same concept applies for project-intensive organizations. A lot of data is gathered, but what are we really doing with it? Are we turning data into insight? Are we using that insight and turning it into foresight with analytics tools? This is a key driver that will separate the very good companies—the very competitive companies—from those that are not as competitive. Another trend is centered on the explosion of mobile computing. By the year 2013, an estimated 35 percent of the world's workforce is going to be mobile. That's one billion people. So the question is not if you're going to go mobile, it's how fast you are going to go mobile. What kind of impact does that have on how the workforce participates in projects? What worked ten to fifteen years ago is not going to work today. It requires a real rethink around the interfaces and how data is actually presented. Profit: What is the role of project management in this new landscape? Mahmud: We recently conducted a PPM study with the Economist Intelligence Unit to determine how important project management is considered within organizations. Our target was primarily CFOs, CIOs, and senior managers, and we discovered that while 95 percent of participants believed it critical to their business, only six percent were confident that projects were delivered on time and on budget. That's a huge gap. Most organizations are looking for efficiency, especially in these volatile financial times. But senior management can't keep track of every project in a large organization. As a result, executives are attempting to inventory the work being conducted under their watch. What is often needed is a very high-level assessment conducted at the board level to say, “Here are the 50 initiatives that we have underway. How do they line up with our strategic drivers?” This line of questioning can provide early warning that work and strategy are out of alignment, finding the gap between what the business needs to do and the actual performance scorecard. That's low-hanging fruit for any executive looking to increase efficiency and save money.
But it can only be obtained through proper assessment of existing projects—and you need a project system of record to get that done. Over the next decade or so, project management is going to transform into holistic work management. Business leaders will want to make sure key projects align with corporate strategy, but they will also want the ability to drill down into daily activity and smaller projects to make sure those line up as well. Keeping employees from working on tasks—even for a few hours—that don't line up with corporate goals will, in many ways, become a competitive differentiator. Profit: How do all of these market challenges and shifting trends impact Oracle's Primavera solutions and meeting customers' needs? Mahmud: For Primavera, it's a transformation from being a project management application to a PPM system in the enterprise. It also means making that system a mission-critical application by connecting it to other key applications within the ecosystem, such as the enterprise resource planning (ERP), supply chain, and CRM systems. Analytics have also become a huge component. Business analytics have made Oracle's Primavera applications pertinent in the boardroom. Now, as a senior manager, a board member, a CXO, CIO, or CEO, you have direct visibility into what's going on in the organization at a level at which you're able to consume that information. In addition, all of this information pairs up really well with your financials and other data. Certainly, when you're an Oracle shop, you have visibility that you didn't have before from a project execution perspective. Profit: What new strategies and tools are being implemented to create a more efficient workplace for users? Mahmud: We believe very strongly that just because you call something an enterprise project portfolio management system doesn't make it so—you have to get people to want to participate in the system. This can't be mandated down from the top. It simply doesn't work that way. A truly adoptable solution is one that makes it super easy for all types of users to participate, by providing them interfaces where they live. Keeping that in mind, a major area of development has been alternative user interfaces. This is increasingly resulting in the creation of lighter-weight, targeted interfaces, such as native iOS applications and smartphone interfaces for the iPhone and Android platforms. Profit: How does this translate into the development of Oracle's Primavera solutions? Mahmud: Let me give you a few examples. We recently announced the launch of our Primavera P6 Team Member application, which is a native iOS application for the iPhone. This interface makes it easier for team members to do their jobs quickly and effectively. Similarly, we introduced the Primavera analytics application, which can be consumed via mobile devices and, when married with Oracle Spatial capabilities, gives users a geographical view of what's going on and which projects are occurring in various locations around the world. Lastly, we introduced advanced email integration that allows project team members to status work via email. This functionality leverages the fact that users are in their email system throughout the day, and allows them to status their work without the need to launch the Primavera application. It comes back to a mantra: provide as many alternative user interfaces as possible, so you can give people the ability to work, to participate, to raise issues, to create projects, in the places where they live.
Do it in such a way that it’s non-intrusive, do it in such a way that it’s easy and intuitive and they can get it done in a short amount of time. If you do that, workers can get back to doing what they're actually getting paid for.

    Read the article

  • Visual Tree Enumeration

    - by codingbloke
    I feel compelled to post this blog because I find I'm repeatedly posting this same code in Silverlight and Windows Phone 7 answers on Stack Overflow. One common task that we feel we need to do is burrow into the visual tree in a Silverlight or Windows Phone 7 application (actually, more recently I found myself doing this in WPF as well). This allows access to details that aren't exposed directly by some controls. A good example of this sort of requirement is found in the “Restoring exact scroll position of a listbox in Windows Phone 7” question on Stack Overflow, which required that the scroll position of the scroll viewer internal to a listbox be accessed. A caveat One caveat here is that we should seriously challenge the need for this burrowing, since it may indicate that there is a design problem. Burrowing into the visual tree, or indeed burrowing out to containing ancestors, could represent significant coupling between module boundaries, and that generally isn't a good idea. Why isn't this idea just cast aside as a no-no? Well, the whole concept of a “Templated Control”, which is in extensive use in these applications, opens up coupling between the content of the visual tree and the internal code of a control. For example, I can completely change the appearance and positioning of the elements that make up a ComboBox. The ComboBox control relies on specific template parts - elements with set names and specified types - being present in my template. Rightly or wrongly, this does kind of give license to writing code that has similar coupling. Hasn't this been done already? Yes it has. There are a number of blogs already out there with similar solutions. In fact, if you are using the Silverlight Toolkit, the VisualTreeExtensions class already provides this feature. However, I prefer my specific code because of the simplicity principle I hold to: only write the minimum code necessary to give all the features needed. In this case I add just two extension methods, Ancestors and Descendents; note I don't bother with “Get” or “Visual” prefixes. Also, I haven't added Parent or Children methods, nor additional “AndSelf” methods, because all but Children are achievable with the addition of some other Linq methods. I decided to give Descendents an additional overload for depth, hence a depth of 1 is equivalent to Children, but this overload is a little more flexible than simply Children.
So here is the code:- VisualTreeEnumeration public static class VisualTreeEnumeration { public static IEnumerable<DependencyObject> Descendents(this DependencyObject root, int depth) { int count = VisualTreeHelper.GetChildrenCount(root); for (int i = 0; i < count; i++) { var child = VisualTreeHelper.GetChild(root, i); yield return child; if (depth > 1) { // recurse with depth - 1 so every sibling subtree is searched to the same remaining depth; a depth of 1 therefore yields just the children foreach (var descendent in Descendents(child, depth - 1)) yield return descendent; } } } public static IEnumerable<DependencyObject> Descendents(this DependencyObject root) { return Descendents(root, Int32.MaxValue); } public static IEnumerable<DependencyObject> Ancestors(this DependencyObject root) { DependencyObject current = VisualTreeHelper.GetParent(root); while (current != null) { yield return current; current = VisualTreeHelper.GetParent(current); } } }   Usage examples The following are some examples of how to combine the above extension methods with Linq to generate the other axis scenarios that tree traversal code might require. Missing Axis Scenarios var parent = control.Ancestors().Take(1).FirstOrDefault(); var children = control.Descendents(1); var previousSiblings = control.Ancestors().Take(1).SelectMany(p => p.Descendents(1).TakeWhile(c => c != control)); var followingSiblings = control.Ancestors().Take(1).SelectMany(p => p.Descendents(1).SkipWhile(c => c != control).Skip(1)); var ancestorsAndSelf = Enumerable.Repeat((DependencyObject)control, 1).Concat(control.Ancestors()); var descendentsAndSelf = Enumerable.Repeat((DependencyObject)control, 1).Concat(control.Descendents()); You might ask why I don't just include these in VisualTreeEnumeration. I don't, on the principle of only including code that is actually needed. If you find that one or more of the above is needed in your code, then go ahead and create additional methods. One of the downsides to extension methods is that they can make finding the method you actually want in intellisense harder. Here are some real world usage scenarios for these methods:- Real World Scenarios //Gets the internal scrollviewer of a ListBox ScrollViewer sv = someListBox.Descendents().OfType<ScrollViewer>().FirstOrDefault(); // Get all text boxes in current UserControl:- var textBoxes = this.Descendents().OfType<TextBox>(); // All UIElement direct children of the layout root grid:- var topLevelElements = LayoutRoot.Descendents(1).OfType<UIElement>(); // Find the containing `ListBoxItem` for a UIElement:- var container = elem.Ancestors().OfType<ListBoxItem>().FirstOrDefault(); // Seek a button with the name "PinkElephants" even if outside of the current Namescope:- var pinkElephantsButton = this.Descendents().OfType<Button>().FirstOrDefault(b => b.Name == "PinkElephants"); //Clear all checkboxes with the name "Selector" in a Treeview foreach (CheckBox checkBox in elem.Descendents().OfType<CheckBox>().Where(c => c.Name == "Selector")) { checkBox.IsChecked = false; }   The last couple of examples above demonstrate a common requirement: finding controls that have a specific name. FindName will often not find these controls because they exist in a different namescope. Hope you find this useful; if not, I'm just glad to be able to link to this blog in future Stack Overflow answers.

    Read the article


  • Centered Content using panelGridLayout

    - by Duncan Mills
    A classic layout conundrum, which I think pretty much every ADF developer may have faced at some time or other, is that of truly centered (centred) layout. Typically this requirement comes up in relation to, say, displaying a login type screen or similar. Superficially the problem seems easy, but as my buddy Eduardo explained when discussing this subject a couple of years ago, it's actually a little more complex than you might have thought. In fact, even the "solution" provided in that posting is not perfect and suffers from several issues (not Eduardo's fault, just limitations of panelStretchLayout!): The top, bottom, end and start facets all need something in them. The percentages you apply to topHeight, startWidth etc. are calculated as part of the whole width. This means that you have to guesstimate the correct percentage based on your typical screen size and the sizing of the centered content. So, at best, you will in fact only get approximate centering, and the more you tune that centering for a particular browser size, the more it will fail if the user resizes. You can't attach styles to the panelStretchLayout facets, so to provide things like background color or fixed sizing you need to embed another container that you can apply styles to, typically a panelGroupLayout.   For reference, here's the code to produce a simple 100px x 100px red centered square using the panelStretchLayout solution, approximately tuned to a 1920 x 1080 maximized browser (IDs omitted for brevity): <af:panelStretchLayout startWidth="45%" endWidth="45%" topHeight="45%" bottomHeight="45%" >   <f:facet name="center">     <af:panelGroupLayout inlineStyle="height:100px;width:100px;background-color:red;" layout="vertical"/>   </f:facet>   <f:facet name="top">     <af:spacer height="1" width="1"/>   </f:facet>   <f:facet name="bottom">     <af:spacer height="1" width="1"/>   </f:facet>   <f:facet name="start">     <af:spacer height="1" width="1"/>   </f:facet>   <f:facet name="end">     <af:spacer height="1" width="1"/>    </f:facet> </af:panelStretchLayout>  And so to panelGridLayout  So here's the good news: panelGridLayout makes this really easy, and it works without the caveats above.  The key point is that percentages used in the grid definition are evaluated after the fixed sizes are taken into account, so rather than having to guesstimate what percentage will, more or less, center the content, you can just say "allocate half of what's left" to the flexible content and you're done. Here's the same example using panelGridLayout: <af:panelGridLayout> <af:gridRow height="50%"/> <af:gridRow height="100px"> <af:gridCell width="50%" /> <af:gridCell width="100px" halign="stretch" valign="stretch"  inlineStyle="background-color:red;"> <af:spacer width="1" height="1"/> </af:gridCell> <af:gridCell width="50%" /> </af:gridRow> <af:gridRow height="50%"/> </af:panelGridLayout>  So you can see that the amount of markup is somewhat smaller (as is, I should mention, the generated DOM structure in the browser), mainly because we don't need to introduce artificial components to ensure that the facets are actually observed in the final result.  But the key thing here is that the centering is no longer approximate, and it will work as expected as the user resizes the browser window.  By far this is a more satisfactory solution, and although it's only a simple example, it will hopefully open your eyes to the potential of panelGridLayout as your number one, go-to layout container.
Just a reminder though, right now, panelGridLayout is only available in 11.1.2.2 and above.

    Read the article

  • PanelGridLayout - A Layout Revolution

    - by Duncan Mills
    With the most recent 11.1.2 patchset (11.1.2.3) there has been a lot of excitement around ADF Essentials (and rightly so); however, in all the fuss I didn't want an even more significant change to get missed - yes, you read that correctly, a more significant change! I'm talking about the new panelGridLayout component. I can confidently say that this is one of the most revolutionary components that we've introduced in 11g, even though it sounds rather boring. To be totally accurate, panelGridLayout was introduced in 11.1.2.2, but without any presence in the component palette or other design time support, so it was largely missed unless you read the release notes. However, in this latest patchset it's finally front and center. It's time to explore - we (really) need to talk about layout.  Let's face it, with ADF Faces rich client, layout is a rather arcane pursuit. Once you are a layout master, all bow before you, but it's more of an art than a science, and it is often, in fact, way too difficult to achieve what should (apparently) be a pretty simple result. Here's a great example - it's a homework assignment I set for folks I'm teaching this stuff to:  The requirements for this layout are: The header is 80px high, the footer is 30px. These are both fixed.  The first section of the header containing the logo is 180px wide. The logo is centered within the top left hand corner of the header.  The title text is start-aligned in the center zone of the header and will wrap if the browser window is narrowed. It should be aligned in the center of the vertical space.  The about link is anchored to the right hand side of the browser with a 20px gap and again is center-aligned vertically. It will move as the browser window is reduced in width. The footer has a right-aligned copyright statement, again middle-aligned within a 30px high footer region and with a 20px buffer to the right hand edge. It will move as the browser window is reduced in width. All remaining space is given to a central zone, which, in this case, contains a panelSplitter. Expect that at some point in time you'll need a separate messages line in the center of the footer.  In the homework assignment I set, I also stipulate that no inlineStyles can be used to control alignment or margins and no use of other taglibs (e.g. JSF HTML or Trinidad HTML). So, if we take this purist approach, that basic page layout (in my stock solution) requires 3 panelStretchLayouts, 5 panelGroupLayouts and 4 spacers - not including the spacer I use for the logo and the contents of the central zone splitter - phew! The point is that even a seemingly simple layout needs a bit of thinking about, particularly when you consider stretching and browser re-size behavior. In fact, this little sample actually teaches you much of what you need to know to become vaguely competent at layouts in the framework. The underlying result of "the way things are" is that most of us reach for panelStretchLayout before even finishing the first sip of coffee as we embark on a new page design. In fact, most pages you will see in any moderately complex ADF application will basically be nested panelStretchLayouts and panelGroupLayouts, sometimes many, many levels deep. So this is a problem; we've known this for some time, and now we have a good solution. (I should point out that the oft-used Trinidad trh tags are not a particularly good solution, as you're tying yourself to an HTML table based layout in that case, with a host of attendant issues in resize and bi-di behavior, but I digress.)
So, tadaaa, I give you panelGridLayout. PanelGridLayout, as the name suggests, takes a grid-like (dare I say slightly GridBag-like) approach to layout, dividing your layout into rows and columns with margins, sizing, stretch behaviour, colspans and rowspans all rolled in, all without the use of inlineStyle. As such, it provides a much more powerful and concise way of defining a layout such as the one above, one that is actually simpler and much more logical to design. The basic building blocks are the panelGridLayout itself, gridRow and gridCell. Your content sits inside the cells inside the rows, all helpfully allowing stretching, valign and halign definitions without the need to nest further panelGroupLayouts. So much simpler!  If I break down the homework example above, my nested conglomerate of 12 containers and spacers can be condensed down into a single panelGrid with 3 rows and 5 cell definitions (39 lines of source reduced to 24 in the case of the sample); a rough sketch of what that might look like follows below. What's more, the actual runtime representation in the browser DOM is much, much simpler and cleaner, with basically one DIV per cell. (Note that just because the panelGridLayout semantics look like an HTML table does not mean that it's rendered that way!) Another hidden benefit is the runtime cost. Because we can use a single layout to achieve much more complex geometries, the client-side layout code inside the browser is having to work a lot less. This will be a real benefit if your application needs to run on lower-powered clients such as netbooks or tablets. So, it's time, if you're on 11.1.2.2 or above, to smile warmly at your panelStretchLayouts, wrap the blanket around their knees and wheel them off to the Sunset Retirement Home for a well deserved rest. There's a new kid on the block and it wants to be your friend.
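To make that concrete, here is a rough, untested sketch of how the homework layout might map onto panelGridLayout. The author's actual 24-line solution isn't shown in the post, so treat this purely as an illustration of the row/cell idea; the content placeholders and the exact margin and width attribute values are assumptions, not the stock solution:
      <af:panelGridLayout>
        <!-- Fixed 80px header: 180px logo cell, flexible title cell, about link at the end -->
        <af:gridRow height="80px">
          <af:gridCell width="180px" halign="center" valign="middle">
            <!-- logo image here -->
          </af:gridCell>
          <af:gridCell width="100%" halign="start" valign="middle">
            <!-- wrapping title text here -->
          </af:gridCell>
          <af:gridCell width="auto" valign="middle" marginEnd="20px">
            <!-- about link here -->
          </af:gridCell>
        </af:gridRow>
        <!-- Center zone takes all remaining vertical space -->
        <af:gridRow height="100%">
          <af:gridCell columnSpan="3" width="100%" halign="stretch" valign="stretch">
            <!-- panelSplitter here -->
          </af:gridCell>
        </af:gridRow>
        <!-- Fixed 30px footer with right-aligned copyright -->
        <af:gridRow height="30px">
          <af:gridCell columnSpan="3" width="100%" halign="end" valign="middle" marginEnd="20px">
            <!-- copyright text here -->
          </af:gridCell>
        </af:gridRow>
      </af:panelGridLayout>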

    Read the article

  • Why do we use Pythagoras in game physics?

    - by Starkers
    I've recently learned that we use Pythagoras a lot in our physics calculations and I'm afraid I don't really get the point. Here's an example from a book to make sure an object doesn't travel faster than a MAXIMUM_VELOCITY constant in the horizontal plane: MAXIMUM_VELOCITY = <any number>; SQUARED_MAXIMUM_VELOCITY = MAXIMUM_VELOCITY * MAXIMUM_VELOCITY; function animate(){ var squared_horizontal_velocity = (x_velocity * x_velocity) + (z_velocity * z_velocity); if( squared_horizontal_velocity >= SQUARED_MAXIMUM_VELOCITY ){ scalar = squared_horizontal_velocity / SQUARED_MAXIMUM_VELOCITY; x_velocity = x_velocity / scalar; z_velocity = z_velocity / scalar; } } Let's try this with some numbers: An object is attempting to move 5 units in x and 5 units in z. It should only be able to move 5 units horizontally in total! MAXIMUM_VELOCITY = 5; SQUARED_MAXIMUM_VELOCITY = 5 * 5; SQUARED_MAXIMUM_VELOCITY = 25; function animate(){ var x_velocity = 5; var z_velocity = 5; var squared_horizontal_velocity = (x_velocity * x_velocity) + (z_velocity * z_velocity); var squared_horizontal_velocity = 5 * 5 + 5 * 5; var squared_horizontal_velocity = 25 + 25; var squared_horizontal_velocity = 50; // if( squared_horizontal_velocity >= SQUARED_MAXIMUM_VELOCITY ){ if( 50 >= 25 ){ scalar = squared_horizontal_velocity / SQUARED_MAXIMUM_VELOCITY; scalar = 50 / 25; scalar = 2.0; x_velocity = x_velocity / scalar; x_velocity = 5 / 2.0; x_velocity = 2.5; z_velocity = z_velocity / scalar; z_velocity = 5 / 2.0; z_velocity = 2.5; // new_horizontal_velocity = x_velocity + z_velocity // new_horizontal_velocity = 2.5 + 2.5 // new_horizontal_velocity = 5 } } Now this works well, but we can do the same thing without Pythagoras: MAXIMUM_VELOCITY = 5; function animate(){ var x_velocity = 5; var z_velocity = 5; var horizontal_velocity = x_velocity + z_velocity; var horizontal_velocity = 5 + 5; var horizontal_velocity = 10; // if( horizontal_velocity >= MAXIMUM_VELOCITY ){ if( 10 >= 5 ){ scalar = horizontal_velocity / MAXIMUM_VELOCITY; scalar = 10 / 5; scalar = 2.0; x_velocity = x_velocity / scalar; x_velocity = 5 / 2.0; x_velocity = 2.5; z_velocity = z_velocity / scalar; z_velocity = 5 / 2.0; z_velocity = 2.5; // new_horizontal_velocity = x_velocity + z_velocity // new_horizontal_velocity = 2.5 + 2.5 // new_horizontal_velocity = 5 } } Benefits of doing it without Pythagoras: Fewer lines Within those lines, it's easier to read what's going on ...and it takes less time to compute, as there are fewer multiplications Seems to me like computers and humans get a better deal without Pythagoras! However, I'm sure I'm wrong, as I've seen Pythagoras' theorem in a number of reputable places, so I'd like someone to explain the benefit of using Pythagoras to a maths newbie. Does this have anything to do with unit vectors? To me a unit vector is when we normalize a vector and turn it into a fraction. We do this by dividing the vector by a larger constant. I'm not sure what constant it is. The total size of the graph? Anyway, because it's a fraction, I take it, a unit vector is basically a graph that can fit inside a 3D grid with the x-axis running from -1 to 1, the z-axis running from -1 to 1, and the y-axis running from -1 to 1. That's literally everything I know about unit vectors... not much :P And I fail to see their usefulness. Also, we're not really creating a unit vector in the above examples. Should I be determining the scalar like this: // a mathematical work-around of my own invention. There may be a cleverer way to do this! 
I've also made up my own terms, such as 'divisive_scalar', so don't bother googling them. var divisive_scalar = (squared_horizontal_velocity / SQUARED_MAXIMUM_VELOCITY); var divisive_scalar = ( 50 / 25 ); var divisive_scalar = 2; var multiplicative_scalar = (divisive_scalar / (2*divisive_scalar)); var multiplicative_scalar = (2 / (2*2)); var multiplicative_scalar = (2 / 4); var multiplicative_scalar = 0.5; x_velocity = x_velocity * multiplicative_scalar x_velocity = 5 * 0.5 x_velocity = 2.5 Again, I can't see why this is better, but it's more "unit-vector-y" because the multiplicative_scalar is a unit vector? As you can see, I use words such as "unit-vector-y", so I'm really not a maths whiz! I'm also aware that unit vectors might have nothing to do with Pythagoras, so ignore all of this if I'm barking up the wrong tree. I'm a very visual person (3D modeller and concept artist by trade!) and I find diagrams and graphs really, really helpful, so as many as humanly possible, please!
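    For context, a worked sketch of what Pythagoras is buying here (my own illustrative code, in the same style as the question, not from the book): x_velocity + z_velocity is not the actual speed. Moving 2.5 units in x and 2.5 units in z covers sqrt(2.5*2.5 + 2.5*2.5) ≈ 3.54 units of actual distance, not 5, so the component-sum version over-corrects every diagonal movement. The true speed is the length of the (x, z) vector, and the right scale factor is the ratio of that length to the maximum:
    MAXIMUM_VELOCITY = 5;

    function clampHorizontalVelocity(x_velocity, z_velocity){
        // True horizontal speed: the length of the (x, z) vector.
        var speed = Math.sqrt( (x_velocity * x_velocity) + (z_velocity * z_velocity) );
        if( speed > MAXIMUM_VELOCITY ){
            // Dividing both components by (speed / max) shrinks the vector to
            // length MAXIMUM_VELOCITY without changing its direction.
            var scalar = speed / MAXIMUM_VELOCITY;
            x_velocity = x_velocity / scalar;
            z_velocity = z_velocity / scalar;
        }
        return { x: x_velocity, z: z_velocity };
    }

    // With x = 5, z = 5: speed = sqrt(50) ≈ 7.07 and scalar ≈ 1.41, so each
    // component becomes ≈ 3.54, and sqrt(3.54*3.54 + 3.54*3.54) = 5, exactly
    // the maximum. The squared comparison in the book exists only to avoid
    // calling sqrt when no clamping is needed; once you do clamp, the square
    // root has to appear somewhere. And (x_velocity / speed, z_velocity / speed)
    // is precisely the unit vector the question is reaching for: a length-1
    // vector that keeps the direction and discards the magnitude.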

    Read the article

  • CPU Usage in Very Large Coherence Clusters

    - by jpurdy
    When sizing Coherence installations, one of the complicating factors is that these installations (by their very nature) tend to be application-specific, with some being large, memory-intensive caches, others acting as I/O-intensive transaction-processing platforms, and still others performing CPU-intensive calculations across the data grid. Regardless of the primary resource requirements, Coherence sizing calculations are inherently empirical, in that there are so many permutations that a simple spreadsheet approach to sizing is rarely optimal (though it can provide a good starting estimate). So we typically recommend measuring actual resource usage (primarily CPU cycles, network bandwidth and memory) at a given load, and then extrapolating from those measurements. Of course there may be multiple types of load, and these may have varying degrees of correlation -- for example, an increased request rate may drive up the number of objects "pinned" in memory at any point, but the increase may be less than linear if those objects are naturally shared by concurrent requests. But for most reasonably designed applications, a linear resource model will be reasonably accurate for most levels of scale. However, at extreme scale, sizing becomes a bit more complicated, as certain cluster management operations -- while very infrequent -- become increasingly critical. This is because certain operations do not naturally tend to scale out. In a small cluster, sizing is primarily driven by the request rate, required cache size, or other application-driven metrics. In larger clusters (e.g. those with hundreds of cluster members), certain infrastructure tasks become intensive, in particular those related to members joining and leaving the cluster, such as introducing new cluster members to the rest of the cluster, or publishing the location of partitions during rebalancing. These tasks have a strong tendency to require all updates to be routed via a single member for the sake of cluster stability and data integrity. Fortunately that member is dynamically assigned in Coherence, so it is not a single point of failure, but it may still become a single point of bottleneck (until the cluster finishes its reconfiguration, at which point this member will have a similar load to the rest of the members). The most common cause of scaling issues in large clusters is disabling multicast (by configuring well-known addresses, aka WKA). This obviously impacts network usage, but it also has a large impact on CPU usage, primarily since the senior member must directly communicate certain messages with every other cluster member, and this communication requires significant CPU time. In particular, the need to notify the rest of the cluster about membership changes and corresponding partition reassignments adds stress to the senior member. Given that portions of the network stack may tend to be single-threaded (both in Coherence and the underlying OS), this may be even more problematic on servers with poor single-threaded performance. As a result of this, some extremely large clusters may be configured with a smaller number of partitions than ideal. This results in the size of each partition being increased. When a cache server fails, the other servers will use their fractional backups to recover the state of that server (and take over responsibility for their backed-up portion of that state). 
The finest granularity of this recovery is a single partition, and the single service thread cannot accept new requests during this recovery. Ordinarily, recovery is practically instantaneous (it is roughly equivalent to the time required to iterate over a set of backup backing map entries and move them to the primary backing map in the same JVM). But certain factors can increase this duration drastically (to several seconds): large partitions, sufficiently slow single-threaded CPU performance, many or expensive indexes to rebuild, etc. The solution, of course, is to mitigate each of those factors, but in many cases this may be challenging. Larger clusters also lead to the temptation to place more load on the available hardware resources, spreading CPU resources thin. As an example, while we've long been aware of how garbage collection can cause significant pauses, it usually isn't viewed as a major consumer of CPU (in terms of overall system throughput). Typically, the use of a concurrent collector allows greater responsiveness by minimizing pause times, at the cost of reducing system throughput. However, at a recent engagement, we were forced to turn off the concurrent collector and use a traditional parallel "stop the world" collector to reduce CPU usage to an acceptable level. In summary, there are some less obvious factors that may result in excessive CPU consumption in a larger cluster, so it is even more critical to test at full scale, even though allocating sufficient hardware may often be much more difficult for these large clusters.
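    To put illustrative numbers on the partition-size point (round figures of my own, not from any particular engagement): a cache service holding 200 GB of primary data across 1021 partitions averages roughly 200 MB per partition, so recovering a single partition means iterating over roughly 200 MB of backup entries while the service thread is unavailable; at 4093 partitions the same data averages roughly 50 MB per partition, cutting that window by a factor of four. That is the trade-off hiding behind the reduced partition counts described above.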

    Read the article

  • Using EUSM to manage EUS mappings in OUD

    - by Sylvain Duloutre
    EUSM is a command-line tool that can be used to manage EUS settings, starting with the 11.1 release of Oracle. In the 11.1 release the tool is not yet documented in the Oracle EUS documentation, but this is planned for a coming release. The same commands used by EUSM can be performed from the Database Console GUI or from Grid Control*. For more details, search for document ID 1085065.1 on OTN. The examples below don't include all the EUSM options, only the options that are used by EUS. EUSM is user-friendly and intuitive. Typing eusm help <option> lists the parameters to be used for any of the available options. Here are the options related to connectivity with OUD: ldap_host="gnb.fr.oracle.com" - name of the OUD server. ldap_port=1389 - non-SSL (SASL) port used for OUD connections. ldap_user_dn="cn=directory manager" - OUD administrator name. ldap_user_password="welcome1" - OUD administrator password. Common commands are shown below: To list enterprise roles in OUD: eusm listEnterpriseRoles domain_name=<Domain> realm_dn=<realm> ldap_host=<hostname> ldap_port=<port> ldap_user_dn=<oud administrator> ldap_user_password=<oud admin password> To list mappings: eusm listMappings domain_name=<Domain> realm_dn=<realm> ldap_host=<hostname> ldap_port=<port> ldap_user_dn=<oud admin> ldap_user_password=<oud admin password> To list enterprise role info: eusm listEnterpriseRoleInfo enterprise_role=<rdn of enterprise role> domain_name=<Domain> realm_dn=<realm> ldap_host=<hostname> ldap_port=<port> ldap_user_dn="<oud admin>" ldap_user_password=<oud admin password> To create an enterprise role: eusm createRole enterprise_role=<rdn of the enterprise role> domain_name=<Domain> realm_dn=<realm> ldap_host=<hostname> ldap_port=<port> ldap_user_dn="<oud admin>" ldap_user_password=<oud admin password> To create a user-schema mapping: eusm createMapping database_name=<SID of target database> realm_dn="<realm>" map_type=<ENTRY/SUBTREE> map_dn="<dn of enterprise user>" schema="<name of the shared schema>" ldap_host=<oud hostname> ldap_port=<port> ldap_user_dn="<oud admin>" ldap_user_password="<oud admin password>" To create a proxy permission: eusm createProxyPerm proxy_permission=<name of the proxy permission> domain_name=<Domain> realm_dn="<realm>" ldap_host=<hostname> ldap_port=<port> ldap_user_dn="<oud admin>" ldap_user_password=<oud admin password> To grant a proxy permission to a proxy group: eusm grantProxyPerm proxy_permission=<name of the proxy permission> domain_name=<Domain> realm_dn="<realm>" ldap_host=<hostname> ldap_port=<port> ldap_user_dn="<oud admin>" ldap_user_password=<password> group_dn="<dn of the enterprise group>" To map a proxy permission to a proxy user in the DB: eusm addTargetUser proxy_permission=<name of the proxy permission> domain_name=<Domain> realm_dn="<realm>" ldap_host=<hostname> ldap_port=<port> ldap_user_dn="<oud admin>" ldap_user_password=<oud admin password> database_name=<SID of the target database> target_user=<target database user> dbuser=<database user with DBA privileges> dbuser_password=<database user password> dbconnect_string=<database_host>:<port>:<DBSID> Enterprise role to global role mapping: eusm addGlobalRole enterprise_role=<rdn of the enterprise role> domain_name=<Domain> realm_dn="<realm>" database_name=<SID of the target database> global_role=<name of the global role defined in the target database> dbuser=<database user> dbuser_password=<database user password> dbconnect_string=<database_host>:<port>:<DBSID> ldap_host=<oud hostname> ldap_port=<port> ldap_user_dn="<oud admin>" 
ldap_user_password=<oud admin password>
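    As a concrete illustration, here is one of the commands above with the sample connectivity values plugged in. The domain and realm values are hypothetical placeholders (OracleDefaultDomain is simply a typical default domain name), so substitute your own:
    eusm listMappings domain_name=OracleDefaultDomain \
         realm_dn="dc=fr,dc=oracle,dc=com" \
         ldap_host=gnb.fr.oracle.com ldap_port=1389 \
         ldap_user_dn="cn=directory manager" ldap_user_password=welcome1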

    Read the article

  • Why do we use the Pythagorean theorem in game physics?

    - by Starkers
    I've recently learned that we use the Pythagorean theorem a lot in our physics calculations and I'm afraid I don't really get the point. Here's an example from a book to make sure an object doesn't travel faster than a MAXIMUM_VELOCITY constant in the horizontal plane: MAXIMUM_VELOCITY = <any number>; SQUARED_MAXIMUM_VELOCITY = MAXIMUM_VELOCITY * MAXIMUM_VELOCITY; function animate(){ var squared_horizontal_velocity = (x_velocity * x_velocity) + (z_velocity * z_velocity); if( squared_horizontal_velocity >= SQUARED_MAXIMUM_VELOCITY ){ scalar = squared_horizontal_velocity / SQUARED_MAXIMUM_VELOCITY; x_velocity = x_velocity / scalar; z_velocity = z_velocity / scalar; } } Let's try this with some numbers: An object is attempting to move 5 units in x and 5 units in z. It should only be able to move 5 units horizontally in total! MAXIMUM_VELOCITY = 5; SQUARED_MAXIMUM_VELOCITY = 5 * 5; SQUARED_MAXIMUM_VELOCITY = 25; function animate(){ var x_velocity = 5; var z_velocity = 5; var squared_horizontal_velocity = (x_velocity * x_velocity) + (z_velocity * z_velocity); var squared_horizontal_velocity = 5 * 5 + 5 * 5; var squared_horizontal_velocity = 25 + 25; var squared_horizontal_velocity = 50; // if( squared_horizontal_velocity >= SQUARED_MAXIMUM_VELOCITY ){ if( 50 >= 25 ){ scalar = squared_horizontal_velocity / SQUARED_MAXIMUM_VELOCITY; scalar = 50 / 25; scalar = 2.0; x_velocity = x_velocity / scalar; x_velocity = 5 / 2.0; x_velocity = 2.5; z_velocity = z_velocity / scalar; z_velocity = 5 / 2.0; z_velocity = 2.5; // new_horizontal_velocity = x_velocity + z_velocity // new_horizontal_velocity = 2.5 + 2.5 // new_horizontal_velocity = 5 } } Now this works well, but we can do the same thing without Pythagoras: MAXIMUM_VELOCITY = 5; function animate(){ var x_velocity = 5; var z_velocity = 5; var horizontal_velocity = x_velocity + z_velocity; var horizontal_velocity = 5 + 5; var horizontal_velocity = 10; // if( horizontal_velocity >= MAXIMUM_VELOCITY ){ if( 10 >= 5 ){ scalar = horizontal_velocity / MAXIMUM_VELOCITY; scalar = 10 / 5; scalar = 2.0; x_velocity = x_velocity / scalar; x_velocity = 5 / 2.0; x_velocity = 2.5; z_velocity = z_velocity / scalar; z_velocity = 5 / 2.0; z_velocity = 2.5; // new_horizontal_velocity = x_velocity + z_velocity // new_horizontal_velocity = 2.5 + 2.5 // new_horizontal_velocity = 5 } } Benefits of doing it without Pythagoras: Fewer lines Within those lines, it's easier to read what's going on ...and it takes less time to compute, as there are fewer multiplications Seems to me like computers and humans get a better deal without the Pythagorean theorem! However, I'm sure I'm wrong, as I've seen Pythagoras' theorem in a number of reputable places, so I'd like someone to explain the benefit of using the Pythagorean theorem to a maths newbie. Does this have anything to do with unit vectors? To me a unit vector is when we normalize a vector and turn it into a fraction. We do this by dividing the vector by a larger constant. I'm not sure what constant it is. The total size of the graph? Anyway, because it's a fraction, I take it, a unit vector is basically a graph that can fit inside a 3D grid with the x-axis running from -1 to 1, the z-axis running from -1 to 1, and the y-axis running from -1 to 1. That's literally everything I know about unit vectors... not much :P And I fail to see their usefulness. Also, we're not really creating a unit vector in the above examples. Should I be determining the scalar like this: // a mathematical work-around of my own invention. 
There may be a cleverer way to do this! I've also made up my own terms, such as 'divisive_scalar', so don't bother googling them. var divisive_scalar = (squared_horizontal_velocity / SQUARED_MAXIMUM_VELOCITY); var divisive_scalar = ( 50 / 25 ); var divisive_scalar = 2; var multiplicative_scalar = (divisive_scalar / (2*divisive_scalar)); var multiplicative_scalar = (2 / (2*2)); var multiplicative_scalar = (2 / 4); var multiplicative_scalar = 0.5; x_velocity = x_velocity * multiplicative_scalar x_velocity = 5 * 0.5 x_velocity = 2.5 Again, I can't see why this is better, but it's more "unit-vector-y" because the multiplicative_scalar is a unit vector? As you can see, I use words such as "unit-vector-y", so I'm really not a maths whiz! I'm also aware that unit vectors might have nothing to do with the Pythagorean theorem, so ignore all of this if I'm barking up the wrong tree. I'm a very visual person (3D modeller and concept artist by trade!) and I find diagrams and graphs really, really helpful, so as many as humanly possible, please!

    Read the article

  • Data Source Connection Pool Sizing

    - by Steve Felts
    One of the most time-consuming procedures of a database application is establishing a connection. The connection pooling of the data source can be used to minimize this overhead. That argues for using the data source instead of accessing the database driver directly. Configuring the size of the pool in the data source is somewhere between an art and a science - this article will try to move it closer to science. From the beginning, the WLS data source has had initial capacity and maximum capacity configuration values. When the system starts up and when it shrinks, initial capacity is used. The pool can grow to maximum capacity. Customers found that they might want to set the initial capacity to 0 (more on that later) but didn't want the pool to shrink to 0. In WLS 10.3.6, we added minimum capacity to specify the lower limit to which a pool will shrink. If minimum capacity is not set, it defaults to the initial capacity for upward compatibility. We also did some work on the shrinking in release 10.3.4 to reduce thrashing; the algorithm that used to shrink to the maximum of the currently used connections or the initial capacity (basically the unused connections were all released) was changed to shrink by half of the unused connections. The simple approach to sizing the pool is to set the initial/minimum capacity to the maximum capacity. Doing this creates all connections at startup, avoiding creating connections on demand, and the pool is stable. However, there are a number of reasons not to take this simple approach. When WLS is booted, the deployment of the data source includes synchronously creating the connections. The more connections that are configured in initial capacity, the longer the boot time for WLS (there have been several projects for parallel boot in WLS, but none is available yet). Related to creating a lot of connections at boot time is the problem of logon storms (the database gets too much work at one time). WLS has a solution for that in the login delay seconds setting on the pool, but that also increases the boot time. There are a number of cases where it is desirable to set the initial capacity to 0. By doing that, the overhead of creating connections is deferred out of the boot and the database doesn't need to be available. An application may not want WLS to automatically connect to the database until it is actually needed, such as in some cold/warm failover configurations. There are a number of cases where minimum capacity should be less than maximum capacity. Connections are generally expensive to keep around. They cause state to be kept on both the client and the server, and the state on the backend may be heavy (for example, a process). Depending on the vendor, connection usage may cost money. If the workload is not constant, then database connections can be freed up by shrinking the pool when connections are not in use. 
When using Active GridLink, connections can be created as needed according to runtime load balancing (RLB) percentages instead of by connection load balancing (CLB) during data source deployment. Shrinking is an effective technique for clearing the pool when connections are not in use. In addition to the obvious reason that there are times when the workload is lighter, there are some configurations where the database and/or firewall conspire to make long-unused or too-old connections no longer viable. There are also some data source features where the connection has state and cannot be used again unless the state matches the request. Examples of this are identity-based pooling, where the connection has a particular owner, and XA affinity, where the connection is associated with a particular RAC node. At this point, WLS does not re-purpose (discard/replace) connections, and shrinking is a way to get rid of an unused existing connection and get a new one with the correct state when needed. So far, the discussion has focused on the relationship of initial, minimum, and maximum capacity. Computing the maximum size requires some knowledge about the application and the current number of simultaneously active users, web sessions, batch programs, or whatever access patterns are common. The application should be written to only reserve and close connections as needed, but multiple statements, if needed, should be done in one reservation (don't get/close more often than necessary). This means that the size of the pool is likely to be significantly smaller than the number of users. If possible, you can pick a size and see how it performs under simulated or real load. There is a high-water mark statistic (ActiveConnectionsHighCount) that tracks the maximum number of connections concurrently used. In general, you want the size to be big enough that you never run out of connections, but no bigger. It will need to deal with spikes in usage, which is where shrinking after the spike is important. Of course, the database capacity also has a big influence on the decision, since it's important not to overload the database machine. Planning also needs to happen if you are running in a Multi Data Source or Active GridLink configuration and expect that the remaining nodes will take over the connections when one of the nodes in the cluster goes down. For XA affinity, additional headroom is also recommended. In summary, setting initial and maximum capacity to be the same may be simple, but there are many other factors that may be important in making the decision about sizing.
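    For reference, the three capacity settings discussed above sit together in the JDBC data source descriptor. A minimal sketch, with illustrative values and the surrounding jdbc-data-source elements omitted (check the descriptor schema for your exact release):
    <jdbc-connection-pool-params>
      <initial-capacity>0</initial-capacity>   <!-- defers connection creation out of the boot -->
      <min-capacity>5</min-capacity>           <!-- the shrink floor, available from 10.3.6 -->
      <max-capacity>25</max-capacity>          <!-- sized from ActiveConnectionsHighCount plus headroom -->
    </jdbc-connection-pool-params>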

    Read the article

  • ADF Partner Community News Session - Open Invitation: "ADF as a basis of Fusion Apps - the biggest ADF project ever (in English)"

    - by Frank Nimphius
    After a successful guest performance by Ted Farrell in 2011, this year's international ADF News session speaker is Chris Muir from Oracle.  ADF News Session - Friday, September 14, 8:30 AM - 9:00 AM (CET) - Topic: ADF as a basis of Fusion Apps - the biggest ADF project ever (in English) +++ this webcast will be conducted in English +++ Dial-in numbers for the ADF News Session, Sep. 14, 2012: You are invited to join the next ADF News Session, which takes place on September 14, 2012. speaker: Chris Muir / Oracle time: 8:30 AM (CET) duration: 30 minutes topic: ADF as a basis of Fusion Apps - the biggest ADF project ever (in English) dial-in webconf: https://oraclemeetings.webex.com conf ID: 595 484 157 confkey: 123456 Please enter your name and an abbreviation of your company name when dialing in (please don't use blanks or special characters). Please note that this information will be visible to all participants of the webcast. Thank you. dial-in telco: +49 (0)69 2222 16 106 or +49 (0)800 66 485 15 ConfCode: 208 503 9 SecurityPasscode: 112233 Other toll-free dial-in numbers for EMEA countries are listed below (information is supplied without liability): Austria 0800005967 Belgium 080048331 Croatia 0800222323 Czech Republic 800701080 Denmark 80889099 Estonia 8000111325 Egypt 08000000213 Finland 0800112073 France 0805632866 Greece 00800127897 Hungary 0680011201 Iceland 8008779 Ireland 1800932479 Israel 1809452571 Italy 800897629 Latvia 80002397 Luxembourg 80026598 Netherlands 08000235028 Norway 80010796 Poland 8001213557 Portugal 800814990 Romania 0800895563 Russia 81080029351012 Saudi Arabia 8008444320 Slovak Republic 0800001586 Slovenia 080080466 South Africa 0800980961 Spain 800098600 Sweden 856619465 Switzerland 0800650026 Turkey 00800 44632129 Ukraine 0800500166 United Arab Emirates 8000440344 United Kingdom 08006948154  

    Read the article

  • EXTJS 3.2.1 EditorGridPanel - ComboBox with jsonstore

    - by Yoong Kim
    Hi, I am using EXTJS with an editorgridpanel and I am trying to insert a combobox, populated with a JsonStore. Here is a snapshot of my code: THE STORE: kmxgz.ordercmpappro.prototype.getCmpapproStore = function(my_url) { var myStore = new Ext.data.Store({ proxy: new Ext.data.HttpProxy({ url: my_url , method: 'POST' }) , reader: new Ext.data.JsonReader({ root: 'rows', totalProperty: 'total', id: 'list_cmpappro_id', fields: [ {name: 'list_cmpappro_id', mapping: 'list_cmpappro_id'} , {name: 'list_cmpappro_name', mapping: 'list_cmpappro_name'} ] }) , autoLoad: true , id: 'cmpapproStore' , listeners: { load: function(store, records, options){ //store is loaded, now you can work with it's records, etc. console.info('store load, arguments:', arguments); console.info('Store count = ', store.getCount()); } } }); return myStore; }; THE COMBO: kmxgz.ordercmpappro.prototype.getCmpapproCombo = function(my_store) { var myCombo = new Ext.form.ComboBox({ typeAhead: true, lazyRender:false, forceSelection: true, allowBlank: true, editable: true, selectOnFocus: true, id: 'cmpapproCombo', triggerAction: 'all', fieldLabel: 'CMP Appro', valueField: 'list_cmpappro_id', displayField: 'list_cmpappro_name', hiddenName: 'cmpappro_id', valueNotFoundText: 'Value not found.', mode: 'local', store: my_store, emptyText: 'Select a CMP Appro', loadingText: 'Veuillez patienter ...', listeners: { // 'change' will be fired when the value has changed and the user exits the ComboBox via tab, click, etc. // The 'newValue' and 'oldValue' params will be from the field specified in the 'valueField' config above. change: function(combo, newValue, oldValue){ console.log("Old Value: " + oldValue); console.log("New Value: " + newValue); }, // 'select' will be fired as soon as an item in the ComboBox is selected with mouse, keyboard. select: function(combo, record, index){ console.log(record.data.name); console.log(index); } } }); return myCombo; }; The combobox is inserted in an editorgridpanel. There's a renderer like this: Ext.util.Format.comboRenderer = function(combo){ return function(value, metadata, record){ alert(combo.store.getCount()); <== always 0!! var record = combo.findRecord(combo.valueField || combo.displayField, value); return record ? record.get(combo.displayField) : combo.valueNotFoundText; } }; When the grid is displayed the first time, instead of having the displayField, I have: "Value not found." And I have the alert: 0 (alert(combo.store.getCount())) from the renderer. But I can see in the console that the data have been correctly loaded! Even if I try to reload the store from the renderer (combo.store.load();), I still have the alert (0)! But when I select the combo to change the value, I can see the data, and when I change the value, I can see the displayField! I don't understand what the problem is. For several days now, I have tried all the solutions I found... but still nothing! Any advice is welcome! Yoong
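    One hedged guess at the cause, for anyone landing here: the store is created with an asynchronous HttpProxy and autoLoad, so the grid's first paint (and therefore the renderer closure) runs before the combo store has any records, which is why getCount() reports 0 until the combo is opened. An illustrative Ext 3.x workaround is to repaint the grid once the store load completes; myCombo and myGrid below are stand-ins for your combo and EditorGridPanel variables, not names from the question:
    myCombo.store.on('load', function(){
        // Re-run the cell renderers now that the combo store has records,
        // so comboRenderer can resolve stored ids to display values.
        myGrid.getView().refresh();
    }, this, { single: true });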

    Read the article

  • WPF Debugging AvalonEdit binding to Document property.

    - by kubal5003
    Hello, I have been sitting all day trying to find out why binding to AvalonEdit's Document property isn't working. AvalonEdit is an advanced WPF text editor - part of the SharpDevelop project (it's going to be used in SharpDevelop v4 Mirador). When I set up a simple project - one TextEditor (that's AvalonEdit's real name in the library) and a simple class that has one property, Document, returning a dummy object with some static text - the binding works perfectly. However, in the real-life solution I'm binding a collection of SomeEditor objects to a TabControl. The TabControl has a DataTemplate for SomeEditor, and there's the TextEditor object. <TabControl Grid.Column="1" x:Name="tabControlFiles" HorizontalAlignment="Stretch" VerticalAlignment="Stretch" > <TabControl.Resources> <DataTemplate DataType="{x:Type m:SomeEditor}"> <a:TextEditor Document="{Binding Path=Document, Mode=TwoWay, UpdateSourceTrigger=PropertyChanged, Converter={StaticResource NoopConverter}, IsAsync=True}" x:Name="avalonEdit"></a:TextEditor> </DataTemplate> </TabControl.Resources> <TabControl.ItemContainerStyle> <Style BasedOn="{StaticResource TabItemStyle}" TargetType="{x:Type TabItem}"> <Setter Property="IsSelected" Value="{Binding IsSelected}"></Setter> </Style> </TabControl.ItemContainerStyle> </TabControl> This doesn't work. What I've investigated so far: the DataContext of the TextEditor is set to the proper instance of SomeEditor; the TextEditor's Document property is set to some other instance than the SomeEditor.Document property; when I set a breakpoint in the no-op converter attached to that binding, it shows me the correct value for Document (the converter is used!); I also dug through the visual tree to obtain a reference to the TextEditor and called GetBindingExpression(TextEditor.DocumentProperty), but this returned nothing. WPF produces the following information: System.Windows.Data Information: 10 : Cannot retrieve value using the binding and no valid fallback value exists; using default instead. BindingExpression:Path=Document; DataItem='SomeEditor' (HashCode=26280264); target element is 'TextEditor' (Name='avalonEdit'); target property is 'Document' (type 'TextDocument') The SomeEditor instance being bound already has a created and cached copy of Document before the binding occurs. The getter is never called. Can anyone tell me what might be wrong? Why isn't the BindingExpression set? Why is the property getter never called?

    Read the article

  • Use Dojo Drag and Drop together with Dojo Moveable

    - by Select0r
    Hi, I'm using Dojo.dnd to transfer items between two areas. The problem is: the items will snap into place once I drop them, but I'd like to have them stay where I drop them, but only for one area. Here's a little code to explain this better: <div id="dropZone" class="dropZone"> <div id="itemNodes"></div> <div id="targetZone" dojoType="dojo.dnd.Source"></div> </div> "dropZone" is a DIV that contains two dojo.dnd.Source areas, "itemNodes" (created programmatically) and "targetZone". Items (DIVs with images) should be dragged from a simple list out of "itemNodes" into "targetZone" and stay where they are dropped. As soon as they are dragged out of "targetZone" they should snap back to the list inside "itemNodes". Here's the code I use to create the items: var nodelist = new dojo.dnd.Source("itemNodes"); {Smarty-Loop} nodelist.insertNodes(false, ['<img class="dragItem" src="{$items->info.itemtext}" alt="{$items->info.itemtext}" border="0" />']); {/Smarty-Loop} But this way I just have two lists of items; the items dropped into "targetZone" won't stay where I dropped them. I've tried a loop dojo.query(".dojoDndItem").forEach(function(node) to grab all items and change them to a "moveable" type: using dojo.dnd.move.constrainedMoveable will change the items so they can always be moved around (even in "itemNodes"); using dojo.dnd.move.boxConstrainedMoveable and defining the "box" to the borders of "targetZone" makes it possible to just move the items around inside "targetZone", but as soon as I drop them, I can't grab and move them back out. So here's the question: is it possible to create two dnd.Sources where I can move items back and forth and let the items be "moveable" only in one of the sources? Or is there a workaround, like making the items moveable and, if they're not dropped into "targetZone", having them moved back to the list in "itemNodes" automatically? Once the page is submitted, I have to save the position of every item that has been placed into "targetZone". (The next step will be positioning the items inside "targetZone" on page load if the grid has already been filled before, but I'd be happy to just get the thing working in the first place.) Any hint is appreciated. Greetings, Select0r

    Read the article

  • WPF Formatting Issues - Automatically stretching and resizing?

    - by Adam S
    I'm very new to WPF and XAML. I am trying to design a basic data entry form. I have used a stack panel holding four more stack panels to get the layout I want. Perhaps a grid would be better for this, I am not sure. Here is an image of my form in action: http://yfrog.com/7gscreenshot1impp And here is the XAML code that generates it: <Window x:Class="Test1.Window1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="Window1" Height="224" Width="536.762"> <StackPanel Height="Auto" Name="stackPanel1" Width="Auto" Orientation="Horizontal"> <StackPanel Height="Auto" Name="stackPanel2" Width="Auto"> <Label Height="Auto" Name="label1" Width="Auto">Patient Name:</Label> <Label Height="Auto" Name="label2" Width="Auto">Physician:</Label> <Label Height="Auto" Name="label3" Width="Auto">Insurance:</Label> <Label Height="Auto" Name="label4" Width="Auto">Therapy Goals:</Label> </StackPanel> <StackPanel Height="Auto" Name="stackPanel3" Width="Auto"> <TextBox Height="Auto" Name="textBox1" Width="Auto" Padding="3" Margin="1" /> <TextBox Height="Auto" Name="textBox2" Width="Auto" Padding="3" Margin="1" /> <TextBox Height="Auto" Name="textBox3" Width="Auto" Padding="3" Margin="1" /> <TextBox Height="Auto" Name="textBox4" Width="Auto" Padding="3" Margin="1" /> </StackPanel> <StackPanel Height="Auto" Name="stackPanel4" Width="Auto"> <Label Height="Auto" Name="label5" Width="Auto">Date:</Label> <Label Height="Auto" Name="label6" Width="Auto">Patient Phone:</Label> <Label Height="Auto" Name="label7" Width="Auto">Facility:</Label> <Label Height="Auto" Name="label8" Width="Auto">Referring Physician:</Label> </StackPanel> <StackPanel Height="Auto" Name="stackPanel5" Width="Auto"> <TextBox Height="Auto" Name="textBox5" Width="Auto" Padding="3" Margin="1" /> <TextBox Height="Auto" Name="textBox6" Width="Auto" Padding="3" Margin="1" /> <TextBox Height="Auto" Name="textBox7" Width="Auto" Padding="3" Margin="1" /> <TextBox Height="Auto" Name="textBox8" Width="Auto" Padding="3" Margin="1" /> </StackPanel> </StackPanel> </Window> What I really want is for the text boxes to stretch equally to fill up the space horizontally. I would also like for the controls in each vertical stackpanel to 'spread out' evenly as the window is resized vertically. Can any of you experts out there help me out?
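    Since the question invites suggestions: a single Grid does make this layout easier, because star-sized columns split the leftover width equally while Auto columns hug their labels, and star-sized rows spread the controls out as the window grows taller. A hedged sketch of the idea, showing only the first row of fields (the remaining rows repeat the pattern; control names omitted):
    <Grid>
        <Grid.ColumnDefinitions>
            <ColumnDefinition Width="Auto"/> <!-- left labels -->
            <ColumnDefinition Width="*"/>    <!-- left text boxes stretch -->
            <ColumnDefinition Width="Auto"/> <!-- right labels -->
            <ColumnDefinition Width="*"/>    <!-- right text boxes stretch -->
        </Grid.ColumnDefinitions>
        <Grid.RowDefinitions>
            <RowDefinition Height="*"/>
            <RowDefinition Height="*"/>
            <RowDefinition Height="*"/>
            <RowDefinition Height="*"/>
        </Grid.RowDefinitions>
        <Label Grid.Row="0" Grid.Column="0" Content="Patient Name:"/>
        <TextBox Grid.Row="0" Grid.Column="1" Padding="3" Margin="1" VerticalAlignment="Center"/>
        <Label Grid.Row="0" Grid.Column="2" Content="Date:"/>
        <TextBox Grid.Row="0" Grid.Column="3" Padding="3" Margin="1" VerticalAlignment="Center"/>
        <!-- rows 1-3 follow the same Label/TextBox pattern -->
    </Grid>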

    Read the article

  • Silverlight Update/Trigger IValueConverter in Listbox DataTemplate in a DataGrid

    - by LJ
    Hi, I am building an application to display a datagrid bound to an ObservableCollection of Records, where each record has a Course object and an ObservableCollection of Results objects. The course is changed using an autocomplete box. The results collection is displayed in a listbox with an IValueConverter implementation to change the colour of the ellipse template based on criteria of the currently selected course. It works great on loading, but subsequent updates to the course selection via the autocomplete do not trigger a recalculation/refresh of the value converter. Is there a way to trigger the refresh in XAML? I added UpdateSourceTrigger=PropertyChanged to the binding of the list box, but this caused a stack overflow (haha). Here is the code: <data:DataGrid x:Name="MyDatGrid"> <data:DataGrid.Columns> <data:DataGridTemplateColumn Header="Results"> <data:DataGridTemplateColumn.CellTemplate> <DataTemplate> <ListBox ItemsSource="{Binding ListOfResults}"> <ListBox.ItemsPanel> <ItemsPanelTemplate> <StackPanel Orientation="Horizontal"/> </ItemsPanelTemplate> </ListBox.ItemsPanel> <ListBox.ItemTemplate> <DataTemplate> <Ellipse Width="20" Height="20" Fill="{Binding Converter={StaticResource resultToBrushConverter} }" Stroke="Black" StrokeThickness="1" /> </DataTemplate> </ListBox.ItemTemplate> </ListBox> </DataTemplate> </data:DataGridTemplateColumn.CellTemplate> </data:DataGridTemplateColumn> <data:DataGridTemplateColumn Header="Course" > <data:DataGridTemplateColumn.CellTemplate> <DataTemplate> <Border> <input:AutoCompleteBox ItemsSource="{Binding Courses, Source={StaticResource coursesSource}}"/> </Border> </DataTemplate> </data:DataGridTemplateColumn.CellTemplate> I managed to subscribe to the LostFocus event on the autocomplete box and reset a filter that I already have on the datagrid. But isn't this very inefficient? Refreshing the view on the datagrid does not have any effect in that method. Any steps in the right direction are greatly appreciated. Trying to prevent myself going any more grey :) I had thoughts of getting the binding expression of the list in the grid and updating it, but I have no clue. Thanks guys
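    One hedged direction, instead of the LostFocus workaround (the class and property names below are inferred from the description, not taken from the real code): a converter only re-runs when its binding re-evaluates, and nothing in the XAML above tells the ListBox that a course change affects the results. If the record raises PropertyChanged for ListOfResults whenever its Course changes, the ListBox regenerates its items and every ellipse's converter fires again. A C# sketch of the idea, assuming the AutoCompleteBox's SelectedItem is bound two-way to Course:
    // using System.ComponentModel; using System.Collections.ObjectModel;
    public class Record : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged;

        private Course course;
        public Course Course
        {
            get { return course; }
            set
            {
                course = value;
                Raise("Course");
                // Forces the ItemsSource binding to re-evaluate, which re-runs
                // resultToBrushConverter for each result ellipse.
                Raise("ListOfResults");
            }
        }

        public ObservableCollection<Result> ListOfResults { get; set; }

        private void Raise(string name)
        {
            if (PropertyChanged != null)
                PropertyChanged(this, new PropertyChangedEventArgs(name));
        }
    }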

    Read the article

  • Delphi - restore actual row in DBGrid

    - by durumdara
    Hi! D6 prof. Formerly we used DBISAM and DBISAMTable. That handles RecNo, and it works well with modifications (delete, edit, etc.). Now we have replaced it with ElevateDB, which doesn't handle RecNo, and many times we use queries, not tables. A query must be reopened to see the modifications. But if we reopen the query, we need to reposition to the last record. Locate isn't enough, because the grid shows it in another row. This is a very disturbing thing, because after the modification the record moves to another row; it is hard to follow, and users hate this. We found this code: function TBaseDBGrid.GetActRow: integer; begin Result := -1 + Row; end; procedure TBaseDBGrid.SetActRow(aRow: integer); var bm : TBookMark; begin if IsDataSourceValid(DataSource) then with DataSource.DataSet do begin bm := GetBookmark; DisableControls; try MoveBy(-aRow); MoveBy(aRow); //GotoBookmark(bm); finally FreebookMark(bm); EnableControls; end; end; end; The original example uses MoveBy. This works well with queries, because we cannot see that the query was reopened in the background; the visual control does not change the row position. But when we have an EDBTable, or a live/sensitive query, MoveBy is dangerous to use, because if somebody deletes or appends a row, we can relocate to the wrong record. Then I tried to use the bookmark (see the commented-out line). But this technique isn't working either, because it shows the record at another row position... So the question: how can I force both the row position and the record in a DBGrid? Or what kind of DBGrid can relocate to the record/row after the underlying DataSet is refreshed? I am searching for a user-friendly solution; I understand the users, because I tried to use this jump-across DBGrid myself, and it is very hard on the eyes to find the original record again after an update... :-( Thanks for any help, links, or info: dd

    Read the article

  • VS2010. Dropdownlist Autopostback works in IDE, not when deployed

    - by George
    I have a VS2010 RC ASP.NET web page; when a user changes the drop-down selection on an auto-postback dropdown, it refreshes a small grid and a few labels in various places on the page. I know wrapping a whole page in a big UpdatePanel control will cause horror from many of you, but that's what I did. I really didn't want a full page refresh, I didn't know how to update a table on the client side using Javascript, and I figured it would be a big change. Suggestions for avoiding this are welcome, but my main desire is to understand the error I am getting. When I do the auto postbacks in the IDE, everything works fine, but if I deploy the code (IIS 5.1 on XP), the first auto postback works but the second one gives me this error. Ajax is one big nasty black box to me. Can someone help, please? Webpage error details User Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1) ; InfoPath.1; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; InfoPath.2; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; MS-RTC LM 8; MS-RTC EA 2; OfficeLiveConnector.1.4; OfficeLivePatch.1.3; .NET4.0C; .NET4.0E) Timestamp: Sun, 28 Mar 2010 17:23:23 UTC Message: Sys.WebForms.PageRequestManagerServerErrorException: Object reference not set to an instance of an object. Line: 796 Char: 13 Code: 0 URI: http://localhost/BESI/ScriptResource.axd?d=3HKc1zGdeSk2WM7LpI9tTpMQUN7bCfQaPKi6MHy3P9dace9kFGR5G-jymRLHm0uxZ0SqWlVSWl9vAWK5JiPemjSRfdtUq34Dd5fQ3FoIbiyQ-hcum21C-j06-c0YF7hE0&t=5f011aa5 Message: Sys.WebForms.PageRequestManagerServerErrorException: Object reference not set to an instance of an object. Line: 796 Char: 13 Code: 0 URI: http://localhost/BESI/ScriptResource.axd?d=3HKc1zGdeSk2WM7LpI9tTpMQUN7bCfQaPKi6MHy3P9dace9kFGR5G-jymRLHm0uxZ0SqWlVSWl9vAWK5JiPemjSRfdtUq34Dd5fQ3FoIbiyQ-hcum21C-j06-c0YF7hE0&t=5f011aa5 Message: Sys.WebForms.PageRequestManagerServerErrorException: Object reference not set to an instance of an object. Line: 796 Char: 13 Code: 0 URI: http://localhost/BESI/ScriptResource.axd?d=3HKc1zGdeSk2WM7LpI9tTpMQUN7bCfQaPKi6MHy3P9dace9kFGR5G-jymRLHm0uxZ0SqWlVSWl9vAWK5JiPemjSRfdtUq34Dd5fQ3FoIbiyQ-hcum21C-j06-c0YF7hE0&t=5f011aa5

    Read the article

  • .Net Custom Configuration Section and Saving Changes within PropertyGrid

    - by Paul
    If I load the My.Settings object (app.config) into a PropertyGrid, I am able to edit the property inside the PropertyGrid and the change is automatically saved. PropertyGrid1.SelectedObject = My.Settings I want to do the same with a custom configuration section. In this code example (from http://www.codeproject.com/KB/vb/SerializePropertyGrid.aspx), the author does explicit serialization to disk when a "Save" button is pushed. Public Class Form1 'Load AppSettings Dim _appSettings As New AppSettings() Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click _appSettings = AppSettings.Load() ' Actually change the form size Me.Size = _appSettings.WindowSize PropertyGrid1.SelectedObject = _appSettings End Sub Private Sub Button2_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button2.Click _appSettings.Save() End Sub End Class In my code, my custom section inherits from ConfigurationSection (see below). Question: Is there something built into the ConfigurationSection class that does the autosave? If not, what is the best way to handle this? Should it be in the PropertyGrid.PropertyValueChanged event? (How does My.Settings handle this internally?) Here is the example custom class that I am trying to get to auto-save, and how I load it into the property grid. Dim config As System.Configuration.Configuration = _ ConfigurationManager.OpenExeConfiguration( _ ConfigurationUserLevel.None) PropertyGrid2.SelectedObject = config.GetSection("CustomSection") Public NotInheritable Class CustomSection Inherits ConfigurationSection ' The collection (property bag) that contains ' the section properties. Private Shared _Properties As ConfigurationPropertyCollection ' The FileName property. Private Shared _FileName As New ConfigurationProperty("fileName", GetType(String), "def.txt", ConfigurationPropertyOptions.IsRequired) ' The MaxUsers property. Private Shared _MaxUsers _ As New ConfigurationProperty("maxUsers", _ GetType(Int32), 1000, _ ConfigurationPropertyOptions.None) ' The MaxIdleTime property. Private Shared _MaxIdleTime _ As New ConfigurationProperty("maxIdleTime", _ GetType(TimeSpan), TimeSpan.FromMinutes(5), _ ConfigurationPropertyOptions.IsRequired) ' CustomSection constructor. Public Sub New() _Properties = New ConfigurationPropertyCollection() _Properties.Add(_FileName) _Properties.Add(_MaxUsers) _Properties.Add(_MaxIdleTime) End Sub 'New ' This is a key customization. ' It returns the initialized property bag. Protected Overrides ReadOnly Property Properties() _ As ConfigurationPropertyCollection Get Return _Properties End Get End Property <StringValidator( _ InvalidCharacters:=" ~!@#$%^&*()[]{}/;'""|\", _ MinLength:=1, MaxLength:=60)> _ <EditorAttribute(GetType(System.Windows.Forms.Design.FileNameEditor), GetType(System.Drawing.Design.UITypeEditor))> _ Public Property FileName() As String Get Return CStr(Me("fileName")) End Get Set(ByVal value As String) Me("fileName") = value End Set End Property <LongValidator(MinValue:=1, _ MaxValue:=1000000, ExcludeRange:=False)> _ Public Property MaxUsers() As Int32 Get Return Fix(Me("maxUsers")) End Get Set(ByVal value As Int32) Me("maxUsers") = value End Set End Property <TimeSpanValidator(MinValueString:="0:0:30", _ MaxValueString:="5:00:0", ExcludeRange:=False)> _ Public Property MaxIdleTime() As TimeSpan Get Return CType(Me("maxIdleTime"), TimeSpan) End Get Set(ByVal value As TimeSpan) Me("maxIdleTime") = value End Set End Property End Class 'CustomSection
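    On the autosave question itself: nothing in the ConfigurationSection class persists changes automatically the way My.Settings does; a section only reaches disk when its parent Configuration is saved. A hedged sketch of wiring that into the grid, assuming PropertyGrid2 and config are the same objects as in the snippet above, held as form-level (WithEvents) fields:
    ' Requires Imports System.Configuration and Imports System.Windows.Forms.
    Private Sub PropertyGrid2_PropertyValueChanged(ByVal s As Object, _
            ByVal e As PropertyValueChangedEventArgs) _
            Handles PropertyGrid2.PropertyValueChanged
        ' Persist only the sections that were actually modified.
        config.Save(ConfigurationSaveMode.Modified)
    End Sub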

    Read the article

  • Jqgrid search option not working and edit popups not closed after submit

    - by Sajith
    I am facing two problems in jqGrid: the search option is not working, and the edit popup is not closed after submit. My code is below: <table id="jQGridpending" style="width:auto"> </table> <div id="jQGridpendingPager"> </div> <table id="searchpending"></table> <div id="filterpending"></div> jQuery("#jQGridpending").jqGrid({ url: '@Url.Action("DiscountRequest", "Admin")', datatype: "json", mtype: "POST", colNames: [ "Id", "ClientName", "BpName", "Pdt", "DiscountReq", "DiscountAllowed", "Status", ], colModel: [ { name: "Id", width: 100, key: true, formatter: "integer", sorttype: "integer", hidden: true }, { name: "ClientName", width: 150, sortable: true,search:true,stype:'text', editrules: { required: false } }, { name: "BpName", width: 200, sortable: true, editable: false, editrules: { required: false } }, { name: "Pdt", width: 150, sortable: true, editable: false, editrules: { required: false } }, { name: "DiscountReq", width: 150, sortable: false, editable: false, editrules: { required: false } }, { name: "DiscountAllowed", width: 200, sortable: true, editable: true, editrules: { required: true } }, { name: 'Status', index: 'Status', width: 200, sortable: false, editable: true, formatter: 'select', edittype: 'select', editoptions: { value: "pending:pending;approved:approved;rejected:rejected" } }, @* { name: "Status", width: 200, sortable: false, editable: true, editrules: { required: true, minValue: 1, }, edittype: "select", editoptions: { async: false, dataUrl: "@Url.Action("GetStatus", "Admin")", buildSelect: function (response) { var s = "<select>"; s += '<option value="0">--Select--</option><option value="pending">pending</option>'; return s + "</select>"; } } },*@ //{ name: "Status", width: 150, sortable: true, editable: true, editrules: { required: true } }, //{ name: "Created", width: 120, formatter: "date", formatoptions: { srcformat: "ISO8601Long", newformat: "n/j/Y g:i:s A" }, align: "center", sorttype: "date" }, ], loadtext: "Processing pending request data please wait...", rowNum: 10, gridview: true, autoencode: true, loadonce: true, height: "auto", rownumbers: true, prmNames: { id: "Id" }, rowList: [10, 20, 30], pager: '#jQGridpendingPager', sortname: 'id', sortorder: "asc", viewrecords: true, jqModal: true, caption: "Pending List", reloadAfterSubmit: true, editurl: '@Url.Action("UpdateDiscount", "Admin")', }); jQuery("#jQGridpending").jqGrid('navGrid', '#jQGridpendingPager', { search: true,recreateFilter: true, add: false, searchtext: "Search", edittext: "Edit", deltext: "Delete", }, {//EDIT url: '@Url.Action("UpdateDiscount", "Admin")', width: "auto", jqModal: true, closeOnEscape: true, closeAfterEdit: true, reloadAfterSubmit: true, afterSubmit: function () { // Reload grid records after edit a entry in the db. $(this).jqGrid('setGridParam', { datatype: 'json' }); return [true, '', false]; }, }, {//DELETE url: '@Url.Action("DelDiscount", "Admin")', closeOnEscape: true }, {//SEARCH closeOnEscape: true, searchOnEnter: true, multipleSearch: true, //overlay: 0, width: "auto", height: "auto", });

    Read the article

< Previous Page | 225 226 227 228 229 230 231 232 233 234 235 236  | Next Page >