Search Results

Search found 9552 results on 383 pages for 'row henson'.

Page 122/383 | < Previous Page | 118 119 120 121 122 123 124 125 126 127 128 129  | Next Page >

  • To Bit or Not To Bit

    - by Johnm
    'Twas a long day of troubleshooting and firefighting and now, with most of the office vacant, you face a blank scripting window to create a new table in your database. Many questions circle your mind like dirty water gurgling down the bathtub drain: "How normalized should this table be?", "Should I use an identity column?", "NVarchar or Varchar?", "Should this column be NULLABLE?", "I wonder what apple blue cheese bacon cheesecake tastes like?" Well, there are times when the mind goes its own direction.

    A Bit About Bit: At some point during your table creation efforts you will encounter the decision of whether to use the bit data type for a column. The bit data type is an integer data type that recognizes only the values 1, 0 and NULL as valid. This data type is often used to store yes/no or true/false values. An example of its use would be a column called [IsGasoline], intended to contain the value 1 if the row's subject (a car) had a gasoline engine and 0 if it did not. The bit data type can even be found in some of the system tables of SQL Server. One example is the sysssispackages table in the msdb database, which contains information about the SQL Server Integration Services packages stored in SQL Server. This table contains a column called [IsEncrypted]: a value of 1 indicates that the package has been encrypted, while a value of 0 indicates that it has not. I have learned that the most effective way to disperse the crowd that surrounds the office coffee machine is to engage in SQL Server debates. The bit data type has been one of the most recurring, as well as the most enjoyable, of these topics. It has a practical side and a philosophical side.

    Practical Consideration: This data type certainly has its place and is a valuable option for database design, but it is often used in situations where the answer is really not a pure true/false response. In addition, true/false values are not very informative or scalable. Let's use the previously noted [IsGasoline] column for illustration. On the surface it appears to be a rather simple question when evaluating a car: "Does the car have a gasoline engine?" If the person entering data is entering a row for a Jeep Liberty, the response would be 1, since it has a gasoline engine. If that person is entering a row for a Chevrolet Volt, the response would be 0, since it has an electric engine. But what happens when a person is entering a row for the gasoline/electric hybrid Toyota Prius? Would one person's conclusion be consistent with another person's? The argument could be made that the current intent for the database is to be used only for pure gasoline and pure electric engines; but this is where the scalability issue comes into play. With the bit data type, a database modification and data conversion would be required if the business decided to take on hybrid engines. If, alternatively, an int data type were used as a foreign key to a reference table containing the engine type options, the change to include the hybrid option would only require an entry in the reference table.

    Philosophical Consideration: Since the bit data type is often used for true/false or yes/no (also called Boolean) data, it presents a philosophical conundrum over whether to allow the NULL value.
    The inclusion of NULL in a true/false or yes/no response simply violates the logical principle of bivalence, which states that "every proposition is either true or false". If NULL is not true, then it must be false. The mathematical laws of Boolean logic support this concept by stating that the only valid values in this scenario are 1 and 0. There is another way to look at this conundrum: NULL is also considered to be the absence of a response. In other words, it is the equivalent of "undecided". Anyone who watches the news can tell you that polls always include an "undecided" option. This could be considered a valid option in the world of yes/no/dunno. Throughout all of these considerations I have discovered one absolute certainty: when you have found a person, or group of persons, who are willing to entertain a philosophical debate about the bit data type, you have found some true friends.
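    As a rough illustration of the reference-table alternative described above, here is a minimal C#/ADO.NET sketch; the connection string and all table and column names are hypothetical:

        using System;
        using System.Data.SqlClient;

        class EngineTypeSetup
        {
            static void Main()
            {
                // Hypothetical connection string; adjust for your environment.
                const string connString =
                    "Server=.;Database=CarDb;Integrated Security=true;";

                // Instead of a bit column such as [IsGasoline], an int foreign key
                // points at a reference table, so a new engine type is a new row
                // rather than a schema change and data conversion.
                const string ddl = @"
                    CREATE TABLE EngineType (
                        EngineTypeId int IDENTITY(1,1) PRIMARY KEY,
                        Name varchar(50) NOT NULL);
                    CREATE TABLE Car (
                        CarId int IDENTITY(1,1) PRIMARY KEY,
                        Model varchar(100) NOT NULL,
                        EngineTypeId int NOT NULL
                            REFERENCES EngineType(EngineTypeId));
                    INSERT INTO EngineType (Name) VALUES ('Gasoline'), ('Electric');
                    -- Taking on hybrids later is one more INSERT, not an ALTER:
                    INSERT INTO EngineType (Name) VALUES ('Hybrid');";

                using (SqlConnection conn = new SqlConnection(connString))
                using (SqlCommand cmd = new SqlCommand(ddl, conn))
                {
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }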

    Read the article

  • Anatomy of a .NET Assembly - CLR metadata 2

    - by Simon Cooper
    Before we look any further at the CLR metadata, we need a quick diversion to understand how the metadata is actually stored.

    Encoding table information: As an example, we'll have a look at a row in the TypeDef table. According to the spec, each TypeDef consists of the following: flags specifying various properties of the class, including visibility; the name of the type; the namespace of the type; what type this type extends; the field list of this type; the method list of this type. How is all this data actually represented?

    Offset & RID encoding: Most assemblies don't need to use a 4-byte value to specify heap offsets and RIDs everywhere, but we can't hard-code every offset and RID to be 2 bytes long, as there could conceivably be more than 65535 items in a heap, or more than 65535 fields or types defined in an assembly. So heap offsets and RIDs are only represented in the full 4 bytes if required; in the header information at the top of the #~ stream are 3 bits indicating if the #Strings, #GUID, or #Blob heaps use 2 or 4 bytes (the #US stream is not accessed from metadata), and the rowcount of each table. If the rowcount for a particular table is greater than 65535, then all RIDs referencing that table throughout the metadata use 4 bytes; otherwise only 2 bytes are used.

    Coded tokens: Not every field in a table row references a single predefined table. For example, in the TypeDef extends field, a type can extend another TypeDef (a type in the same assembly), a TypeRef (a type in a different assembly), or a TypeSpec (an instantiation of a generic type). A token would have to be used to let us specify the table along with the RID. Tokens are always 4 bytes long; again, this is rather wasteful of space. Cutting the RID down to 2 bytes would make each token 3 bytes long, which isn't really an optimum size for computers to read from memory or disk. However, every use of a token in the metadata tables can only point to a limited subset of the metadata tables. For the extends field, we only need to be able to specify one of 3 tables, which we can do using 2 bits: 0x0: TypeDef, 0x1: TypeRef, 0x2: TypeSpec. We can therefore compress the 4-byte token that would otherwise be needed into a coded token of type TypeDefOrRef. For each type of coded token, the least significant bits encode the table the token points to, and the rest of the bits encode the RID within that table. We can work out whether each type of coded token needs 2 or 4 bytes to represent it by working out whether the maximum RID of every table that the coded token type can point to will fit in the space available. The space available for the RID depends on the type of coded token; a TypeOrMethodDef coded token only needs 1 bit to specify the table, leaving 15 bits available for the RID before a 4-byte representation is needed, whereas a HasCustomAttribute coded token can point to one of 18 different tables, and so needs 5 bits to specify the table, leaving only 11 bits for the RID before 4 bytes are needed to represent that coded token type. For example, a 2-byte TypeDefOrRef coded token with the value 0x0321 has the following bit pattern:

        0    3    2    1
        0000 0011 0010 0001

    The two least significant bits (01) specify the table - TypeRef; the other bits specify the RID. Because we've used the bottom two bits, we've got to shift everything along two bits: 00 0000 1100 1000. This gives us a RID of 0xc8.
    If any one of the TypeDef, TypeRef or TypeSpec tables had more than 16383 rows (2^14 - 1), then 4 bytes would need to be used to represent all TypeDefOrRef coded tokens throughout the metadata tables.

    Lists: The third representation we need to consider is 1-to-many references; each TypeDef refers to a list of FieldDefs and MethodDefs belonging to that type. If we were to specify every FieldDef and MethodDef individually, then each TypeDef would be very large and of variable size, which isn't ideal. There is a way of specifying a list of references without explicitly specifying every item: if we order the MethodDef and FieldDef tables by owning type, then the field list and method list in a TypeDef only have to be a single RID pointing at the first FieldDef or MethodDef belonging to that type; the end of the list can be inferred from the field list and method list RIDs of the next row in the TypeDef table.

    Going back to the TypeDef: If we have a look back at the definition of a TypeDef, we end up with the following representation for each row:
    Flags - always 4 bytes.
    Name - a #Strings heap offset.
    Namespace - a #Strings heap offset.
    Extends - a TypeDefOrRef coded token.
    FieldList - a single RID into the FieldDef table.
    MethodList - a single RID into the MethodDef table.
    So, depending on the number of entries in the heaps and tables within the assembly, the rows in the TypeDef table can be as small as 14 bytes, or as large as 24 bytes. Now that we've had a look at how information is encoded within the metadata tables, in the next post we can see how they are arranged on disk.
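    To make the TypeDefOrRef example concrete, here is a small, hypothetical C# sketch of the decoding step (the enum values follow the 0x0/0x1/0x2 tag assignments listed above):

        using System;

        static class CodedTokenDemo
        {
            // Tag values for a TypeDefOrRef coded token, as listed above.
            enum TypeDefOrRefTable { TypeDef = 0, TypeRef = 1, TypeSpec = 2 }

            static void Main()
            {
                ushort codedToken = 0x0321;

                // The two least significant bits select the table;
                // the remaining bits are the RID within that table.
                var table = (TypeDefOrRefTable)(codedToken & 0x3);
                int rid = codedToken >> 2;

                // Prints "TypeRef, RID 0xC8"
                Console.WriteLine("{0}, RID 0x{1:X}", table, rid);
            }
        }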

    Read the article

  • Master Page: Dynamically Adding Rows in ASP Table on Button Click event

    - by Vincent Maverick Durano
    In my previous post here, I wrote an example that demonstrates how to generate table rows dynamically in an ASP Table on the click of a Button control. Based on some comments on my previous example and in the forums, people wanted to implement it within a Master Page. Unfortunately, the code in my previous example doesn't work in a Master Page, for the following main reasons:
    1. The Table is dynamically added within the Form tag, and so the TextBox controls will not be generated correctly in the page.
    2. The data will not be retained on each and every postback, because the SetPreviousData() method is looking for the Table element within the Page and not within the MasterPage.
    3. The Request.Form key value should be set correctly, since all controls within the master page are prefixed with the naming container ID to prevent duplicate IDs in the final rendered HTML. For example, the TextBox control with the ID TextBoxRow will be rendered with an ID like ctl00$MainBody$TextBoxRow.
    In order for the previous example to work within a Master Page we will have to correct those three main reasons above, and this post will guide you in correcting them. Suppose we have this content page declaration below:

        <asp:Content ID="Content1" ContentPlaceHolderID="MainHead" Runat="Server">
        </asp:Content>
        <asp:Content ID="Content2" ContentPlaceHolderID="MainBody" Runat="Server">
            <asp:PlaceHolder ID="PlaceHolder1" runat="server">
                <asp:Button ID="BTNAdd" runat="server" Text="Add New Row" OnClick="BTNAdd_Click" />
            </asp:PlaceHolder>
        </asp:Content>

    As you notice, I've added a PlaceHolder control within the MainBody ContentPlaceHolder. This is because we are going to generate the Table in the PlaceHolder instead of generating it within the Form element. Now that issue #1 is corrected, let's proceed to the code-behind part.
    Here are the full code blocks below:

        using System;
        using System.Web.UI;
        using System.Web.UI.WebControls;

        public partial class DynamicControlDemo : System.Web.UI.Page
        {
            private int numOfRows = 1;

            protected void Page_Load(object sender, EventArgs e)
            {
                //Generate the Rows on Initial Load
                if (!Page.IsPostBack)
                {
                    GenerateTable(numOfRows);
                }
            }

            protected void BTNAdd_Click(object sender, EventArgs e)
            {
                if (ViewState["RowsCount"] != null)
                {
                    numOfRows = Convert.ToInt32(ViewState["RowsCount"].ToString());
                    GenerateTable(numOfRows);
                }
            }

            private void SetPreviousData(int rowsCount, int colsCount)
            {
                Table table = (Table)this.Page.Master.FindControl("MainBody").FindControl("Table1"); // ****
                if (table != null)
                {
                    for (int i = 0; i < rowsCount; i++)
                    {
                        for (int j = 0; j < colsCount; j++)
                        {
                            //Extracting the Dynamic Controls from the Table
                            TextBox tb = (TextBox)table.Rows[i].Cells[j].FindControl("TextBoxRow_" + i + "Col_" + j);
                            //Use the Request object to get the previous data of the dynamic TextBox
                            tb.Text = Request.Form["ctl00$MainBody$TextBoxRow_" + i + "Col_" + j]; // *****
                        }
                    }
                }
            }

            private void GenerateTable(int rowsCount)
            {
                //Create the Table and add it to the Page
                Table table = new Table();
                table.ID = "Table1";
                PlaceHolder1.Controls.Add(table); // ******

                //The number of Columns to be generated
                const int colsCount = 3; //You can change the value of 3 based on your requirements

                //Now iterate through the table and add your controls
                for (int i = 0; i < rowsCount; i++)
                {
                    TableRow row = new TableRow();
                    for (int j = 0; j < colsCount; j++)
                    {
                        TableCell cell = new TableCell();
                        TextBox tb = new TextBox();

                        //Set a unique ID for each TextBox added
                        tb.ID = "TextBoxRow_" + i + "Col_" + j;
                        //Add the control to the TableCell
                        cell.Controls.Add(tb);
                        //Add the TableCell to the TableRow
                        row.Cells.Add(cell);
                    }
                    //And finally, add the TableRow to the Table
                    table.Rows.Add(row);
                }

                //Set Previous Data on PostBacks
                SetPreviousData(rowsCount, colsCount);

                //Store the current Rows Count in ViewState
                rowsCount++;
                ViewState["RowsCount"] = rowsCount;
            }
        }

    As you observed, the code is pretty much similar to the previous example except for the highlighted lines above. That's it! I hope someone finds this post useful! Technorati Tags: Dynamic Controls,ASP.NET,C#,Master Page

    Read the article

  • Storing Attendance Data in database

    - by Ali Abbas
    So I have to store the daily attendance of employees of my organisation from my application. The part where I need some help is the efficient way to store attendance data. After some research and brainstorming I came up with some approaches. Could you point out which one is the best, and any unobvious ill effects of the mentioned approaches? The approaches are as follows:
    1. Create a single table for the whole organisation and store empid, date and presentstatus as a row for every employee, every day.
    2. Create a single table for the whole organisation and store a single row for each day with a comma-delimited string of the empids which are absent. I will generate the string in my application.
    3. Create different tables for each department and follow the first method.
    Please share your views and do mention any other good methods.
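    For reference, a minimal C#/ADO.NET sketch of the first approach (all names and the connection string are hypothetical); a composite key on (EmpId, AttendanceDate) keeps one row per employee per day:

        using System;
        using System.Data.SqlClient;

        class AttendanceDemo
        {
            static void Main()
            {
                const string connString =
                    "Server=.;Database=Hr;Integrated Security=true;";

                // One row per employee per day; the table would declare
                // PRIMARY KEY (EmpId, AttendanceDate) so an employee can't
                // be marked twice for the same day.
                const string insert =
                    "INSERT INTO Attendance (EmpId, AttendanceDate, PresentStatus) " +
                    "VALUES (@EmpId, @Date, @Present)";

                using (SqlConnection conn = new SqlConnection(connString))
                using (SqlCommand cmd = new SqlCommand(insert, conn))
                {
                    cmd.Parameters.AddWithValue("@EmpId", 42);
                    cmd.Parameters.AddWithValue("@Date", DateTime.Today);
                    cmd.Parameters.AddWithValue("@Present", true);
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }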

    Read the article

  • What's the best way to manage list item sort order with Drag & Drop UI?

    - by Reddy S R
    I have a list of Students that I should display to the user on a web page in tabular format. The items are stored in the DB along with SortOrder information. On the web page, the user can rearrange the list order by dragging and dropping the items into their desired sort order, similar to this post. Below is a screenshot of my test page. In the above example, each row has sort order info attached to it. When I drop John Doe (Student Id 10) above the Student Id 1 row, the list order should now be: 2, 10, 1, 8, 11. What's the optimal (least resource-hungry) way to store and update Sort Order information? My only idea for now is that, for every change in the list's sort order, every object's SortOrder value should be updated, which in my opinion is very resource-hungry. Just FYI: I might have at most 25 rows in my table.
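    One commonly suggested alternative, sketched below in C# (untested, and the key values are arbitrary): hand out sparse sort keys in steps of 1000 and give a dropped row the midpoint of its new neighbours, so a single drag normally updates one row instead of all of them; only when a gap is exhausted does the whole list get renumbered. With at most 25 rows, though, simply rewriting all SortOrder values in one UPDATE per drop is also perfectly workable.

        using System;
        using System.Collections.Generic;

        class SortOrderDemo
        {
            static void Main()
            {
                // SortOrder -> StudentId, keyed in steps of 1000.
                var order = new SortedDictionary<int, int>
                {
                    { 1000, 2 }, { 2000, 1 }, { 3000, 8 }, { 4000, 11 }
                };

                // Drop John Doe (Student Id 10) between the rows keyed 1000 and 2000:
                int prev = 1000, next = 2000;
                int newKey = (prev + next) / 2;  // 1500 - one row changes in the DB

                if (newKey == prev)
                {
                    // Gap exhausted (rare): renumber every row back to steps
                    // of 1000 with one multi-row UPDATE, then recompute newKey.
                }

                order.Add(newKey, 10);
                Console.WriteLine(string.Join(", ", order.Values)); // 2, 10, 1, 8, 11
            }
        }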

    Read the article

  • Partner Blog Series: PwC Perspectives - The Gotchas, The Do's and Don'ts for IDM Implementations

    - by Tanu Sood
    It is generally accepted
    among business communities that technology by itself is not a silver bullet to all problems, but when it is combined with leading practices, strategy, careful planning and execution, it can create a recipe for success. This post attempts to highlight some of the best practices, along with dos & don'ts, that our practice has accumulated over the years in the identity & access management space in general, and in the context of R2 in particular.

    Best Practices: The following section illustrates the leading practices in how to plan, implement and sustain a successful OIM deployment, based on our collective experience.

    Planning is critical, but often overlooked: A common approach to planning an IAM program that we work through with our clients is the three-step process involving a current state assessment, a future state roadmap and an executable strategy to get there. It is extremely beneficial for clients to assess their current IAM state, perform gap analysis, document the recommended controls to address the gaps, align the future state roadmap to business initiatives, and get buy-in from all stakeholders involved to improve the chances of success. When designing an enterprise-wide solution, the scalability of the technology must accommodate the future growth of the enterprise and the projected identity transactions over several years. Aligning the implementation schedule of OIM to related information technology projects increases the chances of success. As a baseline, it is recommended to match hardware specifications to the sizing guide for R2 published by Oracle. Adherence to this will help ensure that the hardware used to support OIM will not become a bottleneck as the adoption of new services increases. If your organization has numerous connected applications that rely on reconciliation to synchronize access data into OIM, consider hosting dedicated instances to handle reconciliation. Finally, ensure the use of a clustered environment for development, and have at least three environments in total to help facilitate a controlled migration to production. If your organization is planning to implement role-based access control, we recommend performing a role mining exercise and consolidating your enterprise roles to keep them manageable. In addition, many organizations have multiple approval flows to control access to critical roles, applications and entitlements. If your organization falls into this category, we highly recommend that you limit the number of approval workflows to a small set. Most organizations have operations managed across data centers with backend database synchronization; if your organization falls into this category, ensure that the overall latency between the data centers when replicating the databases is less than ten milliseconds, so that there are no front-office performance impacts.

    Ingredients for a successful implementation: During the development phase of your project, there are a number of guidelines that can be followed to help increase the chances of success. Most implementations cannot be completed without the use of customizations. If your implementation requires them, it's a good practice to perform code reviews to help ensure quality and reduce code bottlenecks related to performance. We have observed at our clients that the development process works best when team members adhere to coding leading practices. Plan for time to correct coding defects, and ensure developers are empowered to report their own bugs for maximum transparency.
    Many organizations struggle with defining a consistent approach to managing logs. This is particularly important due to the amount of information that can be logged by OIM. We recommend Oracle Diagnostics Logging (ODL) as the alternative to use for logging. ODL allows log files to be formatted in XML for easy parsing, and does not require a server restart when log levels are changed during troubleshooting. Testing is a vital part of any large project, and an OIM R2 implementation is no exception. We suggest that at least one lower environment should use production-like data and connectors, and configurations should match as closely as possible. For example, use secure channels between OIM and target platforms in pre-production environments to test the configurations, the migration processes of certificates, and the additional overhead that encryption could impose. Finally, we ask our clients to perform database backups regularly and before any major change event, such as a patch or a migration between environments. In the lowest environments, we recommend at least a weekly backup in order to prevent significant loss of time and effort. Similarly, if your organization is using virtual machines for one or more of the environments, it is recommended to take frequent snapshots so that rollbacks can occur in the event of improper configuration.

    Operate & sustain the solution to derive maximum benefits: When migrating OIM R2 to production, it is important to perform certain activities that will help achieve a smoother transition. At our clients, we have seen that splitting the OIM tables into their own tablespaces by category (physical tables, indexes, etc.) can help manage database growth effectively. If we notice that a client hasn't enabled the Oracle-recommended indexing in the applicable database, we strongly suggest doing so to improve performance. Additionally, we work with our clients to make sure that the audit level is set to fit the organization's auditing needs, and sometimes even allocate the UPA tables and indexes into their own tablespace for better maintenance. Finally, many of our clients have set up schedules for reconciliation tables to be archived at regular intervals, in order to keep the size of the database(s) reasonable and maintain optimal database performance. For clients that anticipate availability issues with target applications, we strongly encourage the use of the offline provisioning capabilities of OIM R2. This reduces the provisioning process's dependency on target availability and helps avoid broken workflows. To account for this and other abnormalities, we also advocate that OIM's monitoring controls be configured to alert administrators to any abnormal situations. Within OIM R2, we have begun advising our clients to utilize the 'profile' feature to encapsulate multiple commonly requested accounts, roles, and/or entitlements into a single item. By setting up a number of profiles that can be searched for and used, users will spend less time performing the same exact steps for common tasks. We advise our clients to follow the Oracle-recommended guides for database and application server tuning, which provide a good baseline configuration. They offer guidance on database connection pools, connection timeouts, user interface threads and the proper handling of adapters/plug-ins. All of these can be important configurations that will allow faster provisioning and web page response times.
    Many of our clients have begun to recognize the value of data mining and a remediation process during the initial phases of an implementation (to help ensure high-quality data gets loaded) and beyond (to support ongoing maintenance and business-as-usual processes). A successful program always begins with identifying the data elements and assigning a classification level based on criticality, risk, and availability. It should finish by following through with a remediation process.

    Dos & Don'ts: Here are the most common dos and don'ts that we socialize with our clients, derived from our experience implementing the solution.

    Do: Scope the project into phases with realistic goals. Look for quick wins to show success and value to the stakeholders.
    Don't: Avoid "boiling the ocean" and trying to integrate all enterprise applications in the first phase.

    Do: Establish an enterprise ID (universal unique ID across the enterprise) early in the program.
    Don't: Avoid major UI customizations that require code changes.

    Do: Have a plan in place to patch during the project, which helps alleviate any major issues or roadblocks (product and database).
    Don't: Avoid publishing all the target entitlements if you don't anticipate their usage during access requests.

    Do: Assess your current state and prepare a roadmap to address your operational, tactical and strategic goals; align it with your business priorities.
    Don't: Avoid integrating non-production environments with your production target systems.

    Do: Defer complex integrations to the later phases and take advantage of lessons learned from previous phases.
    Don't: Avoid creating multiple accounts for the same user on the same system, if there is an opportunity to do so.

    Do: Have an identity and access data quality initiative built into your plan to identify and remediate data-related issues early on.
    Don't: Avoid creating complex approval workflows that would negatively impact productivity and SLAs.

    Do: Identify the owner of the identity systems with fair IdM knowledge and empower them with the authority to make product-related decisions. This will help overcome any design hurdles.
    Don't: Avoid creating complex designs that are not sustainable long term and would need a major overhaul during upgrades.

    Do: Shadow your internal or external consulting resources during the implementation to build the necessary product skills needed to operate and sustain the solution.
    Don't: Avoid treating IAM as a point solution; have an appropriate level of communication and training planned for IT and business users alike.

    Conclusion: In our experience, identity programs will struggle with scope, proper resourcing, and more. We suggest that companies consider the suggestions discussed in this post and leverage them to help enable their identity and access program. This concludes the PwC blog series on R2 for the month, and we sincerely hope that the information we have shared thus far has been beneficial. For more information or if you have questions, you can reach out to Rex Thexton, Senior Managing Director, PwC, and/or Dharma Padala, Director, PwC. We look forward to hearing from you.
    Meet the Writers: Dharma Padala is a Director in the Advisory Security practice within PwC. He has been implementing medium to large scale Identity Management solutions across multiple industries including the utility, health care, entertainment, retail and financial sectors. Dharma has 14 years of experience in delivering IT solutions, out of which he has been implementing Identity Management solutions for the past 8 years.
    Praveen Krishna is a Manager in the Advisory Security practice within PwC. Over the last decade Praveen has helped clients plan, architect and implement Oracle identity solutions across diverse industries. His experience includes delivering security across diverse topics like network, infrastructure, application and data, where he brings a holistic point of view to problem solving.
    Scott MacDonald is a Director in the Advisory Security practice within PwC. He has consulted for several clients across multiple industries including financial services, health care, automotive and retail. Scott has 10 years of experience in delivering Identity Management solutions.
    John Misczak is a member of the Advisory Security practice within PwC. He has experience implementing multiple Identity and Access Management solutions, specializing in Oracle Identity Manager and Business Process Execution Language (BPEL).

    Read the article

  • Using Stored Procedures in SSIS

    - by dataintegration
    The SSIS Data Flow components - the source task and the destination task - are the easiest way to transfer data in SSIS. Some data transactions do not fit this model; they are procedural tasks modeled as stored procedures. In this article we show how you can call stored procedures available in RSSBus ADO.NET Providers from SSIS. We will use the CreateJob and CreateBatch stored procedures available in the RSSBus ADO.NET Provider for Salesforce, but the same steps can be used to call a stored procedure in any of our data providers.
    Step 1: Open Visual Studio and create a new Integration Services Project.
    Step 2: Add a new Data Flow Task to the Control Flow window.
    Step 3: Open the Data Flow Task and add a Script Component to the data flow pane. A dialog box will pop up allowing you to select the Script Component type: pick the source type, as we will be outputting columns from our stored procedure.
    Step 4: Double-click the Script Component to open the editor.
    Step 5: In the "Inputs and Outputs" settings, enter all the columns you want to output to the data flow. Ensure the correct data type has been set for each output. You can check the data type by selecting the output and then changing the "DataType" property from the property editor. In our example, we'll add the column JobID of type String.
    Step 6: Select the "Script" option in the left-hand pane and click the "Edit Script" button. This will open a new Visual Studio window with some boilerplate code in it.
    Step 7: In the CreateOutputRows() function you can add code that executes the stored procedures included with the Salesforce Component. In this example we will be using the CreateJob and CreateBatch stored procedures. You can find a list of the available stored procedures along with their inputs and outputs in the product help.

        //Configure the connection string with your credentials
        String connectionString = "Offline=False;user=myusername;password=mypassword;access token=mytoken;";

        using (SalesforceConnection conn = new SalesforceConnection(connectionString))
        {
            //Create the command to call the stored procedure CreateJob
            SalesforceCommand cmd = new SalesforceCommand("CreateJob", conn);
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add(new SalesforceParameter("ObjectName", "Contact"));
            cmd.Parameters.Add(new SalesforceParameter("Action", "insert"));

            //Execute CreateJob
            //CreateBatch requires JobID as input, so we store this value for later
            SalesforceDataReader rdr = cmd.ExecuteReader();
            String JobID = "";
            while (rdr.Read())
            {
                JobID = (String)rdr["JobID"];
            }

            //Create the command for CreateBatch; for this example we are adding two new rows
            SalesforceCommand batCmd = new SalesforceCommand("CreateBatch", conn);
            batCmd.CommandType = CommandType.StoredProcedure;
            batCmd.Parameters.Add(new SalesforceParameter("JobID", JobID));
            batCmd.Parameters.Add(new SalesforceParameter("Aggregate",
                "<Contact><Row><FirstName>Bill</FirstName>" +
                "<LastName>White</LastName></Row><Row><FirstName>Bob</FirstName>" +
                "<LastName>Black</LastName></Row></Contact>"));

            //Execute CreateBatch
            SalesforceDataReader batRdr = batCmd.ExecuteReader();
        }

    Step 7b: If you specified output columns earlier, you can now add data to them using the UserComponent Output0Buffer. For example, we set an output column called JobID of type String, so now we can set a value for it. We will modify the DataReader loop that contains the output of CreateJob like so:

        while (rdr.Read())
        {
            Output0Buffer.AddRow();
            JobID = (String)rdr["JobID"];
            Output0Buffer.JobID = JobID;
        }

    Step 8: Note: you will need to modify the connection string to include your credentials. Also ensure that the System.Data.RSSBus.Salesforce assembly is referenced, and include the following using statements at the top of the class: using System.Data; using System.Data.RSSBus.Salesforce;
    Step 9: Once you are done editing your script, save it, and close the window. Click OK in the Script Transformation window to go back to the main pane.
    Step 10: If you had any outputs from the Script Component, you can use them in your data flow. For example, we will use a Flat File Destination. Configure the Flat File Destination to output the results to a file, and you should see the JobID in the file.
    Step 11: Your project should be ready to run.

    Read the article

  • Which tips helped you learn touch-typing? [closed]

    - by julien
    I've been learning touch-typing for about two weeks now, and I'm really committed to mastering this skill. Even though I'm doing OK with prose already, I'm struggling with programming syntax and even more with keybindings. Those take you away from the home row more than regular words do, and aren't as easy to practice. So I often hunt and peck just to get it out, but when reverting to old habits like this, I find it hard to get back into the touch-typing mindframe quickly. One little trick that has helped me so far when getting lost is to reposition every finger on its home row key, and mentally visualize the layout bias of the keyboard, i.e. the backslash kind of alignment of key columns. It's hard to describe though and probably a bit weird... Hope you guys have better tips!

    Read the article

  • Can't save data for a member in a data form

    - by RahulS
    Implied sharing is an old thing; everyone knows the reasons for it and the solutions, but still, a little theory about it: with Essbase implied sharing, some members are shared even if you do not explicitly set them as shared. These members are implied shared members. When an implied share relationship is created, each implied member assumes the other member's value. Essbase assumes (or implies) a shared member relationship in these situations:
    1. A parent has only one child.
    2. A parent has only one child that consolidates to the parent.
    In a Planning form that contains members with an implied sharing relationship, when a value is added for the parent, the child assumes the same value after the form is saved. Likewise, if a value is added for the child, the parent usually assumes the same value after the form is saved. For example, when a calculation script or load rule populates an implied share member, the other implied share member assumes the value of the member populated by the calculation script or load rule. The last value calculated or imported takes precedence. The result is the same whether you refer to the parent or the child as a variable in a calculation script. For more information have a look at: http://docs.oracle.com/cd/E17236_01/epm.1112/hp_admin_11122/ch14s11.html
    Now the issue we are going to talk about is that we lose data on save even when the parent is a dynamic calc member and has a single child. A dynamic calc parent with a single child: if we design the form with the following selection, we will find the parent below the member in the data form, and this is by design - whenever you make a selection using commands to select all the members below a parent, the children will always appear before the parent. Let's try to enter data and save it. Now, try to change the way we selected members. Here we go. Now the question again is why this behavior:
    1. Data from a Planning data form passes to Essbase row by row.
    2. In the data form, the child member appears before the parent.
    3. So data first goes to Essbase for the child (SingleStoreChild).
    4. Then, when Planning passes the data for the parent, there was #Missing or no data,
    5. which overwrites the child's value with #Missing.
    PS: As we know, dynamic calc members are calculated on the fly and are not allocated any memory in Essbase; here the parent was dynamic calc and was pointing to the same memory as the child in the background, so when Planning passed the second row to Essbase it updated the child with missing data. (A little confusing - let me know if you need more explanation.)
    6. As one of the solutions, just change the order of appearance of the parent and child.
    Cheers..!!! Rahul S. https://www.facebook.com/pages/HyperionPlanning/117320818374228

    Read the article

  • Using JDBC to asynchronously read large Oracle table

    - by Ben George
    What strategies can be used to read every row in a large Oracle table, only once, but as fast as possible, with JDBC and Java? Consider that each row has a non-trivial amount of data (30 columns, including large text in some columns). Some strategies I can think of are:
    1. Single thread reading the whole table. (Too slow, but listed for clarity.)
    2. Read the IDs into a ConcurrentLinkedQueue, and use threads to consume the queue and query by ID in batches.
    3. Read the IDs into a JMS queue, and use workers to consume the queue and query by ID in batches.
    What other strategies could be used? For the purpose of this question, assume processing of the rows to be free.
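    A language-agnostic sketch of strategy 2, shown here in C# (in Java terms, ConcurrentQueue corresponds to ConcurrentLinkedQueue and the Task pool to an ExecutorService); the batch size, worker count and query are all hypothetical:

        using System;
        using System.Collections.Concurrent;
        using System.Collections.Generic;
        using System.Threading.Tasks;

        class BatchedTableReader
        {
            const int BatchSize = 500;
            const int Workers = 8;

            static void Main()
            {
                // Stand-in for a cheap first pass: "SELECT id FROM big_table".
                var ids = new ConcurrentQueue<long>();
                for (long id = 1; id <= 100000; id++)
                    ids.Enqueue(id);

                var workers = new List<Task>();
                for (int w = 0; w < Workers; w++)
                {
                    workers.Add(Task.Factory.StartNew(() =>
                    {
                        var batch = new List<long>(BatchSize);
                        long id;
                        while (ids.TryDequeue(out id))
                        {
                            batch.Add(id);
                            if (batch.Count == BatchSize)
                            {
                                FetchRows(batch); // one round-trip per 500 rows
                                batch.Clear();
                            }
                        }
                        if (batch.Count > 0)
                            FetchRows(batch);
                    }));
                }
                Task.WaitAll(workers.ToArray());
            }

            static void FetchRows(List<long> batch)
            {
                // Placeholder: each worker would use its own connection to run
                // "SELECT ... WHERE id IN (...)" and process the 30-column rows.
            }
        }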

    Read the article

  • 2D isometric picking

    - by Bikonja
    I'm trying to implement picking in my isometric 2D game; however, I am failing. First of all, I've searched for a solution and came across several different equations, and even a solution using matrices. I tried implementing every single one, but none of them seem to work for me. The idea is that I have an array of tiles, with each tile having its x and y coordinates specified (in this simplified example, by its position in the array). I'm thinking that the tile (0, 0) should be on the left, (max, 0) on top, (0, max) on the bottom and (max, max) on the right. I came up with this loop for drawing, which googling seems to have verified as the correct solution, as has the rendered scene (of course, it could still be wrong; also, forgive the messy names and stuff, it's just WIP proof-of-concept code):

        // Draw code
        int col = 0;
        int row = 0;
        for (int i = 0; i < nrOfTiles; ++i)
        {
            // XOffset and YOffset are currently hardcoded values, but will
            // represent camera offset combined with HUD offset
            Point tile = IsoToScreen(col, row, TileWidth / 2, TileHeight / 2, XOffset, YOffset);
            spriteBatch.Draw(_tiles[i], new Rectangle(tile.X, tile.Y, TileWidth, TileHeight), Color.White);
            col++;
            if (col >= Columns) // Columns is the number of tiles in a single row
            {
                col = 0;
                row++;
            }
        }

        // Get selection overlay location (removed check if selection exists for simplicity's sake)
        Point selected = IsoToScreen(_selectedTile.X, _selectedTile.Y, TileWidth / 2, TileHeight / 2, XOffset, YOffset);
        spriteBatch.Draw(_selectionTexture, new Rectangle(selected.X, selected.Y, TileWidth, TileHeight), Color.White);
        // End of draw code

        public Point IsoToScreen(int isoX, int isoY, int widthHalf, int heightHalf, int xOffset, int yOffset)
        {
            Point newPoint = new Point();
            newPoint.X = widthHalf * (isoX + isoY) + xOffset;
            newPoint.Y = heightHalf * (-isoX + isoY) + yOffset;
            return newPoint;
        }

    This code draws the tiles correctly. Now I wanted to do picking to select the tiles. For this, I tried coming up with equations of my own (including reversing the drawing equation) and I tried multiple solutions I found on the internet, and none of them worked. Trying out lots of solutions, I came upon one that didn't work, but it seemed like an axis was just inverted. I fiddled around with the equations and somehow managed to get it to actually work (but have no idea why it works), and while it's close, it still doesn't work. I'm not really sure how to describe the behaviour, but it changes the selection in the wrong places, while being fairly close (sometimes spot on, sometimes a tile off; I believe never more off than the adjacent tile). This is the code I have for getting which tile coordinates are selected:

        public Point? ScreenToIso(int screenX, int screenY, int tileHeight, int offsetX, int offsetY)
        {
            Point? newPoint = null;
            int nX = -1;
            int nY = -1;
            int tX = screenX - offsetX;
            int tY = screenY - offsetY;
            nX = -(tY - tX / 2) / tileHeight;
            nY = (tY + tX / 2) / tileHeight;
            newPoint = new Point(nX, nY);
            return newPoint;
        }

    I have no idea why this code is so close, especially considering it doesn't even use the tile width, and all my attempts to write an equation myself or use a solution I googled failed. Also, I don't think this code accounts for the area outside the "tile" (the transparent part of the tile image), for which I intend to add a color map; but even if that's true, it's not the problem, as the selection sometimes switches at approximately 25% or 75% of the width or height.
    I'm thinking I've stumbled upon a wrong path and need to backtrack, but at this point, I'm not sure what to do, so I hope someone can shed some light on my error or point me to the right path. It may be worth mentioning that my goal is not only to pick the tile. Each main tile will be divided into 5x5 smaller tiles which won't be drawn separately from the whole main tile, but they will need to be picked out. I think a color map of a main tile with different colors for different coordinates within the main tile should take care of that, though, which would fall within using a color map for the main tile (for the transparent parts of the tile, meaning parts that possibly belong to other tiles).
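    For reference, ScreenToIso can be derived by algebraically inverting IsoToScreen above; a hypothetical, untested sketch (depending on where your sprites' origins are, an extra half-tile offset may still be needed before rounding):

        // From X = widthHalf * (isoX + isoY) + xOffset
        // and  Y = heightHalf * (-isoX + isoY) + yOffset:
        //   a = (X - xOffset) / widthHalf  =  isoX + isoY
        //   b = (Y - yOffset) / heightHalf = -isoX + isoY
        // so isoX = (a - b) / 2 and isoY = (a + b) / 2.
        public Point ScreenToIso(int screenX, int screenY, int widthHalf, int heightHalf, int xOffset, int yOffset)
        {
            float a = (screenX - xOffset) / (float)widthHalf;
            float b = (screenY - yOffset) / (float)heightHalf;

            // Round instead of truncating so the tile boundaries don't
            // shift for coordinates left of or above the origin.
            int isoX = (int)Math.Round((a - b) / 2f);
            int isoY = (int)Math.Round((a + b) / 2f);
            return new Point(isoX, isoY);
        }

    Note that, unlike the version above, this uses both the tile width and the tile height, which is usually a sign the inverse is consistent with the draw transform.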

    Read the article

  • Keyboard Navigation For ASP.NET GridView And TreeList Controls v2010 vol 1

    Great new keyboard navigation feature! With the DXperience v2010.1 release, you can enable keyboard navigation by changing a single property: set KeyboardSupport to true. Once KeyboardSupport is enabled, your users can: Focus On Grid Using Control Activation Key - specify an access key for your grid controls and allow your end-users to press CTRL+SHIFT+AccessKey to change focus to the corresponding grid control. Focused Row - press the UP and DOWN arrow keys to move row focus.

    Read the article

  • Download files from a SharePoint site using the RSSBus SSIS Components

    - by dataintegration
    In this article we will show how to use a stored procedure included in the RSSBus SSIS Components for SharePoint to download files from SharePoint. While the article uses the RSSBus SSIS Components for SharePoint, the same process will work for any of our SSIS Components.
    Step 1: Open Visual Studio and create a new Integration Services Project.
    Step 2: Add a new Data Flow Task to the Control Flow screen and open the Data Flow Task.
    Step 3: Add an RSSBus SharePoint Source to the Data Flow Task.
    Step 4: In the RSSBus SharePoint Source, add a new Connection Manager, and add your credentials for the SharePoint site.
    Step 5: Now from the Table or View dropdown, choose the name of the Document Library that you are going to back up and close the wizard.
    Step 6: Add a Script Component to the Data Flow Task and drag an output arrow from the 'RSSBus SharePoint Source' to it.
    Step 7: Open the Script Component, go to edit the Input Columns, and choose all the columns.
    Step 8: This will open a new Visual Studio instance, with a project in it. In this project add a reference to the RSSBus.SSIS2008.SharePoint assembly available in the RSSBus SSIS Components for SharePoint installation directory.
    Step 9: In the 'ScriptMain' class, add the System.Data.RSSBus.SharePoint namespace and go to the 'Input0_ProcessInputRow' method (this method's name may vary depending on the input name in the Script Component).
    Step 10: In the 'Input0_ProcessInputRow' method, you can add code to use the DownloadDocument stored procedure. Below we show the sample code:

        String connString = "Offline=False;Password=PASSWORD;User=USER;URL=SHAREPOINT-SITE";
        String downloadDir = "C:\\Documents\\";
        SharePointConnection conn = new SharePointConnection(connString);
        SharePointCommand comm = new SharePointCommand("DownloadDocument", conn);
        comm.CommandType = CommandType.StoredProcedure;
        comm.Parameters.Clear();
        String file = downloadDir + Row.LinkFilenameNoMenu.ToString();
        comm.Parameters.Add(new SharePointParameter("@File", file));
        String list = Row.ServerUrl.ToString().Split('/')[1].ToString();
        comm.Parameters.Add(new SharePointParameter("@Library", list));
        String remoteFile = Row.LinkFilenameNoMenu.ToString();
        comm.Parameters.Add(new SharePointParameter("@RemoteFile", remoteFile));
        comm.ExecuteNonQuery();

    After saving your changes to the Script Component, you can execute the project and find the downloaded files in the download directory.
    SSIS Sample Project: To help you with getting started using the SharePoint Data Provider within SQL Server SSIS, download the fully functional sample package. You will also need the SharePoint SSIS Connector to make the connection. You can download a free trial here. Note: Before running the demo, you will need to change your connection details in both the 'Script Component' code and the 'Connection Manager'.

    Read the article

  • If all variables are a subset of the superkey, is the database design 5NF? [migrated]

    - by Lukazoid
    I have a table called LogMessages, which has the following columns:
    Level - a numeric value which represents Trace, Debug, Info, Warning, Error or Fatal
    Time - a UTC time
    Message - foreign key to a Messages table
    Source - foreign key to a Sources table
    User - foreign key to a Users table
    From what I can see, all of these columns are part of the superkey; if any single value differs from an existing row, a new row can be created. My question is, does this design comply with fifth normal form? I am unsure, as some groups of data will be repeating; however, I don't believe this violates 5NF? (Correct me if I'm wrong.)

    Read the article

  • Handling array passed to object at creation

    - by cecilli0n
    When creating my object I pass it an array of a row from my database (everything in the array is needed, unnecessary elements having been discarded at the SQL query level). When I need to access certain array elements from within my class, I do so like $this->row['element']. However, as I continue development, I sometimes forget what exactly is in this passed array (this itself doesn't seem good). I am wondering if there is a professional approach to dealing with this, or am I the only one who has these "I wonder what's in the array" thoughts? One approach to tackling this could be that when we originally pass the array, in the constructor, we assign each element of the array to its own variable, but is this considered professional practice? Additionally, by doing this, we could make those variables constants, in an attempt at immutability. Overall I am trying to adhere to good software craftsmanship. Regards.
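    The constructor-mapping idea from the question, sketched in C# (the original is PHP, and the column names here are made up): copying the row into explicitly named, read-only properties once means the rest of the class documents exactly what was passed in, and bad rows fail fast at construction time.

        using System;
        using System.Collections.Generic;

        class Member
        {
            public int Id { get; private set; }
            public string Name { get; private set; }
            public DateTime Joined { get; private set; }

            // The row dictionary stands in for the database row array.
            public Member(IDictionary<string, object> row)
            {
                // Any missing or mistyped element throws here, at creation,
                // rather than deep inside some later method.
                Id = Convert.ToInt32(row["id"]);
                Name = (string)row["name"];
                Joined = Convert.ToDateTime(row["joined"]);
            }
        }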

    Read the article

  • How to create a use case diagram for board game played on PC

    - by user970696
    I'm struggling with a task I was given to practice UML and use cases. The problem is that I should model a computer version of a board game, so I am unsure about a few things. Obviously it does not matter whether you play against the PC or another player; the actions are the same. The game is simply like tic-tac-toe. E.g.:
    Actor Player ---(Place a diamond)-----include----(Check for a row)---include--(Swap players)
    But the game is played on the PC, so is "Check for a row" really a use case? And the same with "Swap players"? Because the system would do that. On the other hand, if it was not, how could I continue?

    Read the article

  • How to organize unit/integration test in BDD

    - by whatf
    So finally, after reading a lot, I have understood that the difference between BDD and TDD is just the difference between T and B. But coming from a basic TDD background, what I was used to was:
    1. First write unit tests for database models,
    2. then write tests for views (at this point, start with integration tests as well, along with unit tests),
    3. then write more integration tests for testing UI stuff.
    What would be a correct way to approach BDD? Say I have a simple blog application. Given: when a user logs in, he should be shown a list of all his posts. But for this, I need a model with a row for the user, and another row for the blog posts. So how do we go about writing tests? When do we create fixtures? When do we write integration (Selenium) tests?

    Read the article

  • How do I convert matrices intended for OpenGL to be compatible for DirectX?

    - by gardian06
    I have finished working through the book "Game Physics Engine Development 2nd Ed" by Millington, and have got it working, but I want to adapt it to work with DirectX. I understand that D3D9+ has the option to use either the left-handed or right-handed convention, but I am unsure about how to return my matrices so that they are usable by D3D. The source code returns OpenGL column-major matrices (the transpose of the working transform matrix shown below), but DirectX is row-major. For those unfamiliar with the organization of the matrices used in the book:
    [r11 r12 r13 t1]
    [r21 r22 r23 t2]
    [r31 r32 r33 t3]
    [ 0   0   0   1]
    r## means the value of that element in the rotation matrix, and t# means the translation value. So the question in short is: how do I convert the matrix above to be easily usable by D3D? All of the documentation that I have found simply states that D3D is row-major, but not where to put which elements so that it is usable by D3D in terms of rotation and translation.
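    For what it's worth, the usual answer is a plain transpose - row-major versus column-major is purely a storage-order question, so flipping across the diagonal keeps the rotation in the upper-left 3x3 and moves the translation t1..t3 into the fourth row, where D3D-style row-vector math expects it (any left- vs right-handed conversion is a separate step). A hypothetical C# sketch:

        // Transpose a 4x4 stored as [row, column]; with the book's layout as
        // input, the output has t1..t3 in elements [3,0]..[3,2].
        static float[,] ToD3DLayout(float[,] m)
        {
            var result = new float[4, 4];
            for (int row = 0; row < 4; row++)
                for (int col = 0; col < 4; col++)
                    result[row, col] = m[col, row];
            return result;
        }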

    Read the article

  • How Would You Design This Table?

    - by sooprise
    I have to create a table where each row needs to store 50 number values. Each row will always need to store 50 number values. If this was a smaller number of values, I would just make fields for each of the values, but because there are 50, this approach seems a bit cumbersome (but since it will always be 50 values, maybe this is the correct approach?). Is there a way to store an array of values in a field? This seems like a nice solution, but the concept is almost identical to creating a relational database.

    Read the article

  • Retrieve data from an ASP.Net application using Ado.Net 2.0 disconnected model

    - by nikolaosk
    This is the second post in a series of posts regarding ADO.Net 2.0. Have a look at the first post if you like. In this post I am going to investigate the "disconnected" model. When I say "disconnected" I mean DataSets. DataSets are in-memory representations of tables in a particular database. A DataSet contains a Tables collection, and each Table contains a Rows collection and a Columns collection. So initially you connect to the database, get the data to...(read more)
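    A minimal sketch of that flow (the connection string and the sample table are hypothetical): the DataAdapter opens the connection, fills the DataSet, and closes the connection again, after which everything is worked on in memory - the "disconnected" part:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        class DisconnectedDemo
        {
            static void Main()
            {
                const string connString =
                    "Server=.;Database=Pubs;Integrated Security=true;";

                // Fill() opens and closes the connection for us.
                SqlDataAdapter adapter = new SqlDataAdapter(
                    "SELECT au_id, au_lname FROM authors", connString);
                DataSet ds = new DataSet();
                adapter.Fill(ds, "Authors");

                // Tables -> Rows -> Columns, as described above.
                foreach (DataRow row in ds.Tables["Authors"].Rows)
                    Console.WriteLine("{0}: {1}", row["au_id"], row["au_lname"]);
            }
        }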

    Read the article

  • Accessing Controls Within A Gridview

    - by Bunch
    Sometimes you need to access a control within a GridView, but it isn't quite as straightforward as just using FindControl to grab the control like you can in a FormView. Since the GridView builds multiple rows, the key is to specify the row. In this example there is a GridView with a control for a player's errors. If a player's errors are greater than 9, the GridView should display the control (lblErrors) in red so it stands out. Here is the GridView:

        <asp:GridView ID="gvFielding" runat="server" DataSourceID="sqlFielding" DataKeyNames="PlayerID" AutoGenerateColumns="false" >
            <Columns>
                <asp:BoundField DataField="PlayerName" HeaderText="Player Name" />
                <asp:BoundField DataField="PlayerNumber" HeaderText="Player Number" />
                <asp:TemplateField HeaderText="Errors">
                    <ItemTemplate>
                        <asp:Label ID="lblErrors" runat="server" Text='<%# EVAL("Errors") %>'  />
                    </ItemTemplate>
                </asp:TemplateField>
            </Columns>
        </asp:GridView>

    In the code-behind you can add the code to change the label's ForeColor property to red based on the number of errors. In this case, 10 or more errors triggers the color change.

        Protected Sub gvFielding_DataBound(ByVal sender As Object, ByVal e As System.EventArgs) Handles gvFielding.DataBound
            Dim errorLabel As Label
            Dim errors As Integer
            Dim i As Integer = 0
            For Each row As GridViewRow In gvFielding.Rows
                errorLabel = gvFielding.Rows(i).FindControl("lblErrors")
                If Not errorLabel.Text = Nothing Then
                    Integer.TryParse(errorLabel.Text, errors)
                    If errors > 9 Then
                        errorLabel.ForeColor = Drawing.Color.Red
                    End If
                End If
                i += 1
            Next
        End Sub

    The main points in the DataBound sub are to use a For Each statement to loop through the rows, and to increment the variable i so you loop through every row. That way you check each one, and if the value is greater than 9 the label changes to red. The If Not errorLabel.Text = Nothing line is there as a check in case no data comes back at all for Errors. Technorati Tags: GridView,ASP.Net,VB.Net

    Read the article

  • Pending and Approval process

    - by zen
    So let's say I have a DB table with 8 columns, one being a unique auto-incrementing ID. I have a page that pulls in the info for each row based on the query string ID. I want to give my users the ability to propose changes - kinda like a wiki setup. So I was thinking I should just have another duplicate table, or maybe a separate database altogether (without the auto-incrementing column and maybe with a date-edited column), that keeps all proposed changes in a queue; then, when I approve them, the script can move the row from the proposed DB to the real DB. Does this sound good, or is there a better process for this?

    Read the article

  • Recent EC Meetings - RIM forfeits EC seat

    - by heathervc
    Materials and minutes from the JCP EC Face-to-Face Meeting, held September 2012 in Prague, are now available on the EC Meeting Summaries page.  Topics included JCP.Next, a JCP 2.8 progress report, Inactive JSRs, and two Spec Lead presentations. In October 2011, new EC Standing Rules went into effect. The Rules include the following: "Missing five meetings in a row, or missing more than two-thirds of all meetings in any consecutive twelve-month period, results in loss of EC membership."  Last week, the JCP EC met for their October EC teleconference meeting.  RIM missed this meeting, and has now missed five meetings in a row (see the attendance chart); therefore, RIM has forfeited their EC membership. Results from the 2012 EC Elections will be available on 30 October.  The new merged EC will go into effect on 12 November.

    Read the article

  • Algorithm for determining grid based on variably sized "blocks"?

    - by Lite Byte
    I'm trying to convert a set of "blocks" into a grid-like layout. The blocks have a width of either 25%, 33%, 50%, 66%, or 75% of their container, and each row of the grid should try to fit as many blocks as possible, up to a total width of 100%. I've discovered that trying to do this while leaving no remaining blocks in the original set is very hard. Eventually, I think my solution will be to upgrade/downgrade various block sizes (based on their priority or something) so they all fit into a row. In either case, before I do that, I thought I'd check whether someone has some code (or a paper) demonstrating a solution to this problem already? And bonus points if the solution incorporates varying block heights into its calculations :) Thanks!
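    As a starting point, a first-fit sketch in C# (no upgrading/downgrading yet - it just starts a new row whenever the next block would overflow 100%; note that 33 + 66 = 99 is naturally accepted as a full row):

        using System;
        using System.Collections.Generic;

        class RowFillDemo
        {
            // Greedily pack block widths (percentages) into rows of <= 100.
            static List<List<int>> FillRows(IEnumerable<int> widths)
            {
                var rows = new List<List<int>>();
                var current = new List<int>();
                int used = 0;

                foreach (int w in widths)
                {
                    if (used + w > 100)
                    {
                        rows.Add(current);
                        current = new List<int>();
                        used = 0;
                    }
                    current.Add(w);
                    used += w;
                }
                if (current.Count > 0)
                    rows.Add(current);
                return rows;
            }

            static void Main()
            {
                foreach (var row in FillRows(new[] { 50, 33, 66, 25, 25, 50, 75 }))
                    Console.WriteLine(string.Join(" + ", row));
            }
        }

    Sorting the blocks by width first (first-fit decreasing) usually leaves fewer underfull rows, at the cost of losing the original ordering.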

    Read the article
