Search Results

Search found 7127 results on 286 pages for 'calculated columns'.


  • Elusive race condition in Java

    - by nasufara
    I am creating a graphing calculator. In an attempt to squeeze some more performance out of it, I added some multithreading to the line calculator. Essentially, my current implementation constructs a thread-safe Queue of X values, then starts however many threads it needs; each one calculates a point on the line, using the queue to get its values, and the points are ordered with a HashMap when the calculations are done. This implementation works great, and that's not where my race condition is (merely some background info).

    In examining the performance results, I found that the HashMap is a bottleneck, since I populate it synchronously on one thread. So I figured that ordering each point as it's calculated would work best. I tried a PriorityQueue, but that was slower than the HashMap. I ended up creating an algorithm that essentially works like this: I construct a list of X values to calculate, as in my current algorithm. I then copy that list of values into another class, unimaginatively and temporarily named BlockingList, which is responsible for ordering the points as they are calculated. BlockingList contains a put() method, which takes two BigDecimals as parameters: the first the X value, the second the calculated Y value. put() will only accept a value if its X value is the next one to be accepted in the list of X values, and will block until another thread gives it the next expected value.

    An example, since that can be confusing: say I have two threads, Thread-1 and Thread-2. Thread-2 gets the X value 10.0 from the values queue, and Thread-1 gets 9.0. However, Thread-1 completes its calculations first, and calls put() before Thread-2 does. Because BlockingList is expecting to get 10.0 first, and not 9.0, it blocks on Thread-1 until Thread-2 finishes and calls put(). Once Thread-2 gives BlockingList 10.0, it notify()s all waiting threads, and expects 9.0 next. This continues until BlockingList has received all of its expected values. (I apologise if that was hard to follow; if you need more clarification, just ask.)

    As the question title suggests, there is a race condition in here. If I run it without any System.out.printlns, it will sometimes lock up because of conflicting wait()s and notifyAll()s, but if I put a println in, it runs great.
    A small implementation of this is included below, and exhibits the same behavior:

        import java.math.BigDecimal;
        import java.util.concurrent.ConcurrentLinkedQueue;

        public class Example {
            public static void main(String[] args) throws InterruptedException {
                // Various scaling values, determined based on the graph size
                // in the real implementation
                BigDecimal xMax = new BigDecimal(10);
                BigDecimal xStep = new BigDecimal(0.05);

                // Construct the values list, from -10 to 10
                final ConcurrentLinkedQueue<BigDecimal> values = new ConcurrentLinkedQueue<BigDecimal>();
                for (BigDecimal i = new BigDecimal(-10); i.compareTo(xMax) <= 0; i = i.add(xStep)) {
                    values.add(i);
                }

                // Contains the calculated values
                final BlockingList list = new BlockingList(values);

                for (int i = 0; i < 4; i++) {
                    new Thread() {
                        public void run() {
                            BigDecimal x;
                            // Keep looping until there are no more values
                            while ((x = values.poll()) != null) {
                                PointPair pair = new PointPair();
                                pair.realX = x;
                                try {
                                    list.put(pair);
                                } catch (Exception ex) {
                                    ex.printStackTrace();
                                }
                            }
                        }
                    }.start();
                }
            }

            private static class PointPair {
                public BigDecimal realX;
            }

            private static class BlockingList {
                private final ConcurrentLinkedQueue<BigDecimal> _values;
                private final ConcurrentLinkedQueue<PointPair> _list = new ConcurrentLinkedQueue<PointPair>();

                public BlockingList(ConcurrentLinkedQueue<BigDecimal> expectedValues) throws InterruptedException {
                    // Copy the values into a new queue
                    BigDecimal[] arr = expectedValues.toArray(new BigDecimal[0]);
                    _values = new ConcurrentLinkedQueue<BigDecimal>();
                    for (BigDecimal dec : arr) {
                        _values.add(dec);
                    }
                }

                public void put(PointPair item) throws InterruptedException {
                    while (item.realX.compareTo(_values.peek()) != 0) {
                        synchronized (this) {
                            // Block until someone enters the next desired value
                            wait();
                        }
                    }
                    _list.add(item);
                    _values.poll();
                    synchronized (this) {
                        notifyAll();
                    }
                }
            }
        }

    My question is: can anybody help me find the threading error? Thanks!
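
    A likely culprit, offered as a sketch rather than a verified diagnosis: in put(), the compareTo() test runs outside the synchronized block, so a notifyAll() can fire after a thread has checked the condition but before it has entered the monitor and called wait(). That notification is lost and the thread waits forever; a println changes the timing enough to hide it. (peek() can also return null once the queue empties, making compareTo() throw.) Holding the monitor across the test and the wait() closes the window:

        public synchronized void put(PointPair item) throws InterruptedException {
            // Test and wait under one monitor, so no notifyAll() can slip in
            // between the check and the wait() (the classic lost wake-up).
            while (_values.peek() == null
                    || item.realX.compareTo(_values.peek()) != 0) {
                wait();
            }
            _list.add(item);
            _values.poll();
            notifyAll(); // wake the thread holding the next expected value
        }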

    Read the article

  • user generated / user specific functions

    - by pedalpete
    I'm looking for the most elegant and secure method to do the following. I have a calendar, and groups of users. Users can add events to specific days on the calendar, and specify how long each event lasts. I've had a few requests from users to add the ability for them to define that events of a specific length include a break of a certain amount of time, or to require that a specific amount of time be left between events. For example: if an event is 2 hours, include a 20-minute break; for each event, require 30 minutes before the start of the next event. The same group that asks for a 2-hour event to include a 20-minute break could also require that a 3-hour event include a 30-minute break. In the end, what the users want is an elapsed time excluding breaks, calculated for them. Currently I provide a total elapsed time, but they are looking for a running time.

    Each of these requests is different for each group: one group may want a 30-minute break during a 2-hour event, and another may want only 10 minutes for each 3-hour event. I was kind of thinking I could write the functions into a PHP file per group, then include that file, do the calculations via PHP, and return a calculated total to the user, but something about that doesn't sit right with me. Another option is to output the group's functions to JavaScript and run them client-side, as I'm already returning the duration of the event; but where a user is part of more than one group with different rules, this seems like it could get rather messy.

    I currently store the start and end times in the database, but no durations, and I don't think I should store the calculated totals in the DB, because if a group decides to change its calculations I'd need to change it throughout the DB. Is there a better way of doing this? I would just store the variables in MySQL, but I don't see how I can then tell MySQL to calculate based on those variables. I'm REALLY lost here. Any suggestions? I'm hoping somebody has done something similar and can provide some insight into the best direction. A sketch of one approach follows below.

    If it helps, my table contains eventid, user, group, startDate, startTime, endDate, endTime, type. The JSON for the event which I return to the user is

        {"eventid":"'.$eventId.'", "user":"'.$userId.'","group":"'.$groupId.'","type":"'.$type.'","startDate":"'.$startDate.'","startTime":"'.$startTime.'","endDate":"'.$endDate.'","endTime":"'.$endTime.'","durationLength":"'.$duration.'", "durationHrs":"'.$durationHrs.'"}

    where, for example, durationLength is 2.5 and durationHrs is 2:30.
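
    One hedged direction, sketched in Java with hypothetical names (the poster's stack is PHP/MySQL, but the idea is language-neutral): store each group's rules as data rows, something like (group_id, min_event_minutes, break_minutes), instead of as code, and apply them at read time. Changing a group's rules then touches one row, not stored totals.

        import java.util.List;

        /** One stored rule row; the field names are assumptions, not the poster's schema. */
        class DurationRule {
            final int groupId;
            final int minEventMinutes; // rule applies to events at least this long
            final int breakMinutes;    // break to subtract from the elapsed time

            DurationRule(int groupId, int minEventMinutes, int breakMinutes) {
                this.groupId = groupId;
                this.minEventMinutes = minEventMinutes;
                this.breakMinutes = breakMinutes;
            }
        }

        class RunningTimeCalculator {
            /** Net duration = elapsed minutes minus the largest matching break. */
            static int netMinutes(int groupId, int elapsedMinutes, List<DurationRule> rules) {
                int breakMinutes = 0;
                for (DurationRule r : rules) {
                    if (r.groupId == groupId && elapsedMinutes >= r.minEventMinutes) {
                        // Assumption: the longest applicable rule wins.
                        breakMinutes = Math.max(breakMinutes, r.breakMinutes);
                    }
                }
                return elapsedMinutes - breakMinutes;
            }
        }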

    Read the article

  • magento - multiple tax rates

    - by Fiona
    I've built a Magento site for a Canadian company where each province has a retail sales tax and a federal tax rate, and these rates differ. So I set up the rates, and the tax is being calculated correctly; i.e. the sum of the two tax rates is applied correctly. However, on selecting the breakdown-of-tax mode in the admin panel, it would appear that there is something wrong with my setup. E.g.:

        Subtotal                             $129.99
        GST/HST Quebec (5%)                  $ 16.25
        Provincial Sales Tax Quebec (7.5%)
        Tax                                  $ 16.25
        Grand Total                          $158.23

    $16.25 is 12.5% of $129.99, so the tax figure is correct. However, it should display as follows:

        Subtotal                             $129.99
        GST/HST Quebec (5%)                  $  6.50
        Provincial Sales Tax Quebec (7.5%)   $  9.75
        Tax                                  $ 16.25
        Grand Total                          $158.23

    Has anyone come across this before? Any suggestions on how to fix it? Many thanks, Fiona

    Read the article

  • Filemaker - Getting field values from related table

    - by foobar
    I have the following setup in FileMaker Pro 10:

        Table1:     id_table1, related_names
        Table2:     id_table2, name, include
        join table: id_table1, id_table2

    Now I want to either make related_names a calculated field, or write a script that sets related_names to a comma-separated list of all names that are connected through the join table and have Table2.include = True. So, for example, a data set could look like:

        Table1
        id_table1, related_names
        1, "foo,bar"
        2, "foo"
        3, ""

        join table
        id_table1, id_table2
        1,1
        1,2
        1,3
        2,1

        Table2
        id_table2, name, include
        1, foo, True
        2, bar, True
        3, baz, False

    After searching the internet for a few hours, the closest I came was a calculated field with List(join-table::id_table2), which gives me a list of all the id_table2 values. But then I would need to find the corresponding records in Table2 and check the include field. I hope the problem is clear; any help is highly appreciated.

    Read the article

  • Automatically compute variable upon input in SPSS

    - by what
    I have two variables in SPSS: months is a number of months, and years_from_months is the number of years calculated from the months (a decimal: months / 12). What I want is for years_from_months to be calculated (and updated) automatically when I finish entering the value in months, i.e. without having to explicitly run the compute manually through the menu. I have defined years_from_months to calculate the years, but at the moment the field only updates when I go to the menu option Transform > Compute Variable... and click OK in the window that opens. How can I set up the cell to update automatically (like in Excel)?

    Read the article

  • Division to the nearest 1 decimal place without floating point math?

    - by John Sheares
    I am having some speed issues with my C# program and have identified that this percentage calculation is causing a slowdown. The calculation is simply n/d * 100. Both the numerator and denominator can be any integer. The numerator can never be greater than the denominator and is never negative, so the result is always 0-100. Right now this is done with floating-point math, and it is somewhat slow since it's calculated tens of millions of times. I really don't need anything more accurate than the nearest 0.1 percent, and I only use the calculated value to see if it's bigger than a fixed constant. I am thinking that everything should be kept as an integer, so with 0.1 accuracy the range would be 0-1000. Is there some way to calculate this percentage without floating-point math?
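
    One hedged possibility, sketched in Java though the arithmetic carries over to C# unchanged: scale to tenths of a percent with integer math, adding half the denominator before dividing so the result rounds to nearest instead of truncating. The widening to long is an assumption about how large n can get; drop it if n is known to stay below ~2 million.

        /** Returns n/d as tenths of a percent (0..1000), rounded to nearest; assumes d > 0. */
        static int permille(int n, int d) {
            // n * 1000 can exceed Integer.MAX_VALUE, so widen before multiplying.
            return (int) ((n * 1000L + d / 2) / d);
        }

        // The comparison against a fixed threshold then stays in integers:
        // if (permille(n, d) > 425) { ... }   // i.e. greater than 42.5%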

    Read the article

  • Multiple Refinement/Clusters in a single search

    - by Brain Teasers
    Suppose my options for searching are: 1. occupation, 2. education, 3. religion, 4. caste, 5. country, and many more. When I perform this search, I can easily calculate refinement (facet) counts. The problem is that each individual refinement's counts need to be calculated with every search parameter applied except the refinement being counted, as checked on the left side. So even when I have chosen a profession, the search refinements for profession should still appear; the same functionality applies to all refinements. Please help. I use Sphinx for searching, and I have seen this type of refinement at http://ww2.shaadi.com/search
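
    A common pattern for this, sketched below in Java with hypothetical names (not a Sphinx API; whatever executes the filtered query sits behind the interface): run one count query per facet, passing every active filter except that facet's own.

        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        class FacetCounter {
            /** Stand-in for whatever executes a filtered count query against the index. */
            interface Search {
                Map<String, Integer> countByValue(String facet, Map<String, String> filters);
            }

            static Map<String, Map<String, Integer>> refinements(
                    List<String> facets, Map<String, String> activeFilters, Search search) {
                Map<String, Map<String, Integer>> result = new HashMap<String, Map<String, Integer>>();
                for (String facet : facets) {
                    // Apply every filter except the one belonging to this facet,
                    // so the user's own selection doesn't collapse its counts.
                    Map<String, String> filters = new HashMap<String, String>(activeFilters);
                    filters.remove(facet);
                    result.put(facet, search.countByValue(facet, filters));
                }
                return result;
            }
        }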

    Read the article

  • SSL: can the secret key be sniffed before the actual encryption begins?

    - by Jorre
    I was looking into SSL and some of the steps involved in setting up an encrypted connection between a server and a client computer. I understand that a server key and certificate are sent to the browser, and that a secret code is calculated, as they describe in the following video: http://www.youtube.com/watch?v=iQsKdtjwtYI Around 5:22, they talk about a master secret that is calculated before the two sides start talking in an encrypted way. My question now is: before the connection is actually encrypted (the handshake phase), all communication between the server and the client can be sniffed by a packet sniffer. Isn't it then possible to sniff the encryption key or other data that is used to set up the secure connection?
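
    For intuition, a toy sketch (not real TLS): with Diffie-Hellman-style key agreement, each side sends only a public value; the shared secret is computed locally from a private exponent that never crosses the wire, so a passive sniffer sees g, p, g^a mod p and g^b mod p but cannot feasibly recover g^ab mod p. (In RSA key exchange the pre-master secret is instead encrypted with the public key from the server's certificate, with the same effect for a passive listener.)

        import java.math.BigInteger;
        import java.security.SecureRandom;

        public class ToyDiffieHellman {
            public static void main(String[] args) {
                SecureRandom rnd = new SecureRandom();
                BigInteger p = BigInteger.probablePrime(512, rnd); // public modulus (toy size)
                BigInteger g = BigInteger.valueOf(2);              // public generator

                BigInteger a = new BigInteger(256, rnd);           // client's private exponent
                BigInteger b = new BigInteger(256, rnd);           // server's private exponent

                BigInteger A = g.modPow(a, p); // sent in the clear -- sniffable
                BigInteger B = g.modPow(b, p); // sent in the clear -- sniffable

                // Each side derives the same secret without ever transmitting it.
                BigInteger clientSecret = B.modPow(a, p);
                BigInteger serverSecret = A.modPow(b, p);
                System.out.println(clientSecret.equals(serverSecret)); // true
            }
        }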

    Read the article

  • Exposing some live data on a website - New to ASP.NET, need guidelines

    - by Carlos
    I have a large .NET-based system running within the company intranet, which allows WinForms users to see some live calculated numbers and a custom WinForms control drawn live (but not in real time). The WinForms users can also affect the operation of the system. I would like to show just the live numbers on a website, along with the custom control. Nothing needs to come back from the web user, as the web app is meant purely for monitoring. All the numbers can be calculated at the server. It's been a long time since I touched ASP.NET, and I need to know how to proceed. What are the steps in building and deploying such a website? Any caveats I need to look out for?

    Read the article

  • SQL Server Reporting Services Format Hours as Hours:Minutes

    - by Frank Schmitt
    I'm writing reports in SQL Server Reporting Services that have a number of hours grouped by, say, user, with a total calculated from the sum of the values. Currently my query runs a stored proc that returns the hours in HH:MM format rather than decimal hours, as our users find that more intuitive. The problem occurs when I try to add up the column using an SSRS expression, because the SUM function isn't smart enough to add up times in this format. Is there any way to: display a time interval (in minutes or hours) in HH:MM format while having it calculated in decimal form? Or split up and total the HH:MM text values in an expression? I'd like to avoid having to write/run a second query just to get the total.
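
    One hedged workaround, shown as a Java sketch of the arithmetic (an SSRS expression can do the same integer division and formatting): have the proc return the raw minutes alongside the display string, let the report sum the minutes, and format the total back to HH:MM only at the edge.

        /** Formats a total number of minutes as H:MM, e.g. 135 -> "2:15". */
        static String toHoursMinutes(int totalMinutes) {
            return String.format("%d:%02d", totalMinutes / 60, totalMinutes % 60);
        }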

    Read the article

  • Can't fill a column of NULLs with actual values by making an association to the proper values in the

    - by UkraineTrain
    I have a table with separate columns for months and days, a varchar column for the 6-hour increments of each day ('12AM', '6AM', '12PM', '6PM'), and a column that's supposed to hold a calculated numeric value for each of those 6-hour increments. The calculated values come from a reference table, which contains values for each day over several months, broken down by hour, with one column per hour. So, basically, I have to add up the hourly values for each 6-hour increment, but I have no idea how to associate the correct values in the reference table with those 6-hour increments. I will really appreciate any help on this.
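
    If the mapping really is four fixed six-hour blocks per day (an assumption; the post doesn't say), the association is mechanical: hour h of the reference table belongs to increment h / 6. A Java sketch of the bucketing:

        /** Collapses 24 hourly values into the four 6-hour increments. */
        static double[] toSixHourIncrements(double[] hourly) { // assumes hourly.length == 24
            double[] buckets = new double[4];                  // 12AM, 6AM, 12PM, 6PM
            for (int h = 0; h < 24; h++) {
                buckets[h / 6] += hourly[h];                   // hours 0-5 -> 12AM, 6-11 -> 6AM, ...
            }
            return buckets;
        }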

    Read the article

  • Can I prevent a computed column from changing its value if the formula changes?

    - by William Hurst
    I have a computed column in MS SQL 2005 that does some VAT calculations. The website uses invoices that can only be generated once and rely on the value in the computed column to work out the VAT. Unfortunately, a bug was found which means the calculated VAT value is off by a few cents. Not a huge problem, but we can't change the previously computed values, as those need to be honoured on the invoices. tl;dr: How do I change the calculation for a computed column without re-calculating the values that have already been calculated?

    Read the article

  • How might you unobtrusively enhance the jQuery Datepicker class?

    - by Teflon Ted
    You can pass special strings like "+7" into the setDate() method of jQuery's Datepicker class, which will be translated into "7 days from today": http://docs.jquery.com/UI/Datepicker#method-setDate But you can't get that "+7" back out: when you call getDate() you get the calculated resulting date. I have a use case where I need to pull out the special string "+7" for propagation. One chunk of code passes a special string into the Datepicker and hands the Datepicker over to another chunk of code, which pulls out the date; but the latter sometimes needs to know the special string rather than the calculated date. So I need to enhance the Datepicker to (a) store the special code internally and (b) expose it via a method like getOriginallyPassedInDate() or some such. I'm not a jQuery/JavaScript ninja, so I could really use some guidance on how I might preferably-unobtrusively add the necessary functionality to the Datepicker class, the way you might monkey-patch an object in Ruby, I'm guessing.
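
    Whatever the final jQuery wiring looks like, the mechanics wanted here amount to a decorator: intercept the setter, stash the raw argument before delegating, and expose it through a new getter. A Java sketch with hypothetical types (in the page itself, the same idea is a wrapper around the widget's setDate):

        /** Hypothetical stand-in for the widget's existing API. */
        interface DatePicker {
            void setDate(String spec);   // accepts "+7"-style specs or literal dates
            java.util.Date getDate();    // returns the resolved date
        }

        /** Wrapper that remembers exactly what was passed in. */
        class RecordingDatePicker implements DatePicker {
            private final DatePicker inner;
            private String lastSpec;

            RecordingDatePicker(DatePicker inner) { this.inner = inner; }

            public void setDate(String spec) {
                this.lastSpec = spec;    // stash the raw "+7" before it is resolved
                inner.setDate(spec);
            }

            public java.util.Date getDate() { return inner.getDate(); }

            /** The new accessor the enhancement is after. */
            public String getOriginallyPassedInDate() { return lastSpec; }
        }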

    Read the article

  • JPA @Version - can it be used to calculate the version of a table entry?

    - by OpenSource
    Hi, please consider the following table (created using a corresponding entity):

        request
        -------
        id  requestor  type  version  items
        1   a          t1    1        5
        2   a          t1    2        3
        3   b          t1    1        2
        4   a          t2    1        4
        5   a          t1    3        9

    The above is what I want to achieve. The version field is calculated; the others are user-provided. Basically, the request's version needs to be calculated from the combination of requestor and type: the first occurrence of a given combination gets version 1, then version 2, and so on. I tried various things using @Version on a different entity with just the three columns, joining the two entities using @ManyToOne etc., but I'm not able to get the desired outcome. I don't want to confuse you with everything I tried; since the objective is simple, there should be an easier way, I suppose? Can you please help? Any help greatly appreciated! Thanks in advance
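
    One hedged alternative to @Version (which JPA reserves for optimistic locking, not business versioning): compute the next version with a max query in the service method that persists the request. A sketch, assuming a hypothetical Request entity with requestor, type and version fields and the usual getters/setters:

        import javax.persistence.EntityManager;

        public class RequestService {
            private final EntityManager em;

            public RequestService(EntityManager em) { this.em = em; }

            /** Assigns version = highest existing version for (requestor, type) + 1. */
            public void create(Request request) {
                Number max = (Number) em.createQuery(
                        "select coalesce(max(r.version), 0) from Request r " +
                        "where r.requestor = :requestor and r.type = :type")
                    .setParameter("requestor", request.getRequestor())
                    .setParameter("type", request.getType())
                    .getSingleResult();
                request.setVersion(max.intValue() + 1);
                em.persist(request);
                // Note: two concurrent inserts for the same pair can still race; a
                // unique constraint on (requestor, type, version) lets the DB catch it.
            }
        }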

    Read the article

  • How to get the value of a checksum in a PHP header

    - by sumit
    I have a header in PHP which contains a link, like:

        <?php
        header("Location: "."https://abc.com/ppp/purchase.do?version=3.0&".
            "merchant_id=<23255>&merchant_site_id=<21312>&total_amount=<69.99>&".
            "currency=<USD>&item_name_1=<incidentsupporttier1>&item_amount_1=<1>&".
            "time_stamp=<2010-06-14.14:34:33>&checksum=<calculated_checksum>");
        ?>

    When I run this page, the value of the checksum is calculated and the link is opened. Now, how is that checksum calculated? calculated_checksum = md5(abc); md5 is an algorithm which calculates the checksum value from the values inside the brackets. What I want to know is how I can pass the value of the checksum in the header URL.
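
    For illustration, a Java sketch (PHP's md5() does the same job, and the exact fields and their order are dictated by the payment gateway, so the concatenation below is hypothetical): the checksum is typically the MD5 hex digest of the request parameters concatenated with a shared secret, appended to the URL as one more query parameter.

        import java.math.BigInteger;
        import java.security.MessageDigest;

        public class Checksum {
            static String md5Hex(String input) throws Exception {
                byte[] digest = MessageDigest.getInstance("MD5").digest(input.getBytes("UTF-8"));
                return String.format("%032x", new BigInteger(1, digest)); // zero-padded hex
            }

            public static void main(String[] args) throws Exception {
                String secretKey = "sharedSecret";                   // assumption: issued by the gateway
                String params = "23255" + "21312" + "69.99" + "USD"; // assumption: gateway-defined order
                String checksum = md5Hex(params + secretKey);
                // The digest rides along like any other query parameter:
                String url = "https://abc.com/ppp/purchase.do?version=3.0&..." + "&checksum=" + checksum;
                System.out.println(url);
            }
        }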

    Read the article

  • PHP form (POST) repeating input in a table

    - by Sef
    Hello, I have a form (with the POST method) that takes the following input: a certain name, a number, and 3 checkboxes. All this input gets rendered and calculated in a table (HTML code within the PHP), and everything is properly calculated and displayed. So my question: after giving all that input, how do I make it possible to give more input? I have made a hyperlink that goes back to the form itself (where I can give the input), so I can enter new data; after submitting that again, the table should contain 2 rows of values instead of just 1. I'm not really sure what exactly I need for this. Regards.
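
    The usual mechanism, whatever the language: append each submission to a collection kept in the session, then render the whole collection, so the second submit produces a two-row table. In PHP that is roughly $_SESSION['rows'][] = $_POST; the Java sketch below shows the same idea with hypothetical field names.

        import java.util.ArrayList;
        import java.util.List;
        import javax.servlet.http.HttpServletRequest;

        public class FormRows {
            /** Appends the current POST to a session-scoped list and returns all rows. */
            @SuppressWarnings("unchecked")
            static List<String[]> accumulate(HttpServletRequest req) {
                List<String[]> rows = (List<String[]>) req.getSession().getAttribute("rows");
                if (rows == null) {
                    rows = new ArrayList<String[]>();
                    req.getSession().setAttribute("rows", rows);
                }
                rows.add(new String[] {
                    req.getParameter("name"),   // hypothetical field names
                    req.getParameter("number")
                });
                return rows; // render every element, one table row each
            }
        }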

    Read the article

  • Inheritance Mapping Strategies with Entity Framework Code First CTP5 Part 1: Table per Hierarchy (TPH)

    - by mortezam
    A simple strategy for mapping classes to database tables might be "one table for every persistent entity class." This approach sounds simple enough and, indeed, works well until we encounter inheritance. Inheritance is such a visible structural mismatch between the object-oriented and relational worlds because object-oriented systems model both "is a" and "has a" relationships. SQL-based models provide only "has a" relationships between entities; SQL database management systems don't support type inheritance, and even when it's available, it's usually proprietary or incomplete. There are three different approaches to representing an inheritance hierarchy:

        Table per Hierarchy (TPH): Enable polymorphism by denormalizing the SQL schema, and utilize a type discriminator column that holds type information.
        Table per Type (TPT): Represent "is a" (inheritance) relationships as "has a" (foreign key) relationships.
        Table per Concrete class (TPC): Discard polymorphism and inheritance relationships completely from the SQL schema.

    I will explain each of these strategies in a series of posts, and this one is dedicated to TPH. In this series we'll dig deeply into each of these strategies and learn about "why" to choose them as well as "how" to implement them. Hopefully it will give you a better idea about which strategy to choose in a particular scenario.

    Inheritance Mapping with Entity Framework Code First
    All of the inheritance mapping strategies that we discuss in this series will be implemented with EF Code First CTP5. The CTP5 build of the new EF Code First library was released by the ADO.NET team earlier this month. EF Code First enables a pretty powerful code-centric development workflow for working with data. I'm a big fan of the EF Code First approach, and I'm pretty excited about the productivity and power that it brings. When it comes to inheritance mapping, Code First not only fully supports all the strategies but also gives you ultimate flexibility to work with domain models that involve inheritance. The fluent API for inheritance mapping in CTP5 has been improved a lot and is now more intuitive and concise compared to CTP4.

    A Note For Those Who Follow Other Entity Framework Approaches
    If you are following EF's "Database First" or "Model First" approaches, I still recommend reading this series: although the implementation is Code First specific, the explanation of each of the strategies applies equally to all approaches, be it Code First or others.

    A Note For Those Who Are New to Entity Framework and Code First
    If you choose to learn EF you've chosen well. If you choose to learn EF with Code First you've done even better. To get started, you can find a great walkthrough by Scott Guthrie here and another one by the ADO.NET team here. In this post, I assume you have already set up your machine for Code First development and that you are familiar with Code First fundamentals and basic concepts. You might also want to check out my other posts on EF Code First, like Complex Types and Shared Primary Key Associations.

    A Top-Down Development Scenario
    These posts take a top-down approach; they assume that you're starting with a domain model and trying to derive a new SQL schema. Therefore, we start with an existing domain model, implement it in C#, and then let Code First create the database schema for us. However, the mapping strategies described are just as relevant if you're working bottom up, starting with existing database tables.
    I'll show some tricks along the way that help you deal with imperfect table layouts. Let's start with the mapping of entity inheritance.

    The Domain Model
    In our domain model, we have a BillingDetail base class which is abstract (note the italic font on the UML class diagram below). We do allow various billing types and represent them as subclasses of BillingDetail. For now, we support CreditCard and BankAccount.

    Implement the Object Model with Code First
    As always, we start with the POCO classes. Note that in our DbContext, I only define one DbSet for the base class, BillingDetail. Code First will find the other classes in the hierarchy based on the Reachability Convention.

        public abstract class BillingDetail
        {
            public int BillingDetailId { get; set; }
            public string Owner { get; set; }
            public string Number { get; set; }
        }

        public class BankAccount : BillingDetail
        {
            public string BankName { get; set; }
            public string Swift { get; set; }
        }

        public class CreditCard : BillingDetail
        {
            public int CardType { get; set; }
            public string ExpiryMonth { get; set; }
            public string ExpiryYear { get; set; }
        }

        public class InheritanceMappingContext : DbContext
        {
            public DbSet<BillingDetail> BillingDetails { get; set; }
        }

    This object model is all that is needed to enable inheritance with Code First. If you put this in your application, you will be able to immediately start working with the database and do CRUD operations. Before going into details about how EF Code First maps this object model to the database, we need to learn about one of the core concepts of inheritance mapping: polymorphic and non-polymorphic queries.

    Polymorphic Queries
    LINQ to Entities and EntitySQL, as object-oriented query languages, both support polymorphic queries—that is, queries for instances of a class and all instances of its subclasses. For example, consider the following query:

        IQueryable<BillingDetail> linqQuery = from b in context.BillingDetails select b;
        List<BillingDetail> billingDetails = linqQuery.ToList();

    Or the same query in EntitySQL:

        string eSqlQuery = @"SELECT VALUE b FROM BillingDetails AS b";
        ObjectQuery<BillingDetail> objectQuery = ((IObjectContextAdapter)context).ObjectContext
                                                                                .CreateQuery<BillingDetail>(eSqlQuery);
        List<BillingDetail> billingDetails = objectQuery.ToList();

    linqQuery and eSqlQuery are both polymorphic and return a list of objects of the type BillingDetail, which is an abstract class, but the actual concrete objects in the list are of the subtypes of BillingDetail: CreditCard and BankAccount.

    Non-polymorphic Queries
    All LINQ to Entities and EntitySQL queries are polymorphic, returning not only instances of the specific entity class to which they refer, but all subclasses of that class as well. Non-polymorphic queries, on the other hand, are queries whose polymorphism is restricted and which only return instances of a particular subclass. In LINQ to Entities, this can be specified by using the OfType<T>() method.
    For example, the following query returns only instances of BankAccount:

        IQueryable<BankAccount> query = from b in context.BillingDetails.OfType<BankAccount>() select b;

    EntitySQL has an OFTYPE operator that does the same thing:

        string eSqlQuery = @"SELECT VALUE b FROM OFTYPE(BillingDetails, Model.BankAccount) AS b";

    In fact, the above query with the OFTYPE operator is a short form of the following query expression that uses the TREAT and IS OF operators:

        string eSqlQuery = @"SELECT VALUE TREAT(b as Model.BankAccount)
                             FROM BillingDetails AS b
                             WHERE b IS OF(Model.BankAccount)";

    (Note that in the above query, Model.BankAccount is the fully qualified name for the BankAccount class. You need to change "Model" to your own namespace name.)

    Table per Class Hierarchy (TPH)
    An entire class hierarchy can be mapped to a single table. This table includes columns for all properties of all classes in the hierarchy. The concrete subclass represented by a particular row is identified by the value of a type discriminator column. You don't have to do anything special in Code First to enable TPH; it's the default inheritance mapping strategy. This mapping strategy is a winner in terms of both performance and simplicity. It's the best-performing way to represent polymorphism—both polymorphic and non-polymorphic queries perform well—and it's even easy to implement by hand. Ad-hoc reporting is possible without complex joins or unions. Schema evolution is straightforward.

    Discriminator Column
    As you can see in the DB schema above, Code First has to add a special column to distinguish between persistent classes: the discriminator. This isn't a property of the persistent class in our object model; it's used internally by EF Code First. By default, the column name is "Discriminator", and its type is string. The values default to the persistent class names—in this case, "BankAccount" or "CreditCard". EF Code First automatically sets and retrieves the discriminator values.

    TPH Requires Properties in Subclasses to Be Nullable in the Database
    TPH has one major problem: columns for properties declared by subclasses will be nullable in the database. For example, Code First created an (INT, NULL) column to map the CardType property in the CreditCard class, whereas in a typical mapping scenario Code First always creates an (INT, NOT NULL) column for an int property of a persistent class. In this case, since a BankAccount instance won't have a CardType property, the CardType field must be NULL for that row, so Code First creates an (INT, NULL) column instead. If your subclasses each define several non-nullable properties, the loss of NOT NULL constraints may be a serious problem from the point of view of data integrity.

    TPH Violates the Third Normal Form
    Another important issue is normalization. We've created functional dependencies between non-key columns, violating the third normal form. Basically, the value of the Discriminator column determines the corresponding values of the columns that belong to the subclasses (e.g. BankName), but Discriminator is not part of the primary key for the table. As always, denormalization for performance can be misleading, because it sacrifices long-term stability, maintainability, and the integrity of data for immediate gains that may also be achieved by proper optimization of the SQL execution plans (in other words, ask your DBA).
    Generated SQL Query
    Let's take a look at the SQL statements that EF Code First sends to the database when we write queries in LINQ to Entities or EntitySQL. For example, the polymorphic query for BillingDetails that you saw generates the following SQL statement:

        SELECT
            [Extent1].[Discriminator] AS [Discriminator],
            [Extent1].[BillingDetailId] AS [BillingDetailId],
            [Extent1].[Owner] AS [Owner],
            [Extent1].[Number] AS [Number],
            [Extent1].[BankName] AS [BankName],
            [Extent1].[Swift] AS [Swift],
            [Extent1].[CardType] AS [CardType],
            [Extent1].[ExpiryMonth] AS [ExpiryMonth],
            [Extent1].[ExpiryYear] AS [ExpiryYear]
        FROM [dbo].[BillingDetails] AS [Extent1]
        WHERE [Extent1].[Discriminator] IN ('BankAccount','CreditCard')

    And the non-polymorphic query for the BankAccount subclass generates this SQL statement:

        SELECT
            [Extent1].[BillingDetailId] AS [BillingDetailId],
            [Extent1].[Owner] AS [Owner],
            [Extent1].[Number] AS [Number],
            [Extent1].[BankName] AS [BankName],
            [Extent1].[Swift] AS [Swift]
        FROM [dbo].[BillingDetails] AS [Extent1]
        WHERE [Extent1].[Discriminator] = 'BankAccount'

    Note how Code First adds a restriction on the discriminator column, and also how it selects only those columns that belong to the BankAccount entity.

    Change the Discriminator Column Data Type and Values with the Fluent API
    Sometimes, especially in legacy schemas, you need to override the conventions for the discriminator column so that Code First can work with the schema. The following fluent API code will change the discriminator column name to "BillingDetailType" and the values to "BA" and "CC" for BankAccount and CreditCard respectively:

        protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
        {
            modelBuilder.Entity<BillingDetail>()
                        .Map<BankAccount>(m => m.Requires("BillingDetailType").HasValue("BA"))
                        .Map<CreditCard>(m => m.Requires("BillingDetailType").HasValue("CC"));
        }

    Changing the data type of the discriminator column is also interesting. In the above code we passed strings to the HasValue method, but this method has been defined to accept a value of type object:

        public void HasValue(object value);

    Therefore, if we pass a value of type int, for example, Code First not only uses our desired values (i.e. 1 and 2) in the discriminator column but also changes the column type to (INT, NOT NULL):

        modelBuilder.Entity<BillingDetail>()
                    .Map<BankAccount>(m => m.Requires("BillingDetailType").HasValue(1))
                    .Map<CreditCard>(m => m.Requires("BillingDetailType").HasValue(2));

    Summary
    In this post we learned about Table per Hierarchy as the default mapping strategy in Code First. The disadvantages of the TPH strategy may be too serious for your design—after all, denormalized schemas can become a major burden in the long run. Your DBA may not like it at all. In the next post, we will learn about the Table per Type (TPT) strategy, which doesn't expose you to this problem.

    References
    ADO.NET team blog
    Java Persistence with Hibernate book

    Read the article

  • How to manually patch Blogger template to use Disqus

    - by user317944
    I'm trying to add disqus to my blog and I tried following this guide to do so: http://disqus.com/docs/patch-blogger/ However their instructions are completely off with what I have on my custom template. Here is the template: <b:skin><![CDATA[/*----------------------------------------------- Blogger Template Style Name: Picture Window Designer: Josh Peterson URL: www.noaesthetic.com ----------------------------------------------- */ /* Variable definitions ==================== */ /* Content ----------------------------------------------- */ body { font: $(body.font); color: $(body.text.color); } html body .region-inner { min-width: 0; max-width: 100%; width: auto; } .content-outer { font-size: 90%; } a:link { text-decoration:none; color: $(link.color); } a:visited { text-decoration:none; color: $(link.visited.color); } a:hover { text-decoration:underline; color: $(link.hover.color); } .body-fauxcolumn-outer { background: $(body.background); } .content-outer { background: $(content.background); -moz-border-radius: $(content.border.radius); -webkit-border-radius: $(content.border.radius); -goog-ms-border-radius: $(content.border.radius); border-radius: $(content.border.radius); -moz-box-shadow: 0 0 $(content.shadow.spread) rgba(0, 0, 0, .15); -webkit-box-shadow: 0 0 $(content.shadow.spread) rgba(0, 0, 0, .15); -goog-ms-box-shadow: 0 0 $(content.shadow.spread) rgba(0, 0, 0, .15); box-shadow: 0 0 $(content.shadow.spread) rgba(0, 0, 0, .15); margin: $(content.margin) auto; } .content-inner { padding: $(content.padding); } /* Header ----------------------------------------------- */ .header-outer { background: $(header.background.color) $(header.background.gradient) repeat-x scroll top left; _background-image: none; color: $(header.text.color); -moz-border-radius: $(header.border.radius); -webkit-border-radius: $(header.border.radius); -goog-ms-border-radius: $(header.border.radius); border-radius: $(header.border.radius); } .Header img, .Header #header-inner { -moz-border-radius: $(header.border.radius); -webkit-border-radius: $(header.border.radius); -goog-ms-border-radius: $(header.border.radius); border-radius: $(header.border.radius); } .header-inner .Header .titlewrapper, .header-inner .Header .descriptionwrapper { padding-left: $(header.padding); padding-right: $(header.padding); } .Header h1 { font: $(header.font); text-shadow: 1px 1px 3px rgba(0, 0, 0, 0.3); } .Header h1 a { color: $(header.text.color); } .Header .description { font-size: 130%; } /* Tabs ----------------------------------------------- */ .tabs-inner { margin: .5em $(tabs.margin.sides) $(tabs.margin.bottom); padding: 0; } .tabs-inner .section { margin: 0; } .tabs-inner .widget ul { padding: 0; background: $(tabs.background.color) $(tabs.background.gradient) repeat scroll bottom; -moz-border-radius: $(tabs.border.radius); -webkit-border-radius: $(tabs.border.radius); -goog-ms-border-radius: $(tabs.border.radius); border-radius: $(tabs.border.radius); } .tabs-inner .widget li { border: none; } .tabs-inner .widget li a { display: block; padding: .5em 1em; margin-$endSide: $(tabs.spacing); color: $(tabs.text.color); font: $(tabs.font); -moz-border-radius: $(tab.border.radius) $(tab.border.radius) 0 0; -webkit-border-top-left-radius: $(tab.border.radius); -webkit-border-top-right-radius: $(tab.border.radius); -goog-ms-border-radius: $(tab.border.radius) $(tab.border.radius) 0 0; border-radius: $(tab.border.radius) $(tab.border.radius) 0 0; background: $(tab.background); border-$endSide: 1px solid $(tabs.separator.color); } 
.tabs-inner .widget li:first-child a { padding-$startSide: 1.25em; -moz-border-radius-top$startSide: $(tab.first.border.radius); -moz-border-radius-bottom$startSide: $(tabs.border.radius); -webkit-border-top-$startSide-radius: $(tab.first.border.radius); -webkit-border-bottom-$startSide-radius: $(tabs.border.radius); -goog-ms-border-top-$startSide-radius: $(tab.first.border.radius); -goog-ms-border-bottom-$startSide-radius: $(tabs.border.radius); border-top-$startSide-radius: $(tab.first.border.radius); border-bottom-$startSide-radius: $(tabs.border.radius); } .tabs-inner .widget li.selected a, .tabs-inner .widget li a:hover { position: relative; z-index: 1; background: $(tabs.selected.background.color) $(tab.selected.background.gradient) repeat scroll bottom; color: $(tabs.selected.text.color); -moz-box-shadow: 0 0 $(region.shadow.spread) rgba(0, 0, 0, .15); -webkit-box-shadow: 0 0 $(region.shadow.spread) rgba(0, 0, 0, .15); -goog-ms-box-shadow: 0 0 $(region.shadow.spread) rgba(0, 0, 0, .15); box-shadow: 0 0 $(region.shadow.spread) rgba(0, 0, 0, .15); } /* Headings ----------------------------------------------- */ h2 { font: $(widget.title.font); text-transform: $(widget.title.text.transform); color: $(widget.title.text.color); margin: .5em 0; } /* Main ----------------------------------------------- */ .main-outer { background: $(main.background); -moz-border-radius: $(main.border.radius.top) $(main.border.radius.top) 0 0; -webkit-border-top-left-radius: $(main.border.radius.top); -webkit-border-top-right-radius: $(main.border.radius.top); -webkit-border-bottom-left-radius: 0; -webkit-border-bottom-right-radius: 0; -goog-ms-border-radius: $(main.border.radius.top) $(main.border.radius.top) 0 0; border-radius: $(main.border.radius.top) $(main.border.radius.top) 0 0; -moz-box-shadow: 0 $(region.shadow.offset) $(region.shadow.spread) rgba(0, 0, 0, .15); -webkit-box-shadow: 0 $(region.shadow.offset) $(region.shadow.spread) rgba(0, 0, 0, .15); -goog-ms-box-shadow: 0 $(region.shadow.offset) $(region.shadow.spread) rgba(0, 0, 0, .15); box-shadow: 0 $(region.shadow.offset) $(region.shadow.spread) rgba(0, 0, 0, .15); } .main-inner { padding: 15px $(main.padding.sides) 20px; } .main-inner .column-center-inner { padding: 0 0; } .main-inner .column-left-inner { padding-left: 0; } .main-inner .column-right-inner { padding-right: 0; } /* Posts ----------------------------------------------- */ h3.post-title { margin: 0; font: $(post.title.font); } .comments h4 { margin: 1em 0 0; font: $(post.title.font); } .post-outer { background-color: $(post.background.color); border: solid 1px $(post.border.color); -moz-border-radius: $(post.border.radius); -webkit-border-radius: $(post.border.radius); border-radius: $(post.border.radius); -goog-ms-border-radius: $(post.border.radius); padding: 15px 20px; margin: 0 $(post.margin.sides) 20px; } .post-body { line-height: 1.4; font-size: 110%; } .post-header { margin: 0 0 1.5em; color: $(post.footer.text.color); line-height: 1.6; } .post-footer { margin: .5em 0 0; color: $(post.footer.text.color); line-height: 1.6; } blog-pager { font-size: 140% } comments .comment-author { padding-top: 1.5em; border-top: dashed 1px #ccc; border-top: dashed 1px rgba(128, 128, 128, .5); background-position: 0 1.5em; } comments .comment-author:first-child { padding-top: 0; border-top: none; } .avatar-image-container { margin: .2em 0 0; } /* Widgets ----------------------------------------------- */ .widget ul, .widget #ArchiveList ul.flat { padding: 0; list-style: none; } .widget ul 
li, .widget #ArchiveList ul.flat li { border-top: dashed 1px #ccc; border-top: dashed 1px rgba(128, 128, 128, .5); } .widget ul li:first-child, .widget #ArchiveList ul.flat li:first-child { border-top: none; } .widget .post-body ul { list-style: disc; } .widget .post-body ul li { border: none; } /* Footer ----------------------------------------------- */ .footer-outer { color:$(footer.text.color); background: $(footer.background); -moz-border-radius: $(footer.border.radius.top) $(footer.border.radius.top) $(footer.border.radius.bottom) $(footer.border.radius.bottom); -webkit-border-top-left-radius: $(footer.border.radius.top); -webkit-border-top-right-radius: $(footer.border.radius.top); -webkit-border-bottom-left-radius: $(footer.border.radius.bottom); -webkit-border-bottom-right-radius: $(footer.border.radius.bottom); -goog-ms-border-radius: $(footer.border.radius.top) $(footer.border.radius.top) $(footer.border.radius.bottom) $(footer.border.radius.bottom); border-radius: $(footer.border.radius.top) $(footer.border.radius.top) $(footer.border.radius.bottom) $(footer.border.radius.bottom); -moz-box-shadow: 0 $(region.shadow.offset) $(region.shadow.spread) rgba(0, 0, 0, .15); -webkit-box-shadow: 0 $(region.shadow.offset) $(region.shadow.spread) rgba(0, 0, 0, .15); -goog-ms-box-shadow: 0 $(region.shadow.offset) $(region.shadow.spread) rgba(0, 0, 0, .15); box-shadow: 0 $(region.shadow.offset) $(region.shadow.spread) rgba(0, 0, 0, .15); } .footer-inner { padding: 10px $(main.padding.sides) 20px; } .footer-outer a { color: $(footer.link.color); } .footer-outer a:visited { color: $(footer.link.visited.color); } .footer-outer a:hover { color: $(footer.link.hover.color); } .footer-outer .widget h2 { color: $(footer.widget.title.text.color); } ]] <b:template-skin> <b:variable default='930px' name='content.width' type='length' value='930px'/> <b:variable default='0' name='main.column.left.width' type='length' value='180px'/> <b:variable default='360px' name='main.column.right.width' type='length' value='180px'/> <![CDATA[ body { min-width: $(content.width); } .content-outer, .region-inner { min-width: $(content.width); max-width: $(content.width); _width: $(content.width); } .main-inner .columns { padding-left: $(main.column.left.width); padding-right: $(main.column.right.width); } .main-inner .fauxcolumn-center-outer { left: $(main.column.left.width); right: $(main.column.right.width); /* IE6 does not respect left and right together */ _width: expression(this.parentNode.offsetWidth - parseInt("$(main.column.left.width)") - parseInt("$(main.column.right.width)") + 'px'); } .main-inner .fauxcolumn-left-outer { width: $(main.column.left.width); } .main-inner .fauxcolumn-right-outer { width: $(main.column.right.width); } .main-inner .column-left-outer { width: $(main.column.left.width); right: $(main.column.left.width); margin-right: -$(main.column.left.width); } .main-inner .column-right-outer { width: $(main.column.right.width); margin-right: -$(main.column.right.width); } #layout { min-width: 0; } #layout .content-outer { min-width: 0; width: 800px; } #layout .region-inner { min-width: 0; width: auto; } ]]> </b:template-skin> <div class='main-cap-bottom cap-bottom'> <div class='cap-left'/> <div class='cap-right'/> </div> </div> <footer> <div class='footer-outer'> <div class='footer-cap-top cap-top'> <div class='cap-left'/> <div class='cap-right'/> </div> <div class='fauxborder-left footer-fauxborder-left'> <div class='fauxborder-right footer-fauxborder-right'/> <div class='region-inner 
footer-inner'> <macro:include id='footer-sections' name='sections'> <macro:param default='2' name='num' value='3'/> <macro:param default='footer' name='idPrefix'/> <macro:param default='foot' name='class'/> <macro:param default='false' name='includeBottom'/> </macro:include> <!-- outside of the include in order to lock Attribution widget --> <b:section class='foot' id='footer-3' showaddelement='no'> document.body.className = document.body.className.replace('loading', ''); <macro:if cond='data:col.num &gt;= 2'> <table border='0' cellpadding='0' cellspacing='0' mexpr:class='&quot;section-columns columns-&quot; + data:col.num'> <tbody> <tr> <td class='first columns-cell'> <b:section mexpr:class='data:col.class' mexpr:id='data:col.idPrefix + &quot;-2-1&quot;'/> </td> <td class='columns-cell'> <b:section mexpr:class='data:col.class' mexpr:id='data:col.idPrefix + &quot;-2-2&quot;'/> </td> <macro:if cond='data:col.num &gt;= 3'> <td class='columns-cell'> <b:section mexpr:class='data:col.class' mexpr:id='data:col.idPrefix + &quot;-2-3&quot;'/> </td> </macro:if> <macro:if cond='data:col.num &gt;= 4'> <td class='columns-cell'> <b:section mexpr:class='data:col.class' mexpr:id='data:col.idPrefix + &quot;-2-4&quot;'/> </td> </macro:if> </tr> </tbody> </table> <macro:if cond='data:col.includeBottom'> <b:section mexpr:class='data:col.class' mexpr:id='data:col.idPrefix + &quot;-3&quot;' showaddelement='no'/> </macro:if> </macro

    Read the article

  • Fragmented Log files could be slowing down your database

    - by Fatherjack
    Something that is sometimes forgotten by a lot of DBAs is the fact that database log files get fragmented in the same way that you get fragmentation in a data file. The cause is very different but the effect is the same: too much effort reading and writing data. Data files get fragmented as data is changed through normal system activity; INSERTs, UPDATEs and DELETEs cause fragmentation, and most experienced DBAs are monitoring their indexes for fragmentation and dealing with it accordingly. However, you don't hear about so many working on their log files.

    How can a log file get fragmented? I'm glad you asked. When you create a database, there are at least two files created on the disk storage: an mdf for the data and an ldf for the log file (you can also have ndf files for extra data storage, but that's off topic for now). It is wholly possible to have more than one log file, but in most cases there is little point in creating more than one, as the log file is written to in a 'wrap-around' manner (more on that later). When a log file is created at the time that a database is created, the file is actually subdivided into a number of virtual log files (VLFs). The number and size of these VLFs depend on the size chosen for the log file. VLFs are also created in the space added to a log file when a log file growth event takes place. Do you have your log files set to auto grow? Then you have potentially been introducing many VLFs into your log file. Let's see how many VLFs we have in a brand new database:

        USE master
        GO
        CREATE DATABASE VLF_Test ON
        (
            NAME = VLF_Test,
            FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test.mdf',
            SIZE = 100,
            MAXSIZE = 500,
            FILEGROWTH = 50
        )
        LOG ON
        (
            NAME = VLF_Test_Log,
            FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
            SIZE = 5MB,
            MAXSIZE = 250MB,
            FILEGROWTH = 5MB
        );
        GO
        USE VLF_Test
        GO
        DBCC LOGINFO;

    The result is, firstly, a new database created with the specified file sizes, and then the DBCC LOGINFO results returned to the script editor. The DBCC LOGINFO results contain plenty of interesting information, but let's first note that there are 4 rows, which relates to the fact that 4 VLFs have been created in the log file. The values in the FileSize column are the sizes of each VLF in bytes; you will see that the last one to be created is slightly larger than the others. So, a 5MB log file has 4 VLFs of roughly 1.25MB each.

    Let's alter the CREATE DATABASE script to create a log file that's a bit bigger and see what happens. Alter the code above so that the log file details are replaced by:

        LOG ON
        (
            NAME = VLF_Test_Log,
            FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
            SIZE = 1GB,
            MAXSIZE = 25GB,
            FILEGROWTH = 1GB
        );

    With a bigger log file specified, we get more VLFs. What if we make it bigger again?

        LOG ON
        (
            NAME = VLF_Test_Log,
            FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
            SIZE = 5GB,
            MAXSIZE = 250GB,
            FILEGROWTH = 5GB
        );

    This time we see that more VLFs are created within our log file: we now have our 5GB log file comprised of 16 files of 320MB each. In fact these sizes fall into all the ranges that control the VLF creation criteria – what a coincidence! The rules that are followed when a log file is created or has its size increased are pretty basic.
    If the file growth is lower than 64MB, 4 VLFs are created. If the growth is between 64MB and 1GB, 8 VLFs are created. If the growth is greater than 1GB, 16 VLFs are created.

    Now the potential for chaos comes if the default values and settings for log file growth are used. By default a database gets a 1MB log file with unlimited growth in steps of 10%. The database we just created is 6MB; let's add some data and see what happens.

        USE vlf_test
        GO
        -- we need somewhere to put the data, so a table is in order
        IF OBJECT_ID('A_Table') IS NOT NULL
            DROP TABLE A_Table
        GO
        CREATE TABLE A_Table
        (
            Col_A int IDENTITY,
            Col_B CHAR(8000)
        )
        GO
        -- Let's check the state of the log file
        -- 4 VLFs found
        EXECUTE ('DBCC LOGINFO');
        GO
        -- We can go ahead and insert some data and then check the state of the log file again
        INSERT A_Table (col_b)
        SELECT TOP 500 REPLICATE('a',2000)
        FROM sys.columns AS sc, sys.columns AS sc2
        GO
        -- insert 500 rows and we get 22 VLFs
        EXECUTE ('DBCC LOGINFO');
        GO
        -- Let's insert more rows
        INSERT A_Table (col_b)
        SELECT TOP 2000 REPLICATE('a',2000)
        FROM sys.columns AS sc, sys.columns AS sc2
        GO 10
        -- insert 2000 rows, in 10 batches, and we suddenly have 107 VLFs
        EXECUTE ('DBCC LOGINFO');

    Well, that escalated quickly! Our log file is split, internally, into 107 fragments after a few thousand inserts. The same happens with any logged transactions; I just chose to illustrate this with INSERTs. Having too many VLFs can cause performance degradation at times of database start-up, log backup and log restore operations, so it's well worth keeping a check on this property.

    How do we prevent excessive VLF creation? Create the database with larger files and larger growth steps, and actively choose to grow your databases rather than leaving it to the Auto Grow event, so that growths are made with a size that is optimal.

    How do we resolve a situation of a database with too many VLFs? This process needs to be done when the database is under little or no stress, so that you don't affect system users. The steps are:

        Back up the log:
            BACKUP LOG YourDBName TO YourBackupDestinationOfChoice
        Shrink the log file to its smallest possible size:
            DBCC SHRINKFILE(FileNameOfTLogHere, TRUNCATEONLY) *
        Re-size the log file to the size you want, taking into account your expected needs for the coming months or year:
            ALTER DATABASE YourDBName MODIFY FILE (NAME = FileNameOfTLogHere, SIZE = TheSizeYouWantItToBeIn_MB) *

    * If you don't know the file name of your log file, run sp_helpfile while connected to the database that you want to work on and you will get the details you need. The resize step can take quite a while.

    This is already detailed far better than I can explain it by Kimberly Tripp in her blog 8-Steps-to-better-Transaction-Log-throughput.aspx. The result of this will be a log file with a VLF count according to the rules above.

    Knowing when VLFs are being created
    By complete coincidence, while I have been writing this blog (it's been quite some time from its inception to going live), Jonathan Kehayias from SQLSkills.com has written a great article on how to track database file growth using Event Notifications and Service Broker. I strongly recommend taking a look at it, as it is going to catch any sneaky auto grows that take place and let you know about them right away.
    Hassle-free monitoring of VLFs
    If you are lucky or wise enough to be using SQL Monitor or another monitoring tool that lets you write your own custom metrics, then you can keep an eye on this very easily. There is a custom metric for VLFs (written by Stuart Ainsworth) already on the site, and there are others there that are very useful, so take a moment or two to look around while you are there.

    Resources
    MSDN – http://msdn.microsoft.com/en-us/library/ms179355(v=sql.105).aspx
    Kimberly Tripp from SQLSkills.com – http://www.sqlskills.com/BLOGS/KIMBERLY/post/8-Steps-to-better-Transaction-Log-throughput.aspx
    Thomas LaRock at Simple-Talk.com – http://www.simple-talk.com/sql/database-administration/monitoring-sql-server-virtual-log-file-fragmentation/

    Disclosure
    I am a Friend of Red Gate. This means that I am more than likely to say good things about Red Gate DBA and Developer tools. No matter how awesome I make them sound, take the time to compare them with other products before you contact the Red Gate sales team to make your order.

    Read the article

  • Design pattern for parsing data that will be grouped in two different ways and flipped

    - by lewisblackfan
    I'm looking for an easily maintainable and extendable design for a script that parses an Excel workbook into two separate workbooks, after pulling data from other locations like the command line and a database. The high-level details are as follows.

    I need to parse an Excel workbook containing a sheet that lists unique question names. The only reliable information that can be parsed from a question name is the book code that identifies the title and edition of the textbook the question is associated with; the rest of the question name is not standardized well enough to be reliably parsed by computer. The general form of the question name is best described by the following regular expression:

        '^(\w+)\s(\w{1,2})\.(\w{1,2})\.(\w{1,3})\.(\w{1,3}\.)*$'

    The first sub-pattern is the book code, the second sub-pattern is 90% of the time the chapter, and the rest of the sub-patterns could be section, problem type, problem number, or question type information. There is no simple logic, at least not one I can find.

    There will be a minimum of three other columns in this spreadsheet: one column will be the chapter the question is associated with, the second will be the section within the chapter, and the third will be some kind of asset indicated by a uniform resource locator.

        1 | 1 | qname1 | url | description | url | description ...
        1 | 1 | qname2 | url | description
        1 | 1 | qname3 | url | description | url | description | url |

    The asset can be indicated by a full or partial uniform resource locator; a partial URL will need to be completed before it can be fed into the application. There could theoretically be no limit to the number of asset columns, and the assets will be grouped in columns by type. Sometimes additional data will have to be retrieved from a database, or combined with the book code, before the asset URL is complete and can be understood by the application that will be using the asset. The type is an abstraction; there are eight types right now, each with its own logic for how the uniform resource locator is handled and/or completed, and I have to add a new type and its logic every three or four months. For each asset URL there is the possibility of a description column, a character string for display in the application, but not always. (I've already worked out validating the description text, and squashing MS's obscure code page down to something 7-bit ASCII can handle.)

    Now that all the details are filled in, I can get to the actual problem of parsing the file. I need to split the information in this Excel workbook into two separate workbooks. The first workbook will group all the questions by section in rows, with the first cell being the section doublet and the rest of the cells in the row being the question names:

        1.1 | qname1 | qname2 | qname3 | qname4 |
        1.2 | qname1 | qname2 | qname3 |
        1.3 | qname1 | qname2 | qname3 | qname4 | qname5

    There is no set number of questions for each section, as you can see from the above example. The second workbook is more complicated: there is one row per asset, and question names that have more than one asset will be duplicated. There will be four or five columns on this sheet. The first is the question name for the asset, the second is a media type used to select the correct icon for the asset in the application, the third is a string representing the asset type, the fourth is the full and complete uniform resource locator for the asset, and the fifth column is the optional text description for the asset.
        q1 | mtype1 | atype1 | url | description
        q1 | mtype2 | atype2 | url | description
        q1 | mtype2 | atype3 | url | description
        q2 | mtype1 | atype1 | url | description
        q2 | mtype2 | atype3 | url | description

    For the original six types, I did have a script that parsed the source Excel workbook into the other two workbooks, and I was able to add two more types, until I ran aground on the implementation of the ninth and tenth types. What broke my script was the fact that the ninth type is actually a sub-type of one of the original six, but with entirely different logic, and my mostly procedural script could not accommodate it without duplicating a lot of code. I also had a lot of bugs in the script and will be writing the tests first this time around.

    I'm stuck with the format of the resulting two workbooks; this script is glue code. Development went ahead with the project without bothering to get a complete spec from the sponsor. I work for the same company as the developers, but in the editorial department; editorial is co-sponsor of the project, and I am expected to fix pesky details like this (I'm foaming at the mouth as I type this).

    I've tried factories, I've tried different object models, but each resulting workbook is so different that when I find a design that works for generating one workbook, the code is not really usable for generating the other. What I would really like are ideas about a maintainable and extensible design for parsing the source workbook into both workbooks with maximum code reuse, and/or sympathy.
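
    One shape that tends to absorb "a new type every few months" without duplication, offered as a hedged Java sketch with hypothetical names: register one small handler per asset type, keyed by the type string, so the ninth type's odd logic lives in its own class instead of branching through shared procedural code, and a sub-type can delegate to the handler it specializes.

        import java.util.HashMap;
        import java.util.Map;

        /** Each asset type owns its URL-completion logic. */
        interface AssetHandler {
            String completeUrl(String partialUrl, String bookCode);
        }

        class AssetHandlerRegistry {
            private final Map<String, AssetHandler> handlers = new HashMap<String, AssetHandler>();

            void register(String assetType, AssetHandler handler) {
                handlers.put(assetType, handler);
            }

            String complete(String assetType, String partialUrl, String bookCode) {
                AssetHandler h = handlers.get(assetType);
                if (h == null) {
                    throw new IllegalArgumentException("No handler for asset type: " + assetType);
                }
                return h.completeUrl(partialUrl, bookCode);
            }
        }

        // Adding the ninth type is then one new class plus one register() call, e.g.
        // registry.register("videoSubtype",
        //     (url, book) -> baseHandler.completeUrl(url, book) + "&v=2");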


  • How should I model the database for this problem? And which ORM can handle it?

    - by Kristof Claes
I need to build some sort of custom CMS for a client of ours. These are some of the functional requirements:

- Must be able to manage the list of Pages in the site
- Each Page can contain a number of ColumnGroups
- A ColumnGroup is nothing more than a list of Columns in a certain ColumnGroupLayout. For example: "one column taking up the entire width of the page", "two columns each taking up half of the width", ...
- Each Column can contain a number of ContentBlocks
- Examples of a ContentBlock are: TextBlock, NewsBlock, PictureBlock, ...
- ContentBlocks can be given a certain sorting within a Column
- A ContentBlock can be put in different Columns so that content can be reused without having to be duplicated

My first quick draft of how this could look in C# code (we're using ASP.NET 4.0 to develop the CMS) can be found at the bottom of my question. One of the technical requirements is that it must be as easy as possible to add new types of ContentBlocks to the CMS, so I would like to model everything as flexibly as possible. Unfortunately, I'm already stuck at trying to figure out what the database should look like.

One of the problems I'm having has to do with sorting different types of ContentBlocks in a Column. I guess each type of ContentBlock (like TextBlock, NewsBlock, PictureBlock, ...) should have its own table in the database, because each has its own fields. A TextBlock might only have a field called Text, whereas a NewsBlock might have fields for the Text, the Summary, the PublicationDate, ... Since one Column can have ContentBlocks located in different tables, I guess I'll have to create a many-to-many association for each type of ContentBlock: for example, ColumnTextBlocks, ColumnNewsBlocks and ColumnPictureBlocks. The problem I have with this setup is the sorting of the different ContentBlocks in a Column, which could be something like this:

    TextBlock
    NewsBlock
    TextBlock
    TextBlock
    PictureBlock

Where do I store the sorting number? If I store it in the association tables, I'll have to update a lot of tables when changing the sorting order of ContentBlocks in a Column. Is this a good approach to the problem? Basically, my question is: what is the best way to model this, keeping in mind that it should be easy to add new types of ContentBlocks?

My next question is: what ORM can deal with that kind of modeling? To be honest, we are ORM virgins at work. I have been reading a bit about Linq-to-SQL and NHibernate, but we have no experience with them. Because of the IList in the Column class (see the code below), I think we can rule out Linq-to-SQL, right? Can NHibernate handle the mapping of data from many different tables to one IList?

Also keep in mind that this is just a very small portion of the domain. Other parts are Users belonging to a certain UserGroup having certain Permissions on Pages, ColumnGroups, Columns and ContentBlocks.
The code (just a quick first draft):

    public class Page
    {
        public int PageID { get; set; }
        public string Title { get; set; }
        public string Description { get; set; }
        public string Keywords { get; set; }
        public IList<ColumnGroup> ColumnGroups { get; set; }
    }

    public class ColumnGroup
    {
        public enum ColumnGroupLayout { OneColumn, HalfHalf, NarrowWide, WideNarrow }

        public int ColumnGroupID { get; set; }
        public ColumnGroupLayout Layout { get; set; }
        public IList<Column> Columns { get; set; }
    }

    public class Column
    {
        public int ColumnID { get; set; }
        public IList<IContentBlock> ContentBlocks { get; set; }
    }

    public interface IContentBlock
    {
        string GetSummary();
    }

    public class TextBlock : IContentBlock
    {
        public string GetSummary() { return "I am a piece of text."; }
    }

    public class NewsBlock : IContentBlock
    {
        public string GetSummary() { return "I am a news item."; }
    }
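One way to answer the sorting question: keep a table per block type for the type-specific fields, but have each of those tables join to a single shared ContentBlocks base table (one row per block, whatever its type), and store the sort number in one Column-to-ContentBlocks association table that references the base row. Reordering the blocks in a Column then updates exactly one table, regardless of the mix of block types. A sketch of that shape follows; the table and property names are assumptions, not a finished schema.

    // Tables (assumed names):
    //   ContentBlocks       (BlockID PK, BlockType)        -- shared base row
    //   TextBlocks          (BlockID PK/FK, Text)          -- one table per type
    //   NewsBlocks          (BlockID PK/FK, Text, Summary, PublicationDate)
    //   ColumnContentBlocks (ColumnID FK, BlockID FK, SortOrder)

    public abstract class ContentBlock
    {
        public int BlockID { get; set; }   // row in the shared base table
        public abstract string GetSummary();
    }

    public class TextBlock : ContentBlock
    {
        public string Text { get; set; }   // lives in the TextBlocks table
        public override string GetSummary() { return "I am a piece of text."; }
    }

    // One association row per (Column, ContentBlock) pair; because every
    // block type shares the ContentBlocks base table, the sort order has
    // exactly one home no matter which types a Column mixes.
    public class ColumnContentBlock
    {
        public int ColumnID { get; set; }
        public int BlockID { get; set; }
        public int SortOrder { get; set; }
    }

On the ORM question: this is table-per-subclass inheritance, which NHibernate supports directly through joined-subclass mappings, and NHibernate can load a Column's blocks as an ordered, polymorphic collection by mapping the association with an index column. Linq-to-SQL only supports single-table inheritance with a discriminator column, so the instinct to rule it out here is reasonable.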


  • Hard to append a table with many records into another without generating duplicates

    - by Bill Mudry
I may seem a bit wordy at first, but the hope is it will be easier for all of you to understand what I am doing in the first place.

I have an uncommon but enjoyable activity of collecting as many species of wood from around the world as I can (over 2,900 so far). OK, that is the real world. Meanwhile, I have spent over 8 years compiling over 5.8 MB of text data on all the woods of the world. That got so large that learning some basic PHP and MySQL was most welcome, so I could build a new database-driven home for all this research. I am still slow at it but getting there. The original premise was to find evidence of as many species of wood in the world as I can; the more names identified, the more successful the project. I have named the project TAXA for ease of conversation (short for Taxonomy). You are most welcome to take a look at what I have so far at www.prowebcanada.com/taxa. It is 95% dynamically driven.

So far I am reporting about 6,500 botanical wood names and, as said above, the more I can report, the more successful the project. I have a file of all the woods in the second largest wood collection in the world, the Tervuren wood collection in Belgium, with over 11,300 wood names even after cleaning out all duplicates. That is almost twice the number I am reporting now, so porting all the new wood names from Tervuren into the 'species' table where I keep the reported data would be a major advancement for the project. At one point I was able to add all the Tervuren records to the species table, but over 3,000 duplicates also formed. They were not in the Tervuren file in the first place; they represent the wood names common to both files. It is common sense that there would be woods common to both that, when merged, would create new duplicates.

At one point, with the help of others from another forum, I may very well have finally got the proper SQL statement. When I ran it, though, the system said (semi-amusingly at first) that it had "gone away"! After looking up on the Net what could have caused this, one likely reason is that the MySQL timeout lapses, probably because of the large size of the files I am running. I am running this on a rented account on GoDaddy, so I cannot go about adjusting any config file. For safety, I copied the tervuren.sql file as tervuren_target.sql and the species.sql file as species_master.sql to use as working files, just to make sure I protect the original files from destruction or damage. Later I can rename species_master back to just species.sql once I am happy everything worked well.

The species file has about 18 columns in it, but only 5 of them match the columns in the Tervuren file (name for name, and collation also). The rest of the columns are just along for the ride, so to speak. The common key is the species_name column in both. I am not sure it is proper to call one a primary key and the other a foreign key, since there really is no relational connection between them; one is just more data for the other, and can disappear afterwards, never to be referred to by the working code in the application. I have been very surprised and flabbergasted at how hard it can be to append records from one large table into another (with the same column names, plus others) without generating NEW duplicates in the process.
Be careful about thinking a SELECT DISTINCT statement will do the job: absolutely NO records in the species table may be destroyed in the process, and there is no way (well, none that I know of) to tell DISTINCT that. Yes, the original 'species' table has duplicates in it even before all this, but trust me, those have to be removed the long, hard way, manually, record by record, or I will lose precious information. It is more important to just make sure no NEW duplicates form when bringing the names in tervuren_target.species_name into species.species_name.

I am hoping a straight SQL solution will work, except for that nasty timeout. How do I get past that? Could it mean I have to turn to a PHP-plus-SQL method? Or would I have to break the Tervuren file up into a few smaller ones and run them independently (I hope not...)? So far, what seems like it should be easy has proven unexpectedly tricky. I appreciate any help you can give, but start from the assumption that this may be harder to do right than it seems on the surface.

By the way, I am running a quad-core 64-bit system with Windows 7, so at least I have some fairly hefty power on the client end, and a direct Ethernet cable feeding a cable connection to the Internet. Once I get an algorithm and code working for this, I also have many other lists to process that could make the 'species' table grow even more. It could be the equivalent of (ahem) lighting a rocket under my project, especially compared to doing this record by record manually!

This is my first time in this forum, so I do not know how I will receive replies. Do I have to come back here periodically, or are replies emailed out as well? It would be great if you CC'd copies to me at billmudry at rogers.com :-)

Much thanks for your patience and help,
Bill Mudry, Mississauga, Ontario, Canada (next to Toronto)
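For the core of the problem, the standard shape is an INSERT ... SELECT guarded by NOT EXISTS: it copies over only the Tervuren names that are not already present, and it never touches (let alone de-duplicates) the rows already in species. The INSERT IGNORE alternative would require a UNIQUE index on species_name, which the existing duplicates in species rule out, so NOT EXISTS is the safer route here. Running the statement in LIMIT/OFFSET batches keeps each execution short enough that a shared host's timeout is far less likely to kill the connection. Here is a sketch using MySQL Connector/NET; the table and column names come from the question above, while the batch size and connection string are assumptions, and only the shared species_name column is shown (the other four matching columns would be added to both column lists the same way):

    using MySql.Data.MySqlClient;

    class AppendTervuren
    {
        const string InsertBatch = @"
            INSERT INTO species (species_name)
            SELECT t.species_name
            FROM (SELECT species_name FROM tervuren_target
                  ORDER BY species_name LIMIT @batch OFFSET @offset) AS t
            WHERE NOT EXISTS (SELECT 1 FROM species s
                              WHERE s.species_name = t.species_name)";

        static void Main()
        {
            using (var conn = new MySqlConnection(
                "Server=localhost;Database=taxa;Uid=taxa;Pwd=secret;"))
            {
                conn.Open();

                long total;
                using (var count = new MySqlCommand(
                    "SELECT COUNT(*) FROM tervuren_target", conn))
                {
                    total = (long)count.ExecuteScalar();
                }

                const int batch = 500;
                for (long offset = 0; offset < total; offset += batch)
                {
                    using (var cmd = new MySqlCommand(InsertBatch, conn))
                    {
                        cmd.Parameters.AddWithValue("@batch", batch);
                        cmd.Parameters.AddWithValue("@offset", offset);
                        // Each batch is its own short statement, so no single
                        // execution runs long enough to hit the timeout.
                        cmd.ExecuteNonQuery();
                    }
                }
            }
        }
    }

The same batched statement could equally be issued from a PHP loop; the point is that the deduplication logic lives in the WHERE NOT EXISTS clause, and the batching only exists to dodge the timeout.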


  • Checking who is connected to your server, with PowerShell.

    - by Fatherjack
There are many occasions when, as a DBA, you want to see who is connected to your SQL Server, along with how they are connecting and what sort of activities they are carrying out. I'm going to look at a couple of ways of getting this information and compare the effort required and the results achieved by each.

SQL Server comes with a couple of stored procedures to help with this sort of task: sp_who and its undocumented counterpart sp_who2. There is also the pumped-up version of these called sp_whoisactive, written by Adam Machanic, which does far more than these procedures. I wholly recommend you try it out if you don't already know how it works; when it comes to serious interrogation of your SQL Server activity it is absolutely indispensable. Anyway, back to the point of this blog: we are going to look at getting the information from sp_who2 for a remote server.

I wrote this PowerShell script a week or so ago and was quietly happy with it for a while. I'm relatively new to PowerShell, so forgive both my rather low threshold for entertainment and the fact that something so simple is a moderate achievement for me.

    $Server = 'SERVERNAME'
    $SMOServer = New-Object Microsoft.SQLServer.Management.SMO.Server $Server

    # connection and query stuff
    $ConnectionStr = "Server=$Server;Database=Master;Integrated Security=True"
    $Query = "EXEC sp_who2"

    $Connection = new-object system.Data.SQLClient.SQLConnection
    $Table = new-object "System.Data.DataTable"
    $Connection.connectionstring = $ConnectionStr
    try
    {
        $Connection.open()
        $Command = $Connection.CreateCommand()
        $Command.commandtext = $Query
        $result = $Command.ExecuteReader()
        $Table.Load($result)
    }
    catch
    {
        # Show error
        $error[0] | format-list -Force
    }
    $Title = "Data access processes (" + $Table.Rows.Count + ")"
    $Table | Out-GridView -Title $Title
    $Connection.close()

So this is pretty straightforward: create an SMO object that represents our chosen server, define a connection to the database and a table object for the results when we get them, execute our query over the connection, load the results into our table object and then, if everything is error free, display the results in the PowerShell grid viewer. The query simply gets the results of EXEC sp_who2 for us. How many connections there are will influence how long the query runs. The grid viewer lets me sort and search the results, so it can be a pretty handy way to locate troublesome connections.

Like I say, I was quite pleased with this. It seems a pretty simple script, it was working well for me, and I had added a few parameters to control the output and give me more specific details. But then I saw a script that uses the $SMOServer object itself to provide the process information, saving us from having to define the connection object and query specifications at all.

    $Server = 'SERVERNAME'
    $SMOServer = New-Object Microsoft.SQLServer.Management.SMO.Server $Server
    $Processes = $SMOServer.EnumProcesses()
    $Title = "SMO processes (" + $Processes.Rows.Count + ")"
    $Processes | Out-GridView -Title $Title

Create the SMO object for our server and then call the EnumProcesses method to get all the process information from the server. Staggeringly simple! The results are a little different, though. Some columns are the same, and we can see the same basic information, so my first thought was: which runs faster, so that I can get my results more quickly and also place less stress on my server(s)? PowerShell comes with a great way of testing this: the Measure-Command function.
All you have to do is wrap your piece of code in Measure-Command { [your code here] } and it will spit out the time taken to execute the code. So I placed both of the above methods of getting SQL Server process connections in two Measure-Command wrappers and pressed F5. The PowerShell console goes blank for a while as the code is executed internally when Measure-Command is used, but then the grid viewer windows appear and the console shows the timings. You can take the output from Measure-Command and format it for easier reading, but in a simple comparison like this we can simply cross-reference the TotalMilliseconds values from the two result sets to see how the two methods performed. The query execution method (running EXEC sp_who2) is the first set of timings and the SMO EnumProcesses method is the second.

I have run these on a variety of servers, and while the results vary from execution to execution, I have never seen the SMO version come out slower than the other. The difference has varied, and the time for both has ranged from sub-second to almost 5 seconds on some systems. This difference, I would suggest, is partly due to the overhead of having to construct the data connection and so on, whereas the SMO EnumProcesses method already has the connection to the server in place and just needs to call back the process information.

There is also the difference in the data sets to consider. Let's take a look at what we get and where the two methods differ:

    sp_who2     | SMO EnumProcesses  | Description
    -           | Urn                | What looks like an XML or JSON representation of the server name and the process ID
    SPID        | Spid               | The process ID
    Status      | Status             | The status of the process
    Login       | Login              | The login name of the user executing the command
    HostName    | Host               | The name of the computer where the process originated
    BlkBy       | BlockingSpid       | The SPID of a process that is blocking this one
    DBName      | Database           | The database that this process is connected to
    Command     | Command            | The type of command that is executing
    CPUTime     | Cpu                | The CPU activity related to this process
    DiskIO      | -                  | The disk IO activity related to this process
    LastBatch   | -                  | The time the last batch was executed from this process
    ProgramName | Program            | The application that is facilitating the process connection to the SQL Server
    SPID1       | -                  | In my experience this is always the same value as SPID
    REQUESTID   | -                  | In my experience this is always 0
    -           | Name               | In my experience always the same value as SPID, so analogous to SPID1 from sp_who2
    -           | MemUsage           | An indication of the memory used by this process, though I don't know what it is measured in (bytes, KB, MB...)
    -           | IsSystem           | True or False depending on whether the process is internal to the SQL Server instance or was created by an external connection requesting data
    -           | ExecutionContextID | In my experience this is always 0, so analogous to REQUESTID from sp_who2

Please note, these are my own very brief descriptions of these columns; detail on the columns in the sp_who results can be found on MSDN at http://msdn.microsoft.com/en-GB/library/ms174313.aspx. Where the columns are common I have used that description; in other cases the information returned is purely for interpretation by the reader. Rather annoyingly, both result sets have useful information that the other doesn't. sp_who2 returns DiskIO and LastBatch information, which is really useful, but the SMO EnumProcesses method gives you IsSystem and MemUsage, which have their place in fault diagnosis too. So which is better?
On reflection, I think I prefer to use the sp_who2 method primarily, but knowing that the SMO EnumProcesses method is there when I need it is really useful, and I'm sure I'll use it regularly. I'm OK with it being the slower method, because Measure-Command has shown me how close it is to the other option, and that margin really isn't large enough to matter.


  • Creating a Document Library with Content Type in code

    - by David Jacobus
Originally posted on: http://geekswithblogs.net/djacobus/archive/2013/10/15/154360.aspx

In the past, I have shown how to create a list content type and add the content type to a list in code. As developers, many of the artifacts we create are widgets which have a List or Document Library as the back end. We need to be able to create our applications (Web Parts, etc.) without having the user involved except to enter the list item data. Today, I will show you how to do the same with a document library. A summary of what we will do is as follows:

1. Create an Empty SharePoint Project in Visual Studio
2. Add a Code folder to the solution and drag and drop the Utilities and Extensions libraries into the solution
3. Create a new Feature and add an event receiver; all the code will be in the event receiver
4. Add the fields which will extend the built-in Document content type
5. If the Content Type does not exist, create it
6. If the Document Library does not exist, create it with the new Content Type inherited from the Document Content Type
7. Delete the Document Content Type from the Library (as we have a new one which inherited from it)
8. Add the fields which we want to be visible from the fields added to the new Content Type

Here we go.

Create an Empty SharePoint Project in Visual Studio.

Add a Code folder to the solution and drag and drop the Utilities and Extensions libraries into the solution. The Utilities and Extensions libraries will be part of this project, and I will provide a download link at the end of this post. Drag and drop them into your project; if dragged and dropped from Windows Explorer, you will need to show all files and then include them in your project. Change the namespace to agree with your project.

Create a new Feature and add an event receiver; all the code will be in the event receiver. Here we added a new Feature called "CreateDocLib" and then right-clicked it to add an Event Receiver. All of our code will be in this Event Receiver. For this demo I will only be using the Feature Activated event.

From this point on we will be looking at code! We are adding two constants: columnGroup (how we want SharePoint to group the fields, usually the company name) and ctName (the Content Type name).

    using System;
    using System.Runtime.InteropServices;
    using System.Security.Permissions;
    using Microsoft.SharePoint;

    namespace CreateDocLib.Features.CreateDocLib
    {
        /// <summary>
        /// This class handles events raised during feature activation, deactivation,
        /// installation, uninstallation, and upgrade.
        /// </summary>
        /// <remarks>
        /// The GUID attached to this class may be used during packaging and should not be modified.
        /// </remarks>
        [Guid("56e6897c-97c4-41ac-bc5b-5cd2c04f2dd1")]
        public class CreateDocLibEventReceiver : SPFeatureReceiver
        {
            const string columnGroup = "DJ";
            const string ctName = "DJDocLib";
        }
    }

Here we are creating the Feature Activated event: adding the new fields (site columns), testing whether the Content Type exists and adding it if not, then testing whether the document library exists and creating it if not.
    #region DocLib
    public override void FeatureActivated(SPFeatureReceiverProperties properties)
    {
        using (SPWeb spWeb = properties.GetWeb() as SPWeb)
        {
            // add the fields
            addFields(spWeb);

            // add the content type, but only if it does not already exist
            SPContentType testCT = spWeb.ContentTypes[ctName];
            if (testCT == null)
            {
                // the content type does not exist, add it
                addContentType(spWeb, ctName);
            }

            if ((spWeb.Lists.TryGetList("MyDocuments") == null))
            {
                // create the list if it doesn't exist
                CreateDocLib(spWeb);
            }
        }
    }
    #endregion

The addFields method uses the Utilities library to add site columns to the site. We can add as many fields within this method as we like; here we are adding one for demonstration purposes, Icon, as a URL type.

    public void addFields(SPWeb spWeb)
    {
        Utilities.addField(spWeb, "Icon", SPFieldType.URL, false, columnGroup);
    }

The addContentType method adds the new Content Type to the site Content Types; we have already checked that it does not exist. In addition, this is where we add the linkages from the site columns we created earlier to our new Content Type.

    private static void addContentType(SPWeb spWeb, string name)
    {
        SPContentType myContentType = new SPContentType(
            spWeb.ContentTypes["Document"], spWeb.ContentTypes, name)
        {
            Group = columnGroup
        };
        spWeb.ContentTypes.Add(myContentType);
        addContentTypeLinkages(spWeb, myContentType);
        myContentType.Update();
    }

Here we are adding just one linkage, as we only have one additional field in our Content Type.

    public static void addContentTypeLinkages(SPWeb spWeb, SPContentType ct)
    {
        Utilities.addContentTypeLink(spWeb, "Icon", ct);
    }

Next we add the logic to create our new document library, which we have already checked does not exist. We create the document library and turn on content types, add the new content type, and then delete the old "Document" content type (as we have a new one which inherited from it).

    private void CreateDocLib(SPWeb web)
    {
        using (var site = new SPSite(web.Url))
        {
            var web1 = site.RootWeb;
            var listId = web1.Lists.Add("MyDocuments", string.Empty,
                SPListTemplateType.DocumentLibrary);
            var lib = web1.Lists[listId] as SPDocumentLibrary;
            lib.ContentTypesEnabled = true;
            var docType = web.ContentTypes[ctName];
            lib.ContentTypes.Add(docType);
            lib.ContentTypes.Delete(lib.ContentTypes["Document"].Id);
            lib.Update();
            AddLibrarySettings(web1, lib);
        }
    }

Finally, we set some document library settings on our new document library with the AddLibrarySettings method, and ensure that the new site column is visible in the default view when viewed in the browser.

    private void AddLibrarySettings(SPWeb web, SPDocumentLibrary lib)
    {
        lib.OnQuickLaunch = true;
        lib.ForceCheckout = true;
        lib.EnableVersioning = true;
        lib.MajorVersionLimit = 5;
        lib.EnableMinorVersions = true;
        lib.MajorWithMinorVersionsLimit = 5;
        lib.Update();

        var view = lib.DefaultView;
        view.ViewFields.Add("Icon");
        view.Update();
    }

Okay, what's cool here: in a few lines of code we have created site columns, a Content Type and a document library. As a developer, I use this functionality all the time. For instance, I could now add a web part to this same solution which uses this document library. I love SharePoint! Here is the complete solution: Create Document Library Code
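The Utilities and Extensions libraries ship in the download at the end of this post rather than in the snippets above, so for completeness, here is a minimal sketch of what the two helpers used above could look like, written against the standard SPFieldCollection and SPFieldLinkCollection APIs. Treat it as an illustration only; the packaged version may differ.

    using Microsoft.SharePoint;

    public static class Utilities
    {
        // Adds a site column if it is not already present, and files it
        // under the given group so it is easy to find in the UI.
        public static void addField(SPWeb web, string name, SPFieldType type,
                                    bool required, string group)
        {
            if (web.Fields.ContainsField(name)) return;

            string internalName = web.Fields.Add(name, type, required);
            SPField field = web.Fields.GetFieldByInternalName(internalName);
            field.Group = group;
            field.Update();
        }

        // Links an existing site column to a content type so the field
        // shows up on items created with that content type.
        public static void addContentTypeLink(SPWeb web, string fieldName,
                                              SPContentType ct)
        {
            SPField field = web.Fields[fieldName];
            ct.FieldLinks.Add(new SPFieldLink(field));
        }
    }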

