Search Results

Search found 4761 results on 191 pages for 'series'.

Page 68/191

  • Summation loop program in Pascal

    - by user2526598
    I am having a bit of an issue with this problem. I am taking a Pascal programming class and this problem was in my logic book. I am required to have the user enter a series of positive numbers, and once he/she enters a negative number, the program should find the sum of all the positive numbers. I accomplished this, but now I am attempting part two of this problem, which requires me to utilize a nested loop to run the program a given number of times based on the user's input. The following code is what I have so far, and honestly I am stumped:

        program summation;

        // Define main program's variables
        var
          num, sum, numRun : integer;

        // Procedure that prompts the user for the number of runs
        procedure numRunLoop(var numRun : integer);
        begin
          writeln('How many times shall I run this program?');
          readln(numRun);
        end;

        // Procedure that sums a series of numbers based on user input
        procedure numPromptLoop(numRun : integer; var num : integer);
        var
          count : integer;
        begin
          // Utilize for to establish the run limit
          for count := 1 to numRun do
          begin
            // Use repeat to prompt the user for numbers
            repeat
              writeln('Enter a number: ');
              readln(num);
              // Accumulate only non-negative entries
              if num >= 0 then
                sum := sum + num;
            until num < 0;
          end;
        end;

        // Procedure that displays the result
        procedure addItion(sum : integer);
        begin
          writeln('The sum is: ', sum);
        end;

        begin
          sum := 0; // initialize the running total (missing in the original)
          numRunLoop(numRun);
          numPromptLoop(numRun, num);
          addItion(sum);
          readln();
        end.
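
    For reference, here is a minimal sketch of the intended logic in Python (hypothetical code, not from the original post): the running total is reset at the start of each run and reported before the next run begins, which is the part the nested-loop version above still needs.

        def summation(runs):
            """Sum non-negative numbers until a negative is entered, once per run."""
            for run in range(1, runs + 1):
                total = 0  # reset the running total for each run
                while True:
                    num = int(input('Enter a number: '))
                    if num < 0:
                        break  # a negative number ends this run
                    total += num
                print('Run %d: the sum is %d' % (run, total))

        if __name__ == '__main__':
            summation(int(input('How many times shall I run this program? ')))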

    Read the article

  • ExtJS 4 Chart Axis Display Issue in Chrome

    - by SerEnder
    I've run into an issue while using ExtJS 4.0.7. I'm trying to display a chart with two series on a Numeric/Category chart. The chart displays correctly in Firefox, but in Chrome (18.0.1025.142) the x-axis labels are either all stacked upon each other or else (when using rotate) rendered behind the chart at the specified angle. Any ideas would be appreciated. (Screenshots in Firefox and Chrome accompanied the original post.) The code used to generate both:

        Ext.require(['Ext.chart.*']);
        Ext.onReady(function() {
            var iChartWidth = 800;   // Defines chart width
            var iChartHeight = 550;  // Defines chart height

            Ext.define('RulesCreatedModel', {
                extend: 'Ext.data.Model',
                fields: [
                    { name: 'sHourName', type: 'string' },
                    { name: 'User_2', type: 'number' },
                    { name: 'User_1', type: 'number' }
                ]
            });

            var RulesCreatedStore = Ext.create('Ext.data.Store', {
                id: 'RulesCreatedStore',
                // The original said 'RulesCreateModel', which does not match
                // the model name defined above.
                model: 'RulesCreatedModel',
                fields: ['sHourName', 'dayNum', 'hour', 'User_2', 'User_1'],
                data: [
                    { 'sHourName': '3pm',  'User_1': 82, 'User_2': 56 },
                    { 'sHourName': '4pm',  'User_1': 39, 'User_2': 44 },
                    { 'sHourName': '5pm',  'User_1': 80, 'User_2': 14 },
                    { 'sHourName': '6pm',  'User_1': 55, 'User_2': 0 },
                    { 'sHourName': '7pm',  'User_1': 36, 'User_2': 0 },
                    { 'sHourName': '8pm',  'User_1': 66, 'User_2': 0 },
                    { 'sHourName': '9pm',  'User_1': 39, 'User_2': 0 },
                    { 'sHourName': '10pm', 'User_1': 0,  'User_2': 0 },
                    { 'sHourName': '11pm', 'User_1': 0,  'User_2': 0 },
                    { 'sHourName': '12am', 'User_1': 0,  'User_2': 0 },
                    { 'sHourName': '1am',  'User_1': 0,  'User_2': 0 },
                    { 'sHourName': '2am',  'User_1': 0,  'User_2': 0 },
                    { 'sHourName': '3am',  'User_1': 0,  'User_2': 0 },
                    { 'sHourName': '4am',  'User_1': 0,  'User_2': 0 },
                    { 'sHourName': '5am',  'User_1': 0,  'User_2': 0 },
                    { 'sHourName': '6am',  'User_1': 0,  'User_2': 0 },
                    { 'sHourName': '7am',  'User_1': 0,  'User_2': 1 },
                    { 'sHourName': '8am',  'User_1': 0,  'User_2': 99 },
                    { 'sHourName': '9am',  'User_1': 0,  'User_2': 28 },
                    { 'sHourName': '10am', 'User_1': 0,  'User_2': 28 },
                    { 'sHourName': '11am', 'User_1': 0,  'User_2': 153 },
                    { 'sHourName': '12pm', 'User_1': 0,  'User_2': 58 },
                    { 'sHourName': '1pm',  'User_1': 0,  'User_2': 42 },
                    { 'sHourName': '2pm',  'User_1': 20, 'User_2': 10 }
                ]
            });

            Ext.create('Ext.chart.Chart', {
                id: 'RulesWrittenChart',
                renderTo: 'StatCharts_Display',
                width: iChartWidth,
                height: iChartHeight,
                animate: true,
                store: RulesCreatedStore,
                axes: [{
                    type: 'Numeric',
                    position: 'left',
                    fields: ['User_2', 'User_1'],
                    title: 'Rules Written',
                    grid: true
                }, {
                    type: 'Category',
                    position: 'bottom',
                    fields: ['sHourName'],
                    title: 'Hour',
                    grid: false,
                    label: {
                        rotate: { degrees: 315 }
                    }
                }],
                series: [{
                    type: 'line',
                    axis: 'left',
                    xField: 'sHourName',
                    yField: 'User_2',
                    highlight: { size: 3, radius: 3 }
                }, {
                    type: 'line',
                    axis: 'left',
                    xField: 'sHourName',
                    yField: 'User_1',
                    highlight: { size: 3, radius: 3 }
                }]
            });
        });

    Read the article

  • Creating WPF components in background thread

    - by mizipzor
    I'm working on a reporting system: a series of DocumentPages is to be created through a DocumentPaginator. These documents include a number of WPF components that are to be instantiated so the paginator includes the correct things when later sent to the XpsDocumentWriter (which in turn is sent to the actual printer). My problem is that the DocumentPage instances take quite a while to create (long enough for Windows to mark the application as frozen), so I tried to create them on a background thread, which is problematic since WPF expects the attributes on them to be set from the GUI thread. I would also like to have a progress bar showing up, indicating how many pages have been created so far. Thus, it looks like I'm trying to get two things to happen in parallel on the GUI. The problem is hard to explain and I'm really not sure how to tackle it. In short:

    - Create a series of DocumentPages. These include WPF components.
    - These are to be created on a background thread, or use some other trick so the application isn't frozen.
    - After each page is created, a WPF ProgressBar should be updated.

    If there is no decent way to do this, alternate solutions and approaches are more than welcome.

    Read the article

  • What exactly is the GNU tar ././@LongLink "trick"?

    - by Cheeso
    I read that a tar entry type of 'L' (76) is used by GNU tar and GNU-compliant tar utilities to indicate that the next entry in the archive has a "long" name. In this case the header block with the entry type of 'L' usually encodes the name ././@LongLink. My question is: where is the format of the next block described? The format of a tar archive is very simple: it is just a series of 512-byte blocks. In the normal case, each file in a tar archive is represented as a series of blocks. The first block is a header block, containing the file name, entry type, modified time, and other metadata. Then the raw file data follows, using as many 512-byte blocks as required. Then the next entry. If the filename is longer than will fit in the space allocated in the header block, GNU tar apparently uses what's known as "the ././@LongLink trick". I can't find a precise description of it. When the entry type is 'L', how do I know how long the "long" filename is? Is the long name limited to 512 bytes, in other words, whatever fits in one block? Most importantly: where is this documented?
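
    For what it's worth, here is a minimal sketch in Python of reading such an entry, based on the commonly described GNU layout (the 'L' header's own size field holds the length of the long name, the name follows in as many 512-byte data blocks as needed, and then the real file header comes next). Treat the details as assumptions to verify against the GNU tar documentation:

        BLOCK = 512

        def parse_header(block):
            """Pull the fields we need out of a 512-byte tar header block."""
            name = block[0:100].split(b'\0', 1)[0]
            size = int(block[124:136].rstrip(b'\0 ') or b'0', 8)  # octal, NUL/space padded
            typeflag = block[156:157]
            return name, size, typeflag

        def next_entry(f):
            """Return (name, size, typeflag), resolving GNU 'L' long-name entries."""
            name, size, typeflag = parse_header(f.read(BLOCK))
            if typeflag == b'L':  # header name is usually ././@LongLink
                nblocks = (size + BLOCK - 1) // BLOCK      # name data is block padded
                longname = f.read(nblocks * BLOCK)[:size].rstrip(b'\0')
                # the very next block is the real header for the file itself
                name, size, typeflag = parse_header(f.read(BLOCK))
                return longname, size, typeflag
            return name, size, typeflag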

    Read the article

  • How do I use Spreadsheet::WriteExcel to create a chart from numeric log data?

    - by yaohung
    I used csv2xls.pl to convert a text log into .xls format, and then I create a chart as in the following:

        my $chart3 = $workbook->add_chart( type => 'line', embedded => 1 );

        # Configure the series.
        $chart3->add_series(
            categories => '=Sheet1!$B$2:$B$64',
            values     => '=Sheet1!$C$2:$C$64',
            name       => 'Test data series 1',
        );

        # Add some labels.
        $chart3->set_title( name => 'Bridge Rate Analysis' );
        $chart3->set_x_axis( name => 'Packet Size' );
        $chart3->set_y_axis( name => 'BVI Rate' );

        # Insert the chart into the main worksheet.
        $worksheet->insert_chart( 'G2', $chart3 );

    I can see the chart in the .xls file. However, all the data are in text format, not numeric, so the chart looks wrong. How do I convert the text into numbers before applying this create-chart function? Also, how do I sort the .xls file before creating the chart?

    Read the article

  • Avoiding seasonality assumption for stl() or decompose() in R

    - by user303922
    Hello everybody, I have high-frequency commodity price data that I need to analyze. My objective is to not assume any seasonal component and just identify a trend. Here is where I run into problems with R. There are two main functions that I know of to analyze this time series: decompose() and stl(). The problem is that they both take a ts object with a frequency parameter greater than or equal to 2. Is there some way I can assume a frequency of 1 per unit time and still analyze this time series using R? I'm afraid that if I assume a frequency greater than 1 per unit time, and seasonality is calculated using the frequency parameter, then my forecasts are going to depend on that assumption.

        names(crude.data) = c('Date', 'Time', 'Price')
        names(crude.data)
        freq = 2
        win.graph()
        plot(crude.data$Time, crude.data$Price, type = "l")
        crude.data$Price = ts(crude.data$Price, frequency = freq)
        # I want frequency to be 1 per unit time, but then decompose() and stl() don't work!
        dim(crude.data$Price)
        decom = decompose(crude.data$Price)
        win.graph()
        plot(decom$random[2:200], type = "l")                  # was type = "line" in the original
        acf(decom$random[freq:(length(decom$random) - freq)])  # parenthesis fixed: length() of the series, minus freq

    Thank you.

    Read the article

  • How to speed-up python nested loop?

    - by erich
    I'm performing a nested loop in Python that is included below. It serves as a basic way of searching through existing financial time series and looking for periods in the time series that match certain characteristics. In this case there are two separate, equally sized arrays representing the 'close' (i.e. the price of an asset) and the 'volume' (i.e. the amount of the asset that was exchanged over the period). For each period in time I would like to look forward at all future intervals with lengths between 1 and INTERVAL_LENGTH and see if any of those intervals have characteristics that match my search (in this case, the ratio of the close values is greater than 1.0001 and less than 1.5, and the summed volume is greater than 100). My understanding is that one of the major reasons for the speedup when using NumPy is that the interpreter doesn't need to type-check the operands each time it evaluates something, so long as you're operating on the array as a whole (e.g. numpy_array * 2), but obviously the code below is not taking advantage of that. Is there a way to replace the internal loop with some kind of window function which could result in a speedup, or any other way to use numpy/scipy to speed this up substantially over native Python? Alternatively, is there a better way to do this in general (e.g. would it be much faster to write this loop in C++ and use weave)?

        import numpy as np

        ARRAY_LENGTH = 500000
        INTERVAL_LENGTH = 15

        close = np.array(xrange(ARRAY_LENGTH))
        volume = np.array(xrange(ARRAY_LENGTH))
        close, volume = close.astype('float64'), volume.astype('float64')

        results = []
        for i in xrange(len(close) - INTERVAL_LENGTH):
            for j in xrange(i + 1, i + INTERVAL_LENGTH):
                ret = close[j] / close[i]
                vol = sum(volume[i+1:j+1])
                if ret > 1.0001 and ret < 1.5 and vol > 100:
                    results.append([i, j, ret, vol])
        print results
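
    As a sketch of one possible NumPy approach (written in Python 2 to match the question; function and variable names are illustrative, not from the original post): precompute a cumulative sum of volume so each windowed sum becomes a subtraction, then test all end points for a given start in one vectorized pass.

        import numpy as np

        def find_intervals(close, volume, interval_length=15,
                           ret_lo=1.0001, ret_hi=1.5, vol_min=100.0):
            """Vectorize the inner loop: for each start i, test all ends at once."""
            n = len(close)
            cumvol = np.concatenate(([0.0], np.cumsum(volume)))  # cumvol[k] = sum(volume[:k])
            results = []
            for i in xrange(n - interval_length):
                j = np.arange(i + 1, i + interval_length)   # every end point for this start
                ret = close[j] / close[i]                   # all ratios in one shot
                vol = cumvol[j + 1] - cumvol[i + 1]         # sum(volume[i+1:j+1]) for each j
                ok = (ret > ret_lo) & (ret < ret_hi) & (vol > vol_min)
                for jj, r, v in zip(j[ok], ret[ok], vol[ok]):
                    results.append([i, jj, r, v])
            return results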

    Read the article

  • JavaScript Regex: Complicated input validation

    - by ScottSEA
    I'm trying to construct a regex to screen valid part and/or serial numbers, in combination, with ranges. A valid part number is a two-alpha, three-digit pattern, /[A-z]{2}\d{3}/, i.e. aa123 or ZZ443, etc. A valid serial number is a five-digit pattern, /\d{5}/: 13245 or 31234 and so on. That part isn't the problem. I want combinations and ranges to be valid as well: 12345, ab123,ab234-ab245, 12346 - 12349. The ultimate goal: ranges and/or series of part and/or serial numbers in any combination. Note that spaces are optional when specifying a range or after a comma in a series, and that a range of part numbers has the same two-letter combination on both sides of the range (i.e. ab123 - ab239). I have been wrestling with this expression for two days now, and haven't come up with anything better than this:

        /^(?:[A-z]{2}\d{3}[, ]*)|(?:\d{5}[, ]*)|(?:([A-z]{2})\d{3} ?- ?\4\d{3}[, ]*)|(?:\d{5} ?- ?\d{5}[, ]*)$/

    My Regex-Fu is weak.
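
    As a sketch of one approach (in Python for brevity; the pattern itself ports to JavaScript), split on commas and validate each item separately, which keeps the backreference for the matching range prefix simple. Note [A-Za-z] is used instead of the original [A-z], which also matches some punctuation. Hypothetical code:

        import re

        # One item: a part number (aa123), a serial (12345), or a range of either.
        # A part-number range must repeat the same two-letter prefix (ab123-ab239).
        ITEM = re.compile(r'''^(?:
              ([A-Za-z]{2})\d{3}(?:\s?-\s?\1\d{3})?   # part, or part range with matching prefix
            | \d{5}(?:\s?-\s?\d{5})?                  # serial, or serial range
            )$''', re.VERBOSE)

        def is_valid(s):
            items = [item.strip() for item in s.split(',')]
            return bool(items) and all(ITEM.match(item) for item in items)

        print(is_valid('12345, ab123,ab234-ab245, 12346 - 12349'))  # True
        print(is_valid('ab123 - cd239'))  # False: range prefixes differ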

    Read the article

  • Is NFS capable of preserving order of operations?

    - by JustJeff
    I have a diskless host 'A' that has a directory NFS-mounted on server 'B'. A process on A writes to two files, F1 and F2, in that directory, and a process on B monitors these files for changes. Assume that B polls for changes faster than A is expected to make them. Process A seeks to the head of the files, writes data, and flushes. Process B seeks to the head of the files and does reads. Are there any guarantees about how the order of the changes performed by A will be detected at B? Specifically, if A alternately writes to one file and then the other, is it reasonable to expect that B will notice alternating changes to F1 and F2? Or could B conceivably detect a series of changes on F1 and then a series on F2?

    I know there are a lot of assumptions embedded in the question. For instance, I am virtually certain that, even operating on just one file, if A performs 100 operations on the file, B may see a smaller number of changes that give the same result, due to NFS caching some of the actions on A before they are communicated to B. And of course there would be issues with concurrent file access even if NFS weren't involved and both the reading and the writing process were running on the same real file system.

    The reason I'm even putting the question up here is that it seems like most of the time, the setup described above does detect the changes at B in the same order they are made at A, but occasionally some events come through in transposed order. So, is it worth trying to make this work? Is there some way to tune NFS to make it work, perhaps cache settings or something? Or is fine-grained behavior like this just too much to expect from NFS?

    Read the article

  • using replace to produce javascript code, django

    - by durdenk
    I want to use Highcharts with my Django site, but it requires a complex piece of JavaScript such as the one below. So I wanted to keep this script in my Python code, replace the appropriate portions, and then write it into my template. First question: is this a dumb way to do it for a person who doesn't know JavaScript? (I can read it, though.) Second question: why can't I replace this string? Say the script is held in a variable like this:

        lineChartsTemplate = """
        ...
        ...
        """

    If I try

        lineChartsTemplate.replace('dataCategory', dataCategory)

    it is basically supposed to change the dataCategory text to my dataCategory variable, but no such luck. I need guidance here. Thanks.

        $(function () {
            var chart = new Highcharts.Chart({
                chart: {
                    renderTo: 'container',
                    type: 'bar'
                },
                xAxis: {
                    categories: dataCategory
                },
                yAxis: { },
                legend: {
                    layout: 'vertical',
                    floating: true,
                    backgroundColor: '#FFFFFF',
                    align: 'right',
                    verticalAlign: 'top',
                    y: 60,
                    x: -60
                },
                tooltip: {
                    formatter: function() {
                        return '<b>' + this.series.name + '</b><br/>' + this.x + ': ' + this.y;
                    }
                },
                plotOptions: { },
                series: [{ data: dataList, name: 'Satislar' }]
            });
        });
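
    One detail worth noting in a sketch (hedged, since the surrounding view code isn't shown in the question): Python strings are immutable, so str.replace() returns a new string rather than modifying the original in place, and the result has to be assigned back. The values below are illustrative; json.dumps is one safe way to turn Python data into JavaScript literals.

        import json

        line_charts_template = "xAxis: { categories: dataCategory }, series: [{ data: dataList }]"

        data_category = json.dumps(['Jan', 'Feb', 'Mar'])  # hypothetical data
        data_list = json.dumps([10, 25, 40])

        # replace() returns a NEW string; bind the result to a name.
        rendered = (line_charts_template
                    .replace('dataCategory', data_category)
                    .replace('dataList', data_list))
        print(rendered)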

    Read the article

  • Inheritance Mapping Strategies with Entity Framework Code First CTP5: Part 3 – Table per Concrete Type (TPC) and Choosing Strategy Guidelines

    - by mortezam
    This is the third (and last) post in a series that explains different approaches to mapping an inheritance hierarchy with EF Code First. I've described these strategies in previous posts:

    - Part 1 – Table per Hierarchy (TPH)
    - Part 2 – Table per Type (TPT)

    In today's blog post I am going to discuss Table per Concrete Type (TPC), which completes the inheritance mapping strategies supported by EF Code First. At the end of this post I will provide some guidelines for choosing an inheritance strategy, based mainly on what we've learned in this series.

    TPC and Entity Framework in the Past
    Table per Concrete type is in some ways the simplest approach suggested, yet using TPC with EF is one of those concepts that has not been covered very well so far, and I've seen in some resources that it was even discouraged. The reason is simply that the Entity Data Model Designer in VS2010 doesn't support TPC (even though the EF runtime does). That basically means that if you are following EF's Database-First or Model-First approaches, then configuring TPC requires manually writing XML in the EDMX file, which is not considered a fun practice. Well, no more. You'll see that with Code First, creating TPC is perfectly possible with the fluent API, just like the other strategies, and you don't need to avoid TPC due to the lack of designer support as you probably would in the other EF approaches.

    Table per Concrete Type (TPC)
    In Table per Concrete type (aka Table per Concrete class) we use exactly one table for each (nonabstract) class. All properties of a class, including inherited properties, are mapped to columns of this table. (The figure from the original post is omitted here.) As you can see, the SQL schema is not aware of the inheritance; effectively, we've mapped two unrelated tables to a more expressive class structure. If the base class were concrete, an additional table would be needed to hold instances of that class. I have to emphasize that there is no relationship between the database tables, except for the fact that they share some similar columns.

    TPC Implementation in Code First
    Just like the TPT implementation, we need to specify a separate table for each of the subclasses. We also need to tell Code First that we want all of the inherited properties to be mapped as part of this table. In CTP5, there is a new helper method on the EntityMappingConfiguration class called MapInheritedProperties that does exactly this for us. Here is the complete object model as well as the fluent API to create a TPC mapping:

        public abstract class BillingDetail
        {
            public int BillingDetailId { get; set; }
            public string Owner { get; set; }
            public string Number { get; set; }
        }

        public class BankAccount : BillingDetail
        {
            public string BankName { get; set; }
            public string Swift { get; set; }
        }

        public class CreditCard : BillingDetail
        {
            public int CardType { get; set; }
            public string ExpiryMonth { get; set; }
            public string ExpiryYear { get; set; }
        }

        public class InheritanceMappingContext : DbContext
        {
            public DbSet<BillingDetail> BillingDetails { get; set; }

            protected override void OnModelCreating(ModelBuilder modelBuilder)
            {
                modelBuilder.Entity<BankAccount>().Map(m =>
                {
                    m.MapInheritedProperties();
                    m.ToTable("BankAccounts");
                });
                modelBuilder.Entity<CreditCard>().Map(m =>
                {
                    m.MapInheritedProperties();
                    m.ToTable("CreditCards");
                });
            }
        }

    The Importance of the EntityMappingConfiguration Class
    As a side note, it is worth mentioning that EntityMappingConfiguration turns out to be a key type for inheritance mapping in Code First. Here is a snapshot of this class:

        namespace System.Data.Entity.ModelConfiguration.Configuration.Mapping
        {
            public class EntityMappingConfiguration<TEntityType> where TEntityType : class
            {
                public ValueConditionConfiguration Requires(string discriminator);
                public void ToTable(string tableName);
                public void MapInheritedProperties();
            }
        }

    As you have seen so far, we used its Requires method to customize TPH. We also used its ToTable method to create a TPT mapping, and now we are using its MapInheritedProperties along with ToTable to create our TPC mapping.

    TPC Configuration is Not Done Yet!
    We are not quite done with our TPC configuration; there is more to this story, even though the fluent API we saw perfectly created a TPC mapping for us in the database. To see why, let's start working with our object model. For example, the following code creates two new objects of the BankAccount and CreditCard types and tries to add them to the database:

        using (var context = new InheritanceMappingContext())
        {
            BankAccount bankAccount = new BankAccount();
            CreditCard creditCard = new CreditCard() { CardType = 1 };

            context.BillingDetails.Add(bankAccount);
            context.BillingDetails.Add(creditCard);
            context.SaveChanges();
        }

    Running this code throws an InvalidOperationException with this message:

        The changes to the database were committed successfully, but an error occurred
        while updating the object context. The ObjectContext might be in an inconsistent
        state. Inner exception message: AcceptChanges cannot continue because the
        object's key values conflict with another object in the ObjectStateManager.
        Make sure that the key values are unique before calling AcceptChanges.

    The reason we got this exception is that DbContext.SaveChanges() internally invokes the SaveChanges method of its internal ObjectContext. ObjectContext's SaveChanges method in turn by default calls AcceptAllChanges after it has performed the database modifications. AcceptAllChanges merely iterates over all entries in the ObjectStateManager and invokes AcceptChanges on each of them. Since the entities are in the Added state, AcceptChanges replaces their temporary EntityKey with a regular EntityKey based on the primary key values (i.e. BillingDetailId) that come back from the database, and that's where the problem occurs: the database assigned both entities the same value for their primary key (BillingDetailId = 1 in both), and the ObjectStateManager cannot track objects of the same type (i.e. BillingDetail) with the same EntityKey value, hence the exception. If you take a closer look at the TPC SQL schema above, you'll see why the database generated the same values for the primary keys: the BillingDetailId column in both the BankAccounts and CreditCards tables is marked as identity.

    How to Solve the Identity Problem in TPC
    As you saw, using SQL Server's int identity columns doesn't work very well together with TPC, since there will be duplicate entity keys when inserting into the subclass tables if they all have the same identity seed. Therefore, to solve this, either a spread seed (where each table has its own initial seed value) is needed, or a mechanism other than SQL Server's int identity should be used. Some other RDBMSes have mechanisms that allow a sequence (identity) to be shared by multiple tables, and something similar can be achieved with GUID keys in SQL Server. While using GUID keys, or int identity keys with different starting seeds, would solve the problem, yet another solution is to switch off identity on the primary key property altogether. As a result, we have to take on the responsibility of providing unique keys when inserting records into the database. We will go with this solution since it works regardless of which database engine is used.

    Switching Off Identity in Code First
    We can switch off identity simply by placing the DatabaseGenerated attribute on the primary key property and passing DatabaseGenerationOption.None to its constructor. DatabaseGenerated is a new data annotation that has been added to the System.ComponentModel.DataAnnotations namespace in CTP5:

        public abstract class BillingDetail
        {
            [DatabaseGenerated(DatabaseGenerationOption.None)]
            public int BillingDetailId { get; set; }
            public string Owner { get; set; }
            public string Number { get; set; }
        }

    As always, we can achieve the same result with the fluent API, if you prefer that:

        modelBuilder.Entity<BillingDetail>()
                    .Property(p => p.BillingDetailId)
                    .HasDatabaseGenerationOption(DatabaseGenerationOption.None);

    Working with the Object Model
    Our TPC mapping is ready and we can try adding new records to the database. But, like I said, now we need to take care of providing unique keys when creating new objects:

        using (var context = new InheritanceMappingContext())
        {
            BankAccount bankAccount = new BankAccount()
            {
                BillingDetailId = 1
            };
            CreditCard creditCard = new CreditCard()
            {
                BillingDetailId = 2,
                CardType = 1
            };

            context.BillingDetails.Add(bankAccount);
            context.BillingDetails.Add(creditCard);
            context.SaveChanges();
        }

    Polymorphic Associations with TPC are Problematic
    The main problem with this approach is that it doesn't support polymorphic associations very well. After all, in the database, associations are represented as foreign key relationships, and in TPC the subclasses are all mapped to different tables, so a polymorphic association to their base class (the abstract BillingDetail in our example) cannot be represented as a simple foreign key relationship. For example, consider the domain model we introduced earlier, where User has a polymorphic association with BillingDetail. This would be problematic in our TPC schema: if User has a many-to-one relationship with BillingDetail, the Users table would need a single foreign key column that would have to refer to both concrete subclass tables. This isn't possible with regular foreign key constraints.

    Schema Evolution with TPC is Complex
    A further conceptual problem with this mapping strategy is that several different columns, in different tables, share exactly the same semantics. This makes schema evolution more complex. For example, a change to a base class property results in changes to multiple columns. It also makes it much more difficult to implement database integrity constraints that apply to all subclasses.

    Generated SQL
    Let's examine the SQL output for polymorphic queries with a TPC mapping. For example, consider this polymorphic query for all BillingDetails and the resulting SQL statements that are executed in the database:

        var query = from b in context.BillingDetails
                    select b;

    Just as with the SQL query generated by TPT mapping, the CASE statements at the beginning of the query merely ensure that columns that are irrelevant for a particular row have NULL values in the returned flattened table (e.g. BankName for a row that represents a CreditCard type).

    TPC's SQL Queries are Union Based
    The first SELECT uses a FROM-clause subquery to retrieve all instances of BillingDetail from all the concrete class tables. The tables are combined with a UNION operator, and a literal (in this case, 0 and 1) is inserted into the intermediate result; EF reads this literal to instantiate the correct class for the data in a particular row. A union requires that the queries being combined project over the same columns; hence, EF has to pad and fill up the nonexistent columns with NULL. This query will perform well, since we can let the database optimizer find the best execution plan to combine rows from several tables. There are also no joins involved, so it performs better than the SQL queries generated by TPT, where a join is required between the base and subclass tables.

    Choosing Strategy Guidelines
    Before we get into this discussion, I want to emphasize that there is no single strategy that is best for all scenarios. As you saw, each of the approaches has its own advantages and drawbacks. Here are some rules of thumb for identifying the best strategy in a particular scenario:

    - If you don't require polymorphic associations or queries, lean toward TPC; in other words, if you never or rarely query for BillingDetails and you have no class with an association to the BillingDetail base class. I recommend TPC (only) for the top level of your class hierarchy, where polymorphism isn't usually required, and when modification of the base class in the future is unlikely.
    - If you do require polymorphic associations or queries, and subclasses declare relatively few properties (particularly if the main difference between subclasses is in their behavior), lean toward TPH. Your goal is to minimize the number of nullable columns and to convince yourself (and your DBA) that a denormalized schema won't create problems in the long run.
    - If you do require polymorphic associations or queries, and subclasses declare many properties (subclasses differ mainly by the data they hold), lean toward TPT. Or, depending on the width and depth of your inheritance hierarchy and the possible cost of joins versus unions, use TPC.

    By default, choose TPH only for simple problems. For more complex cases (or when you're overruled by a data modeler insisting on the importance of nullability constraints and normalization), you should consider the TPT strategy. But at that point, ask yourself whether it may not be better to remodel inheritance as delegation in the object model (delegation is a way of making composition as powerful for reuse as inheritance). Complex inheritance is often best avoided for all sorts of reasons unrelated to persistence or ORM. EF acts as a buffer between the domain and relational models, but that doesn't mean you can ignore persistence concerns when designing your classes.

    Summary
    In this series, we focused on one of the main structural aspects of the object/relational paradigm mismatch, namely inheritance, and discussed how EF solves this problem as an ORM solution. We learned about the three well-known inheritance mapping strategies and their implementations in EF Code First. Hopefully this gives you a better insight into mapping inheritance hierarchies, as well as choosing the best strategy for your particular scenario. Happy New Year and Happy Code-Firsting!

    References
    - ADO.NET team blog
    - Java Persistence with Hibernate (book)

    Read the article

  • Seeking on a Heap, and Two Useful DMVs

    - by Paul White
    So far in this mini-series on seeks and scans, we have seen that a simple 'seek' operation can be much more complex than it first appears. A seek can contain one or more seek predicates, each of which can either identify at most one row in a unique index (a singleton lookup) or a range of values (a range scan). When looking at a query plan, we will often need to look at the details of the seek operator in the Properties window to see how many operations it is performing, and what type of operation each one is. As you saw in the first post in this series, the number of hidden seeking operations can have an appreciable impact on performance.

    Measuring Seeks and Scans
    I mentioned in my last post that there is no way to tell from a graphical query plan whether you are seeing a singleton lookup or a range scan. You can work it out, if you happen to know that the index is defined as unique and the seek predicate is an equality comparison, but there's no separate property that says 'singleton lookup' or 'range scan'. This is a shame, and if I had my way, the query plan would show different icons for range scans and singleton lookups, perhaps also indicating whether the operation was one or more of those operations underneath the covers. In light of all that, you might be wondering if there is another way to measure how many seeks of either type are occurring in your system, or for a particular query. As is often the case, the answer is yes: we can use a couple of dynamic management views (DMVs), sys.dm_db_index_usage_stats and sys.dm_db_index_operational_stats.

    Index Usage Stats
    The index usage stats DMV contains counts of index operations from the perspective of the Query Executor (QE), the SQL Server component that is responsible for executing the query plan. It has three columns that are of particular interest to us:

    - user_seeks – the number of times an Index Seek operator appears in an executed plan
    - user_scans – the number of times a Table Scan or Index Scan operator appears in an executed plan
    - user_lookups – the number of times an RID or Key Lookup operator appears in an executed plan

    An operator is counted once per execution (generating an estimated plan does not affect the totals), so an Index Seek that executes 10,000 times in a single plan execution adds 1 to the count of user seeks. Even less intuitively, an operator is also counted once per execution even if it is not executed at all. I will show you a demonstration of each of these things later in this post.

    Index Operational Stats
    The index operational stats DMV contains counts of index and table operations from the perspective of the Storage Engine (SE). It contains a wealth of interesting information, but the two columns of interest to us right now are:

    - range_scan_count – the number of range scans (including unrestricted full scans) on a heap or index structure
    - singleton_lookup_count – the number of singleton lookups in a heap or index structure

    This DMV counts each SE operation, so 10,000 singleton lookups will add 10,000 to the singleton lookup count column, and a table scan that is executed 5 times will add 5 to the range scan count.

    The Test Rig
    To explore the behaviour of seeks and scans in detail, we will need to create a test environment. The scripts presented here are best run on SQL Server 2008 Developer Edition, but the majority of the tests will work just fine on SQL Server 2005. A couple of tests use partitioning, but these will be skipped if you are not running an Enterprise-equivalent SKU.
    Ok, first up we need a database:

        USE master;
        GO
        IF DB_ID('ScansAndSeeks') IS NOT NULL
            DROP DATABASE ScansAndSeeks;
        GO
        CREATE DATABASE ScansAndSeeks;
        GO
        USE ScansAndSeeks;
        GO
        ALTER DATABASE ScansAndSeeks
        SET ALLOW_SNAPSHOT_ISOLATION OFF;

        ALTER DATABASE ScansAndSeeks
        SET AUTO_CLOSE OFF,
            AUTO_SHRINK OFF,
            AUTO_CREATE_STATISTICS OFF,
            AUTO_UPDATE_STATISTICS OFF,
            PARAMETERIZATION SIMPLE,
            READ_COMMITTED_SNAPSHOT OFF,
            RESTRICTED_USER;

    Notice that several database options are set in particular ways to ensure we get meaningful and reproducible results from the DMVs. In particular, the options to auto-create and update statistics are disabled. There are also three stored procedures, the first of which creates a test table (which may or may not be partitioned). The table is pretty much the same one we used yesterday: it has 100 rows, and both the key_col and data columns contain the same values – the integers from 1 to 100 inclusive. The table is a heap, with a non-clustered primary key on key_col, and a non-clustered non-unique index on the data column. The only reason I have used a heap here, rather than a clustered table, is so I can demonstrate a seek on a heap later on. The table has an extra column (not shown because I am too lazy to update the diagram from yesterday) called padding – a CHAR(100) column that just contains 100 spaces in every row. It's just there to discourage SQL Server from choosing a table scan over an index + RID lookup in one of the tests. The first stored procedure is called ResetTest:

        CREATE PROCEDURE dbo.ResetTest
            @Partitioned BIT = 'false'
        AS
        BEGIN
            SET NOCOUNT ON;

            IF OBJECT_ID(N'dbo.Example', N'U') IS NOT NULL
            BEGIN
                DROP TABLE dbo.Example;
            END;

            -- Test table is a heap
            -- Non-clustered primary key on 'key_col'
            CREATE TABLE dbo.Example
            (
                key_col INTEGER NOT NULL,
                data INTEGER NOT NULL,
                padding CHAR(100) NOT NULL DEFAULT SPACE(100),
                CONSTRAINT [PK dbo.Example key_col]
                    PRIMARY KEY NONCLUSTERED (key_col)
            );

            IF @Partitioned = 'true'
            BEGIN
                -- Enterprise, Trial, or Developer
                -- required for partitioning tests
                IF SERVERPROPERTY('EngineEdition') = 3
                BEGIN
                    EXECUTE ('
                        DROP TABLE dbo.Example;

                        IF EXISTS (SELECT 1 FROM sys.partition_schemes WHERE name = N''PS'')
                            DROP PARTITION SCHEME PS;

                        IF EXISTS (SELECT 1 FROM sys.partition_functions WHERE name = N''PF'')
                            DROP PARTITION FUNCTION PF;

                        CREATE PARTITION FUNCTION PF (INTEGER)
                        AS RANGE RIGHT FOR VALUES (20, 40, 60, 80, 100);

                        CREATE PARTITION SCHEME PS
                        AS PARTITION PF ALL TO ([PRIMARY]);

                        CREATE TABLE dbo.Example
                        (
                            key_col INTEGER NOT NULL,
                            data INTEGER NOT NULL,
                            padding CHAR(100) NOT NULL DEFAULT SPACE(100),
                            CONSTRAINT [PK dbo.Example key_col]
                                PRIMARY KEY NONCLUSTERED (key_col)
                        ) ON PS (key_col);
                    ');
                END
                ELSE
                BEGIN
                    RAISERROR('Invalid SKU for partition test', 16, 1);
                    RETURN;
                END;
            END;

            -- Non-unique non-clustered index on the 'data' column
            CREATE NONCLUSTERED INDEX [IX dbo.Example data]
            ON dbo.Example (data);

            -- Add 100 rows
            INSERT dbo.Example WITH (TABLOCKX)
            (
                key_col,
                data
            )
            SELECT
                key_col = V.number,
                data = V.number
            FROM master.dbo.spt_values AS V
            WHERE V.[type] = N'P'
            AND V.number BETWEEN 1 AND 100;
        END;
        GO

    The second stored procedure, ShowStats, displays information from the Index Usage Stats and Index Operational Stats DMVs:

        CREATE PROCEDURE dbo.ShowStats
            @Partitioned BIT = 'false'
        AS
        BEGIN
            -- Index Usage Stats DMV (QE)
            SELECT
                index_name = ISNULL(I.name, I.type_desc),
                scans = IUS.user_scans,
                seeks = IUS.user_seeks,
                lookups = IUS.user_lookups
            FROM sys.dm_db_index_usage_stats AS IUS
            JOIN sys.indexes AS I
                ON I.object_id = IUS.object_id
                AND I.index_id = IUS.index_id
            WHERE IUS.database_id = DB_ID(N'ScansAndSeeks')
            AND IUS.object_id = OBJECT_ID(N'dbo.Example', N'U')
            ORDER BY I.index_id;

            -- Index Operational Stats DMV (SE)
            IF @Partitioned = 'true'
                SELECT
                    index_name = ISNULL(I.name, I.type_desc),
                    partitions = COUNT(IOS.partition_number),
                    range_scans = SUM(IOS.range_scan_count),
                    single_lookups = SUM(IOS.singleton_lookup_count)
                FROM sys.dm_db_index_operational_stats
                (
                    DB_ID(N'ScansAndSeeks'),
                    OBJECT_ID(N'dbo.Example', N'U'),
                    NULL,
                    NULL
                ) AS IOS
                JOIN sys.indexes AS I
                    ON I.object_id = IOS.object_id
                    AND I.index_id = IOS.index_id
                GROUP BY
                    I.index_id, -- Key
                    I.name,
                    I.type_desc
                ORDER BY I.index_id;
            ELSE
                SELECT
                    index_name = ISNULL(I.name, I.type_desc),
                    range_scans = SUM(IOS.range_scan_count),
                    single_lookups = SUM(IOS.singleton_lookup_count)
                FROM sys.dm_db_index_operational_stats
                (
                    DB_ID(N'ScansAndSeeks'),
                    OBJECT_ID(N'dbo.Example', N'U'),
                    NULL,
                    NULL
                ) AS IOS
                JOIN sys.indexes AS I
                    ON I.object_id = IOS.object_id
                    AND I.index_id = IOS.index_id
                GROUP BY
                    I.index_id, -- Key
                    I.name,
                    I.type_desc
                ORDER BY I.index_id;
        END;
        GO

    The final stored procedure, RunTest, executes a query written against the example table:

        CREATE PROCEDURE dbo.RunTest
            @SQL VARCHAR(8000),
            @Partitioned BIT = 'false'
        AS
        BEGIN
            -- No execution plan yet
            SET STATISTICS XML OFF;

            -- Reset the test environment
            EXECUTE dbo.ResetTest @Partitioned;

            -- Previous call will throw an error if a partitioned
            -- test was requested, but the SKU does not support it
            IF @@ERROR = 0
            BEGIN
                -- IO statistics and plan on
                SET STATISTICS XML, IO ON;

                -- Test statement
                EXECUTE (@SQL);

                -- Plan and IO statistics off
                SET STATISTICS XML, IO OFF;

                EXECUTE dbo.ShowStats @Partitioned;
            END;
        END;
        GO

    The Tests
    The first test is a simple scan of the heap table:

        EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example';

    The top result set comes from the Index Usage Stats DMV, so it is the Query Executor's (QE) view. The lower result is from Index Operational Stats, which shows statistics derived from the actions taken by the Storage Engine (SE). We see that QE performed 1 scan operation on the heap, and SE performed a single range scan. Let's try a single-value equality seek on a unique index next:

        EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col = 32';

    This time we see a single seek on the non-clustered primary key from QE, and one singleton lookup on the same index by the SE. Now for a single-value seek on the non-unique non-clustered index:

        EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data = 32';

    QE shows a single seek on the non-clustered non-unique index, but SE shows a single range scan on that index – not the singleton lookup we saw in the previous test. That makes sense, because we know that only a single-value seek into a unique index is a singleton seek. A single-value seek into a non-unique index might retrieve any number of rows, if you think about it. The next query is equivalent to the IN list example seen in the first post in this series, but it is written using OR (just for variety, you understand):

        EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data = 32 OR data = 33';

    The plan looks the same, and there's no difference in the stats recorded by QE, but the SE shows two range scans. Again, these are range scans because we are looking for two values in the data column, which is covered by a non-unique index. I've added a snippet from the Properties window to show that the query plan does show two seek predicates, not just one.
    Now let's rewrite the query using BETWEEN:

        EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data BETWEEN 32 AND 33';

    Notice the seek operator only has one predicate now – it's just a single range scan from 32 to 33 in the index – as the SE output shows. For the next test, we will look up four values in the key_col column:

        EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col IN (2,4,6,8)';

    Just a single seek on the PK from the Query Executor, but four singleton lookups reported by the Storage Engine – and four seek predicates in the Properties window. On to a more complex example:

        EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example WITH (INDEX([PK dbo.Example key_col])) WHERE key_col BETWEEN 1 AND 8';

    This time we are forcing use of the non-clustered primary key to return eight rows. The index is not covering for this query, so the query plan includes an RID lookup into the heap to fetch the data and padding columns. The QE reports a seek on the PK and a lookup on the heap. The SE reports a single range scan on the PK (to find key_col values between 1 and 8), and eight singleton lookups on the heap. Remember that a bookmark lookup (RID or Key) is a seek to a single value in a 'unique index' – it finds a row in the heap or cluster from a unique RID or clustering key – so that's why lookups are always singleton lookups, not range scans. Our next example shows what happens when a query plan operator is not executed at all:

        EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col = 8 AND @@TRANCOUNT < 0';

    The Filter has a start-up predicate which is always false (if your @@TRANCOUNT is less than zero, call CSS immediately). The index seek is never executed, but QE still records a single seek against the PK because the operator appears once in an executed plan. The SE output shows no activity at all. This next example is 2008 and above only, I'm afraid:

        EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example WHERE key_col BETWEEN 1 AND 30', @Partitioned = 'true';

    This is the first example to use a partitioned table. QE reports a single seek on the heap (yes – a seek on a heap), and the SE reports two range scans on the heap. SQL Server knows (from the partitioning definition) that it only needs to look at partitions 1 and 2 to find all the rows where key_col is between 1 and 30 – the engine seeks to find the two partitions, and performs a range scan seek on each partition. The final example for today is another seek on a heap – try to work out the output of the query before running it!

        EXECUTE dbo.RunTest @SQL = 'SELECT TOP (2) WITH TIES * FROM Example WHERE key_col BETWEEN 1 AND 50 ORDER BY $PARTITION.PF(key_col) DESC', @Partitioned = 'true';

    Notice the lack of an explicit Sort operator in the query plan to enforce the ORDER BY clause, and the backward range scan.

    © 2011 Paul White
    email: [email protected]
    twitter: @SQL_Kiwi

    Read the article

  • Win7 Task Scheduler Crashes with Error Description "RpcServerUseProtseq:ncacn_ip_tcp"

    - by BlaM
    I just installed a fresh Windows 7 and so far am not very happy with it: many more problems than with Windows Vista. The latest in a series: my Task Scheduler crashes right at the moment I boot the system. When I check the Event Viewer, it tells me that the scheduler crashed with a critical error:

        EventID: 404
        ErrorDescription: RpcServerUseProtseq:ncacn_ip_tcp
        ResultCode: 1721

    I found this entry in TechNet, but there is no "Internet" key in my registry: "Task Scheduler service has encountered RPC initialization error..."

    Read the article

  • SYSPART hidden folder in Windows 7

    - by BenGC
    We had a user with a Lenovo G-series laptop getting a STOP error on boot. We reinstalled Windows 7 Home Premium using non-OEM media and restored the user's files from backup. We are now seeing a hidden folder in the root of the C:\ drive called SYSPART which appears to contain a copy of the contents of the C:\ drive - so while the user has 160 GB of files, the drive is using 320 GB because of this folder. What is it, and is it safe to delete?

    Read the article

  • How to setup a static multicast ARP entry with Cisco SG300?

    - by Fredrik Hedberg
    We're running a Microsoft NLB cluster in multicast mode as a load balancer. Using our old Cisco IOS switches, we propagate access to the cluster to our branches using a static ARP entry in the core router:

        arp 10.20.1.226 03bf.0a14.01e2 ARPA

    But how does one solve this using non-IOS-based Cisco hardware such as the SG300 series? Adding a static ARP entry results in an error message telling the user that the hardware address needs to be a valid unicast MAC address.

    Read the article

  • Dell R510 can i go from E5620 to X5690

    - by NJinPHX
    I have a Dell R510 used for a SQL server. Can I go from an E5620 to an X5690? They are both in the same Intel Xeon 5600 series, but I did not know if an X (Performance) part is interchangeable with an E (Mainstream) part. If not, then the fastest upgrade I can make will be to another E. thx

        Intel® Xeon® Processor E5620 (current):           http://ark.intel.com/products/47925
        Intel® Xeon® Processor X5690 (proposed upgrade):  http://ark.intel.com/products/52576
        Intel® Xeon® Processor E5649 (fall-back upgrade): (can't post link)

    Read the article

  • Batch rename folders?

    - by Margaret
    This is probably a super-simple, already-solved task, but: I have a series of folders containing eBooks in various formats. They have the folder name format \Lastname, Firstname (n books)\ and I want to rename each of the folders to be simply \Firstname Lastname\, which I'm guessing can be done with a batch file fairly easily, but it's been a very long time since I had to do string parsing, so I have no recollection of how. Help? I'm using Windows 7.
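
    A minimal sketch of one way to do it in Python instead of a batch file (the root path is hypothetical; it assumes every folder really matches the 'Lastname, Firstname (n books)' pattern, and it only prints the planned renames until the last line is uncommented):

        import os
        import re

        ROOT = r'C:\eBooks'  # hypothetical location of the book folders
        PATTERN = re.compile(r'^(?P<last>.+), (?P<first>.+?) \(\d+ books?\)$')

        for name in os.listdir(ROOT):
            match = PATTERN.match(name)
            old_path = os.path.join(ROOT, name)
            if match and os.path.isdir(old_path):
                new_name = '%s %s' % (match.group('first'), match.group('last'))
                print('%s  ->  %s' % (name, new_name))
                # os.rename(old_path, os.path.join(ROOT, new_name))  # uncomment to apply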

    Read the article

  • ATI Eyefinity under linux

    - by Bryan Ward
    I know that the new 5xxx-series cards from ATI are capable of powering up to six monitors, but I was curious whether anyone has had any luck setting this up under Linux. I actually only have three monitors that I am interested in using, but three is the point where the previous generation of video cards started to get a little buggy, as a result of needing multiple video cards. Is the Linux support for this capability any good at this point, or is the Eyefinity support really only for Windows at this time?

    Read the article

  • Login configuration script for Junos EX 2200 using minicom

    - by liv2hak
    I am connecting to Junos OS on Juniper EX-2200 switches using minicom, as shown below:

        minicom -C log_sw1 sw1

    Now I have a series of commands that I need to execute on sw1 (example shown below):

        cli
        request system zeroize
        show config
        show interface
        edit
        delete protocols
        set system arp aging-timer 240

    I want to avoid having to type these commands every time I log into the switch. I want to put them in a config file and have it executed every time I log into the switch using minicom. Is there any way I can achieve this?
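
    As an alternative sketch, bypassing minicom and driving the serial console directly with Python's pyserial (the device path, baud rate, and fixed delays are all assumptions; a robust script would wait for the CLI prompt instead of sleeping):

        import time
        import serial  # pyserial

        COMMANDS = [
            'cli',
            'request system zeroize',
            'show config',
            'show interface',
            'edit',
            'delete protocols',
            'set system arp aging-timer 240',
        ]

        console = serial.Serial('/dev/ttyUSB0', 9600, timeout=2)  # assumed console settings
        with open('log_sw1', 'ab') as log:
            for cmd in COMMANDS:
                console.write((cmd + '\n').encode('ascii'))
                time.sleep(2)  # crude pacing between commands
                log.write(console.read(console.in_waiting or 1))
        console.close()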

    Read the article

  • Migrating from VBA Excel 2003

    - by Krazy_Kaos
    I have a series of big Excel files that work like a program, but I hate being tied down (stuck in VBA for Excel 2003), so... What's the best way to implement a GUI over an Excel VBA program (Office 2003)? Are there any tools for that? I want to move away from the Office suite, but still have it in the background. Or, what's the easiest alternative for migrating this code to a more open language? Any ideas?

    Read the article

  • Folder to save images with AutoHotkey

    - by horiageorg
    When I try to save an image from a Web page with a program written in AutoHotkey, it always chooses the last folder I had manually chosen for saving Web images. This is the script (assuming I'm on a Web page and I want to save the images of a series of Web pages):

        FileSelectFolder, OutputVar, , 3
        if OutputVar =
            MsgBox, You didn't select a folder.
        else
            MsgBox, You selected folder "%OutputVar%".
        Sleep 10000
        Loop 25
        {
            MouseClick, left, 185, 250
            Sleep 5000
            Send {ENTER}
            Sleep 5000
            MouseClick, left, 75, 397
            Sleep 5000
        }

    Read the article

  • Looking for software to create Step-By-Step Screenshot guides

    - by Chris
    I'd like to annotate a series of existing pictures like the one shown below. Each shows GUI elements, and I want to add some circles, arrows, and numbers to create a step-by-step guide. Is there software that eases the job by providing a toolset of pre-defined circles, arrows, etc.? Right now I draw the circles myself... Thanks. Update: I don't want to create videos.

    Read the article

  • Remote Control Adapter for PC

    - by Kairan
    Does there exist a product that attaches to your PC (preferably via USB) and enables infrared signals to be received from a regular remote control? It seems a novel idea to be able to get a programmable remote that can be used to manipulate Windows (like the Harmony remote control series), along with some programmable software (preferably Windows-supported) to convert any button on a regular remote into some command on the PC. Does this exist? What is it called? In this particular instance you are REQUIRED to use the crappy supplied remote, and that is not what I am looking for.

    Read the article
