Search Results

Search found 7127 results on 286 pages for 'calculated columns'.


  • Joining on NULLs

    - by Dave Ballantyne
    A problem I see on a fairly regular basis is that of dealing with NULL values.  Specifically here, where we are joining two tables on two columns, one of which is 'optional', i.e. is nullable.  So something like this: i.e. lookup where all the columns are equal, even when NULL.   NULLs are a tricky thing to initially wrap your mind around.  Statements like "NULL is not equal to NULL and neither is it not not equal to NULL, it's NULL" can cause a serious brain freeze and leave you a gibbering wreck and needing your mummy.

    Before we plod on, time to set up some data to demo against. Create table #SourceTable ( Id integer not null, SubId integer null, AnotherCol char(255) not null ) go create unique clustered index idxSourceTable on #SourceTable(id,subID) go with cteNums as ( select top(1000) number from master..spt_values where type ='P' ) insert into #SourceTable select Num1.number,nullif(Num2.number,0),'SomeJunk' from cteNums num1 cross join cteNums num2 go Create table #LookupTable ( Id integer not null, SubID integer null ) go insert into #LookupTable Select top(100) id,subid from #SourceTable where subid is not null order by newid() go insert into #LookupTable Select top(3) id,subid from #SourceTable where subid is null order by newid() If that has run correctly, you will have 1 million rows in #SourceTable and 103 rows in #LookupTable.  We now want to join one to the other.

    First attempt – let's just join: select * from #SourceTable join #LookupTable on #LookupTable.id = #SourceTable.id and #LookupTable.SubID = #SourceTable.SubID OK, that's a fail.  We had 100 rows back; we didn't correctly account for the 3 rows that have null values.  Remember NULL <> NULL, and the join clause specifies SUBID=SUBID, which for those rows is not true.

    Second attempt – let's deal with those pesky NULLs: select * from #SourceTable join #LookupTable on #LookupTable.id = #SourceTable.id and isnull(#LookupTable.SubID,0) = isnull(#SourceTable.SubID,0) OK, that's the right result, well done, and 99.9% of the time that is where it's left. It is a relatively trivial CPU overhead to wrap ISNULL around both columns and compare that result, so no problems.  But, although that's true, this is a relational database we are using here, not a procedural language.  SQL is a declarative language: we are making a request to the engine to get the results we want.  How we ask for them can make a ton of difference.

    Let's look at the plan for our second attempt, specifically the clustered index seek on #SourceTable.   There are 2 predicates: the 'seek predicate' and the 'predicate'.  The 'seek predicate' describes how SQL Server has been able to use an index.  Here, it has been able to navigate the index to resolve where ID=ID.  So far so good, but what about the 'predicate' (aka residual probe)? This is a row-by-row operation.  For each row found in the index matching the seek predicate, the leaf level nodes have been scanned and tested using this logical condition.  In this example [Expr1007] is the result of the IsNull operation on #LookupTable, and that is tested for equality with the IsNull operation on #SourceTable.  This residual probe is quite a high overhead; we can avoid it if we express our statement slightly differently, so that the test becomes part of the 'seek predicate' and the index is used to its full advantage.
Third attempt – X is null and Y is null. So, let's state the query in a slightly different manner: select * from #SourceTable join #LookupTable on #LookupTable.id = #SourceTable.id and ( #LookupTable.SubID = #SourceTable.SubID or (#LookupTable.SubID is null and #SourceTable.SubId is null) ) So it's slightly wordier and may not be as clear in its intent to the human reader, that is what comments are for, but the key point is that it is now clearer to the query optimizer what our intention is. Let's look at the plan for that query, again specifically the index seek operation on #SourceTable. No 'predicate', just a 'seek predicate' against the index to resolve both ID and SubID.  A subtle difference that can be easily overlooked, but has it made a difference to the performance? Well, yes, a perhaps surprisingly big one. Clever query optimizer, well done. If you are using a scalar function on a column, you are pretty much guaranteeing that a residual probe will be used.  By re-wording the query you may well be able to avoid this and use the index completely to resolve lookups. In terms of performance and scalability your system will be in a much better position if you can.
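    As an aside (this rewrite is not from the post above, and the plan shape would need verifying on your own system): another formulation that treats NULLs as equal, and that the optimizer can often resolve with a seek on both columns, is EXISTS with INTERSECT, since INTERSECT compares NULLs as matching. A minimal sketch against the same demo tables:

        select S.*
        from #SourceTable as S
        where exists
        (
            select L.Id, L.SubID
            from #LookupTable as L
            intersect
            select S.Id, S.SubId
        );

    The intent reads a little differently (it returns source rows that have a matching lookup row rather than the joined result), but it avoids wrapping the columns in ISNULL and so keeps the comparison sargable.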

    Read the article

  • Custom page sizes in paging dropdown in Telerik RadGrid

    Working with Telerik RadControls for ASP.NET AJAX is actually quite easy and the initial effort to get started with the control suite is very low. Meaning that you can easily get good result with little time. But there are usually cases where you have to go a little further and dig a little bit deeper than the standard scenarios. In this article I am going to describe how you can customize the default values (10, 20 and 50) of the drop-down list in the paging element of RadGrid. Get control over the displayed page sizes while using numeric paging... The default page sizes are good but not always good enough The paging feature in RadGrid offers you 3, well actually 4, possible page sizes in the drop-down element out-of-the box, which are 10, 20 or 50 items. You can get a fourth option by specifying a value different than the three standards for the PageSize attribute, ie. 35 or 100. The drawback in that case is that it is the initial page size. Certainly, the available choices could be more flexible or even a little bit more intelligent. For example, by taking the total count of records into consideration. There are some interesting scenarios that would justify a customized page size element: A low number of records, like 14 or similar shouldn't provide a page size of 50, A high total count of records (ie: 300+) should offer more choices, ie: 100, 200, 500, or display of all records regardless of number of records I am sure that you might have your own requirements, and I hope that the following source code snippets might be helpful. Wiring the ItemCreated event In order to adjust and manipulate the existing RadComboBox in the paging element we have to handle the OnItemCreated event of RadGrid. Simply specify your code behind method in the attribute of the RadGrid tag, like so: <telerik:RadGrid ID="RadGridLive" runat="server" AllowPaging="true" PageSize="20"    AllowSorting="true" AutoGenerateColumns="false" OnNeedDataSource="RadGridLive_NeedDataSource"    OnItemDataBound="RadGrid_ItemDataBound" OnItemCreated="RadGrid_ItemCreated">    <ClientSettings EnableRowHoverStyle="true">        <ClientEvents OnRowCreated="RowCreated" OnRowSelected="RowSelected" />        <Resizing AllowColumnResize="True" AllowRowResize="false" ResizeGridOnColumnResize="false"            ClipCellContentOnResize="true" EnableRealTimeResize="false" AllowResizeToFit="true" />        <Scrolling AllowScroll="true" ScrollHeight="360px" UseStaticHeaders="true" SaveScrollPosition="true" />        <Selecting AllowRowSelect="true" />    </ClientSettings>    <MasterTableView DataKeyNames="AdvertID">        <PagerStyle AlwaysVisible="true" Mode="NextPrevAndNumeric" />        <Columns>            <telerik:GridBoundColumn HeaderText="Listing ID" DataField="AdvertID" DataType="System.Int32"                SortExpression="AdvertID" UniqueName="AdvertID">                <HeaderStyle Width="66px" />            </telerik:GridBoundColumn>             <!--//  ... and some more columns ... -->         </Columns>    </MasterTableView></telerik:RadGrid> To provide a consistent experience for your visitors it might be helpful to display the page size selection always. This is done by setting the AlwaysVisible attribute of the PagerStyle element to true, like highlighted above. 
Customize the values of page size Your delegate method for the ItemCreated event should look like this: protected void RadGrid_ItemCreated(object sender, GridItemEventArgs e){    if (e.Item is GridPagerItem)    {        var dropDown = (RadComboBox)e.Item.FindControl("PageSizeComboBox");        var totalCount = ((GridPagerItem)e.Item).Paging.DataSourceCount;        var sizes = new Dictionary<string, string>() {            {"10", "10"},            {"20", "20"},            {"50", "50"}        };        if (totalCount > 100)        {            sizes.Add("100", "100");        }        if (totalCount > 200)        {            sizes.Add("200", "200");        }        sizes.Add("All", totalCount.ToString());        dropDown.Items.Clear();        foreach (var size in sizes)        {            var cboItem = new RadComboBoxItem() { Text = size.Key, Value = size.Value };            cboItem.Attributes.Add("ownerTableViewId", e.Item.OwnerTableView.ClientID);            dropDown.Items.Add(cboItem);        }        dropDown.FindItemByValue(e.Item.OwnerTableView.PageSize.ToString()).Selected = true;    }} It is important that we explicitly check the event arguments for GridPagerItem as it is the control that contains the PageSizeComboBox control that we want to manipulate. To keep the actual modification and exposure of possible page size values flexible I am filling a Dictionary with the requested 'key/value'-pairs based on the number of total records displayed in the grid. As a final step, ensure that the previously selected value is the active one using the FindItemByValue() method. Of course, there might be different requirements but I hope that the snippet above provide a first insight into customized page size value in Telerik's Grid. The Grid demos describe a more advanced approach to customize the Pager.

    Read the article

  • Essential Links for the SharePoint Client Side Developer

    - by Mark Rackley
    Front End Developer? Client Side Developer? Middle Tier??? I’m covering all my bases.  Regardless, I’m sick and tired of Googling with Bing when I forget where information that I need often is located. I was getting ready to bookmark some of them when it hit me… “Hey Mark… (I don’t actually refer to myself in the third person), Why don’t you put the links in a blog so that it looks like you are being helpful!” I can’t tell you how many times I’ve had to go back to some of my old blogs to remember how I did something. Seriously people, you need to start a blog, it’s the best way to remember how the frick you got something to work… and it looks like you are being helpful when in reality you are just forgetful.  So… where was I? Oh yeah.. essential information that I’ve needed from time to time when I was not using Visual Studio. All of this info has come in handy from time to time. Know about these things and keep them in your tool belt, it’s amazing the stuff you can accomplish with just knowing where to look. What Why SPServices Widely used library written by Marc Anderson used to call SharePoint Web Services with jQuery jQuery For SPServices and other cool stuff Easy Tabs Essential tool for quick page enhancements. This widely used too from Christophe Humbert groups multiple web parts into one tabbed display. Very quick and easy way to get oohs and ahs from End Users. Convert Calculated Columns to HTML Also from Christophe, I use this script all the time to convert html in my calculated columns to actually display as html and not with the tags. Unlocking the Mysteries of Data View Web Part XSL Tags This blog series from Marc Anderson makes it very easy to understand what’s going on with all those weird xsl tags in your data view web parts. Essential to make those things do what you want them to do. Creating Parent / Child list relationships (2007) Creating Parent / Child list relationships (2010) By far my most viewed blog posts (tens and tens of thousands).  I have posts for both 2007 and 2010 that walk you through automatically setting the lookup id on a list to its “parent”. Set SharePoint Form fields using Query String Variables Also widely read, this one walks you through taking a variable from your Query String and set a form field to that value.   Hmmm… I KNOW there are more, but I’m tired and drawing a blank.  I’ll try to add them when I remember them (or need them again and think “Oh, I forgot to add that one”) But it’s a start, and please feel free to add your own in the comments… So, it’s YOUR turn to be helpful. What little tip or trick do you find yourself using ALL the time that you think everyone should know about??

    Read the article

  • Oracle BI Server Modeling, Part 1- Designing a Query Factory

    - by bob.ertl(at)oracle.com
      Welcome to Oracle BI Development's BI Foundation blog, focused on helping you get the most value from your Oracle Business Intelligence Enterprise Edition (BI EE) platform deployments.  In my first series of posts, I plan to show developers the concepts and best practices for modeling in the Common Enterprise Information Model (CEIM), the semantic layer of Oracle BI EE.  In this segment, I will lay the groundwork for the modeling concepts.  First, I will cover the big picture of how the BI Server fits into the system, and how the CEIM controls the query processing. Oracle BI EE Query Cycle The purpose of the Oracle BI Server is to bridge the gap between the presentation services and the data sources.  There are typically a variety of data sources in a variety of technologies: relational, normalized transaction systems; relational star-schema data warehouses and marts; multidimensional analytic cubes and financial applications; flat files, Excel files, XML files, and so on. Business datasets can reside in a single type of source, or, most of the time, are spread across various types of sources. Presentation services users are generally business people who need to be able to query that set of sources without any knowledge of technologies, schemas, or how sources are organized in their company. They think of business analysis in terms of measures with specific calculations, hierarchical dimensions for breaking those measures down, and detailed reports of the business transactions themselves.  Most of them create queries without knowing it, by picking a dashboard page and some filters.  Others create their own analysis by selecting metrics and dimensional attributes, and possibly creating additional calculations. The BI Server bridges that gap from simple business terms to technical physical queries by exposing just the business focused measures and dimensional attributes that business people can use in their analyses and dashboards.   After they make their selections and start the analysis, the BI Server plans the best way to query the data sources, writes the optimized sequence of physical queries to those sources, post-processes the results, and presents them to the client as a single result set suitable for tables, pivots and charts. The CEIM is a model that controls the processing of the BI Server.  It provides the subject areas that presentation services exposes for business users to select simplified metrics and dimensional attributes for their analysis.  It models the mappings to the physical data access, the calculations and logical transformations, and the data access security rules.  The CEIM consists of metadata stored in the repository, authored by developers using the Administration Tool client.     Presentation services and other query clients create their queries in BI EE's SQL-92 language, called Logical SQL or LSQL.  The API simply uses ODBC or JDBC to pass the query to the BI Server.  Presentation services writes the LSQL query in terms of the simplified objects presented to the users.  The BI Server creates a query plan, and rewrites the LSQL into fully-detailed SQL or other languages suitable for querying the physical sources.  For example, the LSQL on the left below was rewritten into the physical SQL for an Oracle 11g database on the right. 
Logical SQL   Physical SQL SELECT "D0 Time"."T02 Per Name Month" saw_0, "D4 Product"."P01  Product" saw_1, "F2 Units"."2-01  Billed Qty  (Sum All)" saw_2 FROM "Sample Sales" ORDER BY saw_0, saw_1       WITH SAWITH0 AS ( select T986.Per_Name_Month as c1, T879.Prod_Dsc as c2,      sum(T835.Units) as c3, T879.Prod_Key as c4 from      Product T879 /* A05 Product */ ,      Time_Mth T986 /* A08 Time Mth */ ,      FactsRev T835 /* A11 Revenue (Billed Time Join) */ where ( T835.Prod_Key = T879.Prod_Key and T835.Bill_Mth = T986.Row_Wid) group by T879.Prod_Dsc, T879.Prod_Key, T986.Per_Name_Month ) select SAWITH0.c1 as c1, SAWITH0.c2 as c2, SAWITH0.c3 as c3 from SAWITH0 order by c1, c2   Probably everybody reading this blog can write SQL or MDX.  However, the trick in designing the CEIM is that you are modeling a query-generation factory.  Rather than hand-crafting individual queries, you model behavior and relationships, thus configuring the BI Server machinery to manufacture millions of different queries in response to random user requests.  This mass production requires a different mindset and approach than when you are designing individual SQL statements in tools such as Oracle SQL Developer, Oracle Hyperion Interactive Reporting (formerly Brio), or Oracle BI Publisher.   The Structure of the Common Enterprise Information Model (CEIM) The CEIM has a unique structure specifically for modeling the relationships and behaviors that fill the gap from logical user requests to physical data source queries and back to the result.  The model divides the functionality into three specialized layers, called Presentation, Business Model and Mapping, and Physical, as shown below. Presentation services clients can generally only see the presentation layer, and the objects in the presentation layer are normally the only ones used in the LSQL request.  When a request comes into the BI Server from presentation services or another client, the relationships and objects in the model allow the BI Server to select the appropriate data sources, create a query plan, and generate the physical queries.  That's the left to right flow in the diagram below.  When the results come back from the data source queries, the right to left relationships in the model show how to transform the results and perform any final calculations and functions that could not be pushed down to the databases.   Business Model Think of the business model as the heart of the CEIM you are designing.  This is where you define the analytic behavior seen by the users, and the superset library of metric and dimension objects available to the user community as a whole.  It also provides the baseline business-friendly names and user-readable dictionary.  For these reasons, it is often called the "logical" model--it is a virtual database schema that persists no data, but can be queried as if it is a database. The business model always has a dimensional shape (more on this in future posts), and its simple shape and terminology hides the complexity of the source data models. Besides hiding complexity and normalizing terminology, this layer adds most of the analytic value, as well.  This is where you define the rich, dimensional behavior of the metrics and complex business calculations, as well as the conformed dimensions and hierarchies.  
It contributes to the ease of use for business users, since the dimensional metric definitions apply in any context of filters and drill-downs, and the conformed dimensions enable dashboard-wide filters and guided analysis links that bring context along from one page to the next.  The conformed dimensions also provide a key to hiding the complexity of many sources, including federation of different databases, behind the simple business model. Note that the expression language in this layer is LSQL, so that any expression can be rewritten into any data source's query language at run time.  This is important for federation, where a given logical object can map to several different physical objects in different databases.  It is also important to portability of the CEIM to different database brands, which is a key requirement for Oracle's BI Applications products. Your requirements process with your user community will mostly affect the business model.  This is where you will define most of the things they specifically ask for, such as metric definitions.  For this reason, many of the best-practice methodologies of our consulting partners start with the high-level definition of this layer. Physical Model The physical model connects the business model that meets your users' requirements to the reality of the data sources you have available. In the query factory analogy, think of the physical layer as the bill of materials for generating physical queries.  Every schema, table, column, join, cube, hierarchy, etc., that will appear in any physical query manufactured at run time must be modeled here at design time. Each physical data source will have its own physical model, or "database" object in the CEIM.  The shape of each physical model matches the shape of its physical source.  In other words, if the source is normalized relational, the physical model will mimic that normalized shape.  If it is a hypercube, the physical model will have a hypercube shape.  If it is a flat file, it will have a denormalized tabular shape. To aid in query optimization, the physical layer also tracks the specifics of the database brand and release.  This allows the BI Server to make the most of each physical source's distinct capabilities, writing queries in its syntax, and using its specific functions. This allows the BI Server to push processing work as deep as possible into the physical source, which minimizes data movement and takes full advantage of the database's own optimizer.  For most data sources, native APIs are used to further optimize performance and functionality. The value of having a distinct separation between the logical (business) and physical models is encapsulation of the physical characteristics.  This encapsulation is another enabler of packaged BI applications and federation.  It is also key to hiding the complex shapes and relationships in the physical sources from the end users.  Consider a routine drill-down in the business model: physically, it can require a drill-through where the first query is MDX to a multidimensional cube, followed by the drill-down query in SQL to a normalized relational database.  The only difference from the user's point of view is that the 2nd query added a more detailed dimension level column - everything else was the same. Mappings Within the Business Model and Mapping Layer, the mappings provide the binding from each logical column and join in the dimensional business model, to each of the objects that can provide its data in the physical layer.  
When there is more than one option for a physical source, rules in the mappings are applied to the query context to determine which of the data sources should be hit, and how to combine their results if more than one is used.  These rules specify aggregate navigation, vertical partitioning (fragmentation), and horizontal partitioning, any of which can be federated across multiple, heterogeneous sources.  These mappings are usually the most sophisticated part of the CEIM. Presentation You might think of the presentation layer as a set of very simple relational-like views into the business model.  Over ODBC/JDBC, they present a relational catalog consisting of databases, tables and columns.  For business users, presentation services interprets these as subject areas, folders and columns, respectively.  (Note that in 10g, subject areas were called presentation catalogs in the CEIM.  In this blog, I will stick to 11g terminology.)  Generally speaking, presentation services and other clients can query only these objects (there are exceptions for certain clients such as BI Publisher and Essbase Studio). The purpose of the presentation layer is to specialize the business model for different categories of users.  Based on a user's role, they will be restricted to specific subject areas, tables and columns for security.  The breakdown of the model into multiple subject areas organizes the content for users, and subjects superfluous to a particular business role can be hidden from that set of users.  Customized names and descriptions can be used to override the business model names for a specific audience.  Variables in the object names can be used for localization. For these reasons, you are better off thinking of the tables in the presentation layer as folders than as strict relational tables.  The real semantics of tables and how they function is in the business model, and any grouping of columns can be included in any table in the presentation layer.  In 11g, an LSQL query can also span multiple presentation subject areas, as long as they map to the same business model. Other Model Objects There are some objects that apply to multiple layers.  These include security-related objects, such as application roles, users, data filters, and query limits (governors).  There are also variables you can use in parameters and expressions, and initialization blocks for loading their initial values on a static or user session basis.  Finally, there are Multi-User Development (MUD) projects for developers to check out units of work, and objects for the marketing feature used by our packaged customer relationship management (CRM) software.   The Query Factory At this point, you should have a grasp on the query factory concept.  When developing the CEIM model, you are configuring the BI Server to automatically manufacture millions of queries in response to random user requests. You do this by defining the analytic behavior in the business model, mapping that to the physical data sources, and exposing it through the presentation layer's role-based subject areas. While configuring mass production requires a different mindset than when you hand-craft individual SQL or MDX statements, it builds on the modeling and query concepts you already understand. The following posts in this series will walk through the CEIM modeling concepts and best practices in detail.  
We will initially review dimensional concepts so you can understand the business model, and then present a pattern-based approach to learning the mappings from a variety of physical schema shapes and deployments to the dimensional model.  Along the way, we will also present the dimensional calculation template, and learn how to configure the many additivity patterns.

    Read the article

  • Seeking on a Heap, and Two Useful DMVs

    - by Paul White
    So far in this mini-series on seeks and scans, we have seen that a simple ‘seek’ operation can be much more complex than it first appears.  A seek can contain one or more seek predicates – each of which can either identify at most one row in a unique index (a singleton lookup) or a range of values (a range scan).  When looking at a query plan, we will often need to look at the details of the seek operator in the Properties window to see how many operations it is performing, and what type of operation each one is.  As you saw in the first post in this series, the number of hidden seeking operations can have an appreciable impact on performance. Measuring Seeks and Scans I mentioned in my last post that there is no way to tell from a graphical query plan whether you are seeing a singleton lookup or a range scan.  You can work it out – if you happen to know that the index is defined as unique and the seek predicate is an equality comparison, but there’s no separate property that says ‘singleton lookup’ or ‘range scan’.  This is a shame, and if I had my way, the query plan would show different icons for range scans and singleton lookups – perhaps also indicating whether the operation was one or more of those operations underneath the covers. In light of all that, you might be wondering if there is another way to measure how many seeks of either type are occurring in your system, or for a particular query.  As is often the case, the answer is yes – we can use a couple of dynamic management views (DMVs): sys.dm_db_index_usage_stats and sys.dm_db_index_operational_stats. Index Usage Stats The index usage stats DMV contains counts of index operations from the perspective of the Query Executor (QE) – the SQL Server component that is responsible for executing the query plan.  It has three columns that are of particular interest to us: user_seeks – the number of times an Index Seek operator appears in an executed plan user_scans – the number of times a Table Scan or Index Scan operator appears in an executed plan user_lookups – the number of times an RID or Key Lookup operator appears in an executed plan An operator is counted once per execution (generating an estimated plan does not affect the totals), so an Index Seek that executes 10,000 times in a single plan execution adds 1 to the count of user seeks.  Even less intuitively, an operator is also counted once per execution even if it is not executed at all.  I will show you a demonstration of each of these things later in this post. Index Operational Stats The index operational stats DMV contains counts of index and table operations from the perspective of the Storage Engine (SE).  It contains a wealth of interesting information, but the two columns of interest to us right now are: range_scan_count – the number of range scans (including unrestricted full scans) on a heap or index structure singleton_lookup_count – the number of singleton lookups in a heap or index structure This DMV counts each SE operation, so 10,000 singleton lookups will add 10,000 to the singleton lookup count column, and a table scan that is executed 5 times will add 5 to the range scan count. The Test Rig To explore the behaviour of seeks and scans in detail, we will need to create a test environment.  The scripts presented here are best run on SQL Server 2008 Developer Edition, but the majority of the tests will work just fine on SQL Server 2005.  A couple of tests use partitioning, but these will be skipped if you are not running an Enterprise-equivalent SKU.  
Ok, first up we need a database: USE master; GO IF DB_ID('ScansAndSeeks') IS NOT NULL DROP DATABASE ScansAndSeeks; GO CREATE DATABASE ScansAndSeeks; GO USE ScansAndSeeks; GO ALTER DATABASE ScansAndSeeks SET ALLOW_SNAPSHOT_ISOLATION OFF ; ALTER DATABASE ScansAndSeeks SET AUTO_CLOSE OFF, AUTO_SHRINK OFF, AUTO_CREATE_STATISTICS OFF, AUTO_UPDATE_STATISTICS OFF, PARAMETERIZATION SIMPLE, READ_COMMITTED_SNAPSHOT OFF, RESTRICTED_USER ; Notice that several database options are set in particular ways to ensure we get meaningful and reproducible results from the DMVs.  In particular, the options to auto-create and update statistics are disabled.  There are also three stored procedures, the first of which creates a test table (which may or may not be partitioned).  The table is pretty much the same one we used yesterday: The table has 100 rows, and both the key_col and data columns contain the same values – the integers from 1 to 100 inclusive.  The table is a heap, with a non-clustered primary key on key_col, and a non-clustered non-unique index on the data column.  The only reason I have used a heap here, rather than a clustered table, is so I can demonstrate a seek on a heap later on.  The table has an extra column (not shown because I am too lazy to update the diagram from yesterday) called padding – a CHAR(100) column that just contains 100 spaces in every row.  It’s just there to discourage SQL Server from choosing table scan over an index + RID lookup in one of the tests. The first stored procedure is called ResetTest: CREATE PROCEDURE dbo.ResetTest @Partitioned BIT = 'false' AS BEGIN SET NOCOUNT ON ; IF OBJECT_ID(N'dbo.Example', N'U') IS NOT NULL BEGIN DROP TABLE dbo.Example; END ; -- Test table is a heap -- Non-clustered primary key on 'key_col' CREATE TABLE dbo.Example ( key_col INTEGER NOT NULL, data INTEGER NOT NULL, padding CHAR(100) NOT NULL DEFAULT SPACE(100), CONSTRAINT [PK dbo.Example key_col] PRIMARY KEY NONCLUSTERED (key_col) ) ; IF @Partitioned = 'true' BEGIN -- Enterprise, Trial, or Developer -- required for partitioning tests IF SERVERPROPERTY('EngineEdition') = 3 BEGIN EXECUTE (' DROP TABLE dbo.Example ; IF EXISTS ( SELECT 1 FROM sys.partition_schemes WHERE name = N''PS'' ) DROP PARTITION SCHEME PS ; IF EXISTS ( SELECT 1 FROM sys.partition_functions WHERE name = N''PF'' ) DROP PARTITION FUNCTION PF ; CREATE PARTITION FUNCTION PF (INTEGER) AS RANGE RIGHT FOR VALUES (20, 40, 60, 80, 100) ; CREATE PARTITION SCHEME PS AS PARTITION PF ALL TO ([PRIMARY]) ; CREATE TABLE dbo.Example ( key_col INTEGER NOT NULL, data INTEGER NOT NULL, padding CHAR(100) NOT NULL DEFAULT SPACE(100), CONSTRAINT [PK dbo.Example key_col] PRIMARY KEY NONCLUSTERED (key_col) ) ON PS (key_col); '); END ELSE BEGIN RAISERROR('Invalid SKU for partition test', 16, 1); RETURN; END; END ; -- Non-unique non-clustered index on the 'data' column CREATE NONCLUSTERED INDEX [IX dbo.Example data] ON dbo.Example (data) ; -- Add 100 rows INSERT dbo.Example WITH (TABLOCKX) ( key_col, data ) SELECT key_col = V.number, data = V.number FROM master.dbo.spt_values AS V WHERE V.[type] = N'P' AND V.number BETWEEN 1 AND 100 ; END; GO The second stored procedure, ShowStats, displays information from the Index Usage Stats and Index Operational Stats DMVs: CREATE PROCEDURE dbo.ShowStats @Partitioned BIT = 'false' AS BEGIN -- Index Usage Stats DMV (QE) SELECT index_name = ISNULL(I.name, I.type_desc), scans = IUS.user_scans, seeks = IUS.user_seeks, lookups = IUS.user_lookups FROM sys.dm_db_index_usage_stats AS IUS JOIN sys.indexes AS I ON 
I.object_id = IUS.object_id AND I.index_id = IUS.index_id WHERE IUS.database_id = DB_ID(N'ScansAndSeeks') AND IUS.object_id = OBJECT_ID(N'dbo.Example', N'U') ORDER BY I.index_id ; -- Index Operational Stats DMV (SE) IF @Partitioned = 'true' SELECT index_name = ISNULL(I.name, I.type_desc), partitions = COUNT(IOS.partition_number), range_scans = SUM(IOS.range_scan_count), single_lookups = SUM(IOS.singleton_lookup_count) FROM sys.dm_db_index_operational_stats ( DB_ID(N'ScansAndSeeks'), OBJECT_ID(N'dbo.Example', N'U'), NULL, NULL ) AS IOS JOIN sys.indexes AS I ON I.object_id = IOS.object_id AND I.index_id = IOS.index_id GROUP BY I.index_id, -- Key I.name, I.type_desc ORDER BY I.index_id; ELSE SELECT index_name = ISNULL(I.name, I.type_desc), range_scans = SUM(IOS.range_scan_count), single_lookups = SUM(IOS.singleton_lookup_count) FROM sys.dm_db_index_operational_stats ( DB_ID(N'ScansAndSeeks'), OBJECT_ID(N'dbo.Example', N'U'), NULL, NULL ) AS IOS JOIN sys.indexes AS I ON I.object_id = IOS.object_id AND I.index_id = IOS.index_id GROUP BY I.index_id, -- Key I.name, I.type_desc ORDER BY I.index_id; END; The final stored procedure, RunTest, executes a query written against the example table: CREATE PROCEDURE dbo.RunTest @SQL VARCHAR(8000), @Partitioned BIT = 'false' AS BEGIN -- No execution plan yet SET STATISTICS XML OFF ; -- Reset the test environment EXECUTE dbo.ResetTest @Partitioned ; -- Previous call will throw an error if a partitioned -- test was requested, but SKU does not support it IF @@ERROR = 0 BEGIN -- IO statistics and plan on SET STATISTICS XML, IO ON ; -- Test statement EXECUTE (@SQL) ; -- Plan and IO statistics off SET STATISTICS XML, IO OFF ; EXECUTE dbo.ShowStats @Partitioned; END; END; The Tests The first test is a simple scan of the heap table: EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example'; The top result set comes from the Index Usage Stats DMV, so it is the Query Executor’s (QE) view.  The lower result is from Index Operational Stats, which shows statistics derived from the actions taken by the Storage Engine (SE).  We see that QE performed 1 scan operation on the heap, and SE performed a single range scan.  Let’s try a single-value equality seek on a unique index next: EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col = 32'; This time we see a single seek on the non-clustered primary key from QE, and one singleton lookup on the same index by the SE.  Now for a single-value seek on the non-unique non-clustered index: EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data = 32'; QE shows a single seek on the non-clustered non-unique index, but SE shows a single range scan on that index – not the singleton lookup we saw in the previous test.  That makes sense because we know that only a single-value seek into a unique index is a singleton seek.  A single-value seek into a non-unique index might retrieve any number of rows, if you think about it.  The next query is equivalent to the IN list example seen in the first post in this series, but it is written using OR (just for variety, you understand): EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data = 32 OR data = 33'; The plan looks the same, and there’s no difference in the stats recorded by QE, but the SE shows two range scans.  Again, these are range scans because we are looking for two values in the data column, which is covered by a non-unique index.  I’ve added a snippet from the Properties window to show that the query plan does show two seek predicates, not just one.  
Now let’s rewrite the query using BETWEEN: EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data BETWEEN 32 AND 33'; Notice the seek operator only has one predicate now – it’s just a single range scan from 32 to 33 in the index – as the SE output shows.  For the next test, we will look up four values in the key_col column: EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col IN (2,4,6,8)'; Just a single seek on the PK from the Query Executor, but four singleton lookups reported by the Storage Engine – and four seek predicates in the Properties window.  On to a more complex example: EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example WITH (INDEX([PK dbo.Example key_col])) WHERE key_col BETWEEN 1 AND 8'; This time we are forcing use of the non-clustered primary key to return eight rows.  The index is not covering for this query, so the query plan includes an RID lookup into the heap to fetch the data and padding columns.  The QE reports a seek on the PK and a lookup on the heap.  The SE reports a single range scan on the PK (to find key_col values between 1 and 8), and eight singleton lookups on the heap.  Remember that a bookmark lookup (RID or Key) is a seek to a single value in a ‘unique index’ – it finds a row in the heap or cluster from a unique RID or clustering key – so that’s why lookups are always singleton lookups, not range scans. Our next example shows what happens when a query plan operator is not executed at all: EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col = 8 AND @@TRANCOUNT < 0'; The Filter has a start-up predicate which is always false (if your @@TRANCOUNT is less than zero, call CSS immediately).  The index seek is never executed, but QE still records a single seek against the PK because the operator appears once in an executed plan.  The SE output shows no activity at all.  This next example is 2008 and above only, I’m afraid: EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example WHERE key_col BETWEEN 1 AND 30', @Partitioned = 'true'; This is the first example to use a partitioned table.  QE reports a single seek on the heap (yes – a seek on a heap), and the SE reports two range scans on the heap.  SQL Server knows (from the partitioning definition) that it only needs to look at partitions 1 and 2 to find all the rows where key_col is between 1 and 30 – the engine seeks to find the two partitions, and performs a range scan seek on each partition. The final example for today is another seek on a heap – try to work out the output of the query before running it! EXECUTE dbo.RunTest @SQL = 'SELECT TOP (2) WITH TIES * FROM Example WHERE key_col BETWEEN 1 AND 50 ORDER BY $PARTITION.PF(key_col) DESC', @Partitioned = 'true'; Notice the lack of an explicit Sort operator in the query plan to enforce the ORDER BY clause, and the backward range scan. © 2011 Paul White email: [email protected] twitter: @SQL_Kiwi

    Read the article

  • Pixel Perfect Collision Detection in HTML5 Canvas

    - by Armin Ronacher
    Hi, I want to check a collision between two Sprites in HTML5 canvas. So for the sake of the discussion, let's assume that both sprites are IMG objects and a collision means that the alpha channel is not 0. Now both of these sprites can have a rotation around the object's center but no other transformation, in case this makes this any easier. Now the obvious solution I came up with would be this: calculate the transformation matrix for both; figure out a rough estimation of the area where the code should test (like the offset of both + calculated extra space for the rotation); for all the pixels in the intersecting rectangle, transform the coordinate and test the image at the calculated position (rounded to nearest neighbor) for the alpha channel; then abort on first hit. The problems I see with that are that a) there are no matrix classes in JavaScript, which means I have to do that math in JavaScript, which could be quite slow, and b) I have to test for collisions every frame, which makes this pretty expensive. Furthermore I have to replicate something I already have to do on drawing (or what canvas does for me, setting up the matrices). I wonder if I'm missing anything here and if there is an easier solution for collision detection.
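    Since the question describes the approach only in prose, here is a rough sketch of it in code (TypeScript-flavoured; the Sprite shape and helper names are made up for illustration, and the intersection rectangle is assumed to have been estimated beforehand from the rotated bounds). It leans on the canvas transform instead of hand-rolled matrix code and reads the alpha channel with getImageData:

        interface Sprite { img: HTMLImageElement; x: number; y: number; rot: number; }
        interface Rect { x: number; y: number; w: number; h: number; }

        // Draw one sprite, rotated about its centre, into a scratch canvas that
        // covers only the intersection rectangle, and return its pixel data.
        function renderToScratch(s: Sprite, r: Rect): Uint8ClampedArray {
            const c = document.createElement("canvas");
            c.width = r.w;
            c.height = r.h;
            const ctx = c.getContext("2d")!;
            ctx.translate(s.x - r.x, s.y - r.y);   // sprite centre in rect coordinates
            ctx.rotate(s.rot);
            ctx.drawImage(s.img, -s.img.width / 2, -s.img.height / 2);
            return ctx.getImageData(0, 0, r.w, r.h).data;
        }

        // True if some pixel of the intersection rect is non-transparent in both sprites.
        function pixelCollision(a: Sprite, b: Sprite, rect: Rect): boolean {
            const pa = renderToScratch(a, rect);
            const pb = renderToScratch(b, rect);
            for (let i = 3; i < pa.length; i += 4) {         // every 4th byte is alpha
                if (pa[i] !== 0 && pb[i] !== 0) return true; // abort on first hit
            }
            return false;
        }

    Whether this is fast enough per frame depends on the size of the overlap region and how often getImageData is called; shrinking the tested rectangle first (or testing bounding circles before pixels) is the usual mitigation.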

    Read the article

  • dateByAddingComponents problem and getting difference of dates with NSDateComponents problem!

    - by Rob
    I am having problems with adding values to dates and also getting differences between dates. The dates and components calculated are incorrect. So for adding, if I add 1.5 months, I only get 1 month; however, if I add any whole number, i.e. (1, 2, 3, etc.), it calculates correctly. Float32 addAmount = 1.5; NSDateComponents *components = [[[NSDateComponents alloc] init] autorelease]; [components setMonth:addAmount]; NSCalendar *gregorian = [[[NSCalendar alloc] initWithCalendarIdentifier:NSGregorianCalendar] autorelease]; [gregorian setTimeZone:[NSTimeZone timeZoneWithName:@"UTC"]]; NSDate *newDate2 = [gregorian dateByAddingComponents:components toDate:Date1 options:0]; Now for difference, if I have a date that has been added with exactly one year (almost same code as above), it adds correctly, but when the difference is calculated, I get 0 years, 11 months and 30 days. NSDate *startDate = Date1; NSDate *endDate = Date2; NSCalendar *gregorian = [[NSCalendar alloc] initWithCalendarIdentifier:NSGregorianCalendar]; [gregorian setTimeZone:[NSTimeZone timeZoneWithName:@"UTC"]]; NSUInteger unitFlags = NSYearCalendarUnit | NSMonthCalendarUnit | NSDayCalendarUnit; NSDateComponents *components = [gregorian components:unitFlags fromDate:startDate toDate:endDate options:0]; NSInteger years = [components year]; NSInteger months = [components month]; NSInteger days = [components day]; What am I doing wrong? Also I have added the kCFCalendarComponentsWrap constant in the options for both adding and difference functions, but with no difference. Thanks
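    One detail that may be relevant when reading the first snippet (an observation, not part of the original question): NSDateComponents stores whole units, so the month property is an NSInteger and a fractional value is truncated before the calendar ever sees it. A tiny sketch of what is actually stored:

        Float32 addAmount = 1.5;
        NSDateComponents *components = [[[NSDateComponents alloc] init] autorelease];
        [components setMonth:addAmount];                       // implicit Float32 -> NSInteger conversion
        NSLog(@"month stored: %ld", (long)[components month]); // logs 1, not 1.5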

    Read the article

  • Python, dictionaries, and chi-square contingency table

    - by rohanbk
    I have a file which contains several lines in the following format (word, time that the word occurred in, and frequency of documents containing the given word within the given instance in time):

        #inputfile <word, time, frequency>
        apple, 1, 3
        banana, 1, 2
        apple, 2, 1
        banana, 2, 4
        orange, 3, 1

    I have the Python class below that I used to create 2-D dictionaries to store the above file, using <word, time> as the key and frequency as the value:

        class Ddict(dict):
            ''' 2D dictionary class '''
            def __init__(self, default=None):
                self.default = default
            def __getitem__(self, key):
                if not self.has_key(key):
                    self[key] = self.default()
                return dict.__getitem__(self, key)

        wordtime=Ddict(dict) # Store each inputfile entry with a <word,time> key
        timeword=Ddict(dict) # Store each inputfile entry with a <time,word> key

        # Loop over every line of the inputfile
        for line in open('inputfile'):
            word,time,count=line.split(',')
            # If <word,time> already a key, increment count
            try:
                wordtime[word][time]+=count
            # Otherwise, create the key
            except KeyError:
                wordtime[word][time]=count
            # If <time,word> already a key, increment count
            try:
                timeword[time][word]+=count
            # Otherwise, create the key
            except KeyError:
                timeword[time][word]=count

    The question that I have pertains to calculating certain things while iterating over the entries in this 2D dictionary. For each word 'w' at each time 't', calculate: (a) the number of documents with word 'w' within time 't'; (b) the number of documents without word 'w' within time 't'; (c) the number of documents with word 'w' outside time 't'; (d) the number of documents without word 'w' outside time 't'. Each of the items above represents one of the cells of a chi-square contingency table for each word and time. Can all of these be calculated within a single loop or do they need to be done one at a time? Ideally, I would like the output to be what's below, where a, b, c, d are all the items calculated above: print "%s, %s, %s, %s" %(a,b,c,d)
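    A sketch of how the four cells could fall out of one pass over the file (variable names such as word_total are invented for illustration, and the counts are assumed to be converted to int as they are read): accumulate, alongside wordtime, a per-word total, a per-time total, and a grand total; each cell is then simple arithmetic on the <word, time> count.

        def contingency(w, t, wordtime, word_total, time_total, grand_total):
            a = wordtime[w].get(t, 0)    # docs with w inside t
            b = time_total[t] - a        # docs without w inside t
            c = word_total[w] - a        # docs with w outside t
            d = grand_total - a - b - c  # docs without w outside t
            return a, b, c, d

        # a, b, c, d = contingency('apple', '1', wordtime, word_total, time_total, grand_total)
        # print "%s, %s, %s, %s" % (a, b, c, d)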

    Read the article

  • Call by Reference Function in C

    - by Chad
    Hello everyone, I would just like a push in the right direction here with my homework assignment. Here is the question: (1) Write a C function called input which returns void, this function prompts the user for input of two integers followed by a double precision value. This function reads these values from the keyboard and finds the product of the two integers entered. The function uses call by reference to communicate the values of the three values read and the product calculated back to the main program. The main program then prints the three values read and the product calculated. Provide test results for the input: 3 5 23.5. Do not use arrays or global variables in your program. And here is my code: #include <stdio.h> #include <stdlib.h> void input(int *day, int *month, double *k, double *pro); int main(void){ int i,j; double k, pro; input(&i, &j, &k, &pro); printf("%f\n", pro); return 0; } void input(int *i, int *j, double *k, double *pro){ int x,y; double z; double product; scanf("%d", &x); scanf("%d", &y); scanf("%f", &z); *pro += (x * y * z); } I can't figure out how to reference the variables with pointers really, it is just not working out for me. Any help would be great!
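    A generic sketch of the pass-by-pointer ("call by reference") pattern the assignment describes, separate from the homework code above: the caller passes addresses, the function writes through them, and scanf for a double uses the %lf conversion.

        #include <stdio.h>

        /* Reads two integers and a double, and hands back all three values
           plus the product of the two integers through pointer parameters. */
        void input(int *a, int *b, double *x, int *product)
        {
            printf("Enter two integers and a double: ");
            scanf("%d %d %lf", a, b, x);   /* a, b, x already hold addresses */
            *product = (*a) * (*b);
        }

        int main(void)
        {
            int i, j, prod;
            double k;
            input(&i, &j, &k, &prod);
            printf("%d %d %f %d\n", i, j, k, prod);
            return 0;
        }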

    Read the article

  • How do i refactor this code by using Action<t> or Func<t> delegates

    - by user330612
    I have a sample program, which needs to execute 3 methods in a particular order. And after executing each method, should do error handling. Now i did this in a normal fashion, w/o using delegates like this. class Program { public static void Main() { MyTest(); } private static bool MyTest() { bool result = true; int m = 2; int temp = 0; try { temp = Function1(m); } catch (Exception e) { Console.WriteLine("Caught exception for function1" + e.Message); result = false; } try { Function2(temp); } catch (Exception e) { Console.WriteLine("Caught exception for function2" + e.Message); result = false; } try { Function3(temp); } catch (Exception e) { Console.WriteLine("Caught exception for function3" + e.Message); result = false; } return result; } public static int Function1(int x) { Console.WriteLine("Sum is calculated"); return x + x; } public static int Function2(int x) { Console.WriteLine("Difference is calculated "); return (x - x); } public static int Function3(int x) { return x * x; } } As you can see, this code looks ugly w/ so many try catch loops, which are all doing the same thing...so i decided that i can use delegates to refactor this code so that Try Catch can be all shoved into one method so that it looks neat. I was looking at some examples online and couldnt figure our if i shud use Action or Func delegates for this. Both look similar but im unable to get a clear idea how to implement this. Any help is gr8ly appreciated. I'm using .NET 4.0, so im allowed to use anonymous methods n lambda expressions also for this Thanks
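    One possible shape for the refactoring (a sketch; the TryRun helper name is made up): pass each step in as an Action so the try/catch lives in exactly one place. Func<T> only becomes necessary if the caller wants the step's return value handed back rather than captured in a closure.

        private static bool TryRun(string name, Action step)
        {
            try
            {
                step();
                return true;
            }
            catch (Exception e)
            {
                Console.WriteLine("Caught exception for " + name + ": " + e.Message);
                return false;
            }
        }

        private static bool MyTest()
        {
            int m = 2;
            int temp = 0;
            bool result = TryRun("function1", () => temp = Function1(m));
            result &= TryRun("function2", () => Function2(temp));   // & (not &&) so every step still runs
            result &= TryRun("function3", () => Function3(temp));
            return result;
        }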

    Read the article

  • Efficient way of calculating average difference of array elements from array average value

    - by Saysmaster
    Is there a way to calculate the average distance of array elements from the array's average value, by only "visiting" each array element once? (I am searching for an algorithm.) Example:

        Array            : [ 1 , 5 , 4 , 9 , 6 ]
        Average          : ( 1 + 5 + 4 + 9 + 6 ) / 5 = 5
        Distance Array   : [ |1-5| , |5-5| , |4-5| , |9-5| , |6-5| ] = [ 4 , 0 , 1 , 4 , 1 ]
        Average Distance : ( 4 + 0 + 1 + 4 + 1 ) / 5 = 2

    The simple algorithm needs 2 passes. 1st pass) Reads and accumulates values, then divides the result by the array length to calculate the average value of the array elements. 2nd pass) Reads values, accumulates each one's distance from the previously calculated average value, and then divides the result by the array length to find the average distance of the elements from the average value of the array. The two passes are identical. It is the classic algorithm for calculating the average of a set of values. The first one takes as input the elements of the array, the second one the distances of each element from the array's average value. Calculating the average can be modified to not accumulate the values, but to calculate the average "on the fly" as we sequentially read the elements from the array. The formula is:

        Compute Running Average of Array's elements
        -------------------------------------------
        RA[i] = A[i]                          { for i == 1 }
        RA[i] = RA[i-1] - RA[i-1]/i + A[i]/i  { for i > 1 }

    where A[x] is the array's element at position x, and RA[x] is the average of the array's elements between position 1 and x (running average). My question is: Is there a similar algorithm to calculate "on the fly" (as we read the array's elements) the average distance of the elements from the array's mean value? The problem is that, as we read the array's elements, the final average value of the array is not known. Only the running average is known. So calculating differences from the running average will not yield the correct result. I suppose, if such an algorithm exists, it probably should have the "ability" to compensate, in a way, on each new element read, for the error calculated so far.
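    For reference, the running-average update quoted above written out as code (it only restates the formula from the question; it is not an answer to the one-pass average-distance problem):

        def running_average(values):
            ra = 0.0
            for i, x in enumerate(values, start=1):
                # RA[i] = RA[i-1] - RA[i-1]/i + A[i]/i
                ra = ra - ra / i + x / i
            return ra

        print(running_average([1, 5, 4, 9, 6]))  # 5.0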

    Read the article

  • Passing IDisposable objects through constructor chains

    - by Matt Enright
    I've got a small hierarchy of objects that in general gets constructed from data in a Stream, but for some particular subclasses, can be synthesized from a simpler argument list. In chaining the constructors from the subclasses, I'm running into an issue with ensuring the disposal of the synthesized stream that the base class constructor needs. Its not escaped me that the use of IDisposable objects this way is possibly just dirty pool (plz advise?) for reasons I've not considered, but, this issue aside, it seems fairly straightforward (and good encapsulation). Codes: abstract class Node { protected Node (Stream raw) { // calculate/generate some base class properties } } class FilesystemNode : Node { public FilesystemNode (FileStream fs) : base (fs) { // all good here; disposing of fs not our responsibility } } class CompositeNode : Node { public CompositeNode (IEnumerable some_stuff) : base (GenerateRaw (some_stuff)) { // rogue stream from GenerateRaw now loose in the wild! } static Stream GenerateRaw (IEnumerable some_stuff) { var content = new MemoryStream (); // molest elements of some_stuff into proper format, write to stream content.Seek (0, SeekOrigin.Begin); return content; } } I realize that not disposing of a MemoryStream is not exactly a world-stopping case of bad CLR citizenship, but it still gives me the heebie-jeebies (not to mention that I may not always be using a MemoryStream for other subtypes). It's not in scope, so I can't explicitly Dispose () it later in the constructor, and adding a using statement in GenerateRaw () is self-defeating since I need the stream returned. Is there a better way to do this? Preemptive strikes: yes, the properties calculated in the Node constructor should be part of the base class, and should not be calculated by (or accessible in) the subclasses I won't require that a stream be passed into CompositeNode (its format should be irrelevant to the caller) The previous iteration had the value calculation in the base class as a separate protected method, which I then just called at the end of each subtype constructor, moved the body of GenerateRaw () into a using statement in the body of the CompositeNode constructor. But the repetition of requiring that call for each constructor and not being able to guarantee that it be run for every subtype ever (a Node is not a Node, semantically, without these properties initialized) gave me heebie-jeebies far worse than the (potential) resource leak here does.
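    One pattern that is sometimes suggested for this situation (a sketch, assuming the base constructor consumes the stream entirely before returning and no Node keeps a reference to it): replace the chained constructor with a static factory method, so the synthesized stream lives in a scope that can dispose it.

        class CompositeNode : Node
        {
            // Construction goes through Create so the temporary stream can be disposed.
            private CompositeNode (Stream raw) : base (raw) { }

            public static CompositeNode Create (IEnumerable some_stuff)
            {
                using (var raw = GenerateRaw (some_stuff))
                {
                    return new CompositeNode (raw);
                }
            }

            static Stream GenerateRaw (IEnumerable some_stuff)
            {
                var content = new MemoryStream ();
                // molest elements of some_stuff into proper format, write to stream (as in the original)
                content.Seek (0, SeekOrigin.Begin);
                return content;
            }
        }

    The trade-off is that callers write CompositeNode.Create(...) instead of new CompositeNode(...), which may or may not sit well with the rest of the hierarchy.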

    Read the article

  • Ordering by formula fields in NHibernate

    - by Darin Dimitrov
    Suppose that I have the following mapping with a formula property: <class name="Planet" table="planets"> <id name="Id" column="id"> <generator class="native" /> </id> <!-- somefunc() is a native SQL function --> <property name="Distance" formula="somefunc()" /> </class> I would like to get all planets and order them by the Distance calculated property: var planets = session .CreateCriteria<Planet>() .AddOrder(Order.Asc("Distance")) .List<Planet>(); This is translated to the following query: SELECT Id as id0, somefunc() as formula0 FROM planets ORDER BY somefunc() Desired query: SELECT Id as id0, somefunc() as formula0 FROM planets ORDER BY formula0 If I set a projection with an alias it works fine: var planets = session .CreateCriteria<Planet>() .SetProjection(Projections.Alias(Projections.Property("Distance"), "dist")) .AddOrder(Order.Asc("dist")) .List<Planet>(); SQL: SELECT somefunc() as formula0 FROM planets ORDER BY formula0 but it populates only the Distance property in the result and I really like to avoid projecting manually over all the other properties of my object (there could be many other properties). Is this achievable with NHibernate? As a bonus I would like to pass parameters to the native somefunc() SQL function. Anything producing the desired SQL is acceptable (replacing the formula field with subselects, etc...), the important thing is to have the calculated Distance property in the resulting object and order by this distance inside SQL.

    Read the article

  • java.math.BigInteger pow(exponent) question

    - by Jan Kraus
    Hi, I did some tests on the pow(exponent) method. Unfortunately, my math skills are not strong enough to handle the following problem. I'm using this code: BigInteger.valueOf(2).pow(var); Results:

        var     | time in ms
        2000000 | 11450
        2500000 | 12471
        3000000 | 22379
        3500000 | 32147
        4000000 | 46270
        4500000 | 31459
        5000000 | 49922

    See? The 2,500,000 exponent is calculated almost as fast as 2,000,000. 4,500,000 is calculated much faster than 4,000,000. Why is that? To give you some help, here's the original implementation of BigInteger.pow(exponent):

        public BigInteger pow(int exponent) {
            if (exponent < 0)
                throw new ArithmeticException("Negative exponent");
            if (signum==0)
                return (exponent==0 ? ONE : this);
            // Perform exponentiation using repeated squaring trick
            int newSign = (signum<0 && (exponent&1)==1 ? -1 : 1);
            int[] baseToPow2 = this.mag;
            int[] result = {1};
            while (exponent != 0) {
                if ((exponent & 1)==1) {
                    result = multiplyToLen(result, result.length,
                                           baseToPow2, baseToPow2.length, null);
                    result = trustedStripLeadingZeroInts(result);
                }
                if ((exponent >>>= 1) != 0) {
                    baseToPow2 = squareToLen(baseToPow2, baseToPow2.length, null);
                    baseToPow2 = trustedStripLeadingZeroInts(baseToPow2);
                }
            }
            return new BigInteger(result, newSign);
        }

    Read the article

  • What is a fast way to set debugging code at a given line in a function?

    - by Josh O'Brien
    Preamble: R's trace() is a powerful debugging tool, allowing users to "insert debugging code at chosen places in any function". Unfortunately, using it from the command-line can be fairly laborious. As an artificial example, let's say I want to insert debugging code that will report the between-tick interval calculated by pretty.default(). I'd like to insert the code immediately after the value of delta is calculated, about four lines up from the bottom of the function definition. (Type pretty.default to see where I mean.) To indicate that line, I need to find which step in the code it corresponds to. The answer turns out to be step list(c(12, 3, 3)), which I zero in on by running through the following steps: as.list(body(pretty.default)) as.list(as.list(body(pretty.default))[[12]]) as.list(as.list(as.list(body(pretty.default))[[12]])[[3]]) as.list(as.list(as.list(body(pretty.default))[[12]])[[3]])[[3]] I can then insert debugging code like this: trace(what = 'pretty.default', tracer = quote(cat("\nThe value of delta is: ", delta, "\n\n")), at = list(c(12,3,3))) ## Try it a <- pretty(c(1, 7843)) b <- pretty(c(2, 23)) ## Clean up untrace('pretty.default') Questions: So here are my questions: Is there a way to print out a function (or a parsed version of it) with the lines nicely labeled by the steps to which they belong? Alternatively, is there another easier way, from the command line, to quickly set debugging code for a specific line within a function? Addendum: I used the pretty.default() example because it is reasonably tame, but with real/interesting functions, repeatedly using as.list() quickly gets tiresome and distracting. Here's an example: as.list(as.list(as.list(as.list(as.list(as.list(as.list(as.list(as.list(body(# model.frame.default))[[26]])[[3]])[[2]])[[4]])[[3]])[[4]])[[4]])[[4]])[[3]]

    Read the article

  • VBA: what does ReDim Preserve do, and a simple array question

    - by every_answer_gets_a_point
    I am looking at someone else's VBA Excel code. They are doing ReDim Preserve dataMatrix(7, i) in both loops. What does this do? Also, it seems like the second loop just overwrites the data from the first loop; is that correct?

        Dim dataMatrix() As String
        Worksheets.Item("ETS").Select
        Do While Trim(Cells(r, 1)) <> ""
            Debug.Print "The line: ", Trim(Cells(r, 1)), r
            r = r + 1
            dataMatrix(1, i) = Trim(Cells(r, 1))   ''file name
            dataMatrix(2, i) = Trim(Cells(r, 2))   ''sample type
            dataMatrix(3, i) = Trim(Cells(r, 3))   ''sample name
            dataMatrix(4, i) = "ETS"
            dataMatrix(5, i) = Trim(Cells(r, 5))   ''Response
            dataMatrix(6, i) = Trim(Cells(r, 6))   ''ISTD Response
            dataMatrix(7, i) = Trim(Cells(r, 10))  ''Calculated Conc
            i = i + 1
            ReDim Preserve dataMatrix(7, i)
        Loop

        r = 5
        Worksheets.Item("ETG").Select
        Do While Trim(Cells(r, 1)) <> ""
            Debug.Print "The line: ", Trim(Cells(r, 1)), r
            r = r + 1
            dataMatrix(1, i) = Trim(Cells(r, 1))   ''file name
            dataMatrix(2, i) = Trim(Cells(r, 2))   ''sample type
            dataMatrix(3, i) = Trim(Cells(r, 3))   ''sample name
            dataMatrix(4, i) = "ETG"
            dataMatrix(5, i) = Trim(Cells(r, 5))   ''Response
            dataMatrix(6, i) = Trim(Cells(r, 6))   ''ISTD Response
            dataMatrix(7, i) = Trim(Cells(r, 10))  ''Calculated Conc
            i = i + 1
            ReDim Preserve dataMatrix(7, i)
        Loop
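
    For reference, a tiny standalone illustration (mine, not from the article) of the two behaviours being asked about: ReDim Preserve resizes only the last dimension of a dynamic array while keeping the values already stored, and because i is never reset between the two loops above, the second loop appends to the array rather than overwriting it.

        ' Minimal demo: grow the last dimension one slot at a time and show
        ' that earlier entries survive each ReDim Preserve.
        Sub ReDimPreserveDemo()
            Dim m() As String
            Dim i As Long
            ReDim m(7, 0)                  ' initial size; only the last dimension may grow later
            For i = 0 To 4
                ReDim Preserve m(7, i)     ' resize, keeping existing contents
                m(1, i) = "entry " & i
            Next i
            Debug.Print m(1, 0), m(1, 4)   ' prints "entry 0" and "entry 4"
        End Sub

    Note that, as excerpted, the question's code would fail on its first assignment because dataMatrix is never given an initial size before dataMatrix(1, i) is written; presumably an initial ReDim was omitted from the excerpt.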

    Read the article

  • How to write these two queries for a simple data warehouse, using ANSI SQL?

    - by morpheous
    I am writing a simple data warehouse that will allow me to query the table to observe periodic (say weekly) changes in data, as well as changes in the change of the data (e.g. the week-to-week change in the weekly sale amount). For the purposes of simplicity, I will present very simplified (almost trivialized) versions of the tables I am using here. The sales data table is a view and has the following structure:

        CREATE TABLE sales_data (
            sales_time date NOT NULL,
            sales_amt double NOT NULL
        )

    For the purposes of this question, I have left out other fields you would expect to see - like product_id, sales_person_id, etc. - as they have no direct relevance here. AFAICT, the only fields that will be used in the query are sales_time and sales_amt (unless I am mistaken). I also have a date dimension table with the following structure:

        CREATE TABLE date_dimension (
            id integer NOT NULL,
            datestamp date NOT NULL,
            day_part integer NOT NULL,
            week_part integer NOT NULL,
            month_part integer NOT NULL,
            qtr_part integer NOT NULL,
            year_part integer NOT NULL
        );

    which partitions dates into reporting ranges. I need to write queries that will allow me to do the following:

    1. Return the week-on-week change in sales_amt for a specified period. For example, the change between sales today and sales N days ago, where N is a positive integer (N == 7 in this case).
    2. Return the change in the change of sales_amt for a specified period. In (1) we calculated the week-on-week change; now we want to know how that change differs from the (week-on-week) change calculated last week.

    I am stuck at this point, as SQL is my weakest skill. I would be grateful if an SQL master could explain how I can write these queries in a DB-agnostic way (i.e. using ANSI SQL).
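
    A sketch of one ANSI-style approach (mine, not from the article; it assumes the two tables above and a database that supports window functions): aggregate to weekly totals, then apply LAG() once for the week-on-week change and once more for the change of that change.

        WITH weekly AS (
            SELECT d.year_part,
                   d.week_part,
                   SUM(s.sales_amt) AS week_amt
            FROM   sales_data s
            JOIN   date_dimension d ON d.datestamp = s.sales_time
            GROUP  BY d.year_part, d.week_part
        ),
        deltas AS (
            SELECT year_part,
                   week_part,
                   week_amt,
                   week_amt - LAG(week_amt) OVER (ORDER BY year_part, week_part) AS wow_change
            FROM   weekly
        )
        SELECT year_part,
               week_part,
               week_amt,
               wow_change,
               wow_change - LAG(wow_change) OVER (ORDER BY year_part, week_part) AS change_of_change
        FROM   deltas
        ORDER  BY year_part, week_part;

    The "N days ago" variant of (1) follows the same shape: group by day instead of week and use LAG(..., N) to look N rows back.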

    Read the article

  • Is there a set based solution for this problem?

    - by NYSystemsAnalyst
    We have a table set up as follows:

        |ID|EmployeeID|Date     |Category       |Hours|
        |1 |1         |1/1/2010 |Vacation Earned|2.0  |
        |2 |2         |2/12/2010|Vacation Earned|3.0  |
        |3 |1         |2/4/2010 |Vacation Used  |1.0  |
        |4 |2         |5/18/2010|Vacation Earned|2.0  |
        |5 |2         |7/23/2010|Vacation Used  |4.0  |

    The business rules are: the vacation balance is calculated as vacation earned minus vacation used, and vacation used is always applied against the oldest vacation earned amount first. We need to return the Vacation Earned rows that have not been offset by vacation used. If vacation used has only offset part of a vacation earned record, we need to return that record showing the difference. For example, using the above table, the result set would look like:

        |ID|EmployeeID|Date     |Category       |Hours|
        |1 |1         |1/1/2010 |Vacation Earned|1.0  |
        |4 |2         |5/18/2010|Vacation Earned|1.0  |

    Note that record 2 was eliminated because it was completely offset by used time, but records 1 and 4 were only partially used, so they were calculated and returned as such. The only way we have thought of to do this is to get all of the vacation earned records into a temporary table, then get the total vacation used and loop through the temporary table, deleting the oldest record and subtracting its value from the total vacation used until the total vacation used reaches zero. We could clean that up for the case where the remaining vacation used covers only part of the oldest vacation earned record, which would leave us with just the outstanding vacation earned records. This works, but it is very inefficient and performs poorly, and performance will only degrade over time as more records are added. Are there any suggestions for a better solution, preferably set-based? If not, we'll just have to go with this.
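
    Here is one set-based sketch (mine, not from the article; the ledger table is assumed to be called VacationLedger since the question never names it, and a dialect with window functions is assumed, written T-SQL-style because of the Date column name): a running total of hours earned per employee is compared against that employee's total hours used, which reproduces the oldest-first offsetting without a loop.

        WITH earned AS (
            SELECT ID, EmployeeID, [Date], Hours,
                   SUM(Hours) OVER (PARTITION BY EmployeeID
                                    ORDER BY [Date], ID
                                    ROWS UNBOUNDED PRECEDING) AS running_earned
            FROM   VacationLedger
            WHERE  Category = 'Vacation Earned'
        ),
        used AS (
            SELECT EmployeeID, SUM(Hours) AS total_used
            FROM   VacationLedger
            WHERE  Category = 'Vacation Used'
            GROUP  BY EmployeeID
        )
        SELECT e.ID,
               e.EmployeeID,
               e.[Date],
               'Vacation Earned' AS Category,
               CASE WHEN e.running_earned - COALESCE(u.total_used, 0) >= e.Hours
                    THEN e.Hours
                    ELSE e.running_earned - COALESCE(u.total_used, 0)
               END AS Hours
        FROM   earned e
        LEFT   JOIN used u ON u.EmployeeID = e.EmployeeID
        WHERE  e.running_earned > COALESCE(u.total_used, 0)
        ORDER  BY e.EmployeeID, e.[Date];

    Against the sample data this returns rows 1 and 4 with 1.0 hour each, matching the expected result set.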

    Read the article

  • What is the proper way to wait for a script to load completely before another

    - by FatDogMark
    I am making a website that is divided into several sections:

        <div id='section-1' class='section'>web content</div>
        <div id='section-2' class='section'>web content</div>

    I have about ten sections on my webpage, and each section's height is set to the user's window height when the document is ready:

        $('.section').height($(window).height());

    Some effects on my webpage, like slideshows, require the calculated height of the section in order to work properly, so at document ready I always use something like this as a solution:

        setTimeout(startslideshow, 1000);
        setTimeout(startanimations, 1000);

    This is to make sure the section heights equal the window height before the slideshow code starts, because if the sections have not yet been resized when the page loads, my slideshow code has serious problems, like wrongly calculated positions. As a result, for about a second after the page loads everything is messed up before it starts working properly. How can I avoid this being seen by the user? I tried $(document).hide() or $('html,body').hide() and then fading in after a second, but I get other weird problems; in particular, on the iPad my fixed-position top navigation bar stops being fixed while the user is scrolling. As I am a self-learner, I am afraid my method is not typical. What do real web programmers usually do when they divide a webpage into sections set to the window height, so that the effects that depend on the section height work properly without waiting a second for the height to change?
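
    For what it's worth, a sketch of the usual pattern (mine, not from the article; it assumes jQuery, the startslideshow/startanimations functions named above, and a hypothetical "preload" CSS class): set the heights and call the dependent code in the same ready handler, which removes the need for a setTimeout guess because jQuery's .height(value) applies the style synchronously.

        // Assumed CSS: .preload .section { visibility: hidden; }
        $(function () {
            $('.section').height($(window).height());  // heights are final after this line

            startslideshow();                           // safe to measure sections now
            startanimations();

            $('body').removeClass('preload');           // reveal the page once layout is set
        });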

    Read the article

  • Syntax error in aggregate argument: Expecting a single column argument with possible 'Child' qualifier.

    - by Rushabh
        DataTable distinctTable = dTable.DefaultView.ToTable(true, "ITEM_NO", "ITEM_STOCK");
        DataTable dtSummerized = new DataTable("SummerizedResult");
        dtSummerized.Columns.Add("ITEM_NO", typeof(string));
        dtSummerized.Columns.Add("ITEM_STOCK", typeof(double));
        int count = 0;
        foreach (DataRow dRow in distinctTable.Rows)
        {
            count++;
            //string itemNo = Convert.ToString(dRow[0]);
            double TotalItem = Convert.ToDouble(dRow[1]);
            string TotalStock = dTable.Compute("sum(" + TotalItem + ")",
                                               "ITEM_NO=" + dRow["ITEM_NO"].ToString()).ToString();
            dtSummerized.Rows.Add(count, dRow["ITEM_NO"], TotalStock);
        }

    Error message: "Syntax error in aggregate argument: Expecting a single column argument with possible 'Child' qualifier." Can anyone help me out? Thanks.
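
    The likely culprit (my reading, not confirmed by the article): DataTable.Compute() expects a column name inside the aggregate, but the code passes the row's numeric value, and the filter also needs quotes if ITEM_NO is a string column. A sketch of the corrected loop, assuming the same dTable, distinctTable and dtSummerized as above:

        foreach (DataRow dRow in distinctTable.Rows)
        {
            string itemNo = dRow["ITEM_NO"].ToString();

            // Aggregate over the ITEM_STOCK column, filtered to this item.
            object totalStock = dTable.Compute("SUM(ITEM_STOCK)",
                                               "ITEM_NO = '" + itemNo + "'");

            // dtSummerized has two columns (ITEM_NO, ITEM_STOCK), so add two values.
            dtSummerized.Rows.Add(itemNo, Convert.ToDouble(totalStock));
        }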

    Read the article

  • .NET ExcelLibrary Export Problems - .XLS corruption

    - by hamlin11
    The library is located here: http://code.google.com/p/excellibrary/ I'm using some basic code to create an .XLS file. When I open the file in Excel 2007, I get an error prompt; I click Yes and then get a second error (the screenshots are not reproduced here), and the accompanying XML error details are not very helpful. Here's the code that I'm using to generate the Excel file:

        Dim ds As New DataSet
        Dim dt1 As New DataTable("table 1")
        dt1.Columns.Add("column A", GetType(String))
        dt1.Columns.Add("column B", GetType(String))
        dt1.Rows.Add("test 1", "Test 2")
        dt1.Rows.Add("test 3", "Test 4")
        ds.Tables.Add(dt1)
        ExcelLibrary.DataSetHelper.CreateWorkbook("c:/temp/test1.xls", ds)

    Note: I added a reference to the DLL provided by the project download page and have an "Imports ExcelLibrary.Office.Excel" to link up with it. Any ideas on what the corruption is and/or how to fix it? Thanks

    Read the article

  • WPF DataBinding, CollectionViewSource, INotifyPropertyChanged

    - by plotnick
    The first time I tried to do something in WPF, I was puzzled by WPF data binding, so I studied this example on MSDN thoroughly: http://msdn.microsoft.com/en-us/library/ms771319(v=VS.90).aspx Now I understand reasonably well how to use the master-detail paradigm for a form that takes its data from one source (one table) for both the master and detail parts - for example, a grid with data and, below the grid, a few fields showing details of the current row. But how do you do it if the detail data comes from a different but related table? For example: you have a table 'Users' with columns 'id' and 'Name', and another table 'Pictures' with columns like 'id', 'Filename', 'UserId'. Using the master-detail paradigm you have to build a form, and every time you choose a row in the master you should get all the associated pictures in the details. What is the right way to do it? Could you please show me an example?
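
    One common pattern, offered here as a sketch under assumptions (not from the MSDN article above): if the two tables live in a DataSet with a DataRelation between them - assumed here to be named "UserPictures" - the detail list can bind straight to that relation name on the current master row, while a CollectionViewSource (assumed resource key "UsersViewSource", wrapping the Users table) keeps the current row in sync.

        <!-- Master grid and detail list share the Users view as DataContext. -->
        <StackPanel DataContext="{Binding Source={StaticResource UsersViewSource}}">
            <DataGrid ItemsSource="{Binding}"
                      IsSynchronizedWithCurrentItem="True"
                      AutoGenerateColumns="True" />
            <!-- "UserPictures" is the DataRelation name: the current Users row
                 exposes its child Pictures rows under that property path. -->
            <ListBox ItemsSource="{Binding UserPictures}"
                     DisplayMemberPath="Filename" />
        </StackPanel>

    Selecting a row in the master grid moves the view's current item, and the ListBox re-resolves the relation path against it, so the pictures follow the selection without any code-behind.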

    Read the article

  • RDLC filtering nested tables

    - by aprescott
    I am creating an RDLC report where the dataset consists of several data tables: one parent table and several child tables. What I would like to do is display the relevant data from each child table for each row in the parent table. Here is a simplified example:

        table1 = "Purchase"      has columns PurchaseID, PurchaseNumber, PurchaseDate
        table2 = "PurchasedItem" has columns PurchaseItemID, PurchaseID, ItemDescription

    In my RDLC, I have a Purchase table grouped on PurchaseDate and would like to display the PurchasedItems for each Purchase. The current solution uses a subreport, but I do not like this because it leaves an ugly empty space when there is no data for the subreport to display. (I would be fine with using a subreport if I could properly hide it without leaving an empty space.) I am not able to rewrite the stored procedure to return a single table, either. How are others dealing with this scenario?
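
    One workaround people reach for (a sketch, not from the article; it assumes the table and column names above, guesses at the column types, and assumes the DataSet is available in code as ds): flatten parent and child into a single result in code and let one grouped tablix render it, which avoids the subreport and its empty placeholder entirely.

        // Requires: using System; using System.Data; using System.Linq;
        // (System.Data.DataSetExtensions for AsEnumerable/Field).
        var flattened =
            from p in ds.Tables["Purchase"].AsEnumerable()
            join i in ds.Tables["PurchasedItem"].AsEnumerable()
                 on p.Field<int>("PurchaseID") equals i.Field<int>("PurchaseID") into items
            from i in items.DefaultIfEmpty()
            select new
            {
                PurchaseID      = p.Field<int>("PurchaseID"),
                PurchaseNumber  = p.Field<string>("PurchaseNumber"),
                PurchaseDate    = p.Field<DateTime>("PurchaseDate"),
                ItemDescription = i == null ? null : i.Field<string>("ItemDescription")
            };

    The left join keeps purchases that have no items, so the report can group on PurchaseID and simply show an empty detail row instead of a collapsed subreport gap.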

    Read the article

  • WPF DataGrid: Make cells readonly

    - by crauscher
    I use the following DataGrid:

        <DataGrid Grid.Row="1" Grid.Column="1" Name="Grid" ItemsSource="{Binding}" AutoGenerateColumns="False">
            <DataGrid.Columns>
                <DataGridTextColumn Header="Name" Width="100" Binding="{Binding Path=Name}"></DataGridTextColumn>
                <DataGridTextColumn Header="OldValue" Width="100" Binding="{Binding Path=OldValue}"></DataGridTextColumn>
                <DataGridTextColumn Header="NewValue" Width="100*" Binding="{Binding Path=NewValue}"></DataGridTextColumn>
            </DataGrid.Columns>
        </DataGrid>

    How can I make the cells readonly?
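
    For reference, a sketch of the usual answer (not from the article): both the DataGrid and the individual columns expose an IsReadOnly property, so either of the following makes the cells non-editable.

        <!-- Whole grid read-only ... -->
        <DataGrid Name="Grid" ItemsSource="{Binding}" AutoGenerateColumns="False" IsReadOnly="True">
            <DataGrid.Columns>
                <!-- ... or per column -->
                <DataGridTextColumn Header="Name" Width="100"
                                    Binding="{Binding Path=Name}" IsReadOnly="True" />
            </DataGrid.Columns>
        </DataGrid>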

    Read the article

  • WPF Toolkit: how to scroll datagrid to show selected item from code behind?

    - by Akash Kava
    I tried the following; every variant fails in ScrollIntoView with a NullReferenceException:

        // doesn't work
        grid.SelectedItem = sItem;
        grid.ScrollIntoView(sItem);

        // doesn't work
        grid.SelectedItem = sItem;
        grid.Focus();
        grid.CurrentColumn = grid.Columns[0];
        grid.UpdateLayout();
        grid.ScrollIntoView(sItem, grid.Columns[0]);

        // doesn't work
        grid.SelectedItem = sItem;
        grid.UpdateLayout();
        grid.ScrollIntoView(sItem);

    The problem is that when I select a row from code-behind, the selection is not visible - it is somewhere down at the bottom - and unless the user scrolls, it looks as if the selection has vanished. I need to scroll the DataGrid so the user can see the selection. I also tried "BringIntoView", but no luck.
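
    A workaround that is often suggested (a sketch, not from the article): defer the ScrollIntoView call until the grid has finished loading and measuring its rows, since calling it before the grid's internal ScrollViewer exists is a common source of this NullReferenceException.

        // Assumes 'grid' and 'sItem' as in the question.
        grid.SelectedItem = sItem;
        grid.Dispatcher.BeginInvoke(
            System.Windows.Threading.DispatcherPriority.ContextIdle,
            new Action(() =>
            {
                grid.UpdateLayout();
                grid.ScrollIntoView(grid.SelectedItem);
            }));

    Queuing the call at ContextIdle priority lets the current layout and data-binding passes complete first, so the row containers exist by the time the scroll is requested.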

    Read the article
