Search Results

Search found 7127 results on 286 pages for 'calculated columns'.


  • Is this method of writing Unit Tests correct?

    - by aspdotnetuser
    I have created a small C# project to help me learn how to write good unit tests. I know that one important rule of unit testing is to test the smallest 'unit' of code possible, so that if it fails you know exactly what part of the code needs to be fixed. I need help with the following before I continue to implement more unit tests for the project: if I have a Car class, for example, whose various attributes are calculated when its constructor is called, would the two following tests be considered overkill? Should there be one test that tests all calculated attributes of the Car object instead?

        [Test]
        public void CarEngineCalculatedValue()
        {
            BusinessObjects.Car car = new BusinessObjects.Car();
            Assert.GreaterOrEqual(car.Engine, 1);
        }

        [Test]
        public void CarNameCalculatedValue()
        {
            BusinessObjects.Car car = new BusinessObjects.Car();
            Assert.IsNotNull(car.Name);
        }

    Should I have the above two test methods, or one test method that first asserts the Car object has been created and then tests these things in the same method?
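    A middle ground (a minimal NUnit sketch, reusing the hypothetical BusinessObjects.Car from the question): keep one assertion per test, so a failure still points at a single attribute, and move the shared construction into a [SetUp] method, which NUnit runs before each test.

        using NUnit.Framework;

        [TestFixture]
        public class CarCalculatedValueTests
        {
            private BusinessObjects.Car _car;

            // Runs before every test, so each test still exercises a fresh Car.
            [SetUp]
            public void CreateCar()
            {
                _car = new BusinessObjects.Car();
            }

            [Test]
            public void EngineIsCalculatedOnConstruction()
            {
                Assert.GreaterOrEqual(_car.Engine, 1);
            }

            [Test]
            public void NameIsCalculatedOnConstruction()
            {
                Assert.IsNotNull(_car.Name);
            }
        }

    If construction itself can fail, a separate test that simply constructs the Car documents that case without muddying the attribute tests.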


  • Are injectable classes allowed to have constructor parameters in DI?

    - by Songo
    Given the following code:

        class ClientClass {
            public function print() {
                // some code to calculate $inputString
                $parser = new Parser($inputString);
                $result = $parser->parse();
            }
        }

        class Parser {
            private $inputString;
            public function __construct($inputString) {
                $this->inputString = $inputString;
            }
            public function parse() {
                // some code
            }
        }

    Now ClientClass has a dependency on class Parser. However, if I wanted to use Dependency Injection for unit testing, it would cause a problem, because now I can't pass the input string to the parser constructor as before, since it is calculated inside ClientClass itself:

        class ClientClass {
            private $parser;
            public function __construct(Parser $parser) {
                $this->parser = $parser;
            }
            public function print() {
                // some code to calculate $inputString
                $result = $this->parser->parse(); // --> will throw an exception since no string was provided
            }
        }

    The only solution I found was to modify all my classes that took parameters in their constructors to use setters instead (for example, setInputString()). However, I think there might be a better solution, because modifying existing classes can sometimes cause more harm than good. So, are injectable classes not allowed to have constructor parameters? If a class must take parameters in its constructor, what is the proper way to inject it?

    UPDATE Just for clarification, the problem happens when in my production code I decide to do this: $clientClass = new ClientClass(new Parser($inputString)); // ---> I have no way to predict $inputString as it is calculated inside ClientClass itself.

    UPDATE 2 Again for clarification, I'm trying to find a general solution to the problem, not for this example code only, because some of my classes have 2, 3 or 4 parameters in their constructors, not only one.
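    For reference, the usual way out of this is to inject a factory instead of the parser itself, so the runtime value is supplied at call time rather than at construction time. A minimal sketch of the pattern (shown in C# with hypothetical names; the question's code is PHP, but the idea is language-neutral):

        // The client depends on a factory; only the factory knows how to build a Parser.
        public interface IParserFactory
        {
            Parser Create(string inputString);
        }

        public class Parser
        {
            private readonly string _inputString;
            public Parser(string inputString) { _inputString = inputString; }
            public string Parse() { /* some parsing */ return _inputString; }
        }

        public class ClientClass
        {
            private readonly IParserFactory _parserFactory;

            public ClientClass(IParserFactory parserFactory)
            {
                _parserFactory = parserFactory;
            }

            public void Print()
            {
                string inputString = "...";  // calculated inside ClientClass, as in the question
                Parser parser = _parserFactory.Create(inputString);
                string result = parser.Parse();
            }
        }

    In a unit test, a fake IParserFactory returns a stub parser, so ClientClass stays testable no matter how many constructor parameters Parser grows.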


  • Spherical to Cartesian Coordinates

    - by user1258455
    Well, I'm reading Frank Luna's DirectX 10 book and, while trying to understand the first demo, I found something that's not very clear, at least for me. In the updateScene method, when I press A, S, W or D, the angles mTheta and mPhi change, but after that there are three lines of code that I don't understand exactly:

        // Convert Spherical to Cartesian coordinates: mPhi measured from +y
        // and mTheta measured counterclockwise from -z.
        float x = 5.0f*sinf(mPhi)*sinf(mTheta);
        float z = -5.0f*sinf(mPhi)*cosf(mTheta);
        float y = 5.0f*cosf(mPhi);

    I mean, the comment explains what they do: it says the code converts spherical coordinates to Cartesian coordinates. But, mathematically, why? Why is the x value calculated by the product of the sines of both angles, z by the product of a sine and a cosine, and y by just the cosine? After that, those values (x, y and z) are used to build the view matrix. The book doesn't explain mathematically why those values are calculated like that (and I didn't find anything to help me understand it in the first part of the book, "Mathematical prerequisites"), so it would be good if someone could explain what exactly happens in those code lines, or just give me a link that helps me understand the math part. Thanks in advance!
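    For reference, the formulas follow directly from the conventions stated in the comment: phi is measured from the +y axis, theta counterclockwise from the -z axis, and the radius is r = 5. The component along +y is r*cos(phi); the projection of the position vector onto the xz-plane has length r*sin(phi); and within that plane, theta = 0 points along -z. In LaTeX form:

        \begin{aligned}
        y &= r\cos\varphi \\
        x &= (r\sin\varphi)\,\sin\theta \\
        z &= -(r\sin\varphi)\,\cos\theta
        \end{aligned}
        \qquad (r = 5)

    So x needs both sines (the in-plane length r*sin(phi) resolved through sin(theta)), z is that same in-plane length resolved through cos(theta) with a minus sign because theta starts at -z, and y depends only on cos(phi). These are exactly the three code lines.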


  • Approach to Authenticate Clients to TCP Server

    - by dab
    I'm writing a Server/Client application where clients will connect to the server. What I want to do is make sure that the client connecting to the server is actually using my protocol, so that I can "trust" the data being sent from the client to the server. What I thought about doing is creating a sort of hash on the client's machine that follows a particular algorithm. What I did in a previous version was to take their IP address, the client version, and a few other attributes of the client, and send it as a calculated hash to the server, which then took their IP and the version of the protocol the client claimed to be using and calculated that number to see if they matched. This works OK until you get clients that connect from within a router environment, where their internal IP is different from their external IP. My fix for this was to pass the client's internal IP, used to calculate this hash, along with the authentication protocol. My fear is that this approach is not secure enough, since I'm passing the data used to create the "auth hash". Here's an example of what I'm talking about:

        Client IP: 192.168.1.10, Version: 2.4.5.2
        hash = 2*4*5*1 * (1+9+2) * (1+6+8) * (1) * (1+0)
        Client connects to server
        Client sends: auth hash, IP, version
        Server calculates that info, and accepts or denies the hash.

    Before I go and come up with another algorithm to prove a client to a server (or use this existing algorithm), I was wondering if there are any existing, proven, and secure systems out there for generating a hash that both sides can generate with general knowledge. The server won't know about the client until the very first connection is established. The protocol's intent is to manage a network of clients who will contribute data to the server periodically. New clients are added simply by connecting the client to the server and "registering" with it. So a client connects to the server for the first time and registers its info (MAC address or some other kind of unique computer identifier); then, when it connects again, the server recognizes that client and associates it with its data in the database.
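    For what it's worth, the standard, proven construction for this is a keyed hash (HMAC) over a server-issued challenge, with a per-client secret exchanged once at registration. Because the server sends a fresh random nonce each time, a captured response can't be replayed, and the secret itself never travels after registration. A minimal sketch in C# using .NET's built-in HMACSHA256 (how the key is stored and exchanged at registration is left to the application):

        using System;
        using System.Security.Cryptography;
        using System.Text;

        public static class ClientAuth
        {
            // Both sides compute this; the server compares the client's answer with
            // its own. sharedSecret was established at registration, serverNonce is
            // random per connection.
            public static string ComputeResponse(byte[] sharedSecret, byte[] serverNonce, string clientId)
            {
                byte[] idBytes = Encoding.UTF8.GetBytes(clientId);
                byte[] message = new byte[idBytes.Length + serverNonce.Length];
                Buffer.BlockCopy(idBytes, 0, message, 0, idBytes.Length);
                Buffer.BlockCopy(serverNonce, 0, message, idBytes.Length, serverNonce.Length);

                using (var hmac = new HMACSHA256(sharedSecret))
                {
                    return Convert.ToBase64String(hmac.ComputeHash(message));
                }
            }
        }

    This avoids deriving the hash from guessable attributes like IP and version entirely, which also sidesteps the NAT/internal-IP problem described above.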


  • Reverse-engineer SharePoint fields, content types and list instances—Part 2

    - by ybbest
    Reverse-engineer SharePoint fields, content types and list instance—Part 1
    Reverse-engineer SharePoint fields, content types and list instance—Part 2

    In Part 1 of this series, I demonstrated how to use VS2010 to reverse-engineer SharePoint fields, content types and list instances. In Part 2, I will demonstrate how to do the same using CKS:Dev. CKS:Dev extends the Visual Studio 2010 SharePoint project system with advanced templates and tools. Using these extensions you will be able to find relevant information from your SharePoint environments without leaving Visual Studio, you will be more productive while developing SharePoint components, and you will have greater deployment capabilities on your local SharePoint installation. You can download the complete solution here.

    1. First, download and install the appropriate CKS:Dev version from CodePlex: if you are using SharePoint Foundation 2010, download and install the SharePoint Foundation 2010 version; if you are using SharePoint Server 2010, download and install the SharePoint Server 2010 version.
    2. After installation, restart Visual Studio and create an empty SharePoint project.
    3. Go to View > Server Explorer.
    4. Add a SharePoint web application connection to the Server Explorer.
    5. After adding the connection, you can browse the contents of the web application.
    6. Go to Site Columns > YBBEST (a custom group of your own choice), right-click the YBBEST folder and click Import Site Columns.
    7. Go to Content Types > YBBEST (a custom group of your own choice), right-click the YBBEST folder and click Import Content Types.
    8. After the import completes, you can find the fields and content types in the SharePoint project below. Of course, you will need to make some modifications to your current project to make it work.
    9. Next, create list instances using the list instance item template in Visual Studio.
    10. Finally, create lookup columns using feature receivers (see the sketch after this list), and the final project will look like this. You can download the complete solution here.
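    The lookup columns in step 10 have to be created in code because a lookup needs the target list's GUID, which is only known at activation time. A minimal feature receiver sketch (the list and column names here are hypothetical):

        using Microsoft.SharePoint;

        public class LookupColumnReceiver : SPFeatureReceiver
        {
            public override void FeatureActivated(SPFeatureReceiverProperties properties)
            {
                SPWeb web = (SPWeb)properties.Feature.Parent;   // web-scoped feature
                SPList targetList = web.Lists["YBBestLookupList"];

                // AddLookup resolves the target list's GUID at activation time,
                // which is why this column can't simply be declared in the manifest.
                string internalName = web.Fields.AddLookup("YBBestLookup", targetList.ID, false);
                SPFieldLookup lookup = (SPFieldLookup)web.Fields.GetFieldByInternalName(internalName);
                lookup.LookupField = "Title";   // column displayed from the target list
                lookup.Update();
            }
        }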


  • SQL Server – Learning SQL Server Performance: Indexing Basics – Video

    - by pinaldave
    Today I was reminded of one of my older cartoons, created years ago for Indexing and Performance. Every single time performance is discussed, indexes are mentioned along with it. In recent times, data and application complexity has grown continuously. Organizations' demand for faster query response, performance, and scalability keeps increasing, and developers and DBAs now need to write efficient code to achieve this.

    DBAs and Developers: A DBA's role is critical, because a production environment has to run 24x7; hence maintenance, troubleshooting, and quick resolutions are the need of the hour. The first baby step in any performance tuning exercise in SQL Server involves creating, analysing, and maintaining indexes. Though we learnt indexing concepts in our college days, indexing implementation inside SQL Server can vary. Understanding this behaviour and designing our applications appropriately will make sure the application performs to its highest potential.

    Video Learning: Vinod Kumar and I often thought about this and realized that a practical understanding of indexes is very important. One cannot master every single aspect of indexes, but there is a minimum expertise one should gain if performance is a concern. We decided to build a course that addresses just the practical aspects of performance. In this course we explored some of these indexing fundamentals and elaborated on how SQL Server goes about using indexes. At the end of this course you will know the basic structure of indexes, practical insights into implementation, and maintenance tips and tricks revolving around indexes. Finally, we introduce SQL Server 2012 columnstore indexes. We have refrained from discussing the internal storage structure of indexes and have instead taken a more practical, demo-oriented approach to explain these core concepts.

    Course Outline: Here are the salient topics of the course. We have explained every single concept along with a practical demonstration, and additionally shared our personal scripts.

    Introduction
    Fundamentals of Indexing: Index Fundamentals; Index Fundamentals – Visual Representation
    Practical Indexing Implementation Techniques: Primary Key; Over Indexing; Duplicate Index; Clustered Index; Unique Index; Included Columns; Filtered Index; Disabled Index; Index Maintenance and Defragmentation; Introduction to Columnstore Index
    Indexing Practical Performance Tips and Tricks: Index and Page Types; Index and Non-Deterministic Columns; Index and SET Values; Importance of Clustered Index; Effect of Compression and Fillfactor; Index and Functions; Dynamic Management Views (DMV) – Fillfactor; Table Scan, Index Scan and Index Seek; Index and Order of Columns
    Final Checklist: Index and Performance

    Well, we believe we have done our part; now we are waiting for your comments and feedback.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology, Video


  • Silverlight 3 Dynamic DataGrid RowStyle Ignored

    - by antoinne85
    I subclassed the standard DataGrid into SpecialDataGrid so I could override the KeyDown/KeyUp events. Other than that, SpecialDataGrid is exactly the same as DataGrid. At run-time I dynamically create a bunch of these SpecialDataGrids. When a user clicks a row in the grid it highlights, which is fine, but when that grid loses focus it leaves a residual gray highlight on the last-selected row, which is not fine. I've heavily edited the RowStyle and CellStyle I'm applying to these grids to more or less remove all formatting. I even added a static SpecialDataGrid to the app with test data so I could see if the RowStyle was somehow incorrect, applying the same RowStyle and CellStyle that I'm applying to the dynamically generated one (you'll see it in the code below). What I saw was that the "test grid" showed up exactly as I wanted, while the real grid ignores part of the RowStyle! Has anyone run into this issue, or have any ideas of how to correct it? Some source and images follow.

    Creating the SpecialDataGrid:

        //Set up a datagrid.
        SpecialDataGrid radio_datagrid = new SpecialDataGrid();
        radio_datagrid.ItemsSource = radios;
        radio_datagrid.AutoGenerateColumns = false;
        radio_datagrid.HeadersVisibility = DataGridHeadersVisibility.None;
        radio_datagrid.BorderThickness = new Thickness(0);
        radio_datagrid.HorizontalAlignment = HorizontalAlignment.Stretch;
        radio_datagrid.IsReadOnly = true;
        radio_datagrid.MouseLeftButtonUp += new MouseButtonEventHandler(option_datagrid_MouseLeftButtonUp);
        radio_datagrid.KeyDown += new KeyEventHandler(radio_datagrid_KeyDown);
        radio_datagrid.KeyUp += new KeyEventHandler(radio_datagrid_KeyUp);

        //Radio column.
        DataGridTemplateColumn temp_col = new DataGridTemplateColumn();
        temp_col.CellTemplate = (DataTemplate)this.Resources["RadioColumnTemplate"];
        temp_col.Width = new DataGridLength(20);
        radio_datagrid.Columns.Add(temp_col);

        //Description column.
        DataGridTextColumn txt_col = new DataGridTextColumn();
        txt_col.Binding = new Binding("optionlabel");
        txt_col.Width = new DataGridLength(350);
        radio_datagrid.Columns.Add(txt_col);

        //Product code column.
        txt_col = new DataGridTextColumn();
        txt_col.Binding = new Binding("optioncode");
        txt_col.Width = new DataGridLength(80);
        radio_datagrid.Columns.Add(txt_col);

        //Price column.
        txt_col = new DataGridTextColumn();
        txt_col.Binding = new Binding("optionprice");
        txt_col.Width = new DataGridLength(80);
        radio_datagrid.Columns.Add(txt_col);

        //View column.
        temp_col = new DataGridTemplateColumn();
        temp_col.CellTemplate = (DataTemplate)this.Resources["HyperlinkButtonColumnTemplate"];
        temp_col.Width = new DataGridLength(30);
        radio_datagrid.Columns.Add(temp_col);

        radio_datagrid.RowStyle = (Style)this.Resources["StyleDataGridRowNoAlternating"];
        radio_datagrid.CellStyle = (Style)this.Resources["Style_DataGridCell_NoHighlight"];

    Example image: the lower DataGrid appears that way regardless of what you do to it, with no highlighting of any sort and certainly no residual highlights. Any idea what's keeping this from being applied to the first?
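    One workaround people use for the residual highlight (a sketch, not necessarily the style-level fix being asked for, assuming the dynamically created grids above): clear the selection whenever a grid loses focus, so there is no "last selected" row left to paint gray.

        // Hypothetical workaround: wire this up alongside the other handlers above.
        radio_datagrid.LostFocus += new RoutedEventHandler(radio_datagrid_LostFocus);

        void radio_datagrid_LostFocus(object sender, RoutedEventArgs e)
        {
            // Dropping the selection removes the residual gray row highlight.
            ((SpecialDataGrid)sender).SelectedItem = null;
        }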


  • Index was out of range. Must be non-negative and less than the size of the collection. Parameter: Index

    - by user356973
    Dear Telerik Team, when I try to display data using RadGrid I get an error like "Index was out of range. Must be non-negative and less than the size of the collection. Parameter: Index". Here is my markup:

        <telerik:RadGrid ID="gvCktMap" BorderColor="White" runat="server" AutoGenerateColumns="true"
            AllowSorting="true" BackColor="White" AllowPaging="true" PageSize="25" GridLines="None"
            OnPageIndexChanging="gvCktMap_PageIndexChanging" OnRowCancelingEdit="gvCktMap_RowCancelingEdit"
            OnRowCommand="gvCktMap_RowCommand" OnRowUpdating="gvCktMap_RowUpdating"
            OnRowDataBound="gvCktMap_RowDataBound" OnSorting="gvCktMap_Sorting"
            OnRowEditing="gvCktMap_RowEditing" ShowGroupPanel="True" EnableHeaderContextMenu="true"
            EnableHeaderContextFilterMenu="true" AllowMultiRowSelection="true" AllowFilteringByColumn="True"
            AllowCustomPaging="false" OnItemCreated="gvCktMap_ItemCreated" EnableViewState="false"
            OnNeedDataSource="gvCktMap_NeedDataSource" OnItemUpdated="gvCktMap_ItemUpdated">
            <MasterTableView DataKeyNames="sId" UseAllDataFields="true">
                <Columns>
                    <telerik:GridBoundColumn UniqueName="sId" HeaderText="sId" DataField="sId" Visible="false"></telerik:GridBoundColumn>
                    <telerik:GridBoundColumn UniqueName="orderId" HeaderText="orderId" DataField="orderId"></telerik:GridBoundColumn>
                    <telerik:GridBoundColumn UniqueName="REJ" HeaderText="REJ" DataField="REJ"></telerik:GridBoundColumn>
                    <telerik:GridBoundColumn UniqueName="Desc" HeaderText="Desc" DataField="Desc"></telerik:GridBoundColumn>
                    <telerik:GridBoundColumn UniqueName="CustomerName" HeaderText="CustomerName" DataField="CustomerName"></telerik:GridBoundColumn>
                    <telerik:GridBoundColumn UniqueName="MarketName" HeaderText="MarketName" DataField="MarketName"></telerik:GridBoundColumn>
                    <telerik:GridBoundColumn UniqueName="HeadendName" HeaderText="HeadendName" DataField="HeadendName"></telerik:GridBoundColumn>
                    <telerik:GridBoundColumn UniqueName="SiteName" HeaderText="SiteName" DataField="SiteName"></telerik:GridBoundColumn>
                    <telerik:GridBoundColumn UniqueName="TaskStatus" HeaderText="TaskStatus" DataField="TaskStatus"></telerik:GridBoundColumn>
                </Columns>
            </MasterTableView>
        </telerik:RadGrid>

    Here is my code-behind:

        private void bingGrid()
        {
            try
            {
                gvCktMap.Columns.Clear();
                DataSet dsResult = new DataSet();
                DataSet dsEditItems = new DataSet();
                dsEditItems.ReadXml(Server.MapPath("XMLS/" + Session["TaskID"].ToString() + ".xml"));
                clsSearch_BL clsObj = new clsSearch_BL();
                clsObj.TaskID = (string)Session["TaskID"];
                clsObj.CustName = (string)Session["CustName"];
                clsObj.MarketName = (string)Session["MarketName"];
                clsObj.HeadendName = (string)Session["HeadendName"];
                clsObj.SiteName = (string)Session["SiteName"];
                clsObj.TaskStatus = (string)Session["TaskStatus"];
                clsObj.OrdType = (string)Session["OrdType"];
                clsObj.OrdStatus = (string)Session["OrdStatus"];
                clsObj.ProName = (string)Session["ProName"];
                clsObj.LOC = (string)Session["LOC"];
                dsResult = clsObj.getSearchResults_BL(clsObj);
                Session["SearchRes"] = dsResult;
                DataTable dtFilter = new DataTable();
                DataColumn dtCol = new DataColumn("FilterBy");
                dtFilter.Columns.Add(dtCol);
                dtCol = new DataColumn("DataType");
                dtFilter.Columns.Add(dtCol);
                gvCktMap.DataSource = dsResult;
                gvCktMap.DataBind();
            }
            catch (Exception ex) { }
        }

    If I remove <MasterTableView></MasterTableView> it works fine without any error, but for some reasons I need to use <MasterTableView></MasterTableView>. Can anyone help me fix this error? Thanks in advance.


  • New Skool Crosstabbing

    - by Tim Dexter
    A while back I spoke about having to go back to BIP's original crosstabbing solution to achieve a certain layout. Hok Min has provided a 'man' page for the new crosstab/pivot builder for 10.1.3.4.1 users. This will make the documentation drop, but for now, get it here! The old, hand method is still available, but this new approach is more efficient and flexible. That said, you may need to get into the crosstab code to tweak it where the crosstab dialog cannot help. I had to do this just this week, but more on that later. The following explains how the crosstab wizard builds the crosstab and what the fields inside the resulting template structure are there for. To create the crosstab, a new XDO command "<?crosstab:...?>" has been created.

    XDO Command: <?crosstab: ctvarname; data-element; rows; columns; measures; aggregation?>

    Parameter reference:
    - ctvarname: Crosstab variable name. This is automatically generated by the Add-in. Example: C123
    - data-element: The XML data element that contains the data. Example: "//ROW"
    - rows: A list of XML elements for row headers. The ordering information is specified within "{" and "}". The first attribute is the sort element; leaving it blank means the sort element is the same as the row header element. The attribute "o" means order; its value can be "a" for ascending or "d" for descending. The attribute "t" means type; its value can be "t" for text or "n" for numeric. There can be more than one sort element, for example "emp-full-name {emp-lastname,o=a,t=n}{emp-firstname,o=a,t=n}", which sorts employees by last name and then first name. Example: "Region{,o=a,t=t}, District{,o=a,t=t}": the first row header is "Region", sorted by "Region", ascending, text; the second row header is "District", sorted by "District", ascending, text.
    - columns: A list of XML elements for column headers, with the same ordering syntax as rows. Example: "ProductsBrand{,o=a,t=t}, PeriodYear{,o=a,t=t}": the first column header is "ProductsBrand", sorted by "ProductsBrand", ascending, text; the second column header is "PeriodYear", sorted by "PeriodYear", ascending, text.
    - measures: A list of XML elements for measures. Example: "Revenue, PrevRevenue"
    - aggregation: The aggregation function name. Currently only "sum" is supported.

    Using the Oracle BI Publisher Template Builder for Word add-in, we are able to construct the following pivot table. The generated XDO command for this pivot table is as follows:

        <?crosstab:c547; "//ROW";"Region{,o=a,t=t}, District{,o=a,t=t}"; "ProductsBrand{,o=a,t=t},PeriodYear{,o=a,t=t}"; "Revenue, PrevRevenue";"sum"?>

    Running the command on the given XML data file generates the XML file "cttree.xml". Each XPath in "cttree.xml" is described below (Element | XPath | Count | Description):

    C0 | /cttree/C0 | 1 | Contains elements related to columns.
    C1 | /cttree/C0/C1 | 4 | The first-level column "ProductsBrand". There are four distinct values; they are shown in the label H element.
    CS | /cttree/C0/C1/CS | 4 | The column-span value, used to format the crosstab table.
    H | /cttree/C0/C1/H | 4 | The column header label. There are four distinct values: "Enterprise", "Magicolor", "McCloskey" and "Valspar".
    T1 | /cttree/C0/C1/T1 | 4 | The sum for measure 1, which is Revenue.
    T2 | /cttree/C0/C1/T2 | 4 | The sum for measure 2, which is PrevRevenue.
    C2 | /cttree/C0/C1/C2 | 8 | The second-level column "PeriodYear", the second group-by key. There are two distinct values, "2001" and "2002".
    H | /cttree/C0/C1/C2/H | 8 | The column header label. There are two distinct values, "2001" and "2002"; since it is under C1, the total number of entries is 4 x 2 => 8.
    T1 | /cttree/C0/C1/C2/T1 | 8 | The sum for measure 1 "Revenue".
    T2 | /cttree/C0/C1/C2/T2 | 8 | The sum for measure 2 "PrevRevenue".
    M0 | /cttree/M0 | 1 | Contains elements related to measures.
    M1 | /cttree/M0/M1 | 1 | Contains the summary for measure 1.
    H | /cttree/M0/M1/H | 1 | The measure 1 label, "Revenue".
    T | /cttree/M0/M1/T | 1 | The sum of measure 1 for the entire XPath from "//ROW".
    M2 | /cttree/M0/M2 | 1 | Contains the summary for measure 2.
    H | /cttree/M0/M2/H | 1 | The measure 2 label, "PrevRevenue".
    T | /cttree/M0/M2/T | 1 | The sum of measure 2 for the entire XPath from "//ROW".
    R0 | /cttree/R0 | 1 | Contains elements related to rows.
    R1 | /cttree/R0/R1 | 4 | The first-level row "Region". There are four distinct values; they are shown in the label H element.
    H | /cttree/R0/R1/H | 4 | The row header label for "Region". There are four distinct values: "CENTRAL REGION", "EASTERN REGION", "SOUTHERN REGION" and "WESTERN REGION".
    RS | /cttree/R0/R1/RS | 4 | The row-span value, used to format the crosstab table.
    T1 | /cttree/R0/R1/T1 | 4 | The sum of measure 1 "Revenue" for each distinct "Region" value.
    T2 | /cttree/R0/R1/T2 | 4 | The sum of measure 2 "PrevRevenue" for each distinct "Region" value.
    R1C1 | /cttree/R0/R1/R1C1 | 16 | Contains elements from combining R1 and C1. There are 4 distinct values for "Region" and 4 for "ProductsBrand", so the combination is 4 x 4 => 16.
    T1 | /cttree/R0/R1/R1C1/T1 | 16 | The sum of measure 1 "Revenue" for each combination of "Region" and "ProductsBrand".
    T2 | /cttree/R0/R1/R1C1/T2 | 16 | The sum of measure 2 "PrevRevenue" for each combination of "Region" and "ProductsBrand".
    R1C2 | /cttree/R0/R1/R1C1/R1C2 | 32 | Contains elements from combining R1, C1 and C2: 4 x 4 x 2 => 32.
    T1 | /cttree/R0/R1/R1C1/R1C2/T1 | 32 | The sum of measure 1 "Revenue" for each combination of "Region", "ProductsBrand" and "PeriodYear".
    T2 | /cttree/R0/R1/R1C1/R1C2/T2 | 32 | The sum of measure 2 "PrevRevenue" for each combination of "Region", "ProductsBrand" and "PeriodYear".
    R2 | /cttree/R0/R1/R2 | 18 | Contains elements from combining R1 "Region" and R2 "District". Since the list of values in R2 depends on R1, the number of entries is not a simple multiplication.
    H | /cttree/R0/R1/R2/H | 18 | The row header label for R2 "District".
    R1N | /cttree/R0/R1/R2/R1N | 18 | The R2 position number within R1, used to check whether it is the last row and draw the table border accordingly.
    T1 | /cttree/R0/R1/R2/T1 | 18 | The sum of measure 1 "Revenue" for each combination of "Region" and "District".
    T2 | /cttree/R0/R1/R2/T2 | 18 | The sum of measure 2 "PrevRevenue" for each combination of "Region" and "District".
    R2C1 | /cttree/R0/R1/R2/R2C1 | 72 | Contains elements from combining R1, R2 and C1.
    T1 | /cttree/R0/R1/R2/R2C1/T1 | 72 | The sum of measure 1 "Revenue" for each combination of "Region", "District" and "ProductsBrand".
    T2 | /cttree/R0/R1/R2/R2C1/T2 | 72 | The sum of measure 2 "PrevRevenue" for each combination of "Region", "District" and "ProductsBrand".
    R2C2 | /cttree/R0/R1/R2/R2C1/R2C2 | 144 | Contains elements from combining R1, R2, C1 and C2, which gives the finest level of detail.
    M1 | /cttree/R0/R1/R2/R2C1/R2C2/M1 | 144 | The sum of measure 1 "Revenue".
    M2 | /cttree/R0/R1/R2/R2C1/R2C2/M2 | 144 | The sum of measure 2 "PrevRevenue".

    Lots to read and digest, I know!

    Customization: One new feature I discovered this week is the ability to show one column and sort by another. I had a data set that was extracting month abbreviations; we wanted to show the months across the top and some row headers to the side. As you may know, XSL is not great with dates, especially recognising month names: it just wants to sort them alphabetically, so Apr comes before Jan, etc. A way around this is to generate a month number alongside the month and use that to sort. We can do that in the crosstab; sadly it's not exposed in the UI yet, but it's doable. Go back up and take a look at the initial crosstab command, especially the rows and columns entries. In there you will find the sort criteria:

        "ProductsBrand{,o=a,t=t}, PeriodYear{,o=a,t=t}"

    Notice those leading commas inside the curly braces? Because there is no field preceding them, the crosstab sorts on the column before the brace, i.e. PeriodYear. But you can insert another column from the data set to sort by. To get my sort working how I needed:

        <?crosstab:c794;"current-group()";"_Fund_Type_._Fund_Type_Display_{_Fund_Type_._Fund_Type_Sort_,o=a,t=n}";"_Fiscal_Period__Amount__._Amt_Fm_Disp_Abbr_{_Fiscal_Period__Amount__._Amt_Fiscal_Month_Sort_,o=a,t=n}";"_Execution_Facts_._Amt_";"sum"?>

    Excuse the horribly verbose XML tags, good ol' BIEE :0) The emboldened columns are not in the crosstab but are in the data set. I just opened up the field, dropped them in, and changed the type (t) value to 'n', for number, instead of the default 't', and my crosstab started sorting how I wanted it. If you find other tips and tricks, please share in the comments.


  • OWB 11gR2 – Degenerate Dimensions

    - by David Allan
    Ever wondered how to build degenerate dimensions in OWB and get the benefits of slowly changing dimensions and cube loading? It's now possible, through some changes in 11gR2 that make dimension and cube loading much more flexible. This lets you get the benefits of OWB's surrogate key handling and slowly changing dimension references when loading the fact table, while still using degenerate dimensions (see Ralph Kimball's degenerate dimensions design tip). Here we will see how to use the cube operator to load slowly changing, regular and degenerate dimensions. The cube and cube operator can now work with dimensions which have no surrogate key as well as dimensions with surrogates, so you can get the benefit of the cube loading and incorporate the degenerate dimension loading.

    What you need to do is create a dimension in OWB that is used purely for ETL metadata; the dimension itself is never deployed (its table is, but holds no data). It:
    - has no surrogate keys;
    - has a single level with a business attribute (the degenerate dimension data) and a dummy attribute, say description, just to pass OWB validation.

    When this degenerate dimension is added into a cube, you will need to configure the fact table created and set the 'Deployable' flag to FALSE for the foreign key generated to the degenerate dimension table. The degenerate dimension reference will then be in the cube operator and used when matching.

    Create the degenerate dimension using the regular wizard. Delete the Surrogate ID attribute; this is not needed. Define a level name for the dimension member (any name). After the wizard has completed, in the editor delete the hierarchy STANDARD that was automatically generated; there is only a single level, so no hierarchy is needed and it shouldn't really be created. Deploy the implementing table DD_ORDERNUMBER_TAB; this needs to be deployed but with no data (the mapping here will do a left outer join of the source data with the empty degenerate dimension table).

    Now go ahead and build your cube: use the regular TIMES dimension, for example, and your degenerate dimension DD_ORDERNUMBER; you can add in SCD dimensions etc. Configure the fact table created and set Deployable to false, so the foreign key does not get generated. You can now use the cube in a mapping and load data into the fact table via the cube operator; this will look after surrogate lookups and slowly changing dimension references.

    If you generate the SQL, you will see that the ON clause for matching includes the columns representing the degenerate dimension columns. Here we have seen how this use case for loading fact tables using degenerate dimensions becomes a whole lot simpler using OWB 11gR2. I'm sure there are other use cases where this mix of dimensions with surrogate and regular identifiers is useful; fact tables partitioned by date columns are another classic example that this will greatly help with, and it makes the cube operator much more useful. Good to hear any comments.


  • How to structure classes in the filesystem?

    - by da_b0uncer
    I have a few (view) classes: Table, Tree, PagingColumn, SelectionColumn, SparkLineColumn, TimeColumn. Currently they're flat under app/view, like this:

        app/view/Table
        app/view/Tree
        app/view/PagingColumn
        ...

    I thought about restructuring it, because the Trees and Tables use the columns, but some columns only work in a tree, some work in trees and tables, and in the future there will probably be some that only work in tables, I don't know. My first idea was like this:

        app/view/Table
        app/view/Tree
        app/view/column/PagingColumn
        app/view/column/SelectionColumn
        app/view/column/SparkLineColumn
        app/view/column/TimeColumn

    But since the SelectionColumn is explicitly for trees, I fear that future developers could get the idea of misusing it. So how do I restructure it properly? Like this:

        app/view/table/panel/Table
        app/view/tree/panel/Tree
        app/view/tree/column/PagingColumn
        app/view/tree/column/SelectionColumn
        app/view/column/SparkLineColumn
        app/view/column/TimeColumn

    Or like this:

        app/view/Table
        app/view/Tree
        app/view/column/SparkLineColumn
        app/view/column/TimeColumn
        app/view/column/tree/PagingColumn
        app/view/column/tree/SelectionColumn
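    For what it's worth, folder layout alone can't stop misuse; one complementary approach is to encode compatibility in the type system with marker interfaces. A minimal sketch (in C# as pseudocode for the idea; the app/view layout above suggests an ExtJS-style project, where mixins would play the same role, and all names here are hypothetical):

        // Marker interfaces: state where a column may be used, independent of folders.
        public interface ITreeColumn { }
        public interface ITableColumn { }

        public class SelectionColumn : ITreeColumn { }                 // tree-only
        public class PagingColumn : ITreeColumn { }                    // tree-only
        public class SparkLineColumn : ITreeColumn, ITableColumn { }   // works in both
        public class TimeColumn : ITreeColumn, ITableColumn { }        // works in both

        public class Tree
        {
            // The compiler now rejects a table-only column here.
            public void AddColumn(ITreeColumn column) { /* ... */ }
        }

    With compatibility made explicit like this, the flatter app/view/column layout stays safe even for future developers.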


  • SQL SERVER – DMV – sys.dm_os_waiting_tasks and sys.dm_exec_requests – Wait Type – Day 4 of 28

    - by pinaldave
    Previously, we covered the DMV sys.dm_os_wait_stats and saw how it can be useful to identify the major resource bottleneck. However, at the same time, we discussed that this is only useful when we are looking at an instance-level picture. Quite often we want to know about the processes going on in our server at a given instant. Here is the query for that. This DMV query is written with the following in mind: we want to analyze the queries that are currently running, or that recently ran and whose plan is still in the cache.

        SELECT dm_ws.wait_duration_ms,
            dm_ws.wait_type,
            dm_es.status,
            dm_t.TEXT,
            dm_qp.query_plan,
            dm_ws.session_ID,
            dm_es.cpu_time,
            dm_es.memory_usage,
            dm_es.logical_reads,
            dm_es.total_elapsed_time,
            dm_es.program_name,
            DB_NAME(dm_r.database_id) DatabaseName,
            -- Optional columns
            dm_ws.blocking_session_id,
            dm_r.wait_resource,
            dm_es.login_name,
            dm_r.command,
            dm_r.last_wait_type
        FROM sys.dm_os_waiting_tasks dm_ws
        INNER JOIN sys.dm_exec_requests dm_r ON dm_ws.session_id = dm_r.session_id
        INNER JOIN sys.dm_exec_sessions dm_es ON dm_es.session_id = dm_r.session_id
        CROSS APPLY sys.dm_exec_sql_text (dm_r.sql_handle) dm_t
        CROSS APPLY sys.dm_exec_query_plan (dm_r.plan_handle) dm_qp
        WHERE dm_es.is_user_process = 1
        GO

    You can change CROSS APPLY to OUTER APPLY if you want to see all the details that are otherwise omitted because of the plan cache. Let us analyze the result of the above query and see how it can help identify the query and the kind of wait type it creates. The query returns various columns, several of which provide very important details:

    wait_duration_ms – the current wait for the query executing at that point in time
    wait_type – the current wait type for the query
    text – the query text
    query_plan – when clicked, displays the query plan

    Other important information, like cpu_time, memory_usage, and logical_reads, can be read from the query results as well. In future posts in this series, we will see how, once the wait type is identified, we can attempt to reduce it. Read all the posts in the Wait Types and Queue series.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: DMV, Pinal Dave, PostADay, SQL, SQL Authority, SQL DMV, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology


  • Excel Macro Runtime error 428 in Excel 2003

    - by Adam
    Hi, I have created an .xlt Excel template which works fine in Excel 2007 in compatibility mode and shows no errors on the compatibility check. The template runs a number of macros which create pivot tables and charts. When a colleague tries to run the same .xlt in Excel 2003, they get a runtime error 428 (Object does not support this property or method). The macro fails at this point:

        ActiveWorkbook.PivotCaches.Create(SourceType:=xlDatabase, SourceData:= _
            "raw!R1C1:R65536C37", Version:=xlPivotTableVersion10).CreatePivotTable _
            TableDestination:="Frontpage!R7C1", TableName:="PivotTable2", _
            DefaultVersion:=xlPivotTableVersion10

    Any help would be appreciated. This is the full macro:

        Sub Auto_Open()
        '
        ' ImportData Macro
        ' Macro to import data. Data must be on your local D: drive and named raw.csv
        '
            Sheets("raw").Select
            With ActiveSheet.QueryTables.Add(Connection:="TEXT;d:\raw.csv", Destination:=Range("$A$1"))
                .Name = "raw_1"
                .FieldNames = True
                .RowNumbers = False
                .FillAdjacentFormulas = False
                .PreserveFormatting = True
                .RefreshOnFileOpen = False
                .RefreshStyle = xlInsertDeleteCells
                .SavePassword = False
                .SaveData = True
                .AdjustColumnWidth = True
                .RefreshPeriod = 0
                .TextFilePromptOnRefresh = False
                .TextFilePlatform = 850
                .TextFileStartRow = 1
                .TextFileParseType = xlDelimited
                .TextFileTextQualifier = xlTextQualifierDoubleQuote
                .TextFileConsecutiveDelimiter = False
                .TextFileTabDelimiter = False
                .TextFileSemicolonDelimiter = False
                .TextFileCommaDelimiter = True
                .TextFileSpaceDelimiter = False
                .TextFileColumnDataTypes = Array(1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, _
                    1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1)
                .TextFileTrailingMinusNumbers = True
                .Refresh BackgroundQuery:=False
            End With
        '
        ' AddMonthColumn Macro
        '
            Sheets("raw").Select
            Range("AK1").Select
            ActiveCell.FormulaR1C1 = "Month"
            Range("AK2").FormulaR1C1 = "=DATE(YEAR(RC[-36]),MONTH(RC[-36]),1)"
            LastRow = ActiveSheet.UsedRange.Rows.Count
            Range("AK2").AutoFill Destination:=Range("AK2:AK" & LastRow)
            Columns("AK:AK").EntireColumn.AutoFit
            Columns("AK:AK").Select
            Selection.NumberFormat = "mmmm"
            With Selection
                .HorizontalAlignment = xlCenter
            End With
            Columns("AK:AK").EntireColumn.AutoFit
            Selection.Copy
            Selection.PasteSpecial Paste:=xlPasteValues, Operation:=xlNone, SkipBlanks:=False, Transpose:=False
        '
        ' Add Report Information [Text]
        '
            Sheets("Frontpage").Select
            Range("A2:N2").Select
            Selection.Merge
            ActiveCell.FormulaR1C1 = "Service Activity Report"
            With Selection.Font
                .Size = 20
            End With
            Range("A3:N3").Select
            Selection.Merge
            ActiveCell.FormulaR1C1 = InputBox("Customer Name")
            With Selection
                .HorizontalAlignment = xlCenter
                .VerticalAlignment = xlCenter
            End With
            Range("A4:N4").Select
            Selection.Merge
            ActiveCell.FormulaR1C1 = InputBox("Date Range dd/mm/yyyy - dd/mm/yyyy")
            With Selection
                .HorizontalAlignment = xlCenter
                .VerticalAlignment = xlCenter
            End With
        '
        ' IncidentsbyPriority Macro
        '
            Sheets("Frontpage").Select
            Range("A7").Select
            ActiveWorkbook.PivotCaches.Create(SourceType:=xlDatabase, SourceData:= _
                "raw!R1C1:R65536C37", Version:=xlPivotTableVersion10).CreatePivotTable _
                TableDestination:="Frontpage!R7C1", TableName:="PivotTable2", _
                DefaultVersion:=xlPivotTableVersion10
            Sheets("Frontpage").Select
            Cells(7, 1).Select
            ActiveSheet.Shapes.AddChart.Select
            ActiveChart.SetSourceData Source:=Range("Frontpage!$A$7:$H$22")
            ActiveChart.ChartType = xlColumnClustered
            With ActiveSheet.PivotTables("PivotTable2").PivotFields("Priority")
                .Orientation = xlRowField
                .Position = 1
            End With
            ActiveSheet.PivotTables("PivotTable2").AddDataField ActiveSheet.PivotTables( _
                "PivotTable2").PivotFields("Case ID"), "Count of Case ID", xlCount
            ActiveChart.Parent.Name = "IncidentsbyPriority"
            ActiveChart.ChartTitle.Text = "Incidents by Priority"
            Dim RngToCover As Range
            Dim ChtOb As ChartObject
            Set RngToCover = ActiveSheet.Range("D7:L16")
            Set ChtOb = ActiveSheet.ChartObjects("IncidentsbyPriority")
            ChtOb.Height = RngToCover.Height ' resize
            ChtOb.Width = RngToCover.Width ' resize
            ChtOb.Top = RngToCover.Top ' reposition
            ChtOb.Left = RngToCover.Left ' reposition
        '
        ' IncidentbyMonth Macro
        '
            Sheets("Frontpage").Select
            ActiveWorkbook.PivotCaches.Create(SourceType:=xlDatabase, SourceData:= _
                "raw!R1C1:R65536C37", Version:=xlPivotTableVersion10).CreatePivotTable _
                TableDestination:="Frontpage!R18C1", TableName:="PivotTable4", _
                DefaultVersion:=xlPivotTableVersion10
            Sheets("Frontpage").Select
            Cells(18, 1).Select
            ActiveSheet.Shapes.AddChart.Select
            ActiveChart.SetSourceData Source:=Range("Frontpage!$A$18:$H$38")
            ActiveChart.ChartType = xlColumnClustered
            With ActiveSheet.PivotTables("PivotTable4").PivotFields("Month")
                .Orientation = xlRowField
                .Position = 1
            End With
            ActiveSheet.PivotTables("PivotTable4").AddDataField ActiveSheet.PivotTables( _
                "PivotTable4").PivotFields("Case ID"), "Count of Case ID", xlCount
            ActiveChart.Parent.Name = "IncidentbyMonth"
            ActiveChart.ChartTitle.Text = "Incidents by Month"
            Dim RngToCover2 As Range
            Dim ChtOb2 As ChartObject
            Set RngToCover2 = ActiveSheet.Range("D18:L30")
            Set ChtOb2 = ActiveSheet.ChartObjects("IncidentbyMonth")
            ChtOb2.Height = RngToCover2.Height ' resize
            ChtOb2.Width = RngToCover2.Width ' resize
            ChtOb2.Top = RngToCover2.Top ' reposition
            ChtOb2.Left = RngToCover2.Left ' reposition
        '
        ' IncidentbyCategory Macro
        '
            Sheets("Frontpage").Select
            ActiveWorkbook.PivotCaches.Create(SourceType:=xlDatabase, SourceData:= _
                "raw!R1C1:R65536C37", Version:=xlPivotTableVersion10).CreatePivotTable _
                TableDestination:="Frontpage!R38C1", TableName:="PivotTable6", _
                DefaultVersion:=xlPivotTableVersion10
            Sheets("Frontpage").Select
            Cells(38, 1).Select
            ActiveSheet.Shapes.AddChart.Select
            ActiveChart.SetSourceData Source:=Range("Frontpage!$A$38:$H$119")
            ActiveChart.ChartType = xlColumnClustered
            With ActiveSheet.PivotTables("PivotTable6").PivotFields("Category 2")
                .Orientation = xlRowField
                .Position = 1
            End With
            With ActiveSheet.PivotTables("PivotTable6").PivotFields("Category 3")
                .Orientation = xlPageField
                .Position = 1
            End With
            ActiveSheet.PivotTables("PivotTable6").AddDataField ActiveSheet.PivotTables( _
                "PivotTable6").PivotFields("Case ID"), "Count of Case ID", xlCount
            ActiveChart.Parent.Name = "IncidentbyCategory"
            ActiveChart.ChartTitle.Text = "Incidents by Category"
            Dim RngToCover3 As Range
            Dim ChtOb3 As ChartObject
            Set RngToCover3 = ActiveSheet.Range("D38:L56")
            Set ChtOb3 = ActiveSheet.ChartObjects("IncidentbyCategory")
            ChtOb3.Height = RngToCover3.Height ' resize
            ChtOb3.Width = RngToCover3.Width ' resize
            ChtOb3.Top = RngToCover3.Top ' reposition
            ChtOb3.Left = RngToCover3.Left ' reposition
        '
        ' IncidentsbySiteandPriority Macro
        '
            Sheets("Frontpage").Select
            Range("A71").Select
            ActiveWorkbook.PivotCaches.Create(SourceType:=xlDatabase, SourceData:= _
                "raw!R1C1:R65536C37", Version:=xlPivotTableVersion10).CreatePivotTable _
                TableDestination:="Frontpage!R71C1", TableName:="PivotTable3", _
                DefaultVersion:=xlPivotTableVersion10
            Sheets("Frontpage").Select
            Cells(71, 1).Select
            ActiveSheet.Shapes.AddChart.Select
            ActiveChart.SetSourceData Source:=Range("Frontpage!$A$71:$H$90")
            ActiveChart.ChartType = xlColumnClustered
            With ActiveSheet.PivotTables("PivotTable3").PivotFields("Site Name")
                .Orientation = xlRowField
                .Position = 1
            End With
            With ActiveSheet.PivotTables("PivotTable3").PivotFields("Priority")
                .Orientation = xlColumnField
                .Position = 1
            End With
            ActiveSheet.PivotTables("PivotTable3").AddDataField ActiveSheet.PivotTables( _
                "PivotTable3").PivotFields("Case ID"), "Count of Case ID", xlCount
            ActiveChart.Parent.Name = "IncidentbySiteandPriority"
            ' ActiveChart.ChartTitle.Text = "Incidents by Site and Priority"
            Dim RngToCover4 As Range
            Dim ChtOb4 As ChartObject
            Set RngToCover4 = ActiveSheet.Range("H71:O91")
            Set ChtOb4 = ActiveSheet.ChartObjects("IncidentbySiteandPriority")
            ChtOb4.Height = RngToCover4.Height ' resize
            ChtOb4.Width = RngToCover4.Width ' resize
            ChtOb4.Top = RngToCover4.Top ' reposition
            ChtOb4.Left = RngToCover4.Left ' reposition
            Columns("A:G").Select
            Range("A52").Activate
            Columns("A:G").EntireColumn.AutoFit
        End Sub
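    For what it's worth, PivotCaches.Create was only added to the Excel object model in Excel 2007; Excel 2003 exposes PivotCaches.Add instead, which would explain the runtime error on exactly that line. A 2003-compatible form of the failing call might look like this (a sketch, untested against the workbook above):

        ' Excel 2003-compatible variant: PivotCaches.Add instead of PivotCaches.Create,
        ' then CreatePivotTable on the returned PivotCache.
        Dim pc As PivotCache
        Set pc = ActiveWorkbook.PivotCaches.Add( _
            SourceType:=xlDatabase, SourceData:="raw!R1C1:R65536C37")
        pc.CreatePivotTable TableDestination:="Frontpage!R7C1", _
            TableName:="PivotTable2", DefaultVersion:=xlPivotTableVersion10

    The same substitution would apply to the macro's other three PivotCaches.Create calls. Note that ActiveSheet.Shapes.AddChart is, as far as I know, also a 2007-only member, so the chart-creation lines would likely need a similar rewrite (for example via Charts.Add) to run under Excel 2003.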


  • MySQL for Excel 1.1.0 GA has been released

    - by Javier Treviño
    The MySQL Windows Experience Team is proud to announce the release of MySQL for Excel version 1.1.0 GA, one of our newest products contained in the MySQL Installer suite. You can download it from our official Downloads page at http://dev.mysql.com/downloads/installer/.

    The 1.1.0 release of MySQL for Excel introduces the following feature:

    Edit MySQL Data. This may be the coolest feature so far: users can edit the data in a MySQL table using MS Excel in a very friendly and intuitive way. Edit Data supports inserting new rows, deleting existing rows, and updating existing data as easily as playing with data in an Excel spreadsheet and pushing changes back to the server.

    This version also contains the following bug fixes:

    - Enabled the following checkboxes in the Append Data's Advanced Options dialog and added code in the Append Data dialog to use them as follows. "Automatically store the column mapping for the given table": if checked, the current mapping will be stored automatically after clicking the Append button, if the append operation is successful and there is no mapping for the current connection.schema.table already; the new mapping is stored with a proposed name of Mapping. "Reload stored column mapping for the selected table automatically": if checked, the first stored mapping found where all column names in the source grid match all column names in the target grid is automatically selected and applied when the Append Data dialog is loaded.
    - Fixed code in Append Data that applies a stored column mapping, so it skips target columns where the associated mapping is empty (saved as a -1).
    - Enclosed the Add-In's startup code in a try-catch block in order to log any possible error thrown during startup, and added information messages to the log at the beginning of the Add-In's startup code and at the end of the shutdown code. Also changed the wrapper method that calls the MySQLUtility to write messages to the log, to make logging easier; the log call was changed throughout all the code that contains a try-catch block.
    - Added code to the main WiX configuration file to check whether a newer version is already installed and, if so, abort the installation.
    - Fixed code to refresh the Import Procedure Form's preview grid's data source, so its contents repaint every time the Call button is pressed.
    - Added code to re-pull connections after connections are migrated from Excel to Workbench.
    - Fixed code so that when the Append Data's automatic mapping is performed, any subsequent change to a mapping resets it to a manual mapping.
    - Added code to the InfoDialog class to set the button text to "Show Details" or "Hide Details" depending on the status of the Details text container.
    - Fixed a GUID in the main WiX configuration file so previous versions are now uninstalled during a new installation.
    - Added an option to the Export Data's Advanced Options dialog to remove columns with no data; by default the Export dialog will only flag those columns as Excluded.
    - Added code to display a warning and paint a column red if the column name in the Export Data dialog is not set, and to display a warning if the table name is not set. Warnings are stacked but not displayed while a column is Excluded; they are displayed normally once the column is no longer Excluded.
    - Added code to prevent the Append and Export of data if more than one selection is made (selecting more than one area by holding the Ctrl key while selecting Excel cells).
    - Fixed a problem that prevented MySQL for Excel from loading when Display settings in Windows 7 are set to Adjust to Best Performance (Oracle bug 14521405 - UNHANDLED EXCEPTION IS THROWN WHEN LOADING MYSQL FOR EXCEL).
    - Fixed code that renames the auto-generated primary key column when the table name changes, since it was not detecting whether a column with the same name already existed in the table. The column duplication was not actually happening; it looked that way because the automatically generated PK column was not detecting that a column had that same name.
    - Fixed code in the Export Data dialog to always set an empty string instead of null on the MySQLDataColumn properties that store MySQL data types (MySQLDataType, RowsFrom1stDataType and RowsFrom2ndDataType).
    - Added code to display a warning and color a column red when its data type has not been set by the user or has been manually cleared.
    - Added code to output exception messages to the application log consistently in all places where exceptions are caught.

    A series of blog posts explaining the new Edit MySQL Data feature and the other existing features is coming on this blog. You can access the MySQL for Excel documentation at http://dev.mysql.com/doc/refman/5.5/en/mysql-for-excel.html. You can also post questions on our MySQL for Excel forum found at http://forums.mysql.com/. Enjoy, and thanks for the support!


  • Getting started with Oracle Database In-Memory Part III - Querying The IM Column Store

    - by Maria Colgan
    In my previous blog posts, I described how to install, enable, and populate the In-Memory column store (IM column store). This week's post focuses on how data is accessed within the IM column store. Let's take a simple query: "What is the most expensive air-mail order we have received to date?"

        SELECT Max(lo_ordtotalprice) most_expensive_order
        FROM lineorder
        WHERE lo_shipmode = 5;

    The LINEORDER table has been populated into the IM column store, and since we have no alternative access paths (indexes or views), the execution plan for this query is a full table scan of the LINEORDER table. You will notice that the execution plan has a new set of keywords, "IN MEMORY", in the access method description in the Operation column. These keywords indicate that the LINEORDER table has been marked INMEMORY and we may use the IM column store in this query. What do I mean by "may use"? There are a small number of cases where we won't use the IM column store even though the object has been marked INMEMORY. This is similar to how the keyword STORAGE is used in Exadata environments. You can confirm that the IM column store was actually used by examining the session-level statistics, but more on that later. For now let's focus on how the data is accessed in the IM column store, and why it's faster for analytical queries to access the data in the new column format than in the buffer cache. There are four main reasons why accessing the data in the IM column store is more efficient.

    1. Access only the column data needed. The IM column store only has to scan two columns, lo_shipmode and lo_ordtotalprice, to execute this query, while the traditional row store or buffer cache has to scan all of the columns in each row of the LINEORDER table until it reaches both the lo_shipmode and the lo_ordtotalprice columns.

    2. Scan and filter data in its compressed format. When data is populated into the IM column store, it is automatically compressed using a new set of compression algorithms that allow WHERE clause predicates to be applied against the compressed formats. This means the volume of data scanned in the IM column store for our query will be far less than in the buffer cache, where the same data is scanned in its uncompressed form, which could be 20X larger.

    3. Prune out any unnecessary data within each column. The fastest read you can execute is the read you don't do. In the IM column store, a further reduction in the amount of data accessed is possible thanks to the In-Memory Storage Indexes (IM storage indexes) that are automatically created and maintained on each of the columns in the IM column store. IM storage indexes allow data pruning based on the filter predicates supplied in a SQL statement. An IM storage index keeps track of minimum and maximum values for each column in each In-Memory Compression Unit (IMCU). In our query, the WHERE clause predicate is on the lo_shipmode column. The IM storage index on the lo_shipmode column is examined to determine whether our specified column value 5 exists in any IMCU, by comparing the value 5 to the minimum and maximum values maintained in the storage index. If the value 5 is outside the minimum and maximum range for an IMCU, the scan of that IMCU is avoided. For the IMCUs where the value 5 does fall within the min/max range, an additional level of data pruning is possible via the metadata dictionary created when dictionary-based compression is used on an IMCU. The dictionary contains a list of the unique column values within the IMCU. Since we have an equality predicate, we can easily determine whether 5 is one of the distinct column values or not. The combination of the IM storage index and dictionary-based pruning enables us to scan only the necessary IMCUs.

    4. Use SIMD to apply filter predicates. For the IMCUs that do need to be scanned, Oracle takes advantage of SIMD vector processing (Single Instruction processing Multiple Data values). Instead of evaluating each entry in the column one at a time, SIMD vector processing allows a set of column values to be evaluated together in a single CPU instruction. The column format used in the IM column store has been specifically designed to maximize the number of column entries that can be loaded into the vector registers on the CPU and evaluated in a single CPU instruction. SIMD vector processing enables Oracle Database In-Memory to scan billions of rows per second per core, versus the millions of rows per second per core that can be achieved in the buffer cache.

    I mentioned earlier in this post that in order to confirm the IM column store was used, we need to examine the session-level statistics. You can monitor the session-level statistics by querying the performance views v$mystat and v$statname. All of the statistics related to the In-Memory column store begin with IM. You can see the full list of these statistics by typing:

        column display_name format a30
        SELECT display_name FROM v$statname WHERE display_name LIKE 'IM%';

    If we check the session statistics after we execute our query, the results are as follows:

        SELECT Max(lo_ordtotalprice) most_expensive_order
        FROM lineorder
        WHERE lo_shipmode = 5;

        SELECT display_name FROM v$statname
        WHERE display_name IN ('IM scan CUs columns accessed',
                               'IM scan segments minmax eligible',
                               'IM scan CUs pruned');

    As you can see, only 2 IMCUs were accessed during the scan, as the majority of the IMCUs (44) in the LINEORDER table were pruned out thanks to the storage index on the lo_shipmode column. In next week's post I will describe how you can control which queries use the IM column store and which don't.

    +Maria Colgan


  • Customizing Flowcharts in Oracle Tutor

    - by [email protected]
    Today we're going to look at how you can customize the flowcharts within Oracle Tutor procedures, and how you can share those changes with other authors within your company. Here is an image of a flowchart within a Tutor procedure with the default size and color scheme.

    You may want to change the size of your flowcharts, as your end-users might have larger screens or need larger fonts. To change the size and number of columns, navigate to Tutor Author > Author Options > Flowcharts. The default is to have 4 columns appear in each flowchart but, if I change it to six, my end-users will see a denser flowchart. That might be too dense for my end-users, so I will change it to 5 columns, and I will also deselect the option to have separate task boxes.

    Now let's look at how to customize the colors. Within the Flowchart Options dialog there is a button labeled "Colors." This brings up a dialog box of every object on a Tutor flowchart, and I can modify the color of each object, as well as the text within it. If I click on the background, the "page" object appears in the Item field, and I can customize the color and the title text by selecting Select Fill Color and/or Select Text Color. A dialog box with color choices appears, and if I select Define Custom Colors I can make my selections even more precise. Each time I change the color of an object, it appears in the selection screen. When the flowchart customization is finished, I can save my changes by naming the scheme.

    Although the color scheme I have chosen is rather silly looking, perhaps I want others to give me their feedback and make changes as they wish. I can share the color scheme with them by copying the FCP.INI file in the Tutor\Author directory into the same directory on their systems. If the other users have color schemes that they do not want to lose, they can copy the relevant lines from the FCP.INI file into their own file. If I flowchart my document with the new scheme, I can see how it looks within the document.

    Sometimes just one or two changes to the default scheme are enough to match the flowchart to your company's color palette. I have seen customers who only changed the Start object to green and the End object to red, and I've seen another customer who changed every object to some variant of black and orange. Experiment! And let us know how you have customized your flowcharts.

    Mary R. Keane
    Senior Development Director, Oracle Tutor

    Read the article

  • SharePoint: Problem with BaseFieldControl

    - by Anoop
Hi All, In the code below, the first column of a grid is a BaseFieldControl created from a Choice-type column of an SPList. The second column is a TextBox with a TextChanged event. Both controls are created in the RowDataBound event of the GridView. Now the problem. Steps: 1) Select any value from the BaseFieldControl (DropDownList) rendered from the Choice column of the SPList. 2) Enter anything in the TextBox in the other grid column. 3) The TextChanged event fires, and in it I rebind the grid. Problem: the selected value reverts to the first item or the default value (if any), but if I do not rebind the grid in the TextChanged event, it works fine. Please suggest what to do.

    using System;
    using Microsoft.SharePoint;
    using Microsoft.SharePoint.WebControls;
    using System.Web.UI;
    using System.Web.UI.WebControls;
    using System.Data;

    namespace SharePointProjectTest.Layouts.SharePointProjectTest
    {
        public partial class TestBFC : LayoutsPageBase
        {
            GridView grid = null;

            protected void Page_Load(object sender, EventArgs e)
            {
                try
                {
                    grid = new GridView();
                    grid.ShowFooter = true;
                    grid.ShowHeader = true;
                    grid.AutoGenerateColumns = true;
                    grid.ID = "grdView";
                    grid.RowDataBound += new GridViewRowEventHandler(grid_RowDataBound);
                    grid.Width = Unit.Pixel(900);
                    MasterPage holder = (MasterPage)Page.Controls[0];
                    holder.FindControl("PlaceHolderMain").Controls.Add(grid);
                    DataTable ds = new DataTable();
                    ds.Columns.Add("Choice");
                    //ds.Columns.Add("person");
                    ds.Columns.Add("Curr");
                    for (int i = 0; i < 3; i++)
                    {
                        DataRow dr = ds.NewRow();
                        ds.Rows.Add(dr);
                    }
                    grid.DataSource = ds;
                    grid.DataBind();
                }
                catch (Exception ex)
                {
                }
            }

            void tx_TextChanged(object sender, EventArgs e)
            {
                DataTable ds = new DataTable();
                ds.Columns.Add("Choice");
                ds.Columns.Add("Curr");
                for (int i = 0; i < 3; i++)
                {
                    DataRow dr = ds.NewRow();
                    ds.Rows.Add(dr);
                }
                grid.DataSource = ds;
                grid.DataBind();
            }

            void grid_RowDataBound(object sender, GridViewRowEventArgs e)
            {
                if (e.Row.RowType == DataControlRowType.DataRow)
                {
                    SPWeb web = SPContext.Current.Web;
                    SPList list = web.Lists["Source for test"];
                    SPField field = list.Fields["Choice"];
                    SPListItem item = list.Items.Add();
                    BaseFieldControl control = (BaseFieldControl)GetSharePointControls(field, list, item, SPControlMode.New);
                    if (control != null)
                    {
                        e.Row.Cells[0].Controls.Add(control);
                    }
                    TextBox tx = new TextBox();
                    tx.AutoPostBack = true;
                    tx.ID = "Curr";
                    tx.TextChanged += new EventHandler(tx_TextChanged);
                    e.Row.Cells[1].Controls.Add(tx);
                }
            }

            public static Control GetSharePointControls(SPField field, SPList list, SPListItem item, SPControlMode mode)
            {
                if (field == null || field.FieldRenderingControl == null || field.Hidden)
                    return null;
                try
                {
                    BaseFieldControl webControl = field.FieldRenderingControl;
                    webControl.ListId = list.ID;
                    webControl.ItemId = item.ID;
                    webControl.FieldName = field.Title;
                    webControl.ID = "id_" + field.InternalName;
                    webControl.ControlMode = mode;
                    webControl.EnableViewState = true;
                    return webControl;
                }
                catch (Exception ex)
                {
                    return null;
                }
            }
        }
    }

    Read the article

  • Which design better when use foreign key instead of a string to store a list of id

    - by Kien Thanh
I'm building an online examination system. I have designed two tables, Question and GeneralExam. The table GeneralExam contains info about the exam, like name, description, duration, etc. Now I would like to design the table GeneralQuestion; it will contain the ids of questions that belong to a general exam. Currently, I have two ideas for the GeneralQuestion table: 1) it will have two columns, general_exam_id and question_id; or 2) it will have two columns, general_exam_id and list_question_ids (string/text). I would like to know which design is better, and the pros and cons of each. I'm using a PostgreSQL database.
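The first design is the one the database itself can enforce: each (exam, question) pair becomes a row in a junction table with real foreign keys, whereas a comma-separated list_question_ids column can be neither constrained nor joined efficiently. Below is a minimal PostgreSQL sketch of that first option; the table and column names follow the question, while the id columns, types, and the example exam id are assumptions.

    -- Junction table: one row per (exam, question) pair
    -- Assumes general_exam and question tables with integer id primary keys
    CREATE TABLE general_question (
        general_exam_id integer NOT NULL REFERENCES general_exam (id),
        question_id     integer NOT NULL REFERENCES question (id),
        PRIMARY KEY (general_exam_id, question_id)
    );

    -- All questions of one exam: a plain indexable join, no string parsing
    SELECT q.*
    FROM   question q
    JOIN   general_question gq ON gq.question_id = q.id
    WHERE  gq.general_exam_id = 42;  -- 42 is an arbitrary example id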

    Read the article

  • mysql join with multiple values in one column

    - by CYREX
I need to make a query that creates 3 columns that come from 2 tables with the following relationship: TABLE 1 has a column ID that relates to TABLE 2's column ID2. In TABLE 1 there is a column called user. In TABLE 2 there is a column called names. There can be 1 unique user, but there can be many names associated with that user. If I do the following, I get all the data BUT the user column repeats itself for each name associated with it. What I want is for user to appear once, with the names column showing all the names associated with that user, separated by commas, like the output further below:

    select user, names from TABLE1 left join TABLE2 on TABLE1.id = TABLE2.id2

This will show the users repeated every time a name appears for that user. What I want is for it to appear like this:

    USER    - NAMES
    cyrex   - pedrox, rambo, zelda
    homeboy - carmen, carlos, tom, sandra
    jerry   - seinfeld, christine
    ninja   - soloboy
    etc....
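The standard MySQL tool for this shape of problem is GROUP_CONCAT, which collapses the joined rows into one comma-separated value per group. A sketch using the table and column names from the question (treat the exact names, including the id/id2 join columns, as placeholders):

    SELECT t1.user,
           GROUP_CONCAT(t2.names ORDER BY t2.names SEPARATOR ', ') AS names
    FROM   TABLE1 t1
    LEFT JOIN TABLE2 t2 ON t2.id2 = t1.id
    GROUP BY t1.user;

One caveat worth knowing: GROUP_CONCAT truncates its result at group_concat_max_len (1024 bytes by default), so the limit may need raising for users with many names.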

    Read the article

  • Altering a Column Which has a Default Constraint

    - by Dinesh Asanka
Setting up a default column is a common task for developers. But are we naming those default constraints explicitly? In the table creation below, the column Sys_DateTime will be given the default value getdate():

    CREATE TABLE SampleTable
    (ID int identity(1,1),
     Sys_DateTime Datetime DEFAULT getdate()
    )

We can check the relevant information in the system catalogs with the following query:

    SELECT sc.name ColumnName,
           dc.name DefaultName,
           dc.definition,
           OBJECT_NAME(dc.parent_object_id) TableName,
           dc.is_system_named
    FROM sys.default_constraints dc
    INNER JOIN sys.columns sc
      ON dc.parent_object_id = sc.object_id
     AND dc.parent_column_id = sc.column_id

Most of the above columns are self-explanatory. The last column, is_system_named, identifies whether the default name was generated by the system. As you know, in the above case, since we didn't provide any constraint name, the system will generate one for you. The problem with these names is that they can differ from environment to environment. For example, if I create this table in a different database, the default name could be DF__SampleTab__Sys_D__7E6CC920. Now let us create another default and explicitly name it:

    CREATE TABLE SampleTable2
    (ID int identity(1,1),
     Sys_DateTime Datetime
    )

    ALTER TABLE SampleTable2
    ADD CONSTRAINT DF_sys_DateTime_Getdate DEFAULT( Getdate()) FOR Sys_DateTime

If we run the previous query again, you can see that the last created default has 0 for is_system_named. Now let us say I want to change the data type of the Sys_DateTime column to something else:

    ALTER TABLE SampleTable2 ALTER COLUMN Sys_DateTime Date

This will generate the below error:

    Msg 5074, Level 16, State 1, Line 1
    The object 'DF_sys_DateTime_Getdate' is dependent on column 'Sys_DateTime'.
    Msg 4922, Level 16, State 9, Line 1
    ALTER TABLE ALTER COLUMN Sys_DateTime failed because one or more objects access this column.

This means you need to drop the default constraint before altering the column:

    ALTER TABLE [dbo].[SampleTable2] DROP CONSTRAINT [DF_sys_DateTime_Getdate]
    ALTER TABLE SampleTable2 ALTER COLUMN Sys_DateTime Date

    ALTER TABLE [dbo].[SampleTable2]
    ADD CONSTRAINT [DF_sys_DateTime_Getdate] DEFAULT (getdate()) FOR [Sys_DateTime]

If you have a system-named default constraint, its name can differ from environment to environment, so you cannot drop it by a hard-coded name as before. Instead, you can use the below code template, which looks the name up at run time (see also the note after this template):

    DECLARE @defaultname VARCHAR(255)
    DECLARE @executesql VARCHAR(1000)

    SELECT @defaultname = dc.name
    FROM sys.default_constraints dc
    INNER JOIN sys.columns sc
      ON dc.parent_object_id = sc.object_id
     AND dc.parent_column_id = sc.column_id
    WHERE OBJECT_NAME(dc.parent_object_id) = 'SampleTable'
      AND sc.name = 'Sys_DateTime'

    SET @executesql = 'ALTER TABLE SampleTable DROP CONSTRAINT ' + @defaultname
    EXEC( @executesql)

    ALTER TABLE SampleTable ALTER COLUMN Sys_DateTime Date

    ALTER TABLE [dbo].[SampleTable] ADD DEFAULT (Getdate()) FOR [Sys_DateTime]
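A side note on that template: its final ALTER TABLE ... ADD DEFAULT mirrors the original article, but because it again omits a constraint name, it recreates a system-named default and reintroduces the very problem being solved. If you are free to choose the name, a small variation keeps the constraint explicit; the name below is just an assumed convention, not one from the article.

    -- Re-add the default with an explicit, environment-stable name
    ALTER TABLE [dbo].[SampleTable]
    ADD CONSTRAINT DF_SampleTable_Sys_DateTime_Getdate DEFAULT (getdate()) FOR [Sys_DateTime]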

    Read the article

  • Silverlight 4 Twitter Client &ndash; Part 6

    - by Max
In this post, we are going to look at implementing lists in our Twitter application, and also at enhancing the data grid to display the status messages in a pleasing way with the profile images. Twitter lists are a really cool feature that they recently added. I love them, and I've set up quite a few lists: one for DOTNET gurus, one for SQL Server gurus, and one for a few celebrities. You can follow them here. Now let us move on to our tutorial.

1) Lists can be subscribed to in two ways: one is the user's own lists, which he has created, and the other is the lists that the user is following. For example, I've created 3 lists myself and I am following 1 other list created by another user. Both of them cannot be fetched in the same API call; it's a two-step process.

2) In the TwitterCredentialsSubmit method we have in Home.xaml.cs, let us make the first API call to get the lists that the user has created. For this, the call has to be made to https://twitter.com/<TwitterUsername>/lists.xml. The API reference is available here.

    myService1.AllowReadStreamBuffering = true;
    myService1.UseDefaultCredentials = false;
    myService1.Credentials = new NetworkCredential(GlobalVariable.getUserName(), GlobalVariable.getPassword());
    myService1.DownloadStringCompleted += new DownloadStringCompletedEventHandler(ListsRequestCompleted);
    myService1.DownloadStringAsync(new Uri("https://twitter.com/" + GlobalVariable.getUserName() + "/lists.xml"));

3) Now let us look at implementing the event handler – ListsRequestCompleted – for this.

    public void ListsRequestCompleted(object sender, System.Net.DownloadStringCompletedEventArgs e)
    {
        if (e.Error != null)
        {
            StatusMessage.Text = "This application must be installed first.";
            parseXML("");
        }
        else
        {
            //MessageBox.Show(e.Result.ToString());
            parseXMLLists(e.Result.ToString());
        }
    }

4) Now let us look at parseXMLLists in detail:

    xdoc = XDocument.Parse(text);
    var answer = (from status in xdoc.Descendants("list")
                  select status.Element("name").Value);
    foreach (var p in answer)
    {
        Border bord = new Border();
        bord.CornerRadius = new CornerRadius(10, 10, 10, 10);
        Button b = new Button();
        b.MinWidth = 70;
        b.Background = new SolidColorBrush(Colors.Black);
        b.Foreground = new SolidColorBrush(Colors.Black);
        //b.Width = 70;
        b.Height = 25;
        b.Click += new RoutedEventHandler(b_Click);
        b.Content = p.ToString();
        bord.Child = b;
        TwitterListStack.Children.Add(bord);
    }

So what am I doing here? I am dynamically creating a button for each of the lists, putting them within a StackPanel, and for each of these buttons wiring up an event handler, b_Click, which will be fired on button click. We will look into this method in detail soon. For now, let us get the buttons displayed.

5) Now the user might have some lists to which he has subscribed. We need to create a button for each of these as well. At the end of the TwitterCredentialsSubmit method, we need to make a call to http://api.twitter.com/1/<TwitterUsername>/lists/subscriptions.xml. The reference is available here. The code will look like this:

    myService2.AllowReadStreamBuffering = true;
    myService2.UseDefaultCredentials = false;
    myService2.Credentials = new NetworkCredential(GlobalVariable.getUserName(), GlobalVariable.getPassword());
    myService2.DownloadStringCompleted += new DownloadStringCompletedEventHandler(ListsSubsRequestCompleted);
    myService2.DownloadStringAsync(new Uri("http://api.twitter.com/1/" + GlobalVariable.getUserName() + "/lists/subscriptions.xml"));

6) In the event handler – ListsSubsRequestCompleted – we need to parse through the XML string and create a button for each of the subscribed lists; let us see how. I've taken only the "full_name"; you can choose what you want, refer to the documentation here. Note that the full_name will have the @<UserName>/<ListName> format – this will be useful very soon.

    xdoc = XDocument.Parse(text);
    var answer = (from status in xdoc.Descendants("list")
                  select status.Element("full_name").Value);
    foreach (var p in answer)
    {
        Border bord = new Border();
        bord.CornerRadius = new CornerRadius(10, 10, 10, 10);
        Button b = new Button();
        b.Background = new SolidColorBrush(Colors.Black);
        b.Foreground = new SolidColorBrush(Colors.Black);
        //b.Width = 70;
        b.MinWidth = 70;
        b.Height = 25;
        b.Click += new RoutedEventHandler(b_Click);
        b.Content = p.ToString();
        bord.Child = b;
        TwitterListStack.Children.Add(bord);
    }

Please note, I am letting the button width size automatically based on the content, and also giving it a minimum width value. I wanted to create rounded-corner buttons, but for some reason that's not working. Also add this StackPanel – TwitterListStack – to Home.xaml:

    <StackPanel HorizontalAlignment="Center" Orientation="Horizontal" Name="TwitterListStack"></StackPanel>

After doing this, you will get a series of buttons at the top of the home page.

7) Now the button click event handler – b_Click. In this method, once the button is clicked, I call another method with the content string of the clicked button as the parameter.

    Button b = (Button)e.OriginalSource;
    getListStatuses(b.Content.ToString());

8) Now the getListStatuses method:

    toggleProgressBar(true);
    WebRequest.RegisterPrefix("http://", System.Net.Browser.WebRequestCreator.ClientHttp);
    WebClient myService = new WebClient();
    myService.AllowReadStreamBuffering = true;
    myService.UseDefaultCredentials = false;
    myService.DownloadStringCompleted += new DownloadStringCompletedEventHandler(TimelineRequestCompleted);
    if (listName.IndexOf("@") > -1 && listName.IndexOf("/") > -1)
    {
        string[] arrays = null;
        arrays = listName.Split('/');
        arrays[0] = arrays[0].Replace("@", " ").Trim();
        //MessageBox.Show(arrays[0]);
        //MessageBox.Show(arrays[1]);
        string url = "http://api.twitter.com/1/" + arrays[0] + "/lists/" + arrays[1] + "/statuses.xml";
        //MessageBox.Show(url);
        myService.DownloadStringAsync(new Uri(url));
    }
    else
        myService.DownloadStringAsync(new Uri("http://api.twitter.com/1/" + GlobalVariable.getUserName() + "/lists/" + listName + "/statuses.xml"));

Please note that the URL to call will differ based on the list clicked. If it's user-created, the URL format will be:

    http://api.twitter.com/1/<CurrentUser>/lists/<ListName>/statuses.xml

But if it is a subscribed list, it will be:

    http://api.twitter.com/1/<ListOwnerUserName>/lists/<ListName>/statuses.xml

The first one is pretty straightforward to implement, but if it's a subscribed list, we need to split the listName string to get the list owner and list name and use them to form the URL. So that is what I've done in this method: if the listName contains "@" and "/", I build the URL differently.

9) Until now, we've been using only a few nodes of the status message XML string; now we will fetch a new field – "profile_image_url". Images in the datagrid – COOL. For that, we need to modify our Status.cs file to include two more fields, one string and another BitmapImage, with get and set:

    public string profile_image_url { get; set; }
    public BitmapImage profileImage { get; set; }

10) Now let us change the generic parseXML method which is used for binding to the datagrid:

    public void parseXML(string text)
    {
        XDocument xdoc;
        xdoc = XDocument.Parse(text);
        statusList = new List<Status>();
        statusList = (from status in xdoc.Descendants("status")
                      select new Status
                      {
                          ID = status.Element("id").Value,
                          Text = status.Element("text").Value,
                          Source = status.Element("source").Value,
                          UserID = status.Element("user").Element("id").Value,
                          UserName = status.Element("user").Element("screen_name").Value,
                          profile_image_url = status.Element("user").Element("profile_image_url").Value,
                          profileImage = new BitmapImage(new Uri(status.Element("user").Element("profile_image_url").Value))
                      }).ToList();
        DataGridStatus.ItemsSource = statusList;
        StatusMessage.Text = "Datagrid refreshed.";
        toggleProgressBar(false);
    }

Here we are creating a new bitmap image from the image URL, creating a new Status object for every status, and binding them to the data grid. Refer to the Twitter API documentation here. You can choose any column you want.

11) Until now, we've been using auto-generated columns for the data grid, but if you want it to be really cool, you need to define the columns with templates:

    <data:DataGrid AutoGenerateColumns="False" Name="DataGridStatus" Height="Auto" MinWidth="400">
        <data:DataGrid.Columns>
            <data:DataGridTemplateColumn Width="50" Header="">
                <data:DataGridTemplateColumn.CellTemplate>
                    <DataTemplate>
                        <Image Source="{Binding profileImage}" Width="50" Height="50" Margin="1"/>
                    </DataTemplate>
                </data:DataGridTemplateColumn.CellTemplate>
            </data:DataGridTemplateColumn>
            <data:DataGridTextColumn Width="Auto" Header="User Name" Binding="{Binding UserName}" />
            <data:DataGridTemplateColumn MinWidth="300" Width="Auto" Header="Status">
                <data:DataGridTemplateColumn.CellTemplate>
                    <DataTemplate>
                        <TextBlock TextWrapping="Wrap" Text="{Binding Text}"/>
                    </DataTemplate>
                </data:DataGridTemplateColumn.CellTemplate>
            </data:DataGridTemplateColumn>
        </data:DataGrid.Columns>
    </data:DataGrid>

I've used only three columns – profile image, username, and status text. Now our datagrid will look super cool. Coincidentally, Tim Heuer is in the screenshot; he is a Silverlight guru and works on the SL team at Microsoft. His blog is really super. Here is the zipped file for all the xaml, xaml.cs and class files. OK, let us stop here for now. We will look into implementing a few more features in the next few posts, and then I am going to look into developing an ASP.NET MVC 2 application. Hope you all liked this post. If you have any queries / suggestions, feel free to comment below or contact me. Cheers! Technorati Tags: Silverlight,LINQ,Twitter API,Twitter,Silverlight 4

    Read the article

  • How to find which w3wp.exe to attach when debugging your SharePoint 2010 project

    - by ybbest
When debugging a SharePoint 2010 project, you need to attach to a w3wp.exe process; however, there are often quite a few of them, and it is very hard to figure out which one to attach to. Today, I will show you how to find the right process using a tool called Process Explorer. 1. Download Process Explorer and run it. 2. Find the w3wp.exe processes under wininit.exe, right-click the column header, and click Select Columns. 3. Include Command Line under Process Image. 4. Now you can see your IIS site name next to each w3wp.exe; in my case I'd like to attach to "SharePoint – BenDev80". You can see the PID of the process is 2920. 5. Knowing that the process ID you'd like to attach to is 2920, you can then go ahead and attach to that process from Visual Studio.

    Read the article

  • Missing Indexes DMV Report, 3 billion Impact!

    - by Tara Kizer
We've been having some major performance issues with one of the applications that I support. The database is on SQL Server 2005 and is about 150GB in size. We've identified a couple of issues already on the database side. The first issue is that some query (or maybe several queries) is getting a bad execution plan at some point in time during the day. When it occurs, database performance comes to a grinding halt. We know it's a bad execution plan, as running DBCC FREEPROCCACHE immediately resolves the problem system-wide. As we have not yet identified the problematic query, we've put a temporary solution in place that frees the procedure cache on an hourly basis via a SQL Agent job. This is not ideal, but it is getting us through the day without a major problem. We are actively working on identifying the problematic query and hope to disable the SQL Agent job soon.

Earlier this week, we had a major slowdown for one of the processes of this application. I was unable to find any database performance issues, but I continued to investigate it. One of the things that I typically do when investigating database performance issues is run the "Missing Indexes DMV Report" (that's what I call it at least). When analyzing the output of that report, I immediately dismiss anything under 1 million "Impact" as I want to target the "low-hanging fruit" initially. When I ran the report earlier this week, I was shocked to find a suggested index with an impact of over 3 billion! Do I win a prize for the highest impact? Has anyone seen a value higher than mine? My exact value was 3154284120.67765.

The performance issue from earlier this week ended up being an application problem, but it also brought to light a much-needed index. I had previously seen this index come up in that report but always with a much lower impact. I had never considered it, as the index's selectivity is very low. It's a composite index with three columns. The first column is not selective, the first two columns are not selective, and the three columns together are not selective. In fact, no matter how I order it, the index will not be selective at all. I briefly discussed this with Kimberly Tripp, and she said that this was okay for covering indexes. Selectivity is irrelevant for a covering index. She indicated that she's even created indexes with gender as the first column in the index. I've got lots to learn still!
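Tara doesn't show the report query itself, so here is a hedged sketch of the kind of query such reports are typically built on, using the documented SQL Server 2005+ missing-index DMVs. The impact expression below, avg_total_user_cost * avg_user_impact * (user_seeks + user_scans), is a common weighting; treat it as an assumption, since her script may compute it differently.

    -- Suggested indexes ranked by estimated impact
    SELECT  mid.statement AS table_name,
            mid.equality_columns,
            mid.inequality_columns,
            mid.included_columns,
            migs.avg_total_user_cost * migs.avg_user_impact
                * (migs.user_seeks + migs.user_scans) AS impact
    FROM    sys.dm_db_missing_index_group_stats migs
    JOIN    sys.dm_db_missing_index_groups mig
            ON migs.group_handle = mig.index_group_handle
    JOIN    sys.dm_db_missing_index_details mid
            ON mig.index_handle = mid.index_handle
    ORDER BY impact DESC;

Values in the billions, like the one above, simply mean a very expensive query shape ran a very large number of times without the suggested index.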

    Read the article
