Search Results

Search found 6532 results on 262 pages for 'computed columns'.


  • SQL Outer joins

    - by dsquaredtech
    Three tables: courses, registration, students.
    Columns in students: firstname, lastname, studentid, major, admitdate, graddate, gender, dob.
    Columns in registration: courseid, studentid.
    Columns in courses: coursenumber, coursename, credits.
    The select statement I need to modify:

        SELECT lastname AS 'Last Name', SUM(credits) AS 'Credits Registered For'
        FROM students AS s
        INNER JOIN registration AS r ON s.studentid = r.studentid
        INNER JOIN courses AS c ON c.coursenumber = r.courseid
        GROUP BY lastname;

    The question on the lab is: "Modify the previous query to show all students, even if they have not registered for a class. You should have 14 rows. Students who are not registered will show NULL in output." I know this requires an outer join of some sort, but I'm not fully grasping these joins. I've read multiple posts on here and other sites but can't seem to figure it out.
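
    A hedged sketch of the outer-join form the lab is asking for (assuming the join keys above are correct): turn both INNER JOINs into LEFT JOINs so unregistered students survive the join, and sum credits from the courses side so they come back as NULL.

        -- Illustrative answer sketch, not from the original thread
        SELECT s.lastname     AS 'Last Name',
               SUM(c.credits) AS 'Credits Registered For'
        FROM students AS s
        LEFT JOIN registration AS r ON s.studentid    = r.studentid
        LEFT JOIN courses      AS c ON c.coursenumber = r.courseid
        GROUP BY s.lastname;  -- students with no registration rows show NULL credits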

    Read the article

  • Does a DataSet column's Expression work the same in the background as a manual calculation?

    - by Harikrishna
    I have one DataTable which is not data-bound; records come into it dynamically each time by parsing a file. There are three columns in the DataTable: Marks1, Marks2 and FinalMarks, all of type decimal. To add the Marks1 and Marks2 values and store the result in the FinalMarks column, what I do is:

        datatableResult.Columns["FinalMarks"].Expression = "Marks1+Marks2";

    It works properly. It can also be done another way:

        foreach (DataRow r in datatableResult.Rows)
        {
            r["FinalMarks"] = Convert.ToDecimal(r["Marks1"]) + Convert.ToDecimal(r["Marks2"]);
        }

    Is the first approach the same as the second approach in the background? EDIT: I want to know whether the first approach works in the background the same way as the second approach.
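
    One practical difference worth keeping in mind (my note, not from the original post): an Expression column is recomputed automatically whenever its source columns change, while the manual loop writes a one-time snapshot. A minimal C# sketch:

        using System;
        using System.Data;

        class ExpressionDemo
        {
            static void Main()
            {
                var t = new DataTable();
                t.Columns.Add("Marks1", typeof(decimal));
                t.Columns.Add("Marks2", typeof(decimal));
                t.Columns.Add("FinalMarks", typeof(decimal));
                t.Columns["FinalMarks"].Expression = "Marks1+Marks2";

                DataRow r = t.Rows.Add(40m, 35m);
                Console.WriteLine(r["FinalMarks"]); // 75 -- evaluated by the expression

                r["Marks1"] = 50m;
                Console.WriteLine(r["FinalMarks"]); // 85 -- recomputed automatically;
                                                    // a manual foreach would still hold 75
            }
        }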

    Read the article

  • Create macro to move data in a column UP?

    - by user1786695
    I have an Excel sheet in which the data was jumbled: for example, data that should have been in columns AB and AC was instead in columns B and C, but one row down. I have the following written, which moved the data from B and C to AB and AC respectively:

        Sub MoveData()
            Dim rCell As Range
            Dim rRng As Range
            Set rRng = Sheet1.Range("A:A")
            i = 1
            lastRow = ActiveSheet.Cells(Rows.Count, "A").End(xlUp).Row
            For Each rCell In rRng.Cells
                If rCell.Value = "" Then
                    Range("AB" & i) = rCell.Offset(0, 1).Value
                    rCell.Offset(0, 1).ClearContents
                End If
                i = i + 1
                If i = lastRow + 1 Then
                    Exit Sub
                End If
            Next rCell
        End Sub

    However, it doesn't fix the problem of the data being on the row BELOW the appropriate row now that it is in the right columns. I am new to VBA macros, so I would appreciate any help to make the data align. I tried toggling the Offset parameter (-1, 0) but it's not working.
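
    One way to get the rows aligned (a sketch under the assumption that the misplaced values always sit exactly one row below where they belong): offset the destination rather than the source, writing to the row above the blank marker row.

        ' Hedged sketch -- assumes the B/C values belong one row higher, in AB/AC
        Sub MoveDataUpOneRow()
            Dim rCell As Range
            Dim i As Long, lastRow As Long
            i = 1
            lastRow = Sheet1.Cells(Sheet1.Rows.Count, "A").End(xlUp).Row
            For Each rCell In Sheet1.Range("A1:A" & lastRow).Cells
                If rCell.Value = "" And i > 1 Then
                    ' Destination row is i - 1: one row ABOVE the blank marker
                    Sheet1.Range("AB" & i - 1).Value = rCell.Offset(0, 1).Value
                    Sheet1.Range("AC" & i - 1).Value = rCell.Offset(0, 2).Value
                    rCell.Offset(0, 1).Resize(1, 2).ClearContents
                End If
                i = i + 1
            Next rCell
        End Sub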

    Read the article

  • How to create an item in a SharePoint 2010 document library using SharePoint web services

    - by ybbest
    Today, I'd like to show you how to create an item in a SharePoint 2010 document library using SharePoint web services. Originally, I thought I could use WebSvcLists (lists.asmx), which provides methods for working with lists and list data. However, after a bit of Googling, I realized that I need to use WebSvcCopy (copy.asmx). Here is the code used:

        private const string siteUrl = "http://ybbest";

        private static void Main(string[] args)
        {
            using (CopyWSProxyWrapper copyWSProxyWrapper = new CopyWSProxyWrapper(siteUrl))
            {
                copyWSProxyWrapper.UploadFile("TestDoc2.pdf",
                    new[] { string.Format("{0}/Shared Documents/TestDoc2.pdf", siteUrl) },
                    Resource.TestDoc, GetFieldInfos().ToArray());
            }
        }

        private static List<FieldInformation> GetFieldInfos()
        {
            var fieldInfos = new List<FieldInformation>();
            // The InternalName, DisplayName and FieldType are all required to make it work
            fieldInfos.Add(new FieldInformation
            {
                InternalName = "Title",
                Value = "TestDoc2.pdf",
                DisplayName = "Title",
                Type = FieldType.Text
            });
            return fieldInfos;
        }

    Here is the code for the proxy wrapper:

        public class CopyWSProxyWrapper : IDisposable
        {
            private readonly string siteUrl;
            private readonly CopySoapClient proxy = new CopySoapClient();

            public CopyWSProxyWrapper(string siteUrl)
            {
                this.siteUrl = siteUrl;
            }

            public void UploadFile(string testdoc2Pdf, string[] destinationUrls,
                byte[] testDoc, FieldInformation[] fieldInformations)
            {
                using (CopySoapClient proxy = new CopySoapClient())
                {
                    proxy.Endpoint.Address =
                        new EndpointAddress(String.Format("{0}/_vti_bin/copy.asmx", siteUrl));
                    proxy.ClientCredentials.Windows.ClientCredential =
                        CredentialCache.DefaultNetworkCredentials;
                    proxy.ClientCredentials.Windows.AllowedImpersonationLevel =
                        TokenImpersonationLevel.Impersonation;
                    CopyResult[] copyResults = null;
                    try
                    {
                        proxy.CopyIntoItems(testdoc2Pdf, destinationUrls,
                            fieldInformations, testDoc, out copyResults);
                    }
                    catch (Exception e)
                    {
                        System.Console.WriteLine(e);
                    }
                    if (copyResults != null)
                        System.Console.WriteLine(copyResults[0].ErrorMessage);
                    System.Console.ReadLine();
                }
            }

            public void Dispose()
            {
                proxy.Close();
            }
        }

    You can download the source code here.

    ****** Update ******

    It seems to be a bug that you cannot set the content type when creating a document item using Copy.asmx. In SP2007 the field type was Choice; however, in SP2010 it is actually Computed. I have tried using the Computed field type with no luck. I have also tried sending the ContentTypeId and this does not work. You might have to write your own web service to handle this. You can check my previous blog on how to get started with your own custom WCF in SP2010 here.

    References:
    SharePoint 2010 Web Services
    SharePoint2007 Web Services
    SharePoint MSDN Forum

    Read the article

  • DBCC CHECKDB on VVLDB and latches (Or: My Pain is Your Gain)

    - by Argenis
    Does your CHECKDB hurt, Argenis?

    There is a classic blog series by Paul Randal [blog|twitter] called “CHECKDB From Every Angle” which is pretty much mandatory reading for anybody who’s even remotely considering going for the MCM certification, or its replacement (the Microsoft Certified Solutions Master: Data Platform – makes my fingers hurt just from typing it). Of particular interest is the post “Consistency Options for a VLDB” – in it, Paul provides solid, timeless advice (I use the word “timeless” because it was written in 2007, and it all applies today!) on how to perform checks on very large databases.

    Well, here I was trying to figure out how to make CHECKDB run faster on a restored copy of one of our databases, which happens to exceed 7TB in size. The whole thing was taking several days on multiple systems, regardless of the storage used – SAS, SATA or even SSD… and I actually didn’t pay much attention to how long it was taking, or even bother to look at the reasons why – as long as it was finishing okay and found no consistency errors. Yes – I know. That was a huge mistake, as corruption found in a database several days after taking place could only allow for further spread of the corruption – and potentially large data loss.

    In the last two weeks I increased my attention towards this problem, as we noticed that CHECKDB was taking EVEN LONGER on brand new all-flash storage in the SAN! I couldn’t really explain it, and we were almost ready to blame the storage vendor. The vendor told us that they could initially see the server driving decent I/O – around 450Mb/sec – and then it would settle at a very slow rate of 10Mb/sec or so. “Hum”, I thought – “CHECKDB is just not pushing the I/O subsystem hard enough”. Perfmon confirmed the vendor’s observations.

    Dreaded @BlobEater

    What was CHECKDB doing all the time while doing so little I/O? Eating blobs. It turns out that CHECKDB was taking an extremely long time on one of our frankentables, which happens to have 35 billion rows (yup, with a b) and sucks up several terabytes of space in the database. We do have a project ongoing to purge/split/partition this table, so it’s just a matter of time before we deal with it. But the reality today is that CHECKDB comes to a screeching halt in performance when dealing with this particular table.

    Checking sys.dm_os_waiting_tasks and sys.dm_os_latch_stats showed that LATCH_EX (DBCC_OBJECT_METADATA) was by far the top wait type. I remembered hearing recently about that wait from another post that Paul Randal made, but that was related to computed-column indexes, and in fact, Paul himself reminded me of his article via Twitter. But alas, our pathologic table had no non-clustered indexes on computed columns. I knew that latches are used by the database engine to do internal synchronization – but how could I help speed this up? After all, this is stuff that doesn’t have a lot of knobs to tweak. (There’s a fantastic level-500 talk by Bob Ward from Microsoft CSS [blog|twitter] called “Inside SQL Server Latches” given at PASS 2010 – and you can check it out here. DISCLAIMER: I assume no responsibility for any brain melting that might ensue from watching Bob’s talk!)
    Failed Hypotheses

    Earlier this week I flew down to Palo Alto, CA, to visit our headquarters – and after having a great time with my Monkey peers, I was relaxing on the plane back to Seattle watching a great talk by SQL Server MVP and fellow MCM Maciej Pilecki [twitter] called “Masterclass: A Day in the Life of a Database Transaction”, where he discusses many different topics related to transaction management inside SQL Server. Very good stuff, and when I got home it was a little late – that slow DBCC CHECKDB that I had been dealing with was way in the back of my head.

    As I was looking at the problem at hand earlier this week, I thought “How about I set the database to read-only?” I remembered one of the things Maciej had (jokingly) said in his talk: “if you don’t want locking and blocking, set the database to read-only” (or something to that effect, pardon my loose memory). I immediately killed the CHECKDB which had been running painfully for days, and set the database to read-only mode. Then I ran DBCC CHECKDB against it. It started going really fast (even a bit faster than before), and then throttled down again to around 10Mb/sec. All sorts of expletives went through my head at the time. Sure enough, the same latching scenario was present. Oh well. I even spent some time trying to figure out if NUMA was hurting performance. Folks on Twitter made suggestions in this regard (thanks, Lonny! [twitter]).

    …Eureka?

    This past Friday I was still scratching my head about the whole thing; I was ready to start profiling with XPERF to see if I could figure out which part of the engine was to blame and then get Microsoft to look at the evidence. After getting a bunch of good news I’ll blog about separately, I sat down for a figurative smackdown with CHECKDB before the weekend. And then the light bulb went on: a sparse column. I thought that I couldn’t possibly be experiencing the same scenario that Paul blogged about back in March, showing extreme latching with non-clustered indexes on computed columns. Did I even have a non-clustered index on my sparse column? As it turns out, I did. I had one filtered non-clustered index – with the sparse column as the index key (and only column). To prove that this was the problem, I went and set up a test.

    Yup, that'll do it

    The repro is very simple for this issue. I tested it on the latest public builds of SQL Server 2008 R2 SP2 (CU6) and SQL Server 2012 SP1 (CU4). First, create a test database and a test table, which only needs to contain a sparse column:

        CREATE DATABASE SparseColTest;
        GO
        USE SparseColTest;
        GO
        CREATE TABLE testTable (testCol smalldatetime SPARSE NULL);
        GO
        INSERT INTO testTable (testCol) VALUES (NULL);
        GO 1000000

    That’s 1 million rows, and even though you’re inserting NULLs, that’s going to take a while. On my laptop, it took 3 minutes and 31 seconds. Next, we run DBCC CHECKDB against the database:

        DBCC CHECKDB('SparseColTest') WITH NO_INFOMSGS, ALL_ERRORMSGS;

    This runs extremely fast, at least on my test rig – 198 milliseconds. Now let’s create a filtered non-clustered index on the sparse column:

        CREATE NONCLUSTERED INDEX [badBadIndex]
            ON testTable (testCol)
            WHERE testCol IS NOT NULL;

    With the index in place, let’s run DBCC CHECKDB one more time:

        DBCC CHECKDB('SparseColTest') WITH NO_INFOMSGS, ALL_ERRORMSGS;

    On my test system this statement completed in 11433 milliseconds. 11.43 full seconds. Quite the jump from 198 milliseconds.
    I went ahead and dropped the filtered non-clustered indexes on the restored copy of our production database, and ran CHECKDB against that. We went down from 7+ days to 19 hours and 20 minutes. Cue the “Argenis is not impressed” meme, please, Mr. LaRock.

    My pain is your gain, folks. Go check to see if you have any such indexes (a helper query follows below) – they’re likely causing your consistency checks to run very, very slowly. Happy CHECKDBing,

    -Argenis

    ps: I plan to file a Connect item for this issue – I consider it a pretty serious bug in the engine. After all, filtered indexes were invented BECAUSE of the sparse column feature – and it makes a lot of sense to use them together. Watch this space and my twitter timeline for a link.
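
    A hedged helper for that check (my sketch, not from the original post): list filtered non-clustered indexes keyed on a sparse column, using catalog views that exist in SQL Server 2008 and later.

        -- Find filtered indexes keyed on sparse columns (illustrative sketch)
        SELECT OBJECT_NAME(i.object_id) AS table_name,
               i.name                   AS index_name,
               c.name                   AS column_name
        FROM sys.indexes AS i
        JOIN sys.index_columns AS ic
             ON ic.object_id = i.object_id AND ic.index_id = i.index_id
        JOIN sys.columns AS c
             ON c.object_id = ic.object_id AND c.column_id = ic.column_id
        WHERE i.has_filter = 1          -- filtered index
          AND c.is_sparse  = 1          -- sparse key column
          AND ic.is_included_column = 0;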

    Read the article

  • Changing Ogre3D terrain lighting in real time

    - by lezebulon
    I'm looking at the Ogre3D library and I'm browsing through some examples/tutorials. My question is about terrain. There are a few examples showing how great the terrain system is, but I think that the global illumination and shadows of the terrain have to be pre-computed, which kinda makes it impossible to integrate this with a day/night cycle. Is there a way to change the terrain light sources in real time? If so, is it possible to do it and keep a decent FPS?

    Read the article

  • Data Virtualization: Federated and Hybrid

    - by Krishnamoorthy
    Data becomes useful when it can be leveraged at the right time. It is not only enterprise application stores that operate on data of large volume, velocity and variety; mobile and social computing need to operate on such data as well. Replicating and transferring large swaths of data is one challenge faced in the field of data integration. However, smaller chunks of data aggregated from a variety of sources present an even more interesting challenge in the industry. Over the past few decades, technology trends focused on best user experience, operating systems, high performance computing, high performance web sites, analysis of warehouse data, service oriented architecture, social computing, cloud computing, and big data. Operating on 'dark data' becomes mandatory in future technology trends, although no solution can make dark data useful in a single day. Useful data can be characterized by contextual, personalized and on-time delivery. In most cases, data from a single source may not complete the picture. Data has to be combined and computed from various sources, where data may be captured as hybrid data, meaning the combination of structured and unstructured data. Since related data is often found across disparate sources, effectively integrating these sources determines how useful this data ultimately becomes. Technology trends in 2013 are expected to focus on big data and private cloud. Consumers are not merely interested in where data is located or how data is retrieved and computed; consumers are interested in how quickly the data can be leveraged. In many cases, data virtualization is the right solution, and it is expected to play a foundational role for SOA, cloud integration, and big data. The Oracle Data Integration portfolio includes a data virtualization product called ODSI (Oracle Data Service Integrator). Unlike other data virtualization solutions, ODSI can perform both read and write operations on federated/hybrid data (RDBMS, web services, delimited files and XML). The ODSI engine is built on XQuery, hence an ODSI user can perform computations on data using either XQuery or SQL. Built-in data and query caching features reduce latency on repetitive calls. Rightly positioned, ODSI can result in a highly scalable model, reducing spend on additional hardware infrastructure.
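
    To make the XQuery point concrete, here is a generic federation sketch (plain XQuery over two hypothetical documents; it does not show ODSI's own data-service functions): combine customers from one source with orders from another and compute a per-customer total.

        (: Illustrative XQuery join across two sources :)
        for $c in doc("customers.xml")//customer
        let $orders := doc("orders.xml")//order[custid = $c/id]
        where exists($orders)
        return
          <customer name="{$c/name/text()}" total="{sum($orders/amount)}"/>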

    Read the article

  • Reverse-engineer SharePoint fields, content types and list instance—Part2

    - by ybbest
    Reverse-engineer SharePoint fields, content types and list instance—Part1
    Reverse-engineer SharePoint fields, content types and list instance—Part2

    In part 1 of this series, I demonstrated how to use VS2010 to reverse-engineer SharePoint fields, content types and list instances. In part 2 of this series, I will demonstrate how to do the same using CKS:Dev. CKS:Dev extends the Visual Studio 2010 SharePoint project system with advanced templates and tools. Using these extensions you will be able to find relevant information from your SharePoint environments without leaving Visual Studio. You will have greater productivity while developing SharePoint components and you will have greater deployment capabilities on your local SharePoint installation. You can download the complete solution here.

    1. First, download and install the appropriate CKS:Dev from CodePlex. If you are using SharePoint Foundation 2010, download and install the SharePoint Foundation 2010 version; if you are using SharePoint Server 2010, download and install the SharePoint Server 2010 version.
    2. After installation, restart Visual Studio and create an empty SharePoint project.
    3. Go to View → Server Explorer.
    4. Add a SharePoint web application connection to the Server Explorer.
    5. After adding the connection, you can browse the contents of the web application.
    6. Go to Site Columns → YBBEST (a custom group of your own choice), right-click the YBBEST folder and click Import Site Columns.
    7. Go to Content Types → YBBEST (a custom group of your own choice), right-click the YBBEST folder and click Import Content Types.
    8. After the import completes, you can find the fields and content types in the SharePoint project below. Of course you need to do some modification to your current project to make it work.
    9. Next, create list instances using the list instance item template in Visual Studio.
    10. Finally, create the lookup columns using feature receivers, as sketched below; the final project will look like this. You can download the complete solution here.
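
    For step 10, a minimal sketch of what such a feature receiver can look like (the list and column names here are hypothetical, not from the original solution):

        // Hypothetical feature receiver that adds a lookup column to one list,
        // pointing at another list on the same web.
        using Microsoft.SharePoint;

        public class LookupColumnReceiver : SPFeatureReceiver
        {
            public override void FeatureActivated(SPFeatureReceiverProperties properties)
            {
                SPWeb web = (SPWeb)properties.Feature.Parent;
                SPList targetList = web.Lists["Tasks"];        // list receiving the column
                SPList lookupList = web.Lists["Departments"];  // list being looked up

                string internalName =
                    targetList.Fields.AddLookup("Department", lookupList.ID, false);
                var lookup = (SPFieldLookup)targetList.Fields.GetFieldByInternalName(internalName);
                lookup.LookupField = "Title";                  // column displayed from the lookup list
                lookup.Update();
            }
        }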

    Read the article

  • SQL Server – Learning SQL Server Performance: Indexing Basics – Video

    - by pinaldave
    Today I remembered one of my older cartoons, created years ago for indexing and performance. Every single time performance is discussed, indexes are mentioned along with it. In recent times, data and application complexity is continuously growing. The demand from organizations for faster query response, performance, and scalability is increasing, and developers and DBAs now need to write efficient code to achieve this.

    DBA and Developers

    A DBA's role is critical, because a production environment has to run 24×7; hence maintenance, troubleshooting, and quick resolutions are the need of the hour. The first baby step in any performance tuning exercise in SQL Server involves creating, analysing, and maintaining indexes. Though we have learnt indexing concepts from our college days, indexing implementation inside SQL Server can vary. Understanding this behaviour and designing our applications appropriately will make sure the application performs to its highest potential.

    Video Learning

    Vinod Kumar and I often thought about this and realized that practical understanding of indexes is very important. One cannot master every single aspect of indexes; however, there is some minimum expertise one should gain if performance is a concern. We decided to build a course which addresses just the practical aspects of performance. In this course, we explored some of these indexing fundamentals and elaborated on how SQL Server goes about using indexes. At the end of this course you will know the basic structure of indexes, practical insights into implementation, and maintenance tips and tricks revolving around indexes. Finally, we introduce SQL Server 2012 columnstore indexes. We have refrained from discussing the internal storage structure of indexes and have instead taken a more practical, demo-oriented approach to explain these core concepts.

    Course Outline

    Here are the salient topics of the course. We have explained every single concept along with a practical demonstration, and additionally shared our personal scripts. A short sketch of two of these topics follows below.

    Introduction
    Fundamentals of Indexing: Index Fundamentals; Index Fundamentals – Visual Representation
    Practical Indexing Implementation Techniques: Primary Key; Over Indexing; Duplicate Index; Clustered Index; Unique Index; Included Columns; Filtered Index; Disabled Index; Index Maintenance and Defragmentation; Introduction to Columnstore Index
    Indexing Practical Performance Tips and Tricks: Index and Page Types; Index and Non-Deterministic Columns; Index and SET Values; Importance of Clustered Index; Effect of Compression and Fillfactor; Index and Functions; Dynamic Management Views (DMV) – Fillfactor; Table Scan, Index Scan and Index Seek; Index and Order of Columns
    Final Checklist: Index and Performance

    Well, we believe we have done our part; now waiting for your comments and feedback.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology, Video
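
    To illustrate two of the outline topics, included columns and filtered indexes, here is a minimal T-SQL sketch on a hypothetical Orders table (my example, not one of the course scripts):

        -- Included column: OrderDate is the key; Amount travels at the leaf level,
        -- letting the query below be answered from the index alone.
        CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
            ON dbo.Orders (OrderDate)
            INCLUDE (Amount);

        SELECT SUM(Amount)
        FROM dbo.Orders
        WHERE OrderDate >= '20120101' AND OrderDate < '20130101';

        -- Filtered index: only the small slice of open orders is indexed and maintained.
        CREATE NONCLUSTERED INDEX IX_Orders_Open
            ON dbo.Orders (CustomerID)
            WHERE OrderStatus = 'Open';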

    Read the article

  • Silverlight 3 Dynamic DataGrid RowStyle Ignored

    - by antoinne85
    I subclassed the standard DataGrid into SpecialDataGrid so I could override the KeyDown/KeyUp events. Other than that, SpecialDataGrid is exactly the same as DataGrid. At run time I dynamically create a bunch of these SpecialDataGrids. When a user clicks a row in the grid it highlights, which is fine, but when that grid loses focus, it leaves a residual gray highlight on the last-selected row, which is not fine. I've heavily edited the RowStyle and CellStyle I'm applying to these grids to more or less remove all formatting. I even added a static SpecialDataGrid to the app with test data so I could see if the RowStyle was somehow incorrect, applying the same RowStyle and CellStyle that I'm applying to the dynamically generated one (you'll see it in the code below). What I saw was that the "test grid" showed up exactly as I wanted, and the real grid is ignoring part of the RowStyle! Has anyone run into this issue or have any ideas of how to correct it? Some source and images follow.

    Creating the SpecialDataGrid:

        // Set up a datagrid.
        SpecialDataGrid radio_datagrid = new SpecialDataGrid();
        radio_datagrid.ItemsSource = radios;
        radio_datagrid.AutoGenerateColumns = false;
        radio_datagrid.HeadersVisibility = DataGridHeadersVisibility.None;
        radio_datagrid.BorderThickness = new Thickness(0);
        radio_datagrid.HorizontalAlignment = HorizontalAlignment.Stretch;
        radio_datagrid.IsReadOnly = true;
        radio_datagrid.MouseLeftButtonUp += new MouseButtonEventHandler(option_datagrid_MouseLeftButtonUp);
        radio_datagrid.KeyDown += new KeyEventHandler(radio_datagrid_KeyDown);
        radio_datagrid.KeyUp += new KeyEventHandler(radio_datagrid_KeyUp);

        // Radio column.
        DataGridTemplateColumn temp_col = new DataGridTemplateColumn();
        temp_col.CellTemplate = (DataTemplate)this.Resources["RadioColumnTemplate"];
        temp_col.Width = new DataGridLength(20);
        radio_datagrid.Columns.Add(temp_col);

        // Description column.
        DataGridTextColumn txt_col = new DataGridTextColumn();
        txt_col.Binding = new Binding("optionlabel");
        txt_col.Width = new DataGridLength(350);
        radio_datagrid.Columns.Add(txt_col);

        // Product code column.
        txt_col = new DataGridTextColumn();
        txt_col.Binding = new Binding("optioncode");
        txt_col.Width = new DataGridLength(80);
        radio_datagrid.Columns.Add(txt_col);

        // Price column.
        txt_col = new DataGridTextColumn();
        txt_col.Binding = new Binding("optionprice");
        txt_col.Width = new DataGridLength(80);
        radio_datagrid.Columns.Add(txt_col);

        // View column.
        temp_col = new DataGridTemplateColumn();
        temp_col.CellTemplate = (DataTemplate)this.Resources["HyperlinkButtonColumnTemplate"];
        temp_col.Width = new DataGridLength(30);
        radio_datagrid.Columns.Add(temp_col);

        radio_datagrid.RowStyle = (Style)this.Resources["StyleDataGridRowNoAlternating"];
        radio_datagrid.CellStyle = (Style)this.Resources["Style_DataGridCell_NoHighlight"];

    Example Image: The lower DataGrid appears that way regardless of what you do to it. No highlighting of any sort and certainly no residual highlights. Any idea what's keeping this from being applied to the first?

    Read the article

  • Index was out of range. Must be non-negative and less than the size of the collection. Parameter: Index

    - by user356973
    Dear Telerik Team,

    When I am trying to display data using RadGrid I am getting an error like "Index was out of range. Must be non-negative and less than the size of the collection. Parameter: Index". Here is my code:

        <telerik:RadGrid ID="gvCktMap" BorderColor="White" runat="server" AutoGenerateColumns="true"
            AllowSorting="true" BackColor="White" AllowPaging="true" PageSize="25" GridLines="None"
            OnPageIndexChanging="gvCktMap_PageIndexChanging" OnRowCancelingEdit="gvCktMap_RowCancelingEdit"
            OnRowCommand="gvCktMap_RowCommand" OnRowUpdating="gvCktMap_RowUpdating"
            OnRowDataBound="gvCktMap_RowDataBound" OnSorting="gvCktMap_Sorting"
            OnRowEditing="gvCktMap_RowEditing" ShowGroupPanel="True" EnableHeaderContextMenu="true"
            EnableHeaderContextFilterMenu="true" AllowMultiRowSelection="true"
            AllowFilteringByColumn="True" AllowCustomPaging="false" OnItemCreated="gvCktMap_ItemCreated"
            EnableViewState="false" OnNeedDataSource="gvCktMap_NeedDataSource"
            OnItemUpdated="gvCktMap_ItemUpdated">
            <MasterTableView DataKeyNames="sId" UseAllDataFields="true">
                <Columns>
                    <telerik:GridBoundColumn UniqueName="sId" HeaderText="sId" DataField="sId" Visible="false">
                    </telerik:GridBoundColumn>
                    <telerik:GridBoundColumn UniqueName="orderId" HeaderText="orderId" DataField="orderId">
                    </telerik:GridBoundColumn>
                    <telerik:GridBoundColumn UniqueName="REJ" HeaderText="REJ" DataField="REJ">
                    </telerik:GridBoundColumn>
                    <telerik:GridBoundColumn UniqueName="Desc" HeaderText="Desc" DataField="Desc">
                    </telerik:GridBoundColumn>
                    <telerik:GridBoundColumn UniqueName="CustomerName" HeaderText="CustomerName" DataField="CustomerName">
                    </telerik:GridBoundColumn>
                    <telerik:GridBoundColumn UniqueName="MarketName" HeaderText="MarketName" DataField="MarketName">
                    </telerik:GridBoundColumn>
                    <telerik:GridBoundColumn UniqueName="HeadendName" HeaderText="HeadendName" DataField="HeadendName">
                    </telerik:GridBoundColumn>
                    <telerik:GridBoundColumn UniqueName="SiteName" HeaderText="SiteName" DataField="SiteName">
                    </telerik:GridBoundColumn>
                    <telerik:GridBoundColumn UniqueName="TaskStatus" HeaderText="TaskStatus" DataField="TaskStatus">
                    </telerik:GridBoundColumn>
                </Columns>
            </MasterTableView>
        </telerik:RadGrid>

    Here is my code-behind:

        private void bingGrid()
        {
            try
            {
                gvCktMap.Columns.Clear();
                DataSet dsResult = new DataSet();
                DataSet dsEditItems = new DataSet();
                dsEditItems.ReadXml(Server.MapPath("XMLS/" + Session["TaskID"].ToString() + ".xml"));
                clsSearch_BL clsObj = new clsSearch_BL();
                clsObj.TaskID = (string)Session["TaskID"];
                clsObj.CustName = (string)Session["CustName"];
                clsObj.MarketName = (string)Session["MarketName"];
                clsObj.HeadendName = (string)Session["HeadendName"];
                clsObj.SiteName = (string)Session["SiteName"];
                clsObj.TaskStatus = (string)Session["TaskStatus"];
                clsObj.OrdType = (string)Session["OrdType"];
                clsObj.OrdStatus = (string)Session["OrdStatus"];
                clsObj.ProName = (string)Session["ProName"];
                clsObj.LOC = (string)Session["LOC"];
                dsResult = clsObj.getSearchResults_BL(clsObj);
                Session["SearchRes"] = dsResult;
                DataTable dtFilter = new DataTable();
                DataColumn dtCol = new DataColumn("FilterBy");
                dtFilter.Columns.Add(dtCol);
                dtCol = new DataColumn("DataType");
                dtFilter.Columns.Add(dtCol);
                gvCktMap.DataSource = dsResult;
                gvCktMap.DataBind();
            }
            catch (Exception ex)
            {
            }
        }

    If I remove <MasterTableView></MasterTableView>, it works fine without any error. But for some reasons I need to use <MasterTableView></MasterTableView>. Can anyone help me out to fix this error? Thanks in advance.

    Read the article

  • New Skool Crosstabbing

    - by Tim Dexter
    A while back I spoke about having to go back to BIP's original crosstabbing solution to achieve a certain layout. Hok Min has provided a 'man' page for the new crosstab/pivot builder for 10.1.3.4.1 users. This will make the documentation drop, but for now, get it here! The old hand method is still available, but this new approach is more efficient and flexible. That said, you may need to get into the crosstab code to tweak it where the crosstab dialog cannot help. I had to do this this week, but more on that later. The following explains how the crosstab wizard builds the crosstab and what the fields inside the resulting template structure are there for.

    To create the crosstab, a new XDO command "<?crosstab:...?>" has been created.

    XDO Command: <?crosstab: ctvarname; data-element; rows; columns; measures; aggregation?>

    Parameters (with examples):

    - Ctvarname: Crosstab variable name, automatically generated by the Add-in. Example: C123
    - data-element: The XML data element that contains the data. Example: "//ROW"
    - Rows: A list of XML elements for row headers. Ordering information is specified within "{" and "}". The first attribute is the sort element; leaving it blank means the sort element is the same as the row header element. The attribute "o" means order: "a" for ascending or "d" for descending. The attribute "t" means type: "t" for text, "n" for numeric. There can be more than one sort element, e.g. "emp-full-name {emp-lastname,o=a,t=n}{emp-firstname,o=a,t=n}", which sorts employees by last name and then first name. Example: "Region{,o=a,t=t}, District{,o=a,t=t}" – the first row header is "Region", sorted by "Region", ascending, text; the second row header is "District", sorted by "District", ascending, text.
    - Columns: A list of XML elements for column headers, using the same ordering syntax as Rows. Example: "ProductsBrand{,o=a,t=t}, PeriodYear{,o=a,t=t}" – the first column header is "ProductsBrand", sorted by "ProductsBrand", ascending, text; the second column header is "PeriodYear", sorted by "PeriodYear", ascending, text.
    - Measures: A list of XML elements for measures. Example: "Revenue, PrevRevenue"
    - Aggregation: The aggregation function name. Currently only "sum" is supported.

    Using the Oracle BI Publisher Template Builder for Word add-in, we are able to construct the following pivot table. The generated XDO command for this pivot table is as follows:

        <?crosstab:c547; "//ROW";"Region{,o=a,t=t}, District{,o=a,t=t}"; "ProductsBrand{,o=a,t=t},PeriodYear{,o=a,t=t}"; "Revenue, PrevRevenue";"sum"?>

    Running the command on the given XML data file generates the XML file "cttree.xml". Each XPath in "cttree.xml" is described below (Element – XPath – Count – Description):

    - C0 – /cttree/C0 – 1 – Contains elements related to columns.
    - C1 – /cttree/C0/C1 – 4 – The first-level column "ProductsBrand". There are four distinct values; they are shown in the label H element.
    - CS – /cttree/C0/C1/CS – 4 – The column-span value, used to format the crosstab table.
    - H – /cttree/C0/C1/H – 4 – The column header label; four distinct values: "Enterprise", "Magicolor", "McCloskey" and "Valspar".
    - T1 – /cttree/C0/C1/T1 – 4 – The sum for measure 1, which is Revenue.
    - T2 – /cttree/C0/C1/T2 – 4 – The sum for measure 2, which is PrevRevenue.
    - C2 – /cttree/C0/C1/C2 – 8 – The second-level column "PeriodYear", the second group-by key. Two distinct values, "2001" and "2002".
    - H – /cttree/C0/C1/C2/H – 8 – The column header label; two distinct values "2001" and "2002". Since it is under C1, the total number of entries is 4 x 2 => 8.
    - T1 – /cttree/C0/C1/C2/T1 – 8 – The sum for measure 1 "Revenue".
    - T2 – /cttree/C0/C1/C2/T2 – 8 – The sum for measure 2 "PrevRevenue".
    - M0 – /cttree/M0 – 1 – Contains elements related to measures.
    - M1 – /cttree/M0/M1 – 1 – Summary for measure 1.
    - H – /cttree/M0/M1/H – 1 – The measure 1 label, "Revenue".
    - T – /cttree/M0/M1/T – 1 – The sum of measure 1 for the entire XPath from "//ROW".
    - M2 – /cttree/M0/M2 – 1 – Summary for measure 2.
    - H – /cttree/M0/M2/H – 1 – The measure 2 label, "PrevRevenue".
    - T – /cttree/M0/M2/T – 1 – The sum of measure 2 for the entire XPath from "//ROW".
    - R0 – /cttree/R0 – 1 – Contains elements related to rows.
    - R1 – /cttree/R0/R1 – 4 – The first-level row "Region". Four distinct values, shown in the label H element.
    - H – /cttree/R0/R1/H – 4 – The row header label for "Region"; four distinct values: "CENTRAL REGION", "EASTERN REGION", "SOUTHERN REGION" and "WESTERN REGION".
    - RS – /cttree/R0/R1/RS – 4 – The row-span value, used to format the crosstab table.
    - T1 – /cttree/R0/R1/T1 – 4 – The sum of measure 1 "Revenue" for each distinct "Region" value.
    - T2 – /cttree/R0/R1/T2 – 4 – The sum of measure 2 "PrevRevenue" for each distinct "Region" value.
    - R1C1 – /cttree/R0/R1/R1C1 – 16 – Elements from combining R1 and C1: 4 distinct "Region" values x 4 distinct "ProductsBrand" values => 16.
    - T1 – /cttree/R0/R1/R1C1/T1 – 16 – The sum of measure 1 "Revenue" for each combination of "Region" and "ProductsBrand".
    - T2 – /cttree/R0/R1/R1C1/T2 – 16 – The sum of measure 2 "PrevRevenue" for each combination of "Region" and "ProductsBrand".
    - R1C2 – /cttree/R0/R1/R1C1/R1C2 – 32 – Elements from combining R1, C1 and C2: 4 x 4 x 2 => 32.
    - T1 – /cttree/R0/R1/R1C1/R1C2/T1 – 32 – The sum of measure 1 "Revenue" for each combination of "Region", "ProductsBrand" and "PeriodYear".
    - T2 – /cttree/R0/R1/R1C1/R1C2/T2 – 32 – The sum of measure 2 "PrevRevenue" for each combination of "Region", "ProductsBrand" and "PeriodYear".
    - R2 – /cttree/R0/R1/R2 – 18 – Elements from combining R1 "Region" and R2 "District". Since the list of values in R2 depends on R1, the number of entries is not a simple multiplication.
    - H – /cttree/R0/R1/R2/H – 18 – The row header label for R2 "District".
    - R1N – /cttree/R0/R1/R2/R1N – 18 – The R2 position number within R1, used to check whether it is the last row and draw the table border accordingly.
    - T1 – /cttree/R0/R1/R2/T1 – 18 – The sum of measure 1 "Revenue" for each combination of "Region" and "District".
    - T2 – /cttree/R0/R1/R2/T2 – 18 – The sum of measure 2 "PrevRevenue" for each combination of "Region" and "District".
    - R2C1 – /cttree/R0/R1/R2/R2C1 – 72 – Elements from combining R1, R2 and C1.
    - T1 – /cttree/R0/R1/R2/R2C1/T1 – 72 – The sum of measure 1 "Revenue" for each combination of "Region", "District" and "ProductsBrand".
    - T2 – /cttree/R0/R1/R2/R2C1/T2 – 72 – The sum of measure 2 "PrevRevenue" for each combination of "Region", "District" and "ProductsBrand".
    - R2C2 – /cttree/R0/R1/R2/R2C1/R2C2 – 144 – Elements from combining R1, R2, C1 and C2, giving the finest level of detail.
    - M1 – /cttree/R0/R1/R2/R2C1/R2C2/M1 – 144 – The sum of measure 1 "Revenue".
    - M2 – /cttree/R0/R1/R2/R2C1/R2C2/M2 – 144 – The sum of measure 2 "PrevRevenue".

    Lots to read and digest, I know!

    Customization

    One new feature I discovered this week is the ability to show one column and sort by another. I had a data set that was extracting month abbreviations; we wanted to show the months across the top and some row headers to the side. As you may know, XSL is not great with dates, especially recognising month names. It just wants to sort them alphabetically, so Apr comes before Jan, etc. A way around this is to generate a month number alongside the month and use that to sort. We can do that in the crosstab; sadly it's not exposed in the UI yet, but it's doable.

    Go back up and take a look at the initial crosstab command, especially the Rows and Columns entries. In there you will find the sort criteria:

        "ProductsBrand{,o=a,t=t}, PeriodYear{,o=a,t=t}"

    Notice those leading commas inside the curly braces? Because there is no field preceding them, it means the crosstab should sort on the column before the brace, i.e. PeriodYear. But you can insert another column in the data set to sort by. To get my sort working how I needed:

        <?crosstab:c794;"current-group()";"_Fund_Type_._Fund_Type_Display_{_Fund_Type_._Fund_Type_Sort_,o=a,t=n}";"_Fiscal_Period__Amount__._Amt_Fm_Disp_Abbr_{_Fiscal_Period__Amount__._Amt_Fiscal_Month_Sort_,o=a,t=n}";"_Execution_Facts_._Amt_";"sum"?>

    Excuse the horribly verbose XML tags, good ol' BIEE :0) The emboldened columns are not in the crosstab but are in the data set. I just opened up the field, dropped them in and changed the type (t) value to 'n', for number, instead of the default 't', and my crosstab started sorting how I wanted it. If you find other tips and tricks, please share in the comments.

    Read the article

  • OWB 11gR2 – Degenerate Dimensions

    - by David Allan
    Ever wondered how to build degenerate dimensions in OWB and get the benefits of slowly changing dimensions and cube loading? It's now possible through some changes in 11gR2 that make dimension and cube loading much more flexible. This lets you get the benefits of OWB's surrogate-key handling and slowly changing dimension references when loading a fact table that needs degenerate dimensions (see Ralph Kimball's degenerate dimensions design tip). Here we will see how to use the cube operator to load slowly changing, regular and degenerate dimensions. The cube and cube operator can now work with dimensions which have no surrogate key as well as dimensions with surrogates, so you can get the benefit of the cube loading and incorporate the degenerate dimension loading.

    What you need to do is create a dimension in OWB that is purely used for ETL metadata. The dimension itself is never deployed (its table is, but holds no data); it has no surrogate keys; and it has a single level with a business attribute (the degenerate dimension data) and a dummy attribute, say description, just to pass the OWB validation. When this degenerate dimension is added into a cube, you will need to configure the fact table created and set the 'Deployable' flag to FALSE for the foreign key generated to the degenerate dimension table. The degenerate dimension reference will then be in the cube operator and used when matching.

    Create the degenerate dimension using the regular wizard. Delete the Surrogate ID attribute; this is not needed. Define a level name for the dimension member (any name). After the wizard has completed, in the editor delete the hierarchy STANDARD that was automatically generated; there is only a single level, so no hierarchy is needed and it shouldn't really be created. Deploy the implementing table DD_ORDERNUMBER_TAB; this needs to be deployed but with no data (the mapping here will do a left outer join of the source data with the empty degenerate dimension table).

    Now, go ahead and build your cube: use the regular TIMES dimension, for example, and your degenerate dimension DD_ORDERNUMBER; you can add in SCD dimensions etc. Configure the fact table created and set Deployable to false, so the foreign key does not get generated. You can now use the cube in a mapping and load data into the fact table via the cube operator; this will look after surrogate lookups and slowly changing dimension references.

    If you generate the SQL you will see the ON clause for matching includes the columns representing the degenerate dimension columns (a rough sketch follows below). Here we have seen how this use case for loading fact tables using degenerate dimensions becomes a whole lot simpler in OWB 11gR2. I'm sure there are other use cases where this mix of dimensions with surrogate and regular identifiers is useful; fact tables partitioned by date columns are another classic example that this will greatly help and make the cube operator much more useful. Good to hear any comments.
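
    Roughly the shape that generated SQL takes (an illustrative sketch with hypothetical names, not actual OWB output): the degenerate dimension column joins on the business value itself, while regular dimensions join via the looked-up surrogate key.

        -- Hypothetical shape of the generated fact load
        MERGE INTO sales_fact f
        USING (SELECT s.order_number,                 -- degenerate dimension value
                      t.dimension_key AS times_key,   -- surrogate resolved by the dimension lookup
                      s.amount
               FROM   src_sales s
               LEFT OUTER JOIN times t
                      ON t.calendar_date = s.sale_date) src
        ON (    f.order_number = src.order_number     -- matched on the business value itself
            AND f.times_key    = src.times_key)
        WHEN MATCHED THEN
          UPDATE SET f.amount = src.amount
        WHEN NOT MATCHED THEN
          INSERT (order_number, times_key, amount)
          VALUES (src.order_number, src.times_key, src.amount);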

    Read the article

  • How to structure classes in the filesystem?

    - by da_b0uncer
    I have a few (view) classes: Table, Tree, PagingColumn, SelectionColumn, SparkLineColumn, TimeColumn. Currently they're flat under app/view like this:

        app/view/Table
        app/view/Tree
        app/view/PagingColumn
        ...

    I thought about restructuring it, because the Trees and Tables use the columns, but there are some columns which only work in a tree, some which work in trees and tables, and in the future there will probably be some which only work in tables; I don't know. My first idea was like this:

        app/view/Table
        app/view/Tree
        app/view/column/PagingColumn
        app/view/column/SelectionColumn
        app/view/column/SparkLineColumn
        app/view/column/TimeColumn

    But since the SelectionColumn is explicitly for trees, I fear that future developers could get the idea of misusing them. But how to restructure it properly? Like this:

        app/view/table/panel/Table
        app/view/tree/panel/Tree
        app/view/tree/column/PagingColumn
        app/view/tree/column/SelectionColumn
        app/view/column/SparkLineColumn
        app/view/column/TimeColumn

    Or like this:

        app/view/Table
        app/view/Tree
        app/view/column/SparkLineColumn
        app/view/column/TimeColumn
        app/view/column/tree/PagingColumn
        app/view/column/tree/SelectionColumn

    Read the article

  • SQL SERVER – DMV – sys.dm_os_waiting_tasks and sys.dm_exec_requests – Wait Type – Day 4 of 28

    - by pinaldave
    Previously, we covered the DMV sys.dm_os_wait_stats, and also saw how it can be useful to identify the major resource bottleneck. However, at the same time, we discussed that this is only useful when we are looking at an instance-level picture. Quite often we want to know about the processes going on in our server at a given instant. Here is the query for that. This query is written taking the following into consideration: we want to analyze the queries that are currently running or which have recently run and whose plan is still in the cache.

        SELECT dm_ws.wait_duration_ms,
               dm_ws.wait_type,
               dm_es.status,
               dm_t.TEXT,
               dm_qp.query_plan,
               dm_ws.session_ID,
               dm_es.cpu_time,
               dm_es.memory_usage,
               dm_es.logical_reads,
               dm_es.total_elapsed_time,
               dm_es.program_name,
               DB_NAME(dm_r.database_id) DatabaseName,
               -- Optional columns
               dm_ws.blocking_session_id,
               dm_r.wait_resource,
               dm_es.login_name,
               dm_r.command,
               dm_r.last_wait_type
        FROM sys.dm_os_waiting_tasks dm_ws
        INNER JOIN sys.dm_exec_requests dm_r ON dm_ws.session_id = dm_r.session_id
        INNER JOIN sys.dm_exec_sessions dm_es ON dm_es.session_id = dm_r.session_id
        CROSS APPLY sys.dm_exec_sql_text (dm_r.sql_handle) dm_t
        CROSS APPLY sys.dm_exec_query_plan (dm_r.plan_handle) dm_qp
        WHERE dm_es.is_user_process = 1
        GO

    You can change CROSS APPLY to OUTER APPLY if you want to see all the details which are otherwise omitted because of the plan cache. Let us analyze the result of the above query and see how it can be helpful in identifying the query and the kind of wait type it creates.

    The above query will return various columns, several of which provide very important details, e.g.:

    wait_duration_ms – the current wait for the query executing at that point in time
    wait_type – the current wait type for the query
    text – the query text
    query_plan – when clicked, displays the query plan

    There is much other important information, like cpu_time, memory_usage, and logical_reads, which can be read from the query as well. In future posts in this series, we will see how, once we have identified the wait type, we can attempt to reduce it. Read all the posts in the Wait Types and Queue series.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: DMV, Pinal Dave, PostADay, SQL, SQL Authority, SQL DMV, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • Excel Macro Runtime error 428 in Excel 2003

    - by Adam
    Hi, I have created an .xlt Excel template which works fine in Excel 2007 under compatibility mode and shows no errors on the compatibility check. The template runs a number of macros which create pivot tables and charts. When a colleague tries to run the same .xlt in Excel 2003, they get a runtime error 428 (Object does not support this property or method). The macro fails at this point:

        ActiveWorkbook.PivotCaches.Create(SourceType:=xlDatabase, SourceData:= _
            "raw!R1C1:R65536C37", Version:=xlPivotTableVersion10).CreatePivotTable _
            TableDestination:="Frontpage!R7C1", TableName:="PivotTable2", _
            DefaultVersion:=xlPivotTableVersion10

    Any help would be appreciated. This is the full macro (a 2003-compatible sketch of the failing calls follows it):

        Sub Auto_Open()
        '
        ' ImportData Macro
        ' Macro to import data. Data must be on your local D: drive and named raw.csv
        '
            Sheets("raw").Select
            With ActiveSheet.QueryTables.Add(Connection:= _
                "TEXT;d:\raw.csv", Destination:=Range("$A$1"))
                .Name = "raw_1"
                .FieldNames = True
                .RowNumbers = False
                .FillAdjacentFormulas = False
                .PreserveFormatting = True
                .RefreshOnFileOpen = False
                .RefreshStyle = xlInsertDeleteCells
                .SavePassword = False
                .SaveData = True
                .AdjustColumnWidth = True
                .RefreshPeriod = 0
                .TextFilePromptOnRefresh = False
                .TextFilePlatform = 850
                .TextFileStartRow = 1
                .TextFileParseType = xlDelimited
                .TextFileTextQualifier = xlTextQualifierDoubleQuote
                .TextFileConsecutiveDelimiter = False
                .TextFileTabDelimiter = False
                .TextFileSemicolonDelimiter = False
                .TextFileCommaDelimiter = True
                .TextFileSpaceDelimiter = False
                .TextFileColumnDataTypes = Array(1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, _
                    1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1)
                .TextFileTrailingMinusNumbers = True
                .Refresh BackgroundQuery:=False
            End With
        '
        ' AddMonthColumn Macro
        '
            Sheets("raw").Select
            Range("AK1").Select
            ActiveCell.FormulaR1C1 = "Month"
            Range("AK2").FormulaR1C1 = "=DATE(YEAR(RC[-36]),MONTH(RC[-36]),1)"
            lastRow = ActiveSheet.UsedRange.Rows.Count
            Range("AK2").AutoFill Destination:=Range("AK2:AK" & lastRow)
            Columns("AK:AK").EntireColumn.AutoFit
            Columns("AK:AK").Select
            Selection.NumberFormat = "mmmm"
            With Selection
                .HorizontalAlignment = xlCenter
            End With
            Columns("AK:AK").EntireColumn.AutoFit
            Selection.Copy
            Selection.PasteSpecial Paste:=xlPasteValues, Operation:=xlNone, SkipBlanks _
                :=False, Transpose:=False
        '
        ' Add Report Information [Text]
        '
            Sheets("Frontpage").Select
            Range("A2:N2").Select
            Selection.Merge
            ActiveCell.FormulaR1C1 = "Service Activity Report"
            With Selection.Font
                .Size = 20
            End With
            Range("A3:N3").Select
            Selection.Merge
            ActiveCell.FormulaR1C1 = InputBox("Customer Name")
            With Selection
                .HorizontalAlignment = xlCenter
                .VerticalAlignment = xlCenter
            End With
            Range("A4:N4").Select
            Selection.Merge
            ActiveCell.FormulaR1C1 = InputBox("Date Range dd/mm/yyyy - dd/mm/yyyy")
            With Selection
                .HorizontalAlignment = xlCenter
                .VerticalAlignment = xlCenter
            End With
        '
        ' IncidentsbyPriority Macro
        '
            Sheets("Frontpage").Select
            Range("A7").Select
            ActiveWorkbook.PivotCaches.Create(SourceType:=xlDatabase, SourceData:= _
                "raw!R1C1:R65536C37", Version:=xlPivotTableVersion10).CreatePivotTable _
                TableDestination:="Frontpage!R7C1", TableName:="PivotTable2", _
                DefaultVersion:=xlPivotTableVersion10
            Sheets("Frontpage").Select
            Cells(7, 1).Select
            ActiveSheet.Shapes.AddChart.Select
            ActiveChart.SetSourceData Source:=Range("Frontpage!$A$7:$H$22")
            ActiveChart.ChartType = xlColumnClustered
            With ActiveSheet.PivotTables("PivotTable2").PivotFields("Priority")
                .Orientation = xlRowField
                .Position = 1
            End With
            ActiveSheet.PivotTables("PivotTable2").AddDataField ActiveSheet.PivotTables( _
                "PivotTable2").PivotFields("Case ID"), "Count of Case ID", xlCount
            ActiveChart.Parent.Name = "IncidentsbyPriority"
            ActiveChart.ChartTitle.Text = "Incidents by Priority"
            Dim RngToCover As Range
            Dim ChtOb As ChartObject
            Set RngToCover = ActiveSheet.Range("D7:L16")
            Set ChtOb = ActiveSheet.ChartObjects("IncidentsbyPriority")
            ChtOb.Height = RngToCover.Height    ' resize
            ChtOb.Width = RngToCover.Width      ' resize
            ChtOb.Top = RngToCover.Top          ' reposition
            ChtOb.Left = RngToCover.Left        ' reposition
        '
        ' IncidentbyMonth Macro
        '
            Sheets("Frontpage").Select
            ActiveWorkbook.PivotCaches.Create(SourceType:=xlDatabase, SourceData:= _
                "raw!R1C1:R65536C37", Version:=xlPivotTableVersion10).CreatePivotTable _
                TableDestination:="Frontpage!R18C1", TableName:="PivotTable4", _
                DefaultVersion:=xlPivotTableVersion10
            Sheets("Frontpage").Select
            Cells(18, 1).Select
            ActiveSheet.Shapes.AddChart.Select
            ActiveChart.SetSourceData Source:=Range("Frontpage!$A$18:$H$38")
            ActiveChart.ChartType = xlColumnClustered
            With ActiveSheet.PivotTables("PivotTable4").PivotFields("Month")
                .Orientation = xlRowField
                .Position = 1
            End With
            ActiveSheet.PivotTables("PivotTable4").AddDataField ActiveSheet.PivotTables( _
                "PivotTable4").PivotFields("Case ID"), "Count of Case ID", xlCount
            ActiveChart.Parent.Name = "IncidentbyMonth"
            ActiveChart.ChartTitle.Text = "Incidents by Month"
            Dim RngToCover2 As Range
            Dim ChtOb2 As ChartObject
            Set RngToCover2 = ActiveSheet.Range("D18:L30")
            Set ChtOb2 = ActiveSheet.ChartObjects("IncidentbyMonth")
            ChtOb2.Height = RngToCover2.Height  ' resize
            ChtOb2.Width = RngToCover2.Width    ' resize
            ChtOb2.Top = RngToCover2.Top        ' reposition
            ChtOb2.Left = RngToCover2.Left      ' reposition
        '
        ' IncidentbyCategory Macro
        '
            Sheets("Frontpage").Select
            ActiveWorkbook.PivotCaches.Create(SourceType:=xlDatabase, SourceData:= _
                "raw!R1C1:R65536C37", Version:=xlPivotTableVersion10).CreatePivotTable _
                TableDestination:="Frontpage!R38C1", TableName:="PivotTable6", _
                DefaultVersion:=xlPivotTableVersion10
            Sheets("Frontpage").Select
            Cells(38, 1).Select
            ActiveSheet.Shapes.AddChart.Select
            ActiveChart.SetSourceData Source:=Range("Frontpage!$A$38:$H$119")
            ActiveChart.ChartType = xlColumnClustered
            With ActiveSheet.PivotTables("PivotTable6").PivotFields("Category 2")
                .Orientation = xlRowField
                .Position = 1
            End With
            With ActiveSheet.PivotTables("PivotTable6").PivotFields("Category 3")
                .Orientation = xlPageField
                .Position = 1
            End With
            ActiveSheet.PivotTables("PivotTable6").AddDataField ActiveSheet.PivotTables( _
                "PivotTable6").PivotFields("Case ID"), "Count of Case ID", xlCount
            ActiveChart.Parent.Name = "IncidentbyCategory"
            ActiveChart.ChartTitle.Text = "Incidents by Category"
            Dim RngToCover3 As Range
            Dim ChtOb3 As ChartObject
            Set RngToCover3 = ActiveSheet.Range("D38:L56")
            Set ChtOb3 = ActiveSheet.ChartObjects("IncidentbyCategory")
            ChtOb3.Height = RngToCover3.Height  ' resize
            ChtOb3.Width = RngToCover3.Width    ' resize
            ChtOb3.Top = RngToCover3.Top        ' reposition
            ChtOb3.Left = RngToCover3.Left      ' reposition
        '
        ' IncidentsbySiteandPriority Macro
        '
            Sheets("Frontpage").Select
            Range("A71").Select
            ActiveWorkbook.PivotCaches.Create(SourceType:=xlDatabase, SourceData:= _
                "raw!R1C1:R65536C37", Version:=xlPivotTableVersion10).CreatePivotTable _
                TableDestination:="Frontpage!R71C1", TableName:="PivotTable3", _
                DefaultVersion:=xlPivotTableVersion10
            Sheets("Frontpage").Select
            Cells(71, 1).Select
            ActiveSheet.Shapes.AddChart.Select
            ActiveChart.SetSourceData Source:=Range("Frontpage!$A$71:$H$90")
            ActiveChart.ChartType = xlColumnClustered
            With ActiveSheet.PivotTables("PivotTable3").PivotFields("Site Name")
                .Orientation = xlRowField
                .Position = 1
            End With
            With ActiveSheet.PivotTables("PivotTable3").PivotFields("Priority")
                .Orientation = xlColumnField
                .Position = 1
            End With
            ActiveSheet.PivotTables("PivotTable3").AddDataField ActiveSheet.PivotTables( _
                "PivotTable3").PivotFields("Case ID"), "Count of Case ID", xlCount
            ActiveChart.Parent.Name = "IncidentbySiteandPriority"
            ' ActiveChart.ChartTitle.Text = "Incidents by Site and Priority"
            Dim RngToCover4 As Range
            Dim ChtOb4 As ChartObject
            Set RngToCover4 = ActiveSheet.Range("H71:O91")
            Set ChtOb4 = ActiveSheet.ChartObjects("IncidentbySiteandPriority")
            ChtOb4.Height = RngToCover4.Height  ' resize
            ChtOb4.Width = RngToCover4.Width    ' resize
            ChtOb4.Top = RngToCover4.Top        ' reposition
            ChtOb4.Left = RngToCover4.Left      ' reposition
            Columns("A:G").Select
            Range("A52").Activate
            Columns("A:G").EntireColumn.AutoFit
        End Sub
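
    A likely cause worth checking (my reading of the error, not confirmed in the thread): PivotCaches.Create and Shapes.AddChart were introduced in Excel 2007, so Excel 2003 raises error 428 because they don't exist in its object model. A hedged sketch of the 2003-era equivalents:

        ' Sketch only -- assumes Excel 2003's object model.
        ' PivotCaches.Add takes no Version argument in 2003.
        Dim pc As PivotCache
        Set pc = ActiveWorkbook.PivotCaches.Add( _
            SourceType:=xlDatabase, _
            SourceData:="raw!R1C1:R65536C37")
        pc.CreatePivotTable _
            TableDestination:="Frontpage!R7C1", _
            TableName:="PivotTable2"

        Charts.Add                          ' 2003 way to create a chart
        ActiveChart.SetSourceData Source:=Sheets("Frontpage").Range("A7:H22")
        ActiveChart.Location Where:=xlLocationAsObject, Name:="Frontpage"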

    Read the article

  • Getting started with Oracle Database In-Memory Part III - Querying The IM Column Store

    - by Maria Colgan
    In my previous blog posts, I described how to install, enable, and populate the In-Memory column store (IM column store). This week's post focuses on how data is accessed within the IM column store. Let's take a simple query: "What is the most expensive air-mail order we have received to date?"

        SELECT Max(lo_ordtotalprice) most_expensive_order
        FROM   lineorder
        WHERE  lo_shipmode = 5;

    The LINEORDER table has been populated into the IM column store, and since we have no alternative access paths (indexes or views), the execution plan for this query is a full table scan of the LINEORDER table. You will notice that the execution plan has a new set of keywords, "IN MEMORY", in the access-method description in the Operation column. These keywords indicate that the LINEORDER table has been marked INMEMORY and we may use the IM column store in this query. What do I mean by "may use"? There are a small number of cases where we won't use the IM column store even though the object has been marked INMEMORY. This is similar to how the keyword STORAGE is used in Exadata environments. You can confirm that the IM column store was actually used by examining the session-level statistics, but more on that later. For now let's focus on how the data is accessed in the IM column store and why, for analytical queries, it's faster to access the data in the new column format rather than the buffer cache. There are four main reasons why accessing the data in the IM column store is more efficient.

    1. Access only the column data needed

    The IM column store only has to scan two columns – lo_shipmode and lo_ordtotalprice – to execute this query, while the traditional row store or buffer cache has to scan all of the columns in each row of the LINEORDER table until it reaches both the lo_shipmode and the lo_ordtotalprice columns.

    2. Scan and filter data in its compressed format

    When data is populated into the IM column store it is automatically compressed using a new set of compression algorithms that allow WHERE clause predicates to be applied against the compressed formats. This means the volume of data scanned in the IM column store for our query will be far less than for the same query in the buffer cache, where the data is scanned in its uncompressed form, which could be 20X larger.

    3. Prune out any unnecessary data within each column

    The fastest read you can execute is the read you don't do. In the IM column store a further reduction in the amount of data accessed is possible due to the In-Memory Storage Indexes (IM storage indexes) that are automatically created and maintained on each of the columns in the IM column store. IM storage indexes allow data pruning to occur based on the filter predicates supplied in a SQL statement. An IM storage index keeps track of minimum and maximum values for each column in each In-Memory Compression Unit (IMCU). In our query the WHERE clause predicate is on the lo_shipmode column. The IM storage index on the lo_shipmode column is examined to determine if our specified column value 5 exists in any IMCU by comparing the value 5 to the minimum and maximum values maintained in the storage index. If the value 5 is outside the minimum and maximum range for an IMCU, the scan of that IMCU is avoided. For the IMCUs where the value 5 does fall within the min/max range, an additional level of data pruning is possible via the metadata dictionary created when dictionary-based compression is used on an IMCU. The dictionary contains a list of the unique column values within the IMCU.
    Since we have an equality predicate, we can easily determine whether 5 is one of the distinct column values or not. The combination of the IM storage index and dictionary-based pruning enables us to scan only the necessary IMCUs.

    4. Use SIMD to apply filter predicates

    For the IMCUs that need to be scanned, Oracle takes advantage of SIMD vector processing (Single Instruction processing Multiple Data values). Instead of evaluating each entry in the column one at a time, SIMD vector processing allows a set of column values to be evaluated together in a single CPU instruction. The column format used in the IM column store has been specifically designed to maximize the number of column entries that can be loaded into the vector registers on the CPU and evaluated in a single CPU instruction. SIMD vector processing enables Oracle Database In-Memory to scan billions of rows per second per core, versus the millions of rows per second per core scan rate that can be achieved in the buffer cache.

    I mentioned earlier in this post that in order to confirm the IM column store was used, we need to examine the session-level statistics. You can monitor the session-level statistics by querying the performance views v$mystat and v$statname. All of the statistics related to the In-Memory column store begin with IM. You can see the full list of these statistics by typing:

        column display_name format a30
        SELECT display_name
        FROM   v$statname
        WHERE  display_name LIKE 'IM%';

    If we check the session statistics after we execute our query, the results would be as follows:

        SELECT Max(lo_ordtotalprice) most_expensive_order
        FROM   lineorder
        WHERE  lo_shipmode = 5;

        SELECT display_name
        FROM   v$statname
        WHERE  display_name IN ('IM scan CUs columns accessed',
                                'IM scan segments minmax eligible',
                                'IM scan CUs pruned');

    As you can see, only 2 IMCUs were accessed during the scan, as the majority of the IMCUs (44) in the LINEORDER table were pruned out thanks to the storage index on the lo_shipmode column.

    In next week's post I will describe how you can control which queries use the IM column store and which don't.

    +Maria Colgan

    Read the article

  • MySQL for Excel 1.1.0 GA has been released

    - by Javier Treviño
The MySQL Windows Experience Team is proud to announce the release of MySQL for Excel version 1.1.0 GA, one of our newest products contained in the MySQL Installer suite. You can download it from our official Downloads page at http://dev.mysql.com/downloads/installer/. The 1.1.0 release of MySQL for Excel introduces the following feature: Edit MySQL Data. This may be the coolest feature so far; users will be able to edit the data in a MySQL table using MS Excel in a very friendly and intuitive way. Edit Data supports inserting new rows, deleting existing rows and updating existing data as easily as playing with data in an Excel spreadsheet and pushing the changes back to the server. This version also contains the following bug fixes:

- Enabled the following checkboxes in the Append Data's Advanced Options dialog and added code in the Append Data dialog to use the checkboxes as follows: "Automatically store the column mapping for the given table" – if checked, the current mapping will be stored automatically after clicking the Append button if the append operation is successful and there is no mapping for the current connection.schema.table already; the new mapping is stored with a proposed name of Mapping. "Reload stored column mapping for the selected table automatically" – if checked, the first stored mapping found where all column names in the source grid match all column names in the target grid is automatically selected and applied when the Append Data dialog is loaded.
- Fixed code in Append Data that applies a stored column mapping to skip target columns where the associated mapping is empty (saved as a -1).
- Enclosed the Add-In's startup code in a try-catch block in order to log any possible error thrown during startup, and added information messages to the log at the beginning of the Add-In's startup code and at the end of the shutdown code. Also changed the wrapper method that calls the MySQLUtility to write messages to the log to make logging easier, and changed the log call throughout all the code that contains a try-catch block.
- Added code to the main wix configuration file to check if a newer version is already installed and, if so, abort the installation.
- Fixed code to refresh the Import Procedure Form's preview grid's data source to repaint its contents every time the Call button is pressed.
- Added code to re-pull connections after connections are migrated from Excel to Workbench.
- Fixed code so that when the Append Data's Automatic Mapping is performed, any subsequent change to a mapping resets the mapping to a Manual Mapping.
- Added code to the InfoDialog class to set the button text to "Show Details" or "Hide Details" depending on the status of the Details text container.
- Fixed a GUID in the main wix configuration file so now previous versions are uninstalled during a new installation.
- Added an option to the Export Data's Advanced Options dialog to remove columns with no data; by default the Export Dialog will only flag those columns as Excluded.
- Added code to display a warning and paint a column red if the column name in the Export Data dialog is not set, display a warning if the table name is not set, and stack warnings but not display them if a column is Excluded; warnings are displayed normally for columns once they are no longer Excluded.
- Added code to prevent the Append and Export of Data if more than 1 selection is made (selecting more than 1 area by holding the Ctrl key while selecting Excel cells).
- Fixed a problem that prevented MySQL for Excel from loading when Display settings in Windows 7 are set to Adjust for Best Performance (Oracle bug 14521405 - UNHANDLED EXCEPTION IS THROWN WHEN LOADING MYSQL FOR EXCEL).
- Fixed code that renames the auto-generated Primary Key column when the Table name changes, since it was not detecting if a column with the same name already existed in the table. The column duplication was not actually happening; it looked that way because the automatically generated PK column was not detecting that a column had that same name.
- Fixed code in the Export Data dialog to always set an empty string instead of null to the MySQLDataColumn properties that store MySQL data types (MySQLDataType, RowsFrom1stDataType and RowsFrom2ndDataType).
- Added code to display a warning and color red a column whose Data Type has not been set by the user or has been manually cleared.
- Added code to output exception messages to the application log consistently in all places where exceptions are caught.

A series of blog posts explaining the new Edit MySQL Data feature and the other existing features is coming to this blog. You can access the MySQL for Excel documentation at http://dev.mysql.com/doc/refman/5.5/en/mysql-for-excel.html. You can also post questions on our MySQL for Excel forum found at http://forums.mysql.com/. Enjoy and thanks for the support!

    Read the article

  • Customizing Flowcharts in Oracle Tutor

    - by [email protected]
Today we're going to look at how you can customize the flowcharts within Oracle Tutor procedures, and how you can share those changes with other authors within your company. Here is an image of a flowchart within a Tutor procedure with the default size and color scheme. You may want to change the size of your flowcharts, as your end-users might have larger screens or need larger fonts. To change the size and number of columns, navigate to Tutor > Author > Author Options > Flowcharts. The default is to have 4 columns appear in each flowchart, but if I change it to six, my end-users will see a denser flowchart. This might be too dense for my end-users, so I will change it to 5 columns, and I will also deselect the option to have separate task boxes. Now let's look at how to customize the colors. Within the Flowchart Options dialog there is a button labeled "Colors." This brings up a dialog box listing every object on a Tutor flowchart, and I can modify the color of each object, as well as the text within the object. If I click on the background, the "page" object appears in the Item field, and now I can customize the color and the title text by selecting Select Fill Color and/or Select Text Color. A dialog box with color choices appears. If I select Define Custom Colors, I can make my selections even more precise. Each time I change the color of an object, it appears in the selection screen. When the flowchart customization is finished, I can save my changes by naming the scheme. Although the color scheme I have chosen is rather silly looking, perhaps I want others to give me their feedback and make changes as they wish. I can share the color scheme with them by copying the FCP.INI file in the Tutor\Author directory into the same directory on their systems. If the other users have color schemes that they do not want to lose, they can copy the relevant lines from my FCP.INI file into their own file. If I flowchart my document with the new scheme, I can see how it looks within the document. Sometimes just one or two changes to the default scheme are enough to customize the flowchart to your company's color palette. I have seen customers who have only changed the Start object to green and the End object to red, and I've seen another customer who changed every object to some variant of black and orange. Experiment! And let us know how you have customized your flowcharts. Mary R. Keane Senior Development Director, Oracle Tutor

    Read the article

  • SharePoint: Problem with BaseFieldControl

    - by Anoop
Hi All, in the code below the first column of a grid is a BaseFieldControl rendered from a Choice column of an SPList. The second column is a TextBox with a TextChanged event. Both controls are created in the GridView's RowDataBound event. Now the problem. Steps: 1) Select any value from the BaseFieldControl (drop-down list) rendered from the Choice column of the SPList. 2) Enter anything in the TextBox in the other column of the grid. 3) The TextChanged event fires, and in the TextChanged event the grid is rebound. Problem: the selected value reverts to the first item or the default value (if any). But if I do not rebind the grid in the TextChanged event, it works fine. Please suggest what to do.

using System;
using Microsoft.SharePoint;
using Microsoft.SharePoint.WebControls;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Data;

namespace SharePointProjectTest.Layouts.SharePointProjectTest
{
    public partial class TestBFC : LayoutsPageBase
    {
        GridView grid = null;

        protected void Page_Load(object sender, EventArgs e)
        {
            try
            {
                grid = new GridView();
                grid.ShowFooter = true;
                grid.ShowHeader = true;
                grid.AutoGenerateColumns = true;
                grid.ID = "grdView";
                grid.RowDataBound += new GridViewRowEventHandler(grid_RowDataBound);
                grid.Width = Unit.Pixel(900);
                MasterPage holder = (MasterPage)Page.Controls[0];
                holder.FindControl("PlaceHolderMain").Controls.Add(grid);

                DataTable ds = new DataTable();
                ds.Columns.Add("Choice");
                //ds.Columns.Add("person");
                ds.Columns.Add("Curr");
                for (int i = 0; i < 3; i++)
                {
                    DataRow dr = ds.NewRow();
                    ds.Rows.Add(dr);
                }
                grid.DataSource = ds;
                grid.DataBind();
            }
            catch (Exception ex)
            {
                // exception is swallowed here, which hides any binding errors
            }
        }

        void tx_TextChanged(object sender, EventArgs e)
        {
            // rebinding the grid here is the step that resets the BaseFieldControl selection
            DataTable ds = new DataTable();
            ds.Columns.Add("Choice");
            ds.Columns.Add("Curr");
            for (int i = 0; i < 3; i++)
            {
                DataRow dr = ds.NewRow();
                ds.Rows.Add(dr);
            }
            grid.DataSource = ds;
            grid.DataBind();
        }

        void grid_RowDataBound(object sender, GridViewRowEventArgs e)
        {
            if (e.Row.RowType == DataControlRowType.DataRow)
            {
                SPWeb web = SPContext.Current.Web;
                SPList list = web.Lists["Source for test"];
                SPField field = list.Fields["Choice"];
                SPListItem item = list.Items.Add();
                BaseFieldControl control = (BaseFieldControl)GetSharePointControls(field, list, item, SPControlMode.New);
                if (control != null)
                {
                    e.Row.Cells[0].Controls.Add(control);
                }
                TextBox tx = new TextBox();
                tx.AutoPostBack = true;
                tx.ID = "Curr";
                tx.TextChanged += new EventHandler(tx_TextChanged);
                e.Row.Cells[1].Controls.Add(tx);
            }
        }

        public static Control GetSharePointControls(SPField field, SPList list, SPListItem item, SPControlMode mode)
        {
            if (field == null || field.FieldRenderingControl == null || field.Hidden)
                return null;
            try
            {
                BaseFieldControl webControl = field.FieldRenderingControl;
                webControl.ListId = list.ID;
                webControl.ItemId = item.ID;
                webControl.FieldName = field.Title;
                webControl.ID = "id_" + field.InternalName;
                webControl.ControlMode = mode;
                webControl.EnableViewState = true;
                return webControl;
            }
            catch (Exception ex)
            {
                return null;
            }
        }
    }
}

    Read the article

  • Which design better when use foreign key instead of a string to store a list of id

    - by Kien Thanh
I'm building an online examination system. I have designed two tables, Question and GeneralExam. The GeneralExam table contains info about the exam, like name, description, duration, ... Now I would like to design the GeneralQuestion table; it will contain the ids of the questions that belong to a general exam. Currently, I have two ideas for the design of the GeneralQuestion table: It will have two columns: general_exam_id, question_id. Or it will have two columns: general_exam_id, list_question_ids (string/text). I would like to know which design is better, or the pros and cons of each design. I'm using a PostgreSQL database.
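For reference, a minimal sketch of the first option as a classic junction table in PostgreSQL. The table and column names are assumptions based on the question, and it presumes general_exam and question each have an integer primary key named id:

CREATE TABLE general_question (
    general_exam_id integer NOT NULL REFERENCES general_exam(id),
    question_id     integer NOT NULL REFERENCES question(id),
    PRIMARY KEY (general_exam_id, question_id)
);

-- fetching every question on one exam is then a plain indexed join
SELECT q.*
FROM   question q
JOIN   general_question gq ON gq.question_id = q.id
WHERE  gq.general_exam_id = 42;

With the foreign keys in place, the database itself guarantees that every stored id refers to a real exam and a real question, which a comma-separated list_question_ids string cannot enforce.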

    Read the article

  • Altering a Column Which has a Default Constraint

    - by Dinesh Asanka
Setting up a default value for a column is a common task for developers. But are we naming those default constraints explicitly? In the table creation below, the column Sys_DateTime will be allocated the default value getdate(). CREATE TABLE SampleTable (ID int identity(1,1), Sys_DateTime Datetime DEFAULT getdate() ) We can check the relevant information in the system catalogs with the following query. SELECT sc.name ColumnName, dc.name DefaultName, dc.definition, OBJECT_NAME(dc.parent_object_id) TableName, dc.is_system_named FROM sys.default_constraints dc INNER JOIN sys.columns sc ON dc.parent_object_id = sc.object_id AND dc.parent_column_id = sc.column_id and the results would be: Most of the above columns are self-explanatory. The last column, is_system_named, identifies whether the constraint name was given by the system. As you know, in the above case, since we didn't provide any name, the system generated a default name for us. But the problem with these names is that they can differ from environment to environment. For example, if I create this table in a different environment, the default name could be DF__SampleTab__Sys_D__7E6CC920. Now let us create another default and explicitly name it: CREATE TABLE SampleTable2 (ID int identity(1,1), Sys_DateTime Datetime )   ALTER TABLE SampleTable2 ADD CONSTRAINT DF_sys_DateTime_Getdate DEFAULT( Getdate()) FOR Sys_DateTime If we run the previous query again, we get the output below, and you can see that the last created default has 0 for is_system_named. Now let us say I want to change the data type of the Sys_DateTime column to something else: ALTER TABLE SampleTable2 ALTER COLUMN Sys_DateTime Date This will generate the below error: Msg 5074, Level 16, State 1, Line 1 The object ‘DF_sys_DateTime_Getdate’ is dependent on column ‘Sys_DateTime’. Msg 4922, Level 16, State 9, Line 1 ALTER TABLE ALTER COLUMN Sys_DateTime failed because one or more objects access this column. This means you need to drop the default constraint before altering the column: ALTER TABLE [dbo].[SampleTable2] DROP CONSTRAINT [DF_sys_DateTime_Getdate] ALTER TABLE SampleTable2 ALTER COLUMN Sys_DateTime Date   ALTER TABLE [dbo].[SampleTable2] ADD CONSTRAINT [DF_sys_DateTime_Getdate] DEFAULT (getdate()) FOR [Sys_DateTime] If you have a system-named default constraint, whose name can differ from environment to environment so you cannot hard-code the drop as before, you can use the code template below: DECLARE @defaultname VARCHAR(255) DECLARE @executesql VARCHAR(1000)   SELECT @defaultname = dc.name FROM sys.default_constraints dc INNER JOIN sys.columns sc ON dc.parent_object_id = sc.object_id AND dc.parent_column_id = sc.column_id WHERE OBJECT_NAME (parent_object_id) = 'SampleTable' AND sc.name ='Sys_DateTime' SET @executesql = 'ALTER TABLE SampleTable DROP CONSTRAINT ' + @defaultname EXEC( @executesql) ALTER TABLE SampleTable ALTER COLUMN Sys_DateTime Date ALTER TABLE [dbo].[SampleTable] ADD DEFAULT (Getdate()) FOR [Sys_DateTime]
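As an aside, you can sidestep the whole issue by naming the default constraint inline at table-creation time, so no environment ever gets a system-generated name. A minimal sketch (the table name here is hypothetical):

CREATE TABLE SampleTable3
(ID int IDENTITY(1,1),
 Sys_DateTime datetime CONSTRAINT DF_SampleTable3_Sys_DateTime DEFAULT (GETDATE())
)

With an explicit name like this, the drop/alter/re-add sequence shown above works unchanged in every environment.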

    Read the article

  • Silverlight 4 Twitter Client &ndash; Part 6

    - by Max
In this post, we are going to look into implementing lists in our twitter application and also into enhancing the data grid to display the status messages in a pleasing way, with profile images. Twitter lists are a really cool feature that they recently added. I love them and I've set up quite a few lists: one for .NET gurus, one for SQL Server gurus and one for a few celebrities. You can follow them here. Now let us move on to our tutorial. 1) Lists can be fetched in two ways: there are the user's own lists, which he has created, and there are the lists that the user is following. For example, I've created 3 lists myself and I am following 1 other list created by another user. Both of them cannot be fetched in the same API call; it's a two-step process. 2) In the TwitterCredentialsSubmit method we have in Home.xaml.cs, let us do the first API call to get the lists that the user has created. For this, the call has to be made to https://twitter.com/<TwitterUsername>/lists.xml. The API reference is available here. myService1.AllowReadStreamBuffering = true; myService1.UseDefaultCredentials = false; myService1.Credentials = new NetworkCredential(GlobalVariable.getUserName(), GlobalVariable.getPassword()); myService1.DownloadStringCompleted += new DownloadStringCompletedEventHandler(ListsRequestCompleted); myService1.DownloadStringAsync(new Uri("https://twitter.com/" + GlobalVariable.getUserName() + "/lists.xml")); 3) Now let us look at implementing the event handler – ListsRequestCompleted – for this. public void ListsRequestCompleted(object sender, System.Net.DownloadStringCompletedEventArgs e) { if (e.Error != null) { StatusMessage.Text = "This application must be installed first."; parseXML(""); } else { //MessageBox.Show(e.Result.ToString()); parseXMLLists(e.Result.ToString()); } } 4) Now let us look at parseXMLLists in detail: xdoc = XDocument.Parse(text); var answer = (from status in xdoc.Descendants("list") select status.Element("name").Value); foreach (var p in answer) { Border bord = new Border(); bord.CornerRadius = new CornerRadius(10, 10, 10, 10); Button b = new Button(); b.MinWidth = 70; b.Background = new SolidColorBrush(Colors.Black); b.Foreground = new SolidColorBrush(Colors.Black); //b.Width = 70; b.Height = 25; b.Click += new RoutedEventHandler(b_Click); b.Content = p.ToString(); bord.Child = b; TwitterListStack.Children.Add(bord); } So here's what I am doing: I am dynamically creating a button for each of the lists, putting them within a StackPanel, and for each of these buttons I am wiring up an event handler, b_Click, which will be fired on button click. We will look into this method in detail soon. For now let us get the buttons displayed. 5) Now, the user might have some lists he has subscribed to. We need to create a button for each of these as well. At the end of the TwitterCredentialsSubmit method, we need to make a call to http://api.twitter.com/1/<TwitterUsername>/lists/subscriptions.xml. The reference is available here. The code will look like this:
myService2.AllowReadStreamBuffering = true; myService2.UseDefaultCredentials = false; myService2.Credentials = new NetworkCredential(GlobalVariable.getUserName(), GlobalVariable.getPassword()); myService2.DownloadStringCompleted += new DownloadStringCompletedEventHandler(ListsSubsRequestCompleted); myService2.DownloadStringAsync(new Uri("http://api.twitter.com/1/" + GlobalVariable.getUserName() + "/lists/subscriptions.xml")); 6) In the event handler – ListsSubsRequestCompleted – we need to parse through the XML string and create a button for each of the subscribed lists; let us see how. I've taken only the "full_name"; you can choose what you want, refer to the documentation here. Note that the full_name will have the @<UserName>/<ListName> format – this will be useful very soon. xdoc = XDocument.Parse(text); var answer = (from status in xdoc.Descendants("list") select status.Element("full_name").Value); foreach (var p in answer) { Border bord = new Border(); bord.CornerRadius = new CornerRadius(10, 10, 10, 10); Button b = new Button(); b.Background = new SolidColorBrush(Colors.Black); b.Foreground = new SolidColorBrush(Colors.Black); //b.Width = 70; b.MinWidth = 70; b.Height = 25; b.Click += new RoutedEventHandler(b_Click); b.Content = p.ToString(); bord.Child = b; TwitterListStack.Children.Add(bord); } Please note, I am letting the button width auto-size based on the content and also giving it a MinWidth value. I wanted to create rounded-corner buttons, but for some reason that is not working. Also add this StackPanel – TwitterListStack – to Home.xaml: <StackPanel HorizontalAlignment="Center" Orientation="Horizontal" Name="TwitterListStack"></StackPanel> After doing this, you will get a series of buttons at the top of the home page. 7) Now the button click event handler – b_Click. In this method, once a button is clicked, I call another method with the content string of the clicked button as the parameter. Button b = (Button)e.OriginalSource; getListStatuses(b.Content.ToString()); 8) Now the getListStatuses method: toggleProgressBar(true); WebRequest.RegisterPrefix("http://", System.Net.Browser.WebRequestCreator.ClientHttp); WebClient myService = new WebClient(); myService.AllowReadStreamBuffering = true; myService.UseDefaultCredentials = false; myService.DownloadStringCompleted += new DownloadStringCompletedEventHandler(TimelineRequestCompleted); if (listName.IndexOf("@") > -1 && listName.IndexOf("/") > -1) { string[] arrays = null; arrays = listName.Split('/'); arrays[0] = arrays[0].Replace("@", " ").Trim(); //MessageBox.Show(arrays[0]); //MessageBox.Show(arrays[1]); string url = "http://api.twitter.com/1/" + arrays[0] + "/lists/" + arrays[1] + "/statuses.xml"; //MessageBox.Show(url); myService.DownloadStringAsync(new Uri(url)); } else myService.DownloadStringAsync(new Uri("http://api.twitter.com/1/" + GlobalVariable.getUserName() + "/lists/" + listName + "/statuses.xml")); Please note that the URL to call differs based on the list clicked. If it's a user-created list, the URL format will be http://api.twitter.com/1/<CurrentUser>/lists/<ListName>/statuses.xml But if it is a subscribed list, it will be http://api.twitter.com/1/<ListOwnerUserName>/lists/<ListName>/statuses.xml The first one is pretty straightforward to implement, but if it's a subscribed list, we need to split the listName string to get the list owner and the list name, and use them to form the URL.
So that is what I've done in this method: if the listName contains "@" and "/", I build the URL differently. 9) Until now, we've been using only a few nodes of the status message XML string; now we will fetch a new field – "profile_image_url". Images in the datagrid – COOL. For that, we need to modify our Status.cs file to include two more fields, one string and one BitmapImage, both with get and set. public string profile_image_url { get; set; } public BitmapImage profileImage { get; set; } 10) Now let us change the generic parseXML method which is used for binding to the datagrid. public void parseXML(string text) { XDocument xdoc; xdoc = XDocument.Parse(text); statusList = new List<Status>(); statusList = (from status in xdoc.Descendants("status") select new Status { ID = status.Element("id").Value, Text = status.Element("text").Value, Source = status.Element("source").Value, UserID = status.Element("user").Element("id").Value, UserName = status.Element("user").Element("screen_name").Value, profile_image_url = status.Element("user").Element("profile_image_url").Value, profileImage = new BitmapImage(new Uri(status.Element("user").Element("profile_image_url").Value)) }).ToList(); DataGridStatus.ItemsSource = statusList; StatusMessage.Text = "Datagrid refreshed."; toggleProgressBar(false); } Here we are creating a new bitmap image from the image URL, creating a new Status object for every status, and binding them all to the data grid. Refer to the Twitter API documentation here. You can choose any columns you want. 11) Until now, we've been using auto-generated columns for the data grid, but if you want it to be really cool, you need to define the columns with templates, etc… <data:DataGrid AutoGenerateColumns="False" Name="DataGridStatus" Height="Auto" MinWidth="400"> <data:DataGrid.Columns> <data:DataGridTemplateColumn Width="50" Header=""> <data:DataGridTemplateColumn.CellTemplate> <DataTemplate> <Image Source="{Binding profileImage}" Width="50" Height="50" Margin="1"/> </DataTemplate> </data:DataGridTemplateColumn.CellTemplate> </data:DataGridTemplateColumn> <data:DataGridTextColumn Width="Auto" Header="User Name" Binding="{Binding UserName}" /> <data:DataGridTemplateColumn MinWidth="300" Width="Auto" Header="Status"> <data:DataGridTemplateColumn.CellTemplate> <DataTemplate> <TextBlock TextWrapping="Wrap" Text="{Binding Text}"/> </DataTemplate> </data:DataGridTemplateColumn.CellTemplate> </data:DataGridTemplateColumn> </data:DataGrid.Columns> </data:DataGrid> I've used only three columns – profile image, username and status text. Now our datagrid will look super cool, like this. Coincidentally, Tim Heuer, who is a Silverlight guru and works on the Silverlight team at Microsoft, is in the screenshot. His blog is really super. Here is the zipped file with all the xaml, xaml.cs and class files. OK, let us stop here for now; we will look into implementing a few more features in the next few posts, and then I am going to look into developing an ASP.NET MVC 2 application. Hope you all liked this post. If you have any queries / suggestions feel free to comment below or contact me. Cheers! Technorati Tags: Silverlight,LINQ,Twitter API,Twitter,Silverlight 4

    Read the article

  • mysql join with multiple values in one column

    - by CYREX
I need to make a query that creates 3 columns coming from 2 tables which have the following relations: TABLE 1 has a column ID that relates to TABLE 2 through its column ID2. In TABLE 1 there is a column called user. In TABLE 2 there is a column called names. There can be 1 unique user, but there can be many names associated with that user. If I do the following, I get all the data, BUT the user column repeats itself for each name associated with it: select user,names from TABLE1 left join TABLE2 on TABLE1.id = TABLE2.id This shows the users repeated every time a name appears for that user. What I want is for each user to appear once, with the names column holding all the names associated with that user, separated by commas, like this: USER - NAMES cyrex - pedrox, rambo, zelda homeboy - carmen, carlos, tom, sandra jerry - seinfeld, christine ninja - soloboy etc....
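What is being described is exactly MySQL's GROUP_CONCAT aggregate. A minimal sketch against the tables above, mirroring the join condition from the question's own query (TABLE1.id = TABLE2.id):

SELECT t1.user,
       GROUP_CONCAT(t2.names SEPARATOR ', ') AS names
FROM TABLE1 t1
LEFT JOIN TABLE2 t2 ON t1.id = t2.id
GROUP BY t1.user;

One caveat worth knowing: GROUP_CONCAT truncates its result at group_concat_max_len (1024 bytes by default), so for users with very long name lists that session variable may need to be raised.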

    Read the article

< Previous Page | 58 59 60 61 62 63 64 65 66 67 68 69  | Next Page >