Search Results

Search found 25415 results on 1017 pages for 'table resizing'.


  • Tools for displaying a multidimensional data table?

    - by ShreevatsaR
    [Apologies if this sort of question is off-topic for SuperUser. Please redirect to the right place if so.] There is a 3-dimensional array of values. (That is, instead of a table/2-dimensional array with values in a grid, the values can be thought of in a cube instead.) Is there a way to display this "cube" interactively, ideally on a webpage? Specifically, given the data, it would work something like this: the user selects two of the 3 variables. He then sees a "stack" of tables, one for each value of the third variable (cross-sections, in other words). By selecting the appropriate table from the stack, he can see the (i,j,k) value he wants. The "technology" for displaying such a thing (stacked tables, rotation, etc.) already exists, so this seems the sort of thing that someone ought to have written already. To be clear: I don't need sophisticated graphics necessarily, just the ability to select from cross-sections of variables. But I have no experience with (say, for displaying on a webpage) what web gadgets exist, so I'm clueless how to even search for one. (Google searches like "multidimensional data visualization" didn't throw up anything useful. Google Spreadsheets can do a few kinds of charts which can be embedded on a webpage, but I cannot tell if this is one of them.) [I can imagine how it ought to work for higher dimensions. For four-dimensions, instead of selecting just a stack, you'd first select an (i,j) from an "outer table", which would show all (k,l) values for that (i,j). For higher dimensions, inductively: you select (i,j), and then repeat what you'd do with 2 fewer dimensions.] So has this been written? Is this easy to write? Where ought one to look for such a thing?
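    For what it's worth, a minimal hand-rolled sketch of that interaction in plain JavaScript (nothing from an existing gadget library; the cube data, element ids and function names below are made up for illustration): render one cross-section table per value of the slicing variable, and let the user pick which of the three axes to slice along.

        <select id="axis">
          <option value="0">slice along i</option>
          <option value="1">slice along j</option>
          <option value="2">slice along k</option>
        </select>
        <div id="stack"></div>
        <script>
          // toy cube: cube[i][j][k] = value
          var cube = [ [ [1, 2], [3, 4] ],
                       [ [5, 6], [7, 8] ] ];
          function render(axis) {
            var n = [cube.length, cube[0].length, cube[0][0].length];
            var html = '';
            for (var s = 0; s < n[axis]; s++) {            // one table per cross-section
              html += '<h4>slice ' + s + '</h4><table border="1">';
              for (var a = 0; a < n[(axis + 1) % 3]; a++) {
                html += '<tr>';
                for (var b = 0; b < n[(axis + 2) % 3]; b++) {
                  var idx = [0, 0, 0];
                  idx[axis] = s; idx[(axis + 1) % 3] = a; idx[(axis + 2) % 3] = b;
                  html += '<td>' + cube[idx[0]][idx[1]][idx[2]] + '</td>';
                }
                html += '</tr>';
              }
              html += '</table>';
            }
            document.getElementById('stack').innerHTML = html;
          }
          document.getElementById('axis').onchange = function () { render(+this.value); };
          render(0);
        </script>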

    Read the article

  • Excel or OpenOffice Table Summary: how to reconstruct a table from another, with "missing" values

    - by Gilberto
    I have a table of values (partial) with 3 columns: month (from 1 to 12), code and value. E.g.,

        MONTH | CODE | VALUE
        1     | aaa  | 111
        1     | bbb  | 222
        1     | ccc  | 333
        2     | aaa  | 1111
        2     | ccc  | 2222

    The codes are clients and the values are sales volumes. Each row represents the sales for one month for one client. So I have three clients, namely aaa, bbb, and ccc. For month=1 their sales volumes are: aaa-111, bbb-222, and ccc-333. A client may or may not have sales for every month; for example, for month 2, the client bbb has no sales. I have to construct a completed summary table for all the MONTH / CODE pairs with their corresponding VALUE (using the value from the "partial" table if present, otherwise printing the string "missing"):

        MONTH | CODE | VALUE
        1     | aaa  | 111
        1     | bbb  | 222
        1     | ccc  | 333
        2     | aaa  | 1111
        2     | bbb  | missing
        2     | ccc  | 2222

    Or, to put it another way, the table is a linear representation of a month-by-code matrix, and I want to identify the cells for which no value was provided. How can I do that?
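    One way this is often done (a sketch, not the only answer; the sheet name, column layout and row range below are assumptions): add a helper key column D to the partial table on Sheet1 with =A2&"|"&B2 copied down, build the complete list of MONTH/CODE pairs on another sheet, and then look each value up by that key, falling back to "missing". The following formula uses only ISNA, MATCH and INDEX, so it works in both Excel and OpenOffice Calc:

        =IF(ISNA(MATCH(A2&"|"&B2,Sheet1!$D$2:$D$100,0)),"missing",INDEX(Sheet1!$C$2:$C$100,MATCH(A2&"|"&B2,Sheet1!$D$2:$D$100,0)))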

    Read the article

  • nested sortable table rows jquery

    - by FLY
    Hi, I am trying to sort rows of a table and put them as 'child' elements of another table row. I found this: http://code.google.com/p/nestedsortables/. This works with ul/li lists, but I want to build it for a table:

        <table>
          <thead>
            <th>tablehead</th>
          </thead>
          <tbody>
            <tr><td>somevalue</td></tr>
            <tr><td>somevalue2</td></tr>
          </tbody>
        </table>

    So yes, you can use jquery.sortable() and sort the rows, but I want 'somevalue' to become a child element of 'somevalue2' if you drag 'somevalue' over 'somevalue2'. I don't know if it is possible with a table. Can anyone help me? Thnx!

    Read the article

  • Assert parameters in a table-valued UDF

    - by Clay Lenhart
    Is there a way to create "asserts" on the parameters of a table-valued UDF? I'd like to use a table-valued UDF for performance reasons; however, I know that certain parameter combinations (like start and end dates that are more than a month apart) will cause performance issues on the server for all users. End users query the database via Excel using UDFs. UDFs (and table-valued UDFs in particular) are useful when the data is too large for Excel. Users write simple SQL queries that categorize the data into groups to reduce the number of rows. For example, a user may be interested in weekly aggregates rather than hourly ones, and a GROUP BY SELECT statement reduces the rows by a factor of 24x7=168. I know I can write RAISERROR statements in multistatement UDFs, but inline table-valued UDFs are expanded by the query optimizer, so these queries are more efficient with them. So, can I define assertions on the parameters passed to a table-valued UDF?
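    One workaround that is sometimes used (a hedged sketch only; the function, table and column names are made up, and the 31-day limit is just the example from the question): keep the function inline so the optimizer can still expand it, and add a predicate on the parameters that deliberately fails with a conversion error when the "assertion" is violated. The message embedded in the CAST shows up in the error text, although it surfaces as a conversion error rather than a clean RAISERROR message.

        CREATE FUNCTION dbo.fn_SalesByRange (@start datetime, @end datetime)
        RETURNS TABLE
        AS
        RETURN
        (
            SELECT  s.*
            FROM    dbo.Sales AS s
            WHERE   s.SaleDate >= @start
              AND   s.SaleDate <  @end
              -- "assertion": always true for valid ranges, otherwise forces
              -- "Conversion failed when converting the varchar value ..."
              AND   1 = CASE WHEN DATEDIFF(day, @start, @end) <= 31
                             THEN 1
                             ELSE CAST('Date range wider than 31 days' AS int)
                        END
        );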

    Read the article

  • Table sorting & pagination with jQuery and Razor in ASP.NET MVC

    - by hajan
    Introduction

    jQuery enjoys living inside pages which are built on top of the ASP.NET MVC Framework. ASP.NET MVC is a place where things are organized very well and it is quite hard to make them dirty, especially because the pattern pushes you towards purity (you can still make it dirty if you want to ;) ). We all know how easy it is to build an HTML table with a header row, footer row and table rows showing some data. With ASP.NET MVC we can do this pretty easily, but the result will be a pure HTML table which only shows data and does not include sorting, pagination or other advanced features that we were used to having in the ASP.NET WebForms GridView. Ok, there is the WebGrid MVC Helper, but what if we want to make something from a pure table in our own clean style? In one of my recent projects, I've been using the jQuery tablesorter and tablesorter.pager plugins that go along. You don't need to know jQuery to make this work… You need to know a little CSS to create a nice design for your table, but of course you can use mine from the demo… So, what you will see in this blog is how to attach these plugins to your pure HTML table and a div for pagination, and give your table advanced sorting and pagination features.

    Demo Project Resources

    The resources I'm using for this demo project are shown in the following solution explorer window print screen:

    Content/images – folder that contains all the up/down arrow images, pagination buttons etc. You can freely replace them with your own, but keep the names the same if you don't want to change anything in the CSS we will build later.
    Content/Site.css – The main css theme, where we will add the theme for our table too
    Controllers/HomeController.cs – The controller I'm using for this project
    Models/Person.cs – For this demo, I'm using a Person.cs class
    Scripts – jquery-1.4.4.min.js, jquery.tablesorter.js, jquery.tablesorter.pager.js – the scripts required to make the magic happen
    Views/Home/Index.cshtml – Index view (razor view engine)

    The other items are not important for the demo.

    ASP.NET MVC

    1. Model

    In this demo I use only one Person class, which defines a Person entity with several properties. You can use your own model, maybe one which will access data from a database or any other resource.

    Person.cs

        public class Person
        {
            public string Name { get; set; }
            public string Surname { get; set; }
            public string Email { get; set; }
            public int? Phone { get; set; }
            public DateTime? DateAdded { get; set; }
            public int? Age { get; set; }

            public Person(string name, string surname, string email,
                int? phone, DateTime? dateadded, int? age)
            {
                Name = name;
                Surname = surname;
                Email = email;
                Phone = phone;
                DateAdded = dateadded;
                Age = age;
            }
        }

    2. View

    In our example, we have only one Index.cshtml page, where the Razor view engine is used. Razor is my favorite view engine for ASP.NET MVC because it's very intuitive, fluid and keeps your code clean.

    3. Controller

    Since this is a simple example with one page, we use one HomeController.cs, where we have two methods: one of ActionResult type (Index) and another, GetPeople(), used to create and return the list of people.
    HomeController.cs

        public class HomeController : Controller
        {
            //
            // GET: /Home/
            public ActionResult Index()
            {
                ViewBag.People = GetPeople();
                return View();
            }

            public List<Person> GetPeople()
            {
                List<Person> listPeople = new List<Person>();
                listPeople.Add(new Person("Hajan", "Selmani", "[email protected]", 070070070, DateTime.Now, 25));
                listPeople.Add(new Person("Straight", "Dean", "[email protected]", 123456789, DateTime.Now.AddDays(-5), 35));
                listPeople.Add(new Person("Karsen", "Livia", "[email protected]", 46874651, DateTime.Now.AddDays(-2), 31));
                listPeople.Add(new Person("Ringer", "Anne", "[email protected]", null, DateTime.Now, null));
                listPeople.Add(new Person("O'Leary", "Michael", "[email protected]", 32424344, DateTime.Now, 44));
                listPeople.Add(new Person("Gringlesby", "Anne", "[email protected]", null, DateTime.Now.AddDays(-9), 18));
                listPeople.Add(new Person("Locksley", "Stearns", "[email protected]", 2135345, DateTime.Now, null));
                listPeople.Add(new Person("DeFrance", "Michel", "[email protected]", 235325352, DateTime.Now.AddDays(-18), null));
                listPeople.Add(new Person("White", "Johnson", null, null, DateTime.Now.AddDays(-22), 55));
                listPeople.Add(new Person("Panteley", "Sylvia", null, 23233223, DateTime.Now.AddDays(-1), 32));
                listPeople.Add(new Person("Blotchet-Halls", "Reginald", null, 323243423, DateTime.Now, 26));
                listPeople.Add(new Person("Merr", "South", "[email protected]", 3232442, DateTime.Now.AddDays(-5), 85));
                listPeople.Add(new Person("MacFeather", "Stearns", "[email protected]", null, DateTime.Now, null));
                return listPeople;
            }
        }

    TABLE CSS/HTML DESIGN

    Now, let's start with the implementation. First of all, let's create the table structure and the main CSS.

    1. HTML Structure

        @{
            Layout = null;
        }
        <!DOCTYPE html>
        <html>
        <head>
            <title>ASP.NET & jQuery</title>
            <!-- referencing styles, scripts and writing custom js scripts will go here -->
        </head>
        <body>
            <div>
                <table class="tablesorter">
                    <thead>
                        <tr>
                            <th> value </th>
                        </tr>
                    </thead>
                    <tbody>
                        <tr>
                            <td>value</td>
                        </tr>
                    </tbody>
                    <tfoot>
                        <tr>
                            <th> value </th>
                        </tr>
                    </tfoot>
                </table>
                <div id="pager">
                </div>
            </div>
        </body>
        </html>

    So, this is the main structure you need to create for each of your tables where you want to apply the functionality we will create. Of course the scripts are referenced only once ;). As you see, our table has the class tablesorter and we also have a div with id pager. In the next steps we will use both of these to create the needed functionality.
    The complete Index.cshtml, coded to get the data from the controller and display it in the page, is:

        <body>
            <div>
                <table class="tablesorter">
                    <thead>
                        <tr>
                            <th>Name</th>
                            <th>Surname</th>
                            <th>Email</th>
                            <th>Phone</th>
                            <th>Date Added</th>
                        </tr>
                    </thead>
                    <tbody>
                        @{
                            foreach (var p in ViewBag.People)
                            {
                        <tr>
                            <td>@p.Name</td>
                            <td>@p.Surname</td>
                            <td>@p.Email</td>
                            <td>@p.Phone</td>
                            <td>@p.DateAdded</td>
                        </tr>
                            }
                        }
                    </tbody>
                    <tfoot>
                        <tr>
                            <th>Name</th>
                            <th>Surname</th>
                            <th>Email</th>
                            <th>Phone</th>
                            <th>Date Added</th>
                        </tr>
                    </tfoot>
                </table>
                <div id="pager" style="position: none;">
                    <form>
                    <img src="@Url.Content("~/Content/images/first.png")" class="first" />
                    <img src="@Url.Content("~/Content/images/prev.png")" class="prev" />
                    <input type="text" class="pagedisplay" />
                    <img src="@Url.Content("~/Content/images/next.png")" class="next" />
                    <img src="@Url.Content("~/Content/images/last.png")" class="last" />
                    <select class="pagesize">
                        <option selected="selected" value="5">5</option>
                        <option value="10">10</option>
                        <option value="20">20</option>
                        <option value="30">30</option>
                        <option value="40">40</option>
                    </select>
                    </form>
                </div>
            </div>
        </body>

    So, mainly the structure is the same. I have added @Razor code to create the table with data retrieved from ViewBag.People, which was filled with data in the home controller.

    2. CSS Design

    The CSS code I've created is:

        /* DEMO TABLE */
        body {
            font-size: 75%;
            font-family: Verdana, Tahoma, Arial, "Helvetica Neue", Helvetica, Sans-Serif;
            color: #232323;
            background-color: #fff;
        }
        table { border-spacing: 0; border: 1px solid gray; }
        table.tablesorter thead tr .header {
            background-image: url(images/bg.png);
            background-repeat: no-repeat;
            background-position: center right;
            cursor: pointer;
        }
        table.tablesorter tbody td {
            color: #3D3D3D;
            padding: 4px;
            background-color: #FFF;
            vertical-align: top;
        }
        table.tablesorter tbody tr.odd td {
            background-color: #F0F0F6;
        }
        table.tablesorter thead tr .headerSortUp {
            background-image: url(images/asc.png);
        }
        table.tablesorter thead tr .headerSortDown {
            background-image: url(images/desc.png);
        }
        table th {
            width: 150px;
            border: 1px outset gray;
            background-color: #3C78B5;
            color: White;
            cursor: pointer;
        }
        table thead th:hover { background-color: Yellow; color: Black; }
        table td { width: 150px; border: 1px solid gray; }

    PAGINATION AND SORTING

    Now, when everything is ready and we have the data, let's make the pagination and sorting functionality.
    1. jQuery Scripts referencing

        <link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" />
        <script src="@Url.Content("~/Scripts/jquery-1.4.4.min.js")" type="text/javascript"></script>
        <script src="@Url.Content("~/Scripts/jquery.tablesorter.js")" type="text/javascript"></script>
        <script src="@Url.Content("~/Scripts/jquery.tablesorter.pager.js")" type="text/javascript"></script>

    2. jQuery Sorting and Pagination script

        <script type="text/javascript">
            $(function () {
                $("table.tablesorter").tablesorter({ widthFixed: true, sortList: [[0, 0]] })
                .tablesorterPager({ container: $("#pager"), size: $(".pagesize option:selected").val() });
            });
        </script>

    So, with only two lines of code, I'm using both the tablesorter and tablesorterPager plugins and giving some options to both. Options added:

    tablesorter – widthFixed: true – gives a fixed width to the columns
    tablesorter – sortList: [[0,0]] – An array of instructions for per-column sorting and direction in the format [[columnIndex, sortDirection], ...], where columnIndex is a zero-based index for your columns left-to-right and sortDirection is 0 for Ascending and 1 for Descending. A valid argument that sorts ascending first by column 1 and then column 2 looks like [[0,0],[1,0]] (source: http://tablesorter.com/docs/)
    tablesorterPager – container: $("#pager") – tells the plugin the pager container, the div with id pager in our case.
    tablesorterPager – size – the default size of each page; I take the value currently selected, so if you put selected on any other of the options in your select list, that number of rows will be the default per page for the table too.

    END RESULTS

    1. Table once the page is loaded (default results per page is 5 and it is automatically sorted by the 1st column, as sortList is specified)
    2. Sorted by Phone Descending
    3. Changed pagination to 10 items per page
    4. Sorted by Phone and Name (use SHIFT to sort on multiple columns)
    5. Sorted by Date Added
    6. Page 3, 5 items per page

    ADDITIONAL ENHANCEMENTS

    We can do additional enhancements to the table. We can add a search for each column. I will cover this in one of my next blogs. Stay tuned.

    DEMO PROJECT

    You can download the demo project source code from HERE.

    CONCLUSION

    Once you finish with the demo, run your page and open the source code. You will be amazed by the purity of your code. Working with pagination on the client side can be very useful. One of the benefits is performance, but if you have thousands of rows in your tables, you will get the opposite result when talking about performance. Hence, sometimes it is a good idea to do pagination on the back-end, and the best compromise is to combine both approaches. I use at most up to 500 rows on the client side, and once the user reaches the last page, we can trigger an ajax postback which gets the next 500 rows using server-side pagination of the same data. I would like to recommend the following blog post, which will help you understand how to return paged results from repositories: http://weblogs.asp.net/gunnarpeipman/archive/2010/09/14/returning-paged-results-from-repositories-using-pagedresult-lt-t-gt.aspx. I hope this was a helpful post for you. Wait for my next posts ;). Please do let me know your feedback. Best Regards, Hajan

    Read the article

  • Answered: Selecting even table rows from a table element

    - by mvrak
    The issue I am having is: how do I go from a variable pointing at an element to using CSS selectors? I want to make this work:

        function highlight(table) {
            $(table " > :even").toggleClass('highlight');
        }

    where table is a reference to a table element. I don't want answers that tell me to use $('#table'), because that defeats the point of the generality I am trying to make. Thanks
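    One common way to keep it general (a hedged sketch, not necessarily the accepted answer): pass the element reference to jQuery and use .find(), or use the element as the context argument, so no id selector is needed:

        // assuming `table` is a DOM reference (or jQuery object) for the table element
        function highlight(table) {
            $(table).find("tr:even").toggleClass("highlight");
            // or equivalently, using a context selector:
            // $("tr:even", table).toggleClass("highlight");
        }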

    Read the article

  • Mysql table Crashed during Optimize, table repair taking too long

    - by hellohellosharp
    One of my vital tables was being optimized when MySQL crashed in the middle of it. It then said the table was corrupt. I am running a repair, but it is taking over an hour, and I need this table up ASAP (I will even truncate it if necessary). Please help me get this solved. The table has about 54 million rows and is MyISAM. Any additional information needed, just ask. Here are the contents of my my.cnf:

        [mysqld]
        max_connections = 850
        max_user_connections = 850
        query_cache_size = 128M
        skip-external-locking
        key_buffer_size = 64M
        max_allowed_packet = 8M
        table_open_cache = 256
        sort_buffer_size = 4M
        net_buffer_length = 16K
        read_buffer_size = 1M
        read_rnd_buffer_size = 1M
        myisam_sort_buffer_size = 32M
        innodb_file_per_table
        tmp_table_size = 100M
        max_heap_table_size = 64M
        thread_cache_size = 8
        wait_timeout=25
        interactive_timeout=25
        table_cache=600
        innodb_buffer_pool_size = 4G
        innodb_thread_concurrency = 8
        innodb_flush_method = O_DIRECT
        # This setting allows the use of asynchronous I/O in InnoDB.
        # The following files track usage of this resource:
        # - /proc/sys/fs/aio-max-nr
        # - /proc/sys/fs/aio-nr
        # Default limit is 65536, of which a single instance of mysql uses 2661 out of the box
        innodb_use_native_aio = 1
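    For what it's worth, a commonly suggested way to speed up the repair of a large MyISAM table (a hedged sketch: the path and buffer sizes below are illustrative, and myisamchk must only be run while mysqld is stopped or the table is flushed and locked) is to run myisamchk directly with large repair buffers, trying the quick recover first since it only rebuilds the index file:

        myisamchk --recover --quick --sort_buffer_size=1G --key_buffer_size=1G /var/lib/mysql/mydb/mytable.MYI

    or, from within MySQL, the quick variant of REPAIR TABLE:

        REPAIR TABLE mydb.mytable QUICK;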

    Read the article

  • Table Sorting in Excel 2010 Cannot Parse the Table Headers Correctly

    - by Truth
    I have a rather weird issue I've never faced before. After defining my table with borders and such, and filling out data in my table, I try to sort my table according to the "ratio" (first) column, from biggest to smallest. When I right click the header and select the corresponding option, the table gets sorted, but the first row is omitted by the sorting function. What I mean is that the first line (with 3.50 ratio) will forever stay at the top line, even when I sort otherwise (by a different column, in a different order). This is my table below, it's tab separated so it's not very readable, but I hope you'll manage.

        ???? ??????? ????? ??? ???? ??? ??? ??? 14 4 0 0 0 0 0 0 0 0 0 0 0 0 0 3.50 3.50
        ?????? 23 8 0 0 0 0 0 0 0 0 0 0 0 0 0 2.88 2.88
        ???? 16 4 0 0 0 0 0 0 0 0 0 0 0 0 0 4.00 4.00
        ??? 7 2 0 0 0 0 0 0 0 0 0 0 0 0 0 3.50 3.50
        ???? 13 4 0 0 0 0 0 0 0 0 0 0 0 0 0 3.25 3.25
        ????? 12 4 0 0 0 0 0 0 0 0 0 0 0 0 0 3.00 3.00
        ???? 10 4 0 0 0 0 0 0 0 0 0 0 0 0 0 2.50 2.50
        ??? 38 12 0 0 0 0 0 0 0 0 0 0 0 0 0 3.17 3.17
        ???? 14 4 0 0 0 0 0 0 0 0 0 0 0 0 0 3.50 3.50
        ????? 31 10 0 0 0 0 0 0 0 0 0 0 0 0 0 3.10 3.10
        ???? 24 8 0 0 0 0 0 0 0 0 0 0 0 0 0 3.00 3.00
        ????? 23 8 0 0 0 0 0 0 0 0 0 0 0 0 0 2.88 2.88
        ???? 14 4 0 0 0 0 0 0 0 0 0 0 0 0 0 3.50 3.50
        ???? 16 4 0 0 0 0 0 0 0 0 0 0 0 0 0 4.00 4.00
        ???? 24 8 0 0 0 0 0 0 0 0 0 0 0 0 0 3.00 3.00
        ???? 30 10 0 0 0 0 0 0 0 0 0 0 0 0 0 3.00 3.00
        ????? 21 6 0 0 0 0 0 0 0 0 0 0 0 0 0 3.50 3.50
        ???? 42 12 0 0 0 0 0 0 0 0 0 0 0 0 0 3.50 3.50
        ??? 11 4 0 0 0 0 0 0 0 0 0 0 0 0 0 2.75 2.75
        ???? 5 2 0 0 0 0 0 0 0 0 0 0 0 0 0 2.50 2.50
        ???? 4 2 0 0 0 0 0 0 0 0 0 0 0 0 0 2.00 2.00
        ??? 4 2 0 0 0 0 0 0 0 0 0 0 0 0 0 2.00 2.00

    Read the article

  • mysql spitting lots of "table marked as crashed" errors

    - by Shawn
    Hi, I have a MySQL server (version: 5.5.3-m3-log, source distribution) and it keeps showing lots of errors like:

        110214 3:01:48 [ERROR] /usr/local/mysql/libexec/mysqld: Table './mydb/tablename' is marked as crashed and should be repaired
        110214 3:01:48 [Warning] Checking table:   './mydb/tablename'

    I'm wondering what the possible causes are and how to fix it. Here is the full MySQL configuration:

        connect_errors = 6000
        table_cache = 614
        external-locking = FALSE
        max_allowed_packet = 32M
        sort_buffer_size = 2G
        max_length_for_sort_data = 2G
        join_buffer_size = 256M
        thread_cache_size = 300
        #thread_concurrency = 8
        query_cache_size = 512M
        query_cache_limit = 2M
        query_cache_min_res_unit = 2k
        default-storage-engine = MyISAM
        thread_stack = 192K
        transaction_isolation = READ-COMMITTED
        tmp_table_size = 246M
        max_heap_table_size = 246M
        long_query_time = 3
        log-slave-updates = 1
        log-bin = /data/mysql/3306/binlog/binlog
        binlog_cache_size = 4M
        binlog_format = MIXED
        max_binlog_cache_size = 8M
        max_binlog_size = 1G
        relay-log-index = /data/mysql/3306/relaylog/relaylog
        relay-log-info-file = /data/mysql/3306/relaylog/relaylog
        relay-log = /data/mysql/3306/relaylog/relaylog
        expire_logs_days = 30
        key_buffer_size = 1G
        read_buffer_size = 1M
        read_rnd_buffer_size = 16M
        bulk_insert_buffer_size = 64M
        myisam_sort_buffer_size = 2G
        myisam_max_sort_file_size = 5G
        myisam_repair_threads = 1
        max_binlog_size = 1G
        interactive_timeout = 64
        wait_timeout = 64
        skip-name-resolve
        slave-skip-errors = 1032,1062,126,1114,1146,1048,1396

    The box is running on CentOS 5.5. Thanks for your help.

    Read the article

  • Can not parse table information from html document.

    - by Harikrishna
    I am parsing many HTML documents using the Html Agility Pack, and I want to extract the tabular information from each document. A document may contain any number of tables, but I only want the one table whose column headers are NAME, PHONE NO and ADDRESS. That table can be anywhere in the document: a document may contain ten tables, one of which holds many nested tables, and the table I need may be one of those nested tables. In other words, I need to find the table by its column header names wherever it sits and then extract the information from it.

    Right now I can already find a table with the column headers NAME, PHONE NO, ADDRESS and extract the information from it. What I do is first enumerate all the tables in the document:

        foreach (var table in doc.DocumentNode.Descendants("table"))

    then, for each table found, get its rows:

        var rows = table.Descendants("tr");

    and then check each row for the header names NAME, ADDRESS, PHONE NO; if a row matches, I skip it and extract all the information after that row:

        foreach (var row in rows.Skip(rowNo))
        {
            var data = new List<string>();
            foreach (var column in row.Descendants("td"))
            {
                data.Add(properText);
            }
        }

    This extracts the information from most documents. But now the problem is that for some documents I cannot parse the information: for example, a document with 10 tables where one of those tables contains many nested tables, and the table with the NAME, ADDRESS, PHONE NO headers is one of the nested ones. So the table may be anywhere in the document, even inside nested tables, and it must be found through its column header names, so that I can parse the information from that table and skip the tabular information of the outer tables.
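    A hedged sketch of one way to do this with the Html Agility Pack (helper names like LooksLikeTargetTable are made up for illustration): Descendants("table") already walks into nested tables, so the remaining work is to keep only the tables whose own header cells contain all three captions and, when an outer table matches only because it wraps the real one, prefer the innermost match, i.e. a matching table that contains no other matching table.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using HtmlAgilityPack;

        static class TableFinder
        {
            static readonly string[] Wanted = { "NAME", "PHONE NO", "ADDRESS" };

            // does this table's header area contain all the wanted captions?
            static bool LooksLikeTargetTable(HtmlNode table)
            {
                var captions = table.Descendants("tr")
                                    .Take(3)   // look only at the first few rows
                                    .SelectMany(tr => tr.Elements("th").Concat(tr.Elements("td")))
                                    .Select(c => HtmlEntity.DeEntitize(c.InnerText).Trim().ToUpperInvariant())
                                    .ToList();
                return Wanted.All(w => captions.Contains(w));
            }

            // innermost matching table: a match that does not contain another match
            public static HtmlNode FindTargetTable(HtmlDocument doc)
            {
                var matches = doc.DocumentNode.Descendants("table")
                                              .Where(LooksLikeTargetTable)
                                              .ToList();
                return matches.FirstOrDefault(m =>
                    !matches.Any(other => other != m && m.Descendants("table").Contains(other)));
            }
        }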

    Read the article

  • grouping by date in excel and removing time in a pivot table

    - by Ashley DeVan
    My data looks like this:

        count | Added Date
        1     | 8/26/09 3:46 PM
        2     | 8/21/09 6:50 PM
        3     | 8/21/09 3:04 PM
        4     | 8/21/09 3:21 PM
        5     | 5/1/09 6:56 AM
        6     | 5/1/09 8:12 AM
        7     | 5/1/09 8:00 AM
        8     | 5/1/09 8:18 AM
        9     | 5/1/09 8:58 AM
        10    | 5/1/09 8:58 AM
        11    | 5/1/09 9:06 AM
        12    | 5/1/09 9:44 AM
        13    | 5/1/09 9:50 AM
        14    | 5/1/09 11:17 AM
        15    | 5/1/09 11:27 AM
        16    | 5/1/09 11:29 AM
        17    | 5/1/09 11:39 AM
        18    | 5/1/09 12:10 PM
        19    | 5/1/09 12:33 PM

    When I do a pivot table, I cannot get it to sum by day; it breaks it up by minute. I've even tried parsing the field, but the time always creates an issue. How do I get my pivot table to give me a count by day and ignore the time stamp?
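    One approach that usually works (a sketch; it assumes the timestamps are in column B with data starting in row 2): add a helper column that keeps only the date part and build the pivot on that, since Excel stores the time of day as the fractional part of the date serial number:

        =INT(B2)

    formatted as a date. Alternatively, with the date field in the pivot's row area, right-click it and use Group... with "Days" selected.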

    Read the article

  • how to update the table in sql database from a Paradox DB table using delphi

    - by Sreenath Krishnakumar
    I am working on a Delphi 6 project that lets the user update a table in a SQL Server database whenever there are changes in the corresponding Paradox DB table. An update button has been created, and whenever the user clicks it, all the changes made in the Paradox DB table have to be applied to the SQL Server table. The table needs to be updated only if there are changes; otherwise the form should close automatically. For that I have created a table, "Schedule", both in Paradox DB and in SQL Server. But I am stuck with the Paradox DB side. Which component can I drop on the form to connect to the table in Paradox DB? For SQL Server I used the ADO table component. Can I use that for Paradox as well? I am not a regular programmer and I am not well versed in Delphi 6 either, so I am seeking help with this. Can anybody also give me an example with code?

    Read the article

  • TSQL - How to join 1..* from multiple tables in one resultset?

    - by ElHaix
    A location table record has two address IDs (a mailing AddressID and a billing AddressID) that refer to an address table, so the address table will contain up to two records for a given location. Given a location ID, I need a sproc to return all tbl_Location fields and all tbl_Address fields in one resultset:

        LocationID INT,
        ClientID INT,
        LocationName NVARCHAR(50),
        LocationDescription NVARCHAR(50),
        MailingAddressID INT,
        BillingAddressID INT,
        MAddress1 NVARCHAR(255),
        MAddress2 NVARCHAR(255),
        MCity NVARCHAR(50),
        MState NVARCHAR(50),
        MZip NVARCHAR(10),
        MCountry CHAR(3),
        BAddress1 NVARCHAR(255),
        BAddress2 NVARCHAR(255),
        BCity NVARCHAR(50),
        BState NVARCHAR(50),
        BZip NVARCHAR(10),
        BCountry CHAR(3)

    I've started by creating a temp table with the required fields, but am a bit stuck on how to accomplish this. I could do sub-selects for each of the required address fields, but that seems a bit messy. I've already got a table-valued function that accepts an address ID and returns all fields for that ID, but I'm not sure how to integrate it into the required result. Offhand, it looks like 3 selects to create this table: 1: Location, 2: Mailing address, 3: Billing address. What I'd like to do is just create a view and use that. Any assistance would be helpful. Thanks.
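    The usual pattern here (a hedged sketch; the table and location column names come from the question, but the address table's column names and the view name are assumptions) is simply to join tbl_Address twice, once per address ID, aliasing the columns:

        CREATE VIEW dbo.vw_LocationWithAddresses
        AS
        SELECT  l.LocationID, l.ClientID, l.LocationName, l.LocationDescription,
                l.MailingAddressID, l.BillingAddressID,
                m.Address1 AS MAddress1, m.Address2 AS MAddress2,
                m.City AS MCity, m.State AS MState, m.Zip AS MZip, m.Country AS MCountry,
                b.Address1 AS BAddress1, b.Address2 AS BAddress2,
                b.City AS BCity, b.State AS BState, b.Zip AS BZip, b.Country AS BCountry
        FROM    dbo.tbl_Location AS l
                LEFT JOIN dbo.tbl_Address AS m ON m.AddressID = l.MailingAddressID
                LEFT JOIN dbo.tbl_Address AS b ON b.AddressID = l.BillingAddressID;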

    Read the article

  • Hiding a column from a pivot table without removing it from the chart

    - by Simon
    I have a pivot table with two columns: number of users who visited a website (impressions) and number of users who registered on the site (regs). The rows are for dates. I want to visualize the percentage of users who registered after visiting the site. Thus, I have the number of users for each cell as a value field, displaying it as percentage of impressions. Generating a pivot chart from the table, impressions and regs are plotted over date as a percentage of impressions. This means there is one line at 100% for impressions (always 100% of itself) and the graph for registrations below that. I'd like to remove the line for impressions, but when I set a filter to do so, registrations vanish as well, since the column for impressions is filtered from the pivot chart as well, turning the value field invalid. How can I just show registrations as a percentage of impressions in the chart?

    Read the article

  • Pivot Table from data with merged cells

    - by Graeme
    I have an energy spreadsheet for multiple sites. The first row has the month and year. The next row has columns for date invoice received, KW hours and cost, so there are three columns for each month. I have merged the month cell across the three columns. When I create a pivot table, the date, KW hours and cost columns are labeled date1, date2, etc. Can I link the month headings to the subheadings to get meaningful headings in the pivot table?

    Read the article

  • Dates not recognized as dates in pivot table pulling directly from SQL Server

    - by Michael K
    My pivot pulls from an external data source with a date column. Excel doesn't see this column as a date and the 'Format Cells' option panel doesn't change how the dates are displayed. The cell data is left-aligned, suggesting a string rather than a date. I have tried cast(myvar as date) and convert(varchar, myvar, 101) and convert(varchar, myvar, 1) in the base table, but none of these have been picked up by Excel as dates. If the column is recognized as a date, I can group by week and month. I understand that if I can't fix this, the next step is to add columns with weeks and months for each date to the table, but I'd like to give formatting the column one more shot before doing that.

    Read the article

  • Showing the right form of total I want in a pivot table

    - by Maria
    I have a pivot table that shows how many condoms have been handed out and on how many distinct occasions. So the value in the pivot table is a number between 1 and 30 (the number of condoms handed out on one specific occasion), and then I can see – for each month – how many times that happened. For example: three times, two condoms were given out; four times, one condom was given out; et cetera. The total is set to Count, so it shows the total number of times condoms have been given out. However, in the total I want it to show the sum of all the condoms that have been given out each month – is it possible to change this somehow?

    Read the article

  • Combine two or more tables into a third separate table

    - by Samuel
    Hi community, I have an Excel workbook that has three pivot tables in it. What I want to do is create a fourth table that combines the data from all three of the other tables. Essentially I want to concatenate the tables together but still preserve the source tables. Another requirement is that if I add a row to any of the source tables, the combined table must update, and it must work with any number of rows. I know I am asking a lot, but I would be grateful for any help working this out. I am comfortable with using either VBA or native Excel to solve this. If you need examples I will be happy to upload some.

    Read the article

  • SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress

    - by pinaldave
    One of the most common data integration tasks I run into is a desire to move data from a file into a database table.  Generally the user is familiar with his data, the structure of the file, and the database table, but is unfamiliar with data integration tools and therefore views this task as something that is difficult.  What these users really need is a point and click approach that minimizes the learning curve for the data integration tool.  This is what CSVexpress (www.CSVexpress.com) is all about!  It is based on expressor Studio, a data integration tool I've been reviewing over the last several months.

    With CSVexpress, moving data between data sources can be as simple as providing the database connection details, describing the structure of the incoming and outgoing data and then connecting two pre-programmed operators.  There's no need to learn the intricacies of the data integration tool or to write code.  Let's look at an example.

    Suppose I have a comma separated value data file with data similar to the following, which is a listing of terminated employees that includes their hiring and termination date, department, job description, and final salary.

        EMP_ID,STRT_DATE,END_DATE,JOB_ID,DEPT_ID,SALARY
        102,13-JAN-93,24-JUL-98 17:00,Programmer,60,"$85,000"
        101,21-SEP-89,27-OCT-93 17:00,Account Representative,110,"$65,000"
        103,28-OCT-93,15-MAR-97 17:00,Account Manager,110,"$75,000"
        304,17-FEB-96,19-DEC-99 17:00,Marketing,20,"$45,000"
        333,24-MAR-98,31-DEC-99 17:00,Data Entry Clerk,50,"$35,000"
        100,17-SEP-87,17-JUN-93 17:00,Administrative Assistant,90,"$40,000"
        334,24-MAR-98,31-DEC-98 17:00,Sales Representative,80,"$40,000"
        400,01-JAN-99,31-DEC-99 17:00,Sales Manager,80,"$55,000"

    Notice the concise format used for the date values, the fact that the termination date includes both date and time information, and that the salary is clearly identified as money by the dollar sign and digit grouping.  In moving this data to a database table I want to express the dates using a format that includes the century since it's obvious that this listing could include employees who left the company in both the 20th and 21st centuries, and I want the salary to be stored as a decimal value without the currency symbol and grouping character.  Most data integration tools would require coding within a transformation operation to effect these changes, but not expressor Studio.  Directives for these modifications are included in the description of the incoming data.

    Besides starting the expressor Studio tool and opening a project, the first step is to create connection artifacts, which describe to expressor where data is stored.  For this example, two connection artifacts are required: a file connection, which encapsulates the file system location of my file; and a database connection, which encapsulates the database connection information.  With expressor Studio, I use wizards to create these artifacts.

    First click New Connection > File Connection in the Home tab of expressor Studio's ribbon bar, which starts the File Connection wizard.  In the first window, I enter the path to the directory that contains the input file.  Note that the file connection artifact only specifies the file system location, not the name of the file. Then I click Next and enter a meaningful name for this connection artifact; clicking Finish closes the wizard and saves the artifact.
    To create the Database Connection artifact, I must know the location of, or instance name of, the target database and have the credentials of an account with sufficient privileges to write to the target table.  To use expressor Studio's features to the fullest, this account should also have the authority to create a table.

    I click New Connection > Database Connection in the Home tab of expressor Studio's ribbon bar, which starts the Database Connection wizard.  expressor Studio includes high-performance drivers for many relational database management systems, so I can simply make a selection from the "Supplied database drivers" drop down control.  If my desired RDBMS isn't listed, I can optionally use an existing ODBC DSN by selecting the "Existing DSN" radio button. In the following window, I enter the connection details.  With Microsoft SQL Server, I may choose to use Windows Authentication rather than account credentials.  After clicking Next, I enter a meaningful name for this connection artifact, and clicking Finish closes the wizard and saves the artifact.

    Now I create a schema artifact, which describes the structure of the file data.  When expressor reads a file, all data fields are typed as strings.  In some use cases this may be exactly what is needed and there is no need to edit the schema artifact.  But in this example, editing the schema artifact will be used to specify how the data should be transformed; that is, reformat the dates to include century designations, change the employee and job ID's to integers, and convert the salary to a decimal value.

    Again a wizard is used to create the schema artifact.  I click New Schema > Delimited Schema in the Home tab of expressor Studio's ribbon bar, which starts the Delimited Schema wizard.  In the first window, I click Get Data from File, which then displays a listing of the file connections in the project.  When I click on the file connection I previously created, a browse window opens to this file system location; I then select the file and click Open, which imports 10 lines from the file into the wizard. I now view the file's content and confirm that the appropriate delimiter characters are selected in the "Field Delimiter" and "Record Delimiter" drop down controls; then I click Next. Since the input file includes a header row, I can easily indicate that fields in the file should be identified through the corresponding header value by clicking "Set All Names from Selected Row." Alternatively, I could enter a different identifier into the Field Details > Name text box.  I click Next and enter a meaningful name for this schema artifact; clicking Finish closes the wizard and saves the artifact.

    Now I open the schema artifact in the schema editor.  When I first view the schema's content, I note that the types of all attributes in the Semantic Type (the right-hand panel) are strings and that the attribute names are the same as the field names in the data file.  To change an attribute's name and type, I highlight the attribute and click Edit in the Attributes grouping on the Schema > Edit tab of the editor's ribbon bar.  This opens the Edit Attribute window; I can change the attribute name and select the desired type from the "Data type" drop down control.  In this example, I change the name of each attribute to the name of the corresponding database table column (EmployeeID, StartingDate, TerminationDate, JobDescription, DepartmentID, and FinalSalary).
Then for the EmployeeID and DepartmentID attributes, I select Integer as the data type, for the StartingDate and TerminationDate attributes, I select Datetime as the data type, and for the FinalSalary attribute, I select the Decimal type. But I can do much more in the schema editor.  For the datetime attributes, I can set a constraint that ensures that the data adheres to some predetermined specifications; a starting date must be later than January 1, 1980 (the date on which the company began operations) and a termination date must be earlier than 11:59 PM on December 31, 1999.  I simply select the appropriate constraint and enter the value (1980-01-01 00:00 as the starting date and 1999-12-31 11:59 as the termination date). As a last step in setting up these datetime conversions, I edit the mapping, describing the format of each datetime type in the source file. I highlight the mapping line for the StartingDate attribute and click Edit Mapping in the Mappings grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Mapping window in which I either enter, or select, a format that describes how the datetime values are represented in the file.  Note the use of Y01 as the syntax for the year.  This syntax is the indicator to expressor Studio to derive the century by setting any year later than 01 to the 20th century and any year before 01 to the 21st century.  As each datetime value is read from the file, the year values are transformed into century and year values. For the TerminationDate attribute, my format also indicates that the datetime value includes hours and minutes. And now to the Salary attribute. I open its mapping and in the Edit Mapping window select the Currency tab and the “Use currency” check box.  This indicates that the file data will include the dollar sign (or in Europe the Pound or Euro sign), which should be removed. And on the Grouping tab, I select the “Use grouping” checkbox and enter 3 into the “Group size” text box, a comma into the “Grouping character” text box, and a decimal point into the “Decimal separator” character text box. These entries allow the string to be properly converted into a decimal value. By making these entries into the schema that describes my input file, I’ve specified how I want the data transformed prior to writing to the database table and completely removed the requirement for coding within the data integration application itself. Assembling the data integration application is simple.  Onto the canvas I drag the Read File and Write Table operators, connecting the output of the Read File operator to the input of the Write Table operator. Next, I select the Read File operator and its Properties panel opens on the right-hand side of expressor Studio.  For each property, I can select an appropriate entry from the corresponding drop down control.  Clicking on the button to the right of the “File name” text box opens the file system location specified in the file connection artifact, allowing me to select the appropriate input file.  I indicate also that the first row in the file, the header row, should be skipped, and that any record that fails one of the datetime constraints should be skipped. I then select the Write Table operator and in its Properties panel specify the database connection, normal for the “Mode,” and the “Truncate” and “Create Missing Table” options.  
If my target table does not yet exist, expressor will create the table using the information encapsulated in the schema artifact assigned to the operator. The last task needed to complete the application is to create the schema artifact used by the Write Table operator.  This is extremely easy as another wizard is capable of using the schema artifact assigned to the Read Table operator to create a schema artifact for the Write Table operator.  In the Write Table Properties panel, I click the drop down control to the right of the “Schema” property and select “New Table Schema from Upstream Output…” from the drop down menu. The wizard first displays the table description and in its second screen asks me to select the database connection artifact that specifies the RDBMS in which the target table will exist.  The wizard then connects to the RDBMS and retrieves a list of database schemas from which I make a selection.  The fourth screen gives me the opportunity to fine tune the table’s description.  In this example, I set the width of the JobDescription column to a maximum of 40 characters and select money as the type of the LastSalary column.  I also provide the name for the table. This completes development of the application.  The entire application was created through the use of wizards and the required data transformations specified through simple constraints and specifications rather than through coding.  To develop this application, I only needed a basic understanding of expressor Studio, a level of expertise that can be gained by working through a few introductory tutorials.  expressor Studio is as close to a point and click data integration tool as one could want and I urge you to try this product if you have a need to move data between files or from files to database tables. Check out CSVexpress in more detail.  It offers a few basic video tutorials and a preview of expressor Studio 3.5, which will support the reading and writing of data into Salesforce.com. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • COLUMNS_UPDATED() for audit triggers

    - by Piotr Rodak
    In SQL Server 2005, triggers are pretty much the only option if you want to audit changes to a table. There are many ways you can decide to store the change information. You may decide to store every changed row as a whole, either in a history table or as xml in audit table. The former case requires having a history table with exactly same schema as the audited table, the latter makes data retrieval and management of the table a bit tricky. Both approaches also suffer from the tendency to consume...(read more)
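    As a minimal illustration of the idea (a hedged sketch only, not the code from the full article; the table and column names are made up): an AFTER UPDATE trigger can log every affected row as XML together with the COLUMNS_UPDATED() bitmask, which records which columns the statement touched.

        CREATE TABLE dbo.AuditLog
        (
            AuditID     int IDENTITY(1,1) PRIMARY KEY,
            TableName   sysname        NOT NULL,
            ChangedAt   datetime       NOT NULL DEFAULT GETDATE(),
            ColumnsMask varbinary(32)  NULL,   -- COLUMNS_UPDATED() bitmask
            RowData     xml            NULL    -- changed rows, stored as XML
        );
        GO
        CREATE TRIGGER dbo.trg_Customer_AuditUpdate ON dbo.Customer
        AFTER UPDATE
        AS
        BEGIN
            SET NOCOUNT ON;
            IF NOT EXISTS (SELECT * FROM inserted) RETURN;
            INSERT dbo.AuditLog (TableName, ColumnsMask, RowData)
            VALUES ('dbo.Customer',
                    COLUMNS_UPDATED(),
                    (SELECT * FROM inserted FOR XML RAW('row'), ROOT('rows'), TYPE));
        END;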

    Read the article

  • MySQL table partitioning qn:

    - by JVXR
    I have a table (InnoDB) that will eventually have billions of records. Every second week I expect ~500K records to get dropped into the table. I want to partition this table based on the date on which the data is imported; luckily this is a field in the table in the format yyyy-mm-dd. Is it possible to partition it based on this date column? I tried looking at chapter 18 of the MySQL docs but couldn't figure out if this is possible. -tia
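    For what it's worth, MySQL 5.1 and later can range-partition on a DATE column via TO_DAYS(). A hedged sketch (table, column and partition names are made up), with the caveat that the partitioning column has to be part of every unique key on the table:

        CREATE TABLE imports (
            id          BIGINT UNSIGNED NOT NULL,
            import_date DATE            NOT NULL,
            payload     VARCHAR(255),
            PRIMARY KEY (id, import_date)   -- partitioning column must be in every unique key
        ) ENGINE=InnoDB
        PARTITION BY RANGE (TO_DAYS(import_date)) (
            PARTITION p2010h1 VALUES LESS THAN (TO_DAYS('2010-07-01')),
            PARTITION p2010h2 VALUES LESS THAN (TO_DAYS('2011-01-01')),
            PARTITION pmax    VALUES LESS THAN MAXVALUE
        );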

    Read the article

  • MySQL table organization and optimization (Rails)

    - by aguynamedloren
    I've been learning Ruby on Rails over the past few months with no prior programming experience. Lately, I've been thinking about database optimization and table organization. I know there are great books on the subject, but I typically learn by example / as I go. Here's a hypothetical situation: Let's say I am building a social network for a niche community with 250,000 members (users). The users have the ability to attend events. Let's say there are 50,000 past/present/future events. Much like Facebook events, a user can attend any number of events and an event can have any number of attendees. In the database, there would be a table for users and a table for events. Somehow I would have to create an association between the users and events. I could create an "events" column in the users table such that each user row would contain a hash of event IDs, or I could create an "attendees" column in the events table such that each event row would contain a hash of user IDs. Neither of these solutions seem ideal, however. On a users profile page, I want to display the list of events they are associated with, which would require scanning the 50,000 event rows for the user ID of said user if I include an "attendees" column in the events table. Likewise, on an event page, I want to display a list of attendees for the event, which would require scanning the 250,000 user rows for the event ID of said event if I include an "events" column in the users table. Option 3 would be to create a third table that contains the attendee information for each and every event - but I don't see how this would solve any problems. Are these non-issues? Rails makes accessing all of this information easy, but I guess I'm worried about scale. It is entirely possible that I am under-estimating the speed and processing power of modern databases / servers / etc. How long would it take to scan 250,000 user rows for specific event IDs - 10ms? 100ms? 1,000ms? I guess that's not that bad. Am I just over-thinking this?
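    For what it's worth, the usual shape of "option 3" (a hedged sketch; the table, column and index names are made up) is a join table keyed and indexed on both IDs, so listing a user's events or an event's attendees becomes an index lookup instead of a scan over the 250,000 user rows or 50,000 event rows:

        CREATE TABLE attendances (
            user_id  INT UNSIGNED NOT NULL,
            event_id INT UNSIGNED NOT NULL,
            PRIMARY KEY (user_id, event_id),                      -- events for a given user
            KEY idx_attendances_event_user (event_id, user_id)    -- attendees for a given event
        ) ENGINE=InnoDB;

        -- events a given user attends
        SELECT e.* FROM events e JOIN attendances a ON a.event_id = e.id WHERE a.user_id = 123;

        -- attendees of a given event
        SELECT u.* FROM users u JOIN attendances a ON a.user_id = u.id WHERE a.event_id = 456;

    (In Rails this is the standard has_many :through association over the join model.)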

    Read the article
