Search Results

Search found 8523 results on 341 pages for 'bobby tables'.

Page 98/341 | < Previous Page | 94 95 96 97 98 99 100 101 102 103 104 105  | Next Page >

  • SQL Server database with clustered GUID PKs - switch clustered index or switch to sequential (comb)

    - by Eyvind
    We have a database in which all the PKs are GUIDs, and most of the PKs are also the clustered index for the table. We know that this is bad (due to the random nature of GUIDs). So, it seems there are basically two options here (short of throwing out GUIDs as PKs altogether, which we cannot do, at least not at this time). We could change the GUID generation algorithm to e.g. the one that NHibernate uses, as detailed in this post, or we could, for the tables that are under the heaviest use, change to a different clustered index, e.g. an IDENTITY column, and keep the "random" GUIDs as PKs. Is it possible to give any general recommendations in such a scenario? The application in question has 500+ tables, the largest one presently at about 1.5 million rows, a few tables around 500,000 rows, and the rest significantly smaller (most of them well below 10K). Furthermore, the application is already installed at several customer sites, so we have to take any possible negative effects for existing customers into consideration. Thanks!
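
    A minimal T-SQL sketch of the two options being weighed, using a hypothetical dbo.Orders table whose uniqueidentifier column OrderId is currently the clustered PK (all names here are illustrative, not from the post):

        -- Option 1: keep the clustered GUID PK, but make newly generated values sequential.
        -- (NEWSEQUENTIALID() can only appear as a column default; SQL Server 2005+.)
        ALTER TABLE dbo.Orders
            ADD CONSTRAINT DF_Orders_OrderId DEFAULT NEWSEQUENTIALID() FOR OrderId;

        -- Option 2: keep the GUID as a nonclustered PK and cluster on a new IDENTITY column.
        -- (Dropping the PK fails if foreign keys reference it; those must be dropped and recreated first.)
        ALTER TABLE dbo.Orders DROP CONSTRAINT PK_Orders;
        ALTER TABLE dbo.Orders ADD OrderSeq INT IDENTITY(1,1) NOT NULL;
        CREATE UNIQUE CLUSTERED INDEX IX_Orders_OrderSeq ON dbo.Orders (OrderSeq);
        ALTER TABLE dbo.Orders
            ADD CONSTRAINT PK_Orders PRIMARY KEY NONCLUSTERED (OrderId);

    The comb approach mentioned in the post (NHibernate-style guid.comb) keeps GUID generation on the client; NEWSEQUENTIALID() is the server-side variant of the same sequential-GUID idea.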

    Read the article

  • OLAP Web Visualization and Reporting Recommendations

    - by Gok Demir
    I am preparing an offer for a customer. They provide weekly data to different organizations. There is a huge amount of data, well suited to OLAP, that needs to be visualized with charts and pivot tables on the web, and custom reports will be built by non-IT people (an easy GUI). They will enter a date range and location, choose which data columns to include, generate the report, and optionally export the data to Excel. They currently prepare reports with MS Excel and Pivot Tables, but they now need a better online tool to show data to their customers. Tables are huge and drill-down functionality is needed. My current knowledge covers Spring, Flex, MySQL and Linux; I have some knowledge of PostgreSQL, MSSQL and Windows. What is the easiest way of doing this project? Do you think SSRS (haven't tried it yet) and ASP.NET are better suited for this kind of job? Actually I prefer open source solutions. Flex has an OLAP Data Grid control which does aggregation on the client side. JasperServer seems promising, but it seems I need the enterprise version (multiple organizations and ad hoc queries). What about a Mondrian + Flex + PostgreSQL solution? Any previous experience will be appreciated. Yes, I am confused by the options.

    Read the article

  • How do I pass a lot of parameters to views in Django?

    - by Mark
    I'm very new to Django and I'm trying to build an application to present my data in tables and charts. Until now my learning process has gone very smoothly, but now I'm a bit stuck. My page view retrieves large amounts of data from a database and puts it in the context. The template then generates different HTML tables. So far so good. Now I want to add different charts to the template. I manage to do this by defining <img src="..." /> tags. The Matplotlib chart is generated in my chart view and returned via:

        response = HttpResponse(content_type='image/png')
        canvas.print_png(response)
        return response

    Now I have different questions: the data is retrieved twice from the database, once in the page view to render the tables, and again in the chart view to make the charts. What is the best way to pass the data, already in the context of the page, to the chart view? I need a lot of charts, each with different datasets. I could make a chart view for each chart, but probably there is a better way. How do I pass the different dataset names to the chart view? Some charts have 20 datasets, so I don't think that passing these dataset parameters via the URL (like <img src="chart/dataset1/dataset2/.../dataset20/chart.png" />) is the right way. Any advice?

    Read the article

  • Export the datagrid data to text in asp.net

    - by SRIRAM
    Problem: it complains that there is no assembly reference/namespace for Database.

        // Database/DatabaseFactory are Enterprise Library Data Access block types, so the
        // project needs a reference to that assembly and a matching using directive, e.g.:
        // using Microsoft.Practices.EnterpriseLibrary.Data;

        Database db = DatabaseFactory.CreateDatabase();
        DBCommandWrapper selectCommandWrapper = db.GetStoredProcCommandWrapper("sp_GetLatestArticles");
        DataSet ds = db.ExecuteDataSet(selectCommandWrapper);

        StringBuilder str = new StringBuilder();
        for (int i = 0; i <= ds.Tables[0].Rows.Count - 1; i++)
        {
            for (int j = 0; j <= ds.Tables[0].Columns.Count - 1; j++)
            {
                str.Append(ds.Tables[0].Rows[i][j].ToString());
            }
            str.Append("<BR>");
        }

        Response.Clear();
        Response.AddHeader("content-disposition", "attachment;filename=FileName.txt");
        Response.Charset = "";
        Response.Cache.SetCacheability(HttpCacheability.NoCache);
        Response.ContentType = "application/vnd.text";
        System.IO.StringWriter stringWrite = new System.IO.StringWriter();
        System.Web.UI.HtmlTextWriter htmlWrite = new HtmlTextWriter(stringWrite);
        Response.Write(str.ToString());
        Response.End();

    Read the article

  • Can YAML have inheritance?

    - by Jason
    This question involves a lot of symfony but it should be easy enough for someone to follow who only knows YAML and not symfony. My symfony models come from a three-step process: First, I create the tables in MySQL. Second, I run a symfony command (symfony doctrine:build-schema) to convert my table structure into a YAML file. Third, I run another symfony command (symfony doctrine:build-model) to convert the YAML file into PHP code. Here's the problem: there are some tables in the database that I don't want to end up in my symfony code. For example, let's say I have two tables: one called my_table and another called wordpress. The YAML file I end up with might look like this:

        MyTable:
          connection: doctrine
          tableName: my_table
        Wordpress:
          connection: doctrine
          tableName: wordpress

    That's great except the wordpress table has nothing to do with my symfony models. The result is that every single time I make a change to my database and generate this YAML file, I have to manually remove wordpress. It's annoying! I'd like to be able to create a file called baseConfig.php or something that looks like this:

        $config = array(
          'MyTable' => array(
            'connection' => 'doctrine',
            'tableName' => 'my_table',
          ),
          'Wordpress' => array(
            'connection' => 'doctrine',
            'tableName' => 'wordpress',
          ),
        );

    And then I could have a separate file called config.php or something where I could make modifications to the base config:

        unset($config['Wordpress']);

    So my question is: is there any way to convert YAML into executable PHP code (as opposed to load YAML INTO PHP code like what sfYaml::load() does) to achieve this sort of thing? Or is there maybe some other way to achieve YAML inheritance? Thanks, Jason

    Read the article

  • Increase Query Speed in PostgreSQL

    - by Anthoni Gardner
    Hello, first time posting here, but an avid reader. I am experiencing slow query times on my database (all tested locally thus far) and I'm not sure how to go about fixing it. The database itself has 44 tables and some of those tables have over 1 million records (mainly the movies, actresses and actors tables). The database is built via JMDB using the flat files from IMDB. The SQL query that I am about to show is also from that program (which likewise experiences very slow search times). I have tried to include as much information as I can, such as the explain plan etc.

        "QUERY PLAN"
        "HashAggregate  (cost=46492.52..46493.50 rows=98 width=46)"
        "  Output: public.movies.title, public.movies.movieid, public.movies.year"
        "  ->  Append  (cost=39094.17..46491.79 rows=98 width=46)"
        "        ->  HashAggregate  (cost=39094.17..39094.87 rows=70 width=46)"
        "              Output: public.movies.title, public.movies.movieid, public.movies.year"
        "              ->  Seq Scan on movies  (cost=0.00..39093.65 rows=70 width=46)"
        "                    Output: public.movies.title, public.movies.movieid, public.movies.year"
        "                    Filter: (((title)::text ~~* '%Babe%'::text) AND ((title)::text !~~* '""%}'::text))"
        "        ->  Nested Loop  (cost=0.00..7395.94 rows=28 width=46)"
        "              Output: public.movies.title, public.movies.movieid, public.movies.year"
        "              ->  Seq Scan on akatitles  (cost=0.00..7159.24 rows=28 width=4)"
        "                    Output: akatitles.movieid, akatitles.language, akatitles.title, "
        "                    Filter: (((title)::text ~~* '%Babe%'::text) AND ((title)::text !~~* '""%}'::text))"
        "              ->  Index Scan using movies_pkey on movies  (cost=0.00..8.44 rows=1 width=46)"
        "                    Output: public.movies.movieid, public.movies.title, public.movies.year, public.movies.imdbid"
        "                    Index Cond: (public.movies.movieid = akatitles.movieid)"

        SELECT * FROM (
            (SELECT DISTINCT title, movieid, year
               FROM movies
              WHERE title ILIKE '%Babe%' AND NOT (title ILIKE '"%}'))
            UNION
            (SELECT movies.title, movies.movieid, movies.year
               FROM movies
              INNER JOIN akatitles ON movies.movieid = akatitles.movieid
              WHERE akatitles.title ILIKE '%Babe%' AND NOT (akatitles.title ILIKE '"%}'))
        ) AS union_tmp2;

    This returns 612 rows in 9078 ms. The database backup (plain text) is 1.61 GB. It's a really complex query and I am not fully cognizant of it; like I said, it was spat out by JMDB. Do you have any suggestions on how I can increase the speed? Regards, Anthoni

    Read the article

  • Displaying Many-To-Many Database relationship in VB.NET 2008 with DataGrid, MS SQL 2008

    - by user337501
    My computer bombed while posting this; I couldn't find a duplicate question, but if there is one, forgive me. So, I've run into a wall. And rather than use a ladder to avoid it, I'd like to go through it. I'm setting up what I can best describe as a many-to-many relationship in a database. To exemplify, imagine I have three primary tables: Items, Categories, Sections (never mind the potential redundancy). Then I have another table, Properties. Items, Categories, and Sections can be associated with many properties. A single property can be associated with one, all, or none of the other tables. The best way I can figure to do this is to have join tables make the relationship, i.e. tblItems----(Foreign Key)----tblItems_To_Properties----(Foreign Key)----tblProperties. In this example, tblItems simply has an "ItemID" primary key. tblItems_To_Properties has its own primary key (tblItems_To_PropertiesID), a foreign key to the item (ItemID) and a foreign key to the property (PropertyID). The Properties table simply has its primary key (PropertyID). I hope this example isn't too confusing... if I have to, I can find a way to put a diagram up or something. My problem is, I want to display this in a DataGrid using the master-detail method (DevExpress GridControl). I use tblItems as a test, and I can see the items in the parent view, but in the child view I see (understandably) the join table and that is it. My goal is to make it so the grid ignores the join table and shows the Properties table as the only child. Any help on this method or insight into another solution would be much appreciated.

    Read the article

  • Is CSS Inheritance in Internet Explorer 8 still buggy?

    - by rrrr
    I have a situation I am looking at where certain CSS properties will not be inherited. This revolves around tables and IE8. Using the sample HTML below, I cannot get the text within the table to inherit the green colour. This works in Firefox and Chrome, but not IE8, and from reading up this seems to have always been a problem in IE but was meant to be working in version 8 from what I read. I have tried to specify the inherit value everywhere possible, but to no avail, so the question is whether the CSS inheritance support in IE8 is buggy, or am I missing something? I don't want answers that change the inline CSS to classes, and I certainly don't want any comments on tables, as this all stems from building and designing HTML emails where inline CSS and tables are essential.

        <html>
          <head></head>
          <body>
            <table style="color: green;">
              <tr>
                <td>
                  <span>Span</span>
                  <p>Paragraph</p>
                  <div>Div</div>
                  <table style="color:inherit;">
                    <tr>
                      <td>Table</td>
                    </tr>
                  </table>
                </td>
              </tr>
            </table>
          </body>
        </html>

    Read the article

  • EAV Database Scheme

    - by GLO
    Hello Stack Overflow community! I believe my question concerns every db guru here! Do you know the EAV DB scheme (http://en.wikipedia.org/wiki/Entity-attribute-value_model) and what is said about the performance of this model? I wonder, if I break this model into smaller tables, what is the result? Let's talk about it. I have a db with more than 100K records: a lot of categories and many items (with different properties per category). Everything is stored in an EAV. If I break this scheme and create a unique table for each category, is that something I should avoid? Yes, I know that I'll probably have a lot of tables and I'll need to ALTER them if I want to add an extra field, BUT is this so wrong? I have also read that the more tables I have, the more files the db will be populated with, and that this isn't good for any filesystem. Any suggestion? Thank you!
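
    To make the comparison concrete, here is a minimal sketch of the two layouts being weighed, in generic SQL with purely hypothetical table and column names:

        -- EAV: one attribute table shared by every category
        CREATE TABLE item_attributes (
            item_id   INT          NOT NULL,
            attribute VARCHAR(64)  NOT NULL,
            value     VARCHAR(255),
            PRIMARY KEY (item_id, attribute)
        );

        -- Per-category table: one real column per property,
        -- ALTERed whenever a new property is added to the category
        CREATE TABLE items_books (
            item_id INT PRIMARY KEY,
            author  VARCHAR(128),
            isbn    VARCHAR(32),
            pages   INT
        );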

    Read the article

  • How to modernize an enormous legacy database?

    - by smayers81
    I have a question, just looking for suggestions here. So, my application is 'modernizing' a desktop application by converting it to the web, with an ICEFaces UI and server side written in Java. However, they are keeping around the same Oracle database, which at current count has about 700-900 tables and probably a billion total records in the tables. Some individual tables have 250 million rows, many have over 25 million. Needless to say, the database is not scaling well. As a result, the performance of the application is looking to be abysmal. The architects / decision makers-that-be have all either refused or are unwilling to restructure the persistence. So, basically we are putting a fresh coat of paint on a functional desktop application that currently serves most user needs and does so with relative ease and quick performance. I am having trouble sleeping at night thinking of how poorly this application is going to perform and how difficult it is going to be for everyday users to do their job. So, my question is, what options do I have to mitigate this impending disaster? Is there some type of intermediate layer I can put in between the database and the Java code to speed up performance while at the same time keeping the database structure intact? Caching is obviously an option, but I don't see that as being a cure-all. Is it possible to layer a NoSQL DB in between or something?

    Read the article

  • should this database table be normalized?

    - by oo
    I have taken over a database that stores fitness information, and we were having a debate about a certain table and whether it should stay as one table or get broken up into three tables. Today, there is one table called workouts that has the following fields: id, exercise_id, reps, weight, date, person_id. So if I did 2 sets of 3 different exercises on one day, I would have 6 records in that table for that day. For example:

        id, exercise_id, reps, weight, date,     person_id
        1,  1,           10,   100,    1/1/2010, 10
        2,  1,           10,   100,    1/1/2010, 10
        3,  1,           10,   100,    1/1/2010, 10
        4,  2,           10,   100,    1/1/2010, 10
        5,  2,           10,   100,    1/1/2010, 10
        6,  2,           10,   100,    1/1/2010, 10

    So the question is: given that there is some redundant data (date, person_id, exercise_id) in multiple records, should this be normalized into three tables?

        WorkoutSummary:
          - id
          - date
          - person_id

        WorkoutExercise:
          - id
          - workout_id (foreign key into WorkoutSummary)
          - exercise_id

        WorkoutSets:
          - id
          - workout_exercise_id (foreign key into WorkoutExercise)
          - reps
          - weight

    I would guess the downside is that queries would be slower after this refactoring, as we would now need to join 3 tables to do the same query that had no joins before. The benefit of the refactoring is that it allows us in the future to add new fields at the workout summary level or the exercise level without adding more duplication. Any feedback on this debate?
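
    As an illustration only, the proposed three-table layout could look roughly like this in generic SQL (the column types are assumptions, not from the post):

        CREATE TABLE WorkoutSummary (
            id        INT PRIMARY KEY,
            date      DATE NOT NULL,
            person_id INT  NOT NULL
        );

        CREATE TABLE WorkoutExercise (
            id          INT PRIMARY KEY,
            workout_id  INT NOT NULL REFERENCES WorkoutSummary (id),
            exercise_id INT NOT NULL
        );

        CREATE TABLE WorkoutSets (
            id                  INT PRIMARY KEY,
            workout_exercise_id INT NOT NULL REFERENCES WorkoutExercise (id),
            reps                INT,
            weight              DECIMAL(6,2)
        );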

    Read the article

  • search dataset from xml file

    - by Anelim
    Hi, I need to filter the results I obtain when I load my XML file, for example searching the XML data for items with the keyword "Chemistry". The XML example below is a summary of my XML file. The data is loaded in a GridView. Could you help? Thanks!

    XML file (summary):

        <CONTRACTS>
          <CONTRACT>
            <CONTRACTID>779</CONTRACTID>
            <NAME>ContractName</NAME>
            <KEYWORDS>Chemistry, Engineering, Chemical</KEYWORDS>
            <CONTRACTSTARTDATE>1/8/2005</CONTRACTSTARTDATE>
            <CONTRACTENDDATE>31/7/2008</CONTRACTENDDATE>
            <COMMODITIES>
              <COMMODITY>
                <COMMODITYCODE>CHEM</COMMODITYCODE>
                <COMMODITYNAME>Chemicals</COMMODITYNAME>
              </COMMODITY>
            </COMMODITIES>
          </CONTRACT>
        </CONTRACTS>

    My code-behind is:

        Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
            Dim ds As DataSet = New DataSet()
            ds.ReadXml(AppDomain.CurrentDomain.BaseDirectory + "/testxml.xml")
            Dim dtContract As DataTable = ds.Tables(0)
            Dim dtJoinCommodities As DataTable = ds.Tables(1)
            Dim dtCommodity As DataTable = ds.Tables(2)
            dtContract.Columns.Add("COMMODITYCODE")
            dtContract.Columns.Add("COMMODITYNAME")
            Dim count As Integer = 0
            Dim commodityCode As String = Nothing
            Dim commodityName As String = Nothing
            Dim dRowJoinCommodity As DataRow
            Dim trimChar As Char() = {","c, " "c}
            Dim textboxstring As String = "KEYWORDS like 'pencil'"
            For Each dRow As DataRow In dtContract.Select(textboxstring)
                commodityCode = ""
                commodityName = ""
                count = dtContract.Rows.IndexOf(dRow)
                dRowJoinCommodity = dtJoinCommodities.Rows(count)
                For Each dRowCommodities As DataRow In dtCommodity.Rows
                    If dRowCommodities("COMMODITIES_Id").ToString() = dRowJoinCommodity("COMMODITIES_ID").ToString() Then
                        commodityCode = commodityCode + dRowCommodities("COMMODITYCODE").ToString() + ", "
                        commodityName = commodityName + dRowCommodities("COMMODITYNAME").ToString() + ", "
                    End If
                Next
                commodityCode = commodityCode.TrimEnd(trimChar)
                commodityName = commodityName.TrimEnd(trimChar)
                dRow("COMMODITYCODE") = commodityCode
                dRow("COMMODITYNAME") = commodityName
            Next
            GridView1.DataSource = dtContract
            GridView1.DataBind()
        End Sub

    Read the article

  • Subselecting with MDX

    - by Vince
    Greetings Stack Overflow community. I've recently started building an OLAP cube in SSAS 2008 and have gotten stuck. I would be grateful if someone could at least point me in the right direction. Situation: two fact tables in the same cube. FactCalls holds information about calls made by subscribers, FactTopups holds topup data. Both tables have numerous common dimensions, one of them being the Subscriber dimension.

        FactCalls          FactTopups
        SubscriberKey      SubscriberKey
        CallDuration       DateKey
        CallCost           Topup Value
        ...

    What I am trying to achieve is to be able to build FactCalls reports based on distinct subscribers that have topped up their accounts within the last 7 days. What I am basically looking for is an MDX equivalent to SQL's:

        SELECT *
        FROM FactCalls
        WHERE SubscriberKey IN (
            SELECT DISTINCT SubscriberKey
            FROM FactTopups
            WHERE ...
        );

    I've tried creating a degenerate dimension for both tables containing SubscriberKey and doing:

        Exist(
            [Calls Degenerate].[Subscriber Key].Children,
            [Topups Degenerate].[Subscriber Key].Children
        )

    without success. Kind regards, Vince

    Read the article

  • Looking for a .Net ORM

    - by SLaks
    I'm looking for a .Net 3.5 ORM framework with a rather unusual set of requirements:

    - I need to create and alter tables at runtime with schemas defined by my end-users. (Obviously, that wouldn't be strongly-typed; I'm looking for something like a DataTable there.)
    - I also want regular strongly-typed partial classes for rows in non-dynamic tables, with custom validation and other logic. (Like normal ORMs.)
    - I want to load the entire database (or some entire tables) once, and keep it in memory throughout the life of the (WinForms) GUI. (I have a shared SQL Server with a relatively slow connection.)
    - I also want regular LINQ support (like LINQ-to-SQL) for ASP.Net on the shared server (which has a fast connection to SQL Server).
    - In addition to SQL Server, I also want to be able to use a single-file database that would support XCopy deployment (without installing SQL CE on the end-user's machine). (Probably Access or SQLite.)
    - Finally, it has to be free (unless it's OpenAccess).

    I'll probably have to write it myself, as I don't think there is an existing ORM that meets these requirements. However, I don't want to re-invent the wheel if there is one, hence this question. I'm using VS2010, but I don't know when my webhost (LFC) will upgrade to .Net 4.0.

    Read the article

  • Practical size limitations for RDBMS

    - by grenade
    I am working on a project that must store very large datasets and associated reference data. I have never come across a project that required tables quite this large. I have proved that at least one development environment cannot cope at the database tier with the processing required by the complex queries against views that the application layer generates (views with multiple inner and outer joins, grouping, summing and averaging against tables with 90 million rows). The RDBMS that I have tested against is DB2 on AIX. The dev environment that failed was loaded with 1/20th of the volume that will be processed in production. I am assured that the production hardware is superior to the dev and staging hardware but I just don't believe that it will cope with the sheer volume of data and complexity of queries. Before the dev environment failed, it was taking in excess of 5 minutes to return a small dataset (several hundred rows) that was produced by a complex query (many joins, lots of grouping, summing and averaging) against the large tables. My gut feeling is that the db architecture must change so that the aggregations currently provided by the views are performed as part of an off-peak batch process. Now for my question. I am assured by people who claim to have experience of this sort of thing (which I do not) that my fears are unfounded. Are they? Can a modern RDBMS (SQL Server 2008, Oracle, DB2) cope with the volume and complexity I have described (given an appropriate amount of hardware) or are we in the realm of technologies like Google's BigTable? I'm hoping for answers from folks who have actually had to work with this sort of volume at a non-theoretical level.

    Read the article

  • h2 (embedded mode) database files problem

    - by aeter
    There is an h2 database file in my src directory (Java, Eclipse): h2test.db. The problem: starting h2.jar from the command line (and thus the h2 browser interface on port 8082), I created 2 tables, 'test1' and 'test2', in h2test.db and put some data in them; when trying to access them from Java code (JDBC), it throws a "table not found" exception. A "show tables" from the Java code shows a result set with 0 rows. Also, when creating a new table ('newtest') from the Java code (CREATE TABLE ... etc.), I cannot see it when starting the h2.jar browser interface afterwards; just the other two tables ('test1' and 'test2') are shown (but the newly created table 'newtest' is then accessible from the Java code). I'm inexperienced with embedded databases; I believe I'm doing something fundamentally wrong here. My assumption is that I'm accessing the same file once from the Java app and once from the h2 console-browser interface, but I cannot seem to understand it. What am I doing wrong here?

    Read the article

  • Ruby on Rails ActiveRecord/Include/Associations can't get my query to work

    - by Cypher
    I just started learning Rails and I'm trying to set up queries via associations. All the queries I try to write seem to be doing bizarre things and end up trying to query two table names joined together with an '_' as if they were one table; I have no clue why this would ever happen. My tables are as follows:

        schools:        id, name
        variables:      id, name, type
        var_entries:    id, variable_id, entry
        school_entries: id, school_id, var_entry_id

    My Rails models are:

        $local = {
          :adapter  => "mysql",
          :host     => "localhost",
          :port     => "3306".to_i,
          :database => "spy_2",
          :username => "root",
          :password => "vertrigo"
        }

        class School < ActiveRecord::Base
          establish_connection $local
          has_many :school_entries
          has_many :var_entries, :through => school_entries
        end

        class Variable < ActiveRecord::Base
          establish_connection $local
          has_many :var_entries
          has_many :school_entries, :through => :var_entries
        end

        class VarEntry < ActiveRecord::Base
          establish_connection $local
          has_many_and_belongs_to :school_entries
          belongs_to :variables
        end

        class SchoolEntry < ActiveRecord::Base
          establish_connection $local
          belongs_to :school
          has_many :var_entries
        end

    I want to do this SQL query:

        SELECT school_id, variable_id, rank
        FROM school_entries, variables, var_entries, schools
        WHERE var_entries.variable_id = variables.id
          AND school_entries.var_entry_id = var_entries.id
          AND schools.id = school_entries.school_id
          AND variables.type = 'number';

    and put it into Rails notation. Here is one of my many failed attempts:

        schools = VarEntry.all(:include => [:school_entries, :variables],
                               :conditions => "variables.type = 'number'")

    The error:

        'const_missing': uninitialized constant VarEntry::Variables (NameError)

    If I remove variables:

        schools = VarEntry.all(:include => [:school_entries, :variables],
                               :conditions => "type = 'number'")

    the error is:

        Mysql::Error: Unkown column 'type' in 'where clause': SELECT * FROM 'var_entries' WHERE (type=number) (ActiveRecord::StatementInvalid)

    Can anyone tell me where I'm going horribly wrong?

    Read the article

  • Help! The log file for database 'tempdb' is full. Back up the transaction log for the database to free up some log space

    - by michael.lukatchik
    We're running SQL Server 2000. In our database, we have an "Orders" table with approximately 750,000 rows. We can perform simple SELECT statements on this table. However, when we want to run a query like SELECT TOP 100 * FROM Orders ORDER BY Date_Ordered DESC, we receive the following message:

        Error: 9002, Severity: 17, State: 6
        The log file for database 'tempdb' is full. Back up the transaction log for the database to free up some log space.

    We have other tables in our database which are similar in size (i.e. around 700,000 records). On those tables we can run any queries we'd like and we never receive a message about tempdb being full. To resolve this, we've backed up our database, shrunk the actual database, and also shrunk the database and files in the tempdb system database, but this hasn't resolved the issue. The size of our log file is set to autogrow. We're not sure where to go next. Are there any ideas why we might still be receiving this message?
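
    Two standard SQL Server commands that may help show how large tempdb and its log actually get while the ORDER BY runs (a diagnostic sketch only, not a fix):

        -- Size and percent-used of every database log, including tempdb
        DBCC SQLPERF(LOGSPACE);

        -- File sizes and growth settings for tempdb itself
        USE tempdb;
        EXEC sp_helpfile;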

    Read the article

  • SQL Joins Excluding Data

    - by Andrew
    Say I have three tables:

        Fruit (Table 1)
        ------
        Apple
        Orange
        Pear
        Banana

        Produce Store A (Table 2 - 2 columns: fruit for sale => price)
        -------------------------
        Apple  => 1.00
        Orange => 1.50
        Pear   => 2.00

        Produce Store B (Table 3 - 2 columns: fruit for sale => price)
        ------------------------
        Apple  => 1.10
        Pear   => 2.50
        Banana => 1.00

    If I would like to write a query with Column 1: the set of fruit offered at Produce Store A UNION Produce Store B, Column 2: Price of the fruit at Produce Store A (or null if that fruit is not offered), Column 3: Price of the fruit at Produce Store B (or null if that fruit is not offered), how would I go about joining the tables? I am facing a similar problem (with more complex tables), and no matter what I try, if the "fruit" is not at "produce store a" but is at "produce store b", it is excluded (since I am joining produce store a first). I have even written a subquery to generate a full list of fruits, then left join Produce Store A, but it is still eliminating the fruits not offered at A. Any Ideas?
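
    For what it's worth, a sketch of the query shape being described, in generic SQL with assumed table and column names (fruit in a column called fruit, prices in price):

        SELECT f.fruit,
               a.price AS price_store_a,   -- NULL when store A does not carry the fruit
               b.price AS price_store_b    -- NULL when store B does not carry the fruit
        FROM (SELECT fruit FROM store_a
              UNION
              SELECT fruit FROM store_b) AS f
        LEFT JOIN store_a AS a ON a.fruit = f.fruit
        LEFT JOIN store_b AS b ON b.fruit = f.fruit;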

    Read the article

  • SQL Server Multi-statement UDF - way to store data temporarily required

    - by Kharlos Dominguez
    Hello, I have a relatively complex query, with several self joins, which works on a rather large table. For that query to perform faster, I thus need to only work with a subset of the data. Said subset of data can range between 12 000 and 120 000 rows depending on the parameters passed. More details can be found here: http://stackoverflow.com/questions/3054843/sql-server-cte-referred-in-self-joins-slow As you can see, I was using a CTE to return the data subset before, which caused some performance problems as SQL Server was re-running the Select statement in the CTE for every join instead of simply being run once and reusing its data set. The alternative, using temporary tables worked much faster (while testing the query in a separate window outside the UDF body). However, when I tried to implement this in a multi-statement UDF, I was harshly reminded by SQL Server that multi-statement UDFs do not support temporary tables for some reason... UDFs do allow table variables however, so I tried that, but the performance is absolutely horrible as it takes 1m40 for my query to complete whereas the the CTE version only took 40minutes. I believe the table variables is slow for reasons listed in this thread: http://stackoverflow.com/questions/1643687/table-variable-poor-performance-on-insert-in-sql-server-stored-procedure Temporary table version takes around 1 seconds, but I can't make it into a function due to the SQL Server restrictions, and I have to return a table back to the caller. Considering that CTE and table variables are both too slow, and that temporary tables are rejected in UDFs, What are my options in order for my UDF to perform quickly? Thanks a lot in advance.

    Read the article

  • How should I manage my many-to-many relationships?

    - by wes
    Hello all, I have a database containing a couple tables: files and users. This relationship is many-to-many, so I also have a table called users_files_ref which holds foreign keys to both of the above tables. Here's the schema of each table: files - file_id, file_name users - user_id, user_name users_files_ref - user_file_ref_id, user_id, file_id I'm using Codeigniter to build a file host application, and I'm right in the middle of adding the functionality that enables users to upload files. This is where I'm running into my problem. Once I add a file to the files table, I will need that new file's id to update the users_files_ref table. Right now I'm adding the record to the files table, and then I imagined I'd run a query to grab the last file added, so that I can get the ID, and then use that ID to insert the new users_files_ref record. I know this will work on a small scale, but I imagine there is a better way of managing these records, especially in a heavy-traffic scenario. I am new to relational database stuff but have been around PHP for a while, so please bear with me here :-) I have primary and foreign keys set up correctly for the files, users, and users_files_ref tables, I'm just wondering how to manage the adding of file records for this scenario? Thanks for any help provided, it's much appreciated. -Wes

    Read the article

  • What's a good Java API for creating Word documents?

    - by Bill James
    I have a new app I'll be working on where I have to generate a Word document that contains tables, graphs, a table of contents and text. What's a good API to use for this? How sure are you that it supports graphs, ToCs, and tables? What are some hidden gotchas in using them? Some clarifications:

    - I can't output a PDF; they want a Word doc.
    - They're using MS Word 2003 (or 2007), not OpenOffice.
    - The application is running on a *nix app-server.
    - It'd be nice if I could start with a template doc and just fill in some spaces with tables, graphs, etc.

    Thanks for the help. Edit: Several good answers below, each with their own faults as far as my current situation goes; it's hard to pick a "final answer" from them. I think I'll leave it open, and hope for better solutions to be created. Edit: The OpenOffice UNO project does seem to be closest to what I asked for. While POI is certainly more mainstream, it's too immature for what I want.

    Read the article

  • What is the best way to auto-generate INSERT statements for a SQL Server table?

    - by JosephStyons
    We are writing a new application, and while testing, we will need a bunch of dummy data. I've added that data by using MS Access to dump excel files into the relevant tables. Every so often, we want to "refresh" the relevant tables, which means dropping them all, re-creating them, and running a saved MS Access append query. The first part (dropping & re-creating) is an easy sql script, but the last part makes me cringe. I want a single setup script that has a bunch of INSERTs to regenerate the dummy data. I have the data in the tables now. What is the best way to automatically generate a big list of INSERT statements from that dataset? I'm thinking of something like in TOAD (for Oracle) where you can right-click on a grid and click Save As-Insert Statements, and it will just dump a big sql script wherever you want. The only way I can think of doing it is to save the table to an excel sheet and then write an excel formula to create an INSERT for every row, which is surely not the best way. I'm using the 2008 Management Studio to connect to a SQL Server 2005 database.
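
    In case it helps frame the question, one low-tech approach is to have the server build the statements itself; a rough T-SQL sketch for a small hypothetical table (column list hard-coded, single quotes doubled by hand):

        SELECT 'INSERT INTO dbo.Customers (Id, Name) VALUES ('
               + CAST(Id AS VARCHAR(10)) + ', '''
               + REPLACE(Name, '''', '''''') + ''');'
        FROM dbo.Customers;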

    Read the article

  • Reporting tool for OLAP, *not* OLTP!

    - by Stefan Moser
    I'm looking for a control that I can put on top of an already existing OLAP star schema to allow the user to define their own "queries" and generate reports. Right now I have some predefined reports built on top of the cubes, but I'd like to allow the user to define their own criteria based on the cubes that I've created. I've found lots of products that will allow you to treat a transactional table like an OLAP cube, but nothing specifically for pre-existing cubes. EDIT: Let me be clear, I know there are countless reporting tools out there that claim to report on OLAP cubes. The problem is they all assume they are looking at transactional data and try to create their own cubes. I have tables that contain tens, if not hundreds, of millions of records. Most tools crash when handling this much data; the others just run incredibly slowly. I don't want a tool that targets business people. I want a tool that understands what a star and snowflake schema is. I want to be able to tell it what the fact tables are and what the dimension tables are, and have it create a UI on top of them. This is an easier problem for the tool vendor to solve because I am spoon-feeding them the cubes. I want to rely on the fact that cubes are a standardized pattern, and I want a tool that takes advantage of this fact. I want a tool that targets developers and starts with the assumption that I actually know how to manage my data; it just needs to build pretty reports for me and not crumble under the weight of my data.

    Read the article

< Previous Page | 94 95 96 97 98 99 100 101 102 103 104 105  | Next Page >