Search Results

Search found 31891 results on 1276 pages for 'database schema'.

Page 104/1276 | < Previous Page | 100 101 102 103 104 105 106 107 108 109 110 111  | Next Page >

  • Low cost way to host a large table yet keep the performance scalable?

    - by Leo Liang
    I have a growing table storing time series data: 500M entries now, with 200K new records added every day. The total size is around 15 GB at the moment. My clients query the table mostly via a PHP script, and the result set is around 10K records (not very large): select * from T where timestamp > X and timestamp < Y and additionalFilters. I want this operation to be cheap. Currently the table is hosted in Postgres 7 on a single box with 16 GB of memory, and I would love some good suggestions for hosting this at low cost while still letting me scale up for performance if needed. The table serves: 1. Query: 90% 2. Insert: 9.9% 3. Update: 0.1% <-- very rare.
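    A minimal sketch of one way such a table is often scaled: declarative range partitioning by month, assuming PostgreSQL 11 or later and hypothetical table and column names (the question itself mentions Postgres 7, which predates this feature):

      -- Hedged sketch: monthly range partitions for a time-series table (PostgreSQL 11+).
      -- Table and column names are assumptions, not taken from the question.
      CREATE TABLE readings (
          id    bigint GENERATED ALWAYS AS IDENTITY,
          ts    timestamptz NOT NULL,
          value numeric
      ) PARTITION BY RANGE (ts);

      -- One partition per month; a range filter on ts only touches the partitions
      -- that overlap the requested window (partition pruning).
      CREATE TABLE readings_2024_01 PARTITION OF readings
          FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');
      CREATE TABLE readings_2024_02 PARTITION OF readings
          FOR VALUES FROM ('2024-02-01') TO ('2024-03-01');

      -- An index on the time column keeps the ~10K-row range scans cheap.
      CREATE INDEX ON readings (ts);

      -- The query shape from the question then prunes to the relevant partitions:
      -- SELECT * FROM readings WHERE ts > X AND ts < Y AND <additional filters>;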

    Read the article

  • Define tables from a part of my ER Diagram.

    - by M R Jafari
    I have an ER diagram (shown at http://www.4freeimagehost.com/show.php?i=f82997ca4d5d.png). In the diagram you can see 2 entities and a 1:N relation between them. Project has 2 columns: ProjectID and ProjectName. Employee has 3 columns: EmployeeID, EmployeeName and ProjectID. A project has ONLY 1 project manager, and the project manager is an employee. Which columns should I add to model this?
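    A minimal sketch of one common way to model the stated constraint: a ProjectManagerID foreign key on Project pointing at Employee. Everything beyond the columns listed in the question is an assumption:

      -- Hedged sketch of the two tables described, plus an assumed
      -- ProjectManagerID column to record the single manager of each project.
      CREATE TABLE Employee (
          EmployeeID   INT PRIMARY KEY,
          EmployeeName VARCHAR(100),
          ProjectID    INT                 -- project the employee works on (1:N)
      );

      CREATE TABLE Project (
          ProjectID        INT PRIMARY KEY,
          ProjectName      VARCHAR(100),
          ProjectManagerID INT,            -- exactly one manager, who is an employee
          FOREIGN KEY (ProjectManagerID) REFERENCES Employee (EmployeeID)
      );

      -- Added afterwards because the two tables reference each other:
      ALTER TABLE Employee
          ADD FOREIGN KEY (ProjectID) REFERENCES Project (ProjectID);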

    Read the article

  • How to store data in mysql, to get the fastest performance?

    - by Oden
    Hey, I'm wondering which of the following two approaches would give me the fastest performance for a user messaging module on my site. The first one I thought about is a multi-table setup, which has a connection table and a main table. The connection table holds the connection between accounts and the messaging table. In this case a query to get some data about the author and the messages he has sent would look like the following: SELECT m.*, a.username FROM messages AS m LEFT JOIN connection_table ON (message_id = m.id) LEFT JOIN accounts AS a ON (account_id = a.id) WHERE m.id = '32341' Inserting into it is a little bit more "complicated". My other idea, which I think is the better solution to this problem, is to store the data I would put in a connection table in the same table where I store the mail data. It sounds like I would get lots of duplicated entries, but no, because I have a text-type field which holds user ids like this: *24*32*249* If I want to query them, I use the MySQL LIKE operator. Deleting is another problem, but for that I have one more field where I store who has deleted the post. The sad part is that I don't know how to join on this. So what would you recommend? Are there other ways?
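    A minimal sketch of the first (multi-table) layout described above, with assumed column names; this is the shape that allows plain indexed joins instead of LIKE scans over delimited id strings:

      -- Hedged sketch of the multi-table layout; names are assumptions.
      -- An existing accounts(id, username) table is assumed.
      CREATE TABLE messages (
          id        INT AUTO_INCREMENT PRIMARY KEY,
          author_id INT NOT NULL,
          body      TEXT,
          sent_at   DATETIME
      ) ENGINE=InnoDB;

      CREATE TABLE message_recipients (
          message_id INT NOT NULL,
          account_id INT NOT NULL,
          deleted    TINYINT(1) NOT NULL DEFAULT 0,   -- per-recipient delete flag
          PRIMARY KEY (message_id, account_id)
      ) ENGINE=InnoDB;

      -- Author data plus all recipients of one message, via indexed joins:
      SELECT m.*, a.username, r.account_id
      FROM messages m
      JOIN accounts a ON a.id = m.author_id
      LEFT JOIN message_recipients r ON r.message_id = m.id
      WHERE m.id = 32341;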

    Read the article

  • MySQL triggers cannot update rows in same table the trigger is assigned to. Suggested workaround?

    - by Cory House
    MySQL doesn't currently support updating rows in the same table the trigger is assigned to, since the call could become recursive. Does anyone have suggestions on a good workaround/alternative? Right now my plan is to call a stored procedure that performs the logic I really wanted in a trigger, but I'd love to hear how others have gotten around this limitation. Edit: A little more background as requested. I have a table that stores product attribute assignments. When a new parent product record is inserted, I'd like the trigger to perform a corresponding insert in the same table for each child record. This denormalization is necessary for performance. MySQL doesn't support this and throws: Can't update table 'mytable' in stored function/trigger because it is already used by statement which invoked this stored function/trigger. A long discussion on the issue on the MySQL forums basically led to: Use a stored proc, which is what I went with for now. Thanks in advance!
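    A minimal sketch of the stored-procedure workaround mentioned above: the procedure inserts the parent row and its denormalized child rows in one call. Table and column names are assumptions, not the poster's actual schema:

      -- Hedged sketch of the stored-proc alternative to the forbidden trigger.
      -- Assumed tables: product_attributes(product_id, attribute_id, value)
      -- and products(product_id, parent_product_id).
      DELIMITER //
      CREATE PROCEDURE insert_product_attribute(
          IN p_product_id   INT,
          IN p_attribute_id INT,
          IN p_value        VARCHAR(255)
      )
      BEGIN
          -- the row the trigger's INSERT would have fired on
          INSERT INTO product_attributes (product_id, attribute_id, value)
          VALUES (p_product_id, p_attribute_id, p_value);

          -- the denormalized copies the trigger was supposed to create:
          -- one row per child product of the parent just inserted
          INSERT INTO product_attributes (product_id, attribute_id, value)
          SELECT c.product_id, p_attribute_id, p_value
          FROM products c
          WHERE c.parent_product_id = p_product_id;
      END //
      DELIMITER ;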

    Read the article

  • Star schema [fact 1:n dimension]...how?

    - by Mike Gates
    I am a newcomer to data warehouses and have what I hope is an easy question about building a star schema: If I have a fact table where a fact record naturally has a one-to-many relationship with a single dimension, how can a star schema be modeled to support this? For example: Fact Table: Point of Sale entry (the measurement is DollarAmount) Dimension Table: Promotions (these are sales promotions in effect when a sale was made) The situation is that I want a single Point Of Sale entry to be associated with multiple different Promotions. These Promotions cannot be their own dimensions as there are many many many promotions. How do I do this?
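    A minimal sketch of one standard answer to this modeling problem, a bridge (promotion group) table between the fact and the promotion dimension; all names and types are assumptions:

      -- Hedged sketch: a promotion-group bridge table lets one sale row be
      -- linked to many promotions without widening the fact table.
      CREATE TABLE DimPromotion (
          PromotionKey  INT PRIMARY KEY,
          PromotionName VARCHAR(100)
      );

      CREATE TABLE PromotionGroupBridge (
          PromotionGroupKey INT NOT NULL,          -- one group per combination in effect
          PromotionKey      INT NOT NULL REFERENCES DimPromotion (PromotionKey),
          PRIMARY KEY (PromotionGroupKey, PromotionKey)
      );

      CREATE TABLE FactPointOfSale (
          SaleKey           INT PRIMARY KEY,
          PromotionGroupKey INT NOT NULL,          -- fact points at the group
          DollarAmount      DECIMAL(12,2)
      );

      -- All sales touched by a given promotion:
      SELECT f.SaleKey, f.DollarAmount
      FROM FactPointOfSale f
      JOIN PromotionGroupBridge b ON b.PromotionGroupKey = f.PromotionGroupKey
      WHERE b.PromotionKey = 42;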

    Read the article

  • Long-running Database Query

    - by JamesMLV
    I have a long-running SQL Server 2005 query that I have been hoping to optimize. When I look at the actual execution plan, it says a Clustered Index Seek has 66% of the cost. Execution Plan Snippet:

      <RelOp AvgRowSize="31" EstimateCPU="0.0113754" EstimateIO="0.0609028" EstimateRebinds="0" EstimateRewinds="0" EstimateRows="10198.5" LogicalOp="Clustered Index Seek" NodeId="16" Parallel="false" PhysicalOp="Clustered Index Seek" EstimatedTotalSubtreeCost="0.0722782">
        <OutputList>
          <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="quoteDate" />
          <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="price" />
          <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="tenure" />
        </OutputList>
        <RunTimeInformation>
          <RunTimeCountersPerThread Thread="0" ActualRows="1067" ActualEndOfScans="1" ActualExecutions="1" />
        </RunTimeInformation>
        <IndexScan Ordered="true" ScanDirection="FORWARD" ForcedIndex="false" NoExpandHint="false">
          <DefinedValues>
            <DefinedValue>
              <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="quoteDate" />
            </DefinedValue>
            <DefinedValue>
              <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="price" />
            </DefinedValue>
            <DefinedValue>
              <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="tenure" />
            </DefinedValue>
          </DefinedValues>
          <Object Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Index="[_dta_index_Indices_14_320720195__K5_K2_K1_3]" Alias="[I]" />
          <SeekPredicates>
            <SeekPredicate>
              <Prefix ScanType="EQ">
                <RangeColumns>
                  <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="HedgeProduct" ComputedColumn="true" />
                </RangeColumns>
                <RangeExpressions>
                  <ScalarOperator ScalarString="(1)">
                    <Const ConstValue="(1)" />
                  </ScalarOperator>
                </RangeExpressions>
              </Prefix>
              <StartRange ScanType="GE">
                <RangeColumns>
                  <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="tenure" />
                </RangeColumns>
                <RangeExpressions>
                  <ScalarOperator ScalarString="[@StartMonth]">
                    <Identifier>
                      <ColumnReference Column="@StartMonth" />
                    </Identifier>
                  </ScalarOperator>
                </RangeExpressions>
              </StartRange>
              <EndRange ScanType="LE">
                <RangeColumns>
                  <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="tenure" />
                </RangeColumns>
                <RangeExpressions>
                  <ScalarOperator ScalarString="[@EndMonth]">
                    <Identifier>
                      <ColumnReference Column="@EndMonth" />
                    </Identifier>
                  </ScalarOperator>
                </RangeExpressions>
              </EndRange>
            </SeekPredicate>
          </SeekPredicates>
        </IndexScan>
      </RelOp>

    From this, does anyone see an obvious problem that would be causing this to take so long?
    Here is the query:

      (SELECT quotedate, tenure, price, ActualVolume, HedgePortfolioValue,
              Price AS UnhedgedPrice,
              ((ActualVolume*Price - HedgePortfolioValue)/ActualVolume) AS HedgedPrice
       FROM (
           SELECT [quoteDate]
                 ,[price]
                 ,tenure
                 ,isnull(wf_1.[Risks].[HedgePortValueAsOfDate2](1,tenureMonth,quotedate,price),0) as HedgePortfolioValue
                 ,[TotalOperatingGasVolume] as ActualVolume
           FROM [wf_1].[dbo].[Indices] I
           inner join (
               SELECT DISTINCT tenureMonth
               FROM [wf_1].[Risks].[KnowRiskTrades]
               WHERE HedgeProduct = 1 AND portfolio <> 'Natural Gas Hedge Transactions'
           ) B ON I.tenure=B.tenureMonth
           inner join (
               SELECT [Month],[TotalOperatingGasVolume]
               FROM [wf_1].[Risks].[ActualGasVolumes]
           ) C ON C.[Month]=B.tenureMonth
           WHERE HedgeProduct = 1
             AND quoteDate>=dateadd(day, -3*365, tenureMonth)
             AND quoteDate<=dateadd(day,-3,tenureMonth)
       ) A
      )

    Read the article

  • How do I create a free server-side database to be accessed via Windows forms and/or browser?

    - by NoCatharsis
    I have no formal education in databasing or programming, but I've learned enough SQL, C++, and C# to at least get started setting up a small database on my company's server. Using MS SQL Server 2008 R2, I have created the database and set up columns with proper data types. However, there seems to be a lot of tweaks and details that are way over my head. Since I would like these data to be accessible to the other 7 or 8 people in my office (preferably via web browser), I'm wondering whether this is the best setup for my situation. The other option I've read about is a LAMPP server, which I assume is the competing free option to Microsoft's Express packages. I know nothing of LAMP servers except from the articles I've read on how to set them up (and I believe I even saw a detailed tutorial somewhere). To summarize, my question is this: Which of these (or any other) server setups would best suit my purposes, keeping in mind that I'm a true novice (but willing to learn), and would like to keep it free until I get more experience?

    Read the article

  • InnoDB or MyISAM - Why not both?

    - by Skoder
    Hey. I'm new to databases, and I've read various threads about which is better between InnoDB and MyISAM. It seems the debate is always about using one or the other. Is it not possible to use both, depending on the table? What would be the disadvantages of doing this? As far as I can tell, the engine can be set during the CREATE TABLE command, so certain tables which are often read can be set to MyISAM, while tables that need transaction support can use InnoDB. I'm sure there must be a problem, otherwise this would be the ultimate answer :).
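    A minimal sketch of the per-table engine choice described above; table names are assumptions. One known caveat: a transaction that touches both tables is only atomic on the InnoDB side, since MyISAM writes cannot be rolled back:

      -- Hedged sketch: mixing engines per table in one MySQL schema.
      CREATE TABLE article_cache (          -- read-mostly lookup data
          id   INT PRIMARY KEY,
          body TEXT
      ) ENGINE=MyISAM;

      CREATE TABLE orders (                 -- needs transactions and row locking
          id    INT PRIMARY KEY,
          total DECIMAL(10,2)
      ) ENGINE=InnoDB;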

    Read the article

  • How to change password schema for Dovecot user authentication for an already existing mail server

    - by deb_lrnr
    Hello, I have an email server setup on Debian Lenny with Postfix, Dovecot, SASL and MySQL. Currently, the password scheme in my dovecot-sql.conf file is set to: CRYPT default_pass_scheme = CRYPT I would like to globally change the scheme to something stronger like SSHA, or MD5-CRYPT and re-hash all passwords with SSHA. What is the best way to do this? The Dovecot wiki mentions how passwords that don't follow the default scheme defined in dovecot-sql.conf can be prefixed with "{ssha}password", but I couldn't see anything regarding changing an already-existing scheme to a new one for all passwords that are already in the database. Thanks for your help!

    Read the article

  • problem with doctrine:build-schema

    - by Ying
    Hi, I'm using Symfony with Doctrine, and I'm trying to generate the schema YML file from a DB. I get this error: SQLSTATE[42000]: Syntax error or access violation: 1064 You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'group' at line 1. Failing Query: "DESCRIBE group". I have a table named group in my database, and I suspect that this is what's breaking it. It's not an access problem, as I've checked my DB privileges. Any suggestions on what I can do? I have tried the quote-identifier attribute but with no luck :( I'm also struggling to find good Doctrine documentation; I can't seem to find a list of attributes, say for creating a column in the schema.yml file. I tried reaching out to the Symfony forums but they are not responsive! Any help would be greatly appreciated! Thanks.
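    A small illustration of why the generated statement fails: group is a reserved word in MySQL, so the identifier has to be quoted with backticks for the same DESCRIBE to run (this shows the raw SQL behaviour only, not a Doctrine-level fix):

      -- Fails: GROUP is a reserved word, so the bare identifier is a syntax error.
      DESCRIBE group;

      -- Works: the identifier is quoted with backticks.
      DESCRIBE `group`;

      -- The same applies to ordinary queries against that table:
      SELECT * FROM `group` WHERE id = 1;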

    Read the article

  • Is it possible to listen to relational database update?

    - by Morgan Cheng
    Is it possible to listen for relational database updates? For example, my web app wants to push data updates to clients through Comet. I could have the program poll the database periodically, but that would not be performant or scalable. If the app could hook into an "event handler" of the database, then it could get a notification every time data in a given table is updated. This sounds more promising, but I haven't found any concrete example of it. This is the listener pattern. Do common relational databases support such a feature?
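    One concrete example of the kind of hook being asked about is PostgreSQL's LISTEN/NOTIFY, sketched below with assumed table and channel names; other databases offer different mechanisms, so this is an illustration rather than a general answer:

      -- Hedged sketch: PostgreSQL trigger that notifies listeners on every change.
      -- An orders table with an id column is assumed.
      CREATE OR REPLACE FUNCTION notify_orders_changed() RETURNS trigger AS $$
      BEGIN
          -- payload is free text; here just the id of the changed row
          PERFORM pg_notify('orders_changed', NEW.id::text);
          RETURN NEW;
      END;
      $$ LANGUAGE plpgsql;

      CREATE TRIGGER orders_changed
      AFTER INSERT OR UPDATE ON orders
      FOR EACH ROW EXECUTE PROCEDURE notify_orders_changed();

      -- The application side opens a connection, runs LISTEN orders_changed;
      -- and then waits on that connection for notifications instead of
      -- repeatedly re-querying the table.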

    Read the article

  • Relational MySQL - fetched properties?

    - by Kelso.b
    I'm currently using the following PHP code:

      // Get all subordinates
      $subords = array();
      $supervisorID = $this->session->userdata('supervisor_id');
      $result = $this->db->query(sprintf("SELECT * FROM users WHERE supervisor_id=%d AND id!=%d", $supervisorID, $supervisorID));
      $user_list_query = 'user_id='.$supervisorID;
      foreach($result->result() as $user){
          $user_list_query .= ' OR user_id='.$user->id;
          $subords[$user->id] = $user;
      }

      // Get Submissions
      $submissionsResult = $this->db->query(sprintf("SELECT * FROM submissions WHERE %s", $user_list_query));
      $submissions = array();
      foreach($submissionsResult->result() as $submission){
          $entriesResult = $this->db->query(sprintf("SELECT * FROM submittedentries WHERE timestamp=%d", $submission->timestamp));
          $entries = array();
          foreach($entriesResult->result() as $entry)
              $entries[] = $entry;
          $submissions[] = array(
              'user' => $subords[$submission->user_id],
              'entries' => $entries
          );
          $entriesResult->free_result();
      }

    Basically I'm getting a list of users that are subordinates of a given supervisor_id (every user entry has a supervisor_id field), then grabbing entries belonging to any of those users. I can't help but think there is a more elegant way of doing this, like SELECT FROM tablename WHERE user->supervisor_id=2222. Is there something like this with PHP/MySQL? I should probably learn relational databases properly sometime. :(
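    A minimal sketch of the single-query shape being asked about: the subordinates are selected with a subquery on supervisor_id, so submissions and entries come back in one or two joined queries instead of one query per submission. Table and column names follow the snippet; the join on timestamp mirrors the original code and is otherwise an assumption:

      -- All submissions by the supervisor (id 2222) and their direct subordinates:
      SELECT s.*
      FROM submissions s
      WHERE s.user_id IN (
          SELECT id FROM users WHERE supervisor_id = 2222
          UNION
          SELECT 2222
      );

      -- Submissions together with their entries in one round trip:
      SELECT s.*, e.*
      FROM submissions s
      JOIN submittedentries e ON e.timestamp = s.timestamp
      WHERE s.user_id IN (SELECT id FROM users WHERE supervisor_id = 2222);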

    Read the article

  • What are some useful SQL statements that should be known by all developers who may touch the Back end?

    - by Jian Lin
    What are some useful SQL statements that should be known by all developers who may touch the back end side of the project? (Update: just as in algorithms we know there are sorting problems and shuffling problems, and we know some solutions to them, this question is aiming at the same thing.) For example, ones I can think of are: Get a list of Employees and their boss, or one with the employee's salary greater than the boss's. (Self-join) Get a list of the most popular Classes registered by students, from the greatest number to the smallest. (Count, group by, order by) Get a list of Classes that are not registered by any students. (Outer join and check whether the match is NULL, or get from the Classes table all ClassIDs which are NOT IN a subquery that gets all ClassIDs from the Registrations table.) Are there some SQL statements that should be up the sleeve of all developers who might touch back end data?
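    A minimal sketch of the first item on the list (employees and their boss via a self-join), assuming an Employees table with EmployeeID, Name, Salary and BossID columns:

      -- Each employee next to their boss (self-join):
      SELECT e.Name AS Employee, b.Name AS Boss
      FROM Employees e
      LEFT JOIN Employees b ON b.EmployeeID = e.BossID;

      -- Only the employees who earn more than their boss:
      SELECT e.Name, e.Salary, b.Name AS Boss, b.Salary AS BossSalary
      FROM Employees e
      JOIN Employees b ON b.EmployeeID = e.BossID
      WHERE e.Salary > b.Salary;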

    Read the article

  • How do you store sets in Cassandra?

    - by Ben W
    I'd like to convert this JSON to a data model in Cassandra, where each of the arrays is a set with no duplicates: var data = { "data1": { "100": [1, 2, 3], "200": [3, 4] }, "data2": { "k1": [1], "k2": [4, 5] } } I'd like to query like this: data["data1"]["100"] to retrieve the sets. Anyone know how you might model this in Cassandra? (The only thing I came up with was columns whose name was a set value and the value of the column was an empty string, but that felt wrong.) It's not OK to serialize the sets as JSON or some other string, which would make this much easier. Also, I should note that it's OK to split data1 and data2 into separate ColumnFamilies; it's not necessary that they're keys in the same one.

    Read the article

  • What's the best solution for a database used in conjunction with Maps in Android?

    - by Andrew
    Could someone please point me in the right direction? My project involves a database where users enter their address and other info through my website. This database is then referenced in my Android application to show the locations of those addresses. I have yet to start and just came up with this idea. My question is, what would be the best way to create a database that is easily modified through my website (MySQL, PHP, etc.) and also easily referenced through Android and the Google Maps API? I need some ideas on the languages I will need to use to create this database and website, so I can go buy the necessary books and start reading up. Thanks so much.

    Read the article

  • Using "CASE" in Where clause to choose various column harm the performance

    - by zivgabo
    I have a query which needs to be dynamic in some of its columns, meaning I get a parameter and, according to its value, decide which column to filter on in my WHERE clause. I've implemented this using a "CASE" expression: (CASE @isArrivalTime WHEN 1 THEN ArrivalTime ELSE PickedupTime END) >= DATEADD(mi, -@TZOffsetInMins, @sTime) AND (CASE @isArrivalTime WHEN 1 THEN ArrivalTime ELSE PickedupTime END) < DATEADD(mi, -@TZOffsetInMins, @fTime) If @isArrivalTime = 1 the ArrivalTime column is used, otherwise the PickedupTime column. I have a clustered index on ArrivalTime and a nonclustered index on PickedupTime. I've noticed that when I use this query (with @isArrivalTime = 1), performance is a lot worse compared to filtering on ArrivalTime directly. Maybe the query optimizer can't use or choose the index properly this way? I compared the execution plans and noticed that when I use the CASE, 32% of the time goes to an index scan, but when I didn't use the CASE (just used ArrivalTime) only 3% went to that index scan. Does anyone know the reason for this?
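    A minimal sketch of a common alternative: branch on the parameter so each branch has a plain, index-friendly predicate. Parameter and column names follow the question; the table name is an assumption:

      -- Hedged sketch: branch on the parameter instead of wrapping columns in CASE.
      IF @isArrivalTime = 1
      BEGIN
          SELECT * FROM dbo.Rides
          WHERE ArrivalTime >= DATEADD(mi, -@TZOffsetInMins, @sTime)
            AND ArrivalTime <  DATEADD(mi, -@TZOffsetInMins, @fTime);
      END
      ELSE
      BEGIN
          SELECT * FROM dbo.Rides
          WHERE PickedupTime >= DATEADD(mi, -@TZOffsetInMins, @sTime)
            AND PickedupTime <  DATEADD(mi, -@TZOffsetInMins, @fTime);
      END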

    Read the article

  • How to use Unique Composite Key

    - by LifeH2O
    I have a table Item(ItemName*, ItemSize*, Price, Notes). I was using a composite key of (ItemName, ItemSize) to uniquely identify an item. After reading some answers on Stack Overflow suggesting the use of UNIQUE, I revised it as Item(ItemID*, ItemName, ItemSize, Price, Notes). But how do I apply the UNIQUE constraint to ItemName and ItemSize? Please correct me if there is something wrong with the question.
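    A minimal sketch of the revised table with a surrogate primary key and a composite UNIQUE constraint across the two natural-key columns; the data types and AUTO_INCREMENT keyword (MySQL syntax) are assumptions:

      -- Hedged sketch: surrogate key plus a composite UNIQUE constraint.
      CREATE TABLE Item (
          ItemID   INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
          ItemName VARCHAR(100) NOT NULL,
          ItemSize VARCHAR(20)  NOT NULL,
          Price    DECIMAL(10,2),
          Notes    VARCHAR(255),
          CONSTRAINT uq_item_name_size UNIQUE (ItemName, ItemSize)
      );

      -- For an existing table, the constraint can be added afterwards:
      -- ALTER TABLE Item ADD CONSTRAINT uq_item_name_size UNIQUE (ItemName, ItemSize);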

    Read the article

  • DB Design to store custom fields for a table

    - by Fazal
    Hi all, this question came up based on the responses I got for the question http://stackoverflow.com/questions/2785033/getting-wierd-issue-with-to-number-function-in-oracle. As everyone suggested, storing numeric values in VARCHAR2 columns is not a good practice (which I totally agree with), so I am wondering about a basic design choice our team has made and whether there are better ways to design it. Problem statement: We have many tables where we want to give a certain number of custom fields. The number of custom fields is known in advance, but which kind of attribute is mapped to each column is up to the user. Here is a hypothetical scenario: say you have a laptop which stores 50 attribute values for every laptop record, and each laptop's attributes are defined by the admin who creates the laptop. One user created a laptop product, let's say lap1, with attributes String, String, numeric, numeric, String. A second user created laptop lap2 with attributes String, numeric, String, String, numeric. Currently the data in our design gets persisted as follows:

      Laptop Table
      Id   Name   field1   field2   field3   field4   field5
      1    lap1   lappy    lappy    12       13       lappy
      2    lap2   lappy2   13       lappy2   lapp2    12

    This example simulates our requirement and our design. Now, if somebody is looking up records for lap2 and doing a comparison on field2, we need to apply TO_NUMBER: select * from laptop where name='lap2' and TO_NUMBER(field2) < 15. This is what causes us to use TO_NUMBER when querying, and TO_NUMBER fails in some cases when the query plan decides to apply to_number before the other filter. QUESTION: Is this a valid design? What are the alternative ways to solve this problem? One of our team mates suggested creating tables on the fly for such cases; is that a good idea? How do popular ORM tools handle custom fields or flex fields? I hope I was able to make sense of the question. Sorry for such a long text.
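    A minimal Oracle sketch of the setup and the failing query described above, using the sample rows; column types are assumptions:

      -- Hedged illustration of the described design and the TO_NUMBER problem.
      CREATE TABLE laptop (
          id     NUMBER PRIMARY KEY,
          name   VARCHAR2(20),
          field1 VARCHAR2(20), field2 VARCHAR2(20), field3 VARCHAR2(20),
          field4 VARCHAR2(20), field5 VARCHAR2(20)
      );
      INSERT INTO laptop VALUES (1, 'lap1', 'lappy',  'lappy', '12',     '13',    'lappy');
      INSERT INTO laptop VALUES (2, 'lap2', 'lappy2', '13',    'lappy2', 'lapp2', '12');

      -- Works only if the name filter is applied before TO_NUMBER; if the plan
      -- evaluates TO_NUMBER(field2) on the 'lap1' row first, it raises ORA-01722.
      SELECT * FROM laptop WHERE name = 'lap2' AND TO_NUMBER(field2) < 15;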

    Read the article

  • Error(2,7): PLS-00428: an INTO clause is expected in this SELECT statement

    - by omgzor
    I'm trying to create this trigger and getting the following compiler errors:

      create or replace TRIGGER RESTAR_PLAZAS
      AFTER INSERT ON PLAN_VUELO
      BEGIN
        SELECT F.NRO_VUELO, M.CAPACIDAD,
               M.CAPACIDAD - COALESCE((
                   SELECT count(*)
                   FROM PLAN_VUELO P
                   WHERE P.NRO_VUELO = F.NRO_VUELO
               ), 0) as PLAZAS_DISPONIBLES
        FROM VUELO F
        INNER JOIN MODELO M ON M.ID = F.CODIGO_AVION;
      END RESTAR_PLAZAS;

      Error(2,7): PL/SQL: SQL Statement ignored
      Error(8,5): PL/SQL: ORA-00933: SQL command not properly ended
      Error(8,27): PLS-00103: Encountered the symbol "end-of-file" when expecting one of the following: begin case declare end exception exit for goto if loop mod null pragma raise return select update while with <an identifier> <a double-quoted delimited-identifier> <a bind variable> << close current delete fetch lock insert open rollback savepoint set sql execute commit forall merge pipe
      Error(2,1): PLS-00428: an INTO clause is expected in this SELECT statement

    What's wrong with this trigger?
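    A minimal sketch of the rule the PLS-00428 error points at: inside a PL/SQL block a SELECT must put its result somewhere, typically via INTO or a cursor loop. The snippet below only illustrates that rule against the question's tables; it is not a corrected version of the intended trigger logic:

      -- Hedged sketch: SELECT inside PL/SQL needs an INTO target (or a cursor).
      create or replace TRIGGER RESTAR_PLAZAS
      AFTER INSERT ON PLAN_VUELO
      DECLARE
        v_capacidad MODELO.CAPACIDAD%TYPE;
      BEGIN
        SELECT M.CAPACIDAD
          INTO v_capacidad            -- the INTO clause the compiler is asking for
          FROM VUELO F
          INNER JOIN MODELO M ON M.ID = F.CODIGO_AVION
         WHERE ROWNUM = 1;            -- single-row fetch, for illustration only
      END RESTAR_PLAZAS;
      /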

    Read the article

  • My database has suddenly been deleted from the server. How do I recover it?

    - by user2728312
    I'm running an application on Windows Server that connects to a SQL Server database. Today, when I opened SQL Server Management Studio, I was surprised that the database was not in the list of databases! I don't know the reason. I searched the server's files, and also the Recycle Bin, but I can't find the database. I put my database in C:\db\myWeb.mdf and suddenly it's been removed! Can anyone tell me how to recover the database?

    Read the article

  • Fastest way to perform subset test operation on a large collection of sets with same domain

    - by niktech
    Assume we have trillions of sets stored somewhere. The domain for each of these sets is the same. It is also finite and discrete. So each set may be stored as a bit field (eg: 0000100111...) of a relatively short length (eg: 1024). That is, bit X in the bitfield indicates whether item X (of 1024 possible items) is included in the given set or not. Now, I want to devise a storage structure and an algorithm to efficiently answer the query: what sets in the data store have set Y as a subset. Set Y itself is not present in the data store and is specified at run time. Now the simplest way to solve this would be to AND the bitfield for set Y with bit fields of every set in the data store one by one, picking the ones whose AND result matches Y's bitfield. How can I speed this up? Is there a tree structure (index) or some smart algorithm that would allow me to perform this query without having to AND every stored set's bitfield? Are there databases that already support such operations on large collections of sets?

    Read the article

  • SQL Server (2008) Creating link tables with unique rows

    - by peteski22
    Hi guys, I'm having trouble getting in touch with SQL Server Management Studio 2008! I want to create a link table that will link an Event to many Audiences (EventAudience). An example of the data that could be contained:

      EventId | AudienceId
      4       | 1
      5       | 1
      4       | 2

    However, I don't want this:

      EventId | AudienceId
      4       | 1
      4       | 1

    I've been looking at relationships and constraints, but no joy so far! As a sneaky second part to the question, I would like to set up the Audience table such that if a row is deleted from Audience, it will clear down the EventAudience link table in a cascading manner. As always, ANY help/advice appreciated! Thanks, Pete
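    A minimal sketch of a link table whose composite primary key prevents duplicate pairs, with a cascading foreign key so deleting an Audience row clears the matching EventAudience rows; column types are assumptions, and the Event/Audience tables are assumed to already exist with those key columns:

      -- Hedged sketch: composite PK stops duplicate (EventId, AudienceId) pairs;
      -- ON DELETE CASCADE clears link rows when an Audience row is deleted.
      CREATE TABLE EventAudience (
          EventId    INT NOT NULL,
          AudienceId INT NOT NULL,
          CONSTRAINT PK_EventAudience PRIMARY KEY (EventId, AudienceId),
          CONSTRAINT FK_EventAudience_Event
              FOREIGN KEY (EventId) REFERENCES dbo.Event (EventId),
          CONSTRAINT FK_EventAudience_Audience
              FOREIGN KEY (AudienceId) REFERENCES dbo.Audience (AudienceId)
              ON DELETE CASCADE
      );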

    Read the article
