Search Results

Search found 37714 results on 1509 pages for 'database documentation'.


  • merging two tables, while applying aggregates on the duplicates (max, min and sum)

    - by cloudraven
    I have a table (let's call it log) with a few million records. Among the fields I have Id, Count, FirstHit and LastHit:

        Id - the record id
        Count - number of times this Id has been reported
        FirstHit - earliest timestamp with which this Id was reported
        LastHit - latest timestamp with which this Id was reported

    This table has only one record for any given Id. Every day I get another table (let's call it feed) with around half a million records and these fields, among many others:

        Id
        Timestamp - entry date and time

    This table can have many records for the same Id. What I want to do is update log in the following way:

        Count - the log count value, plus the count() of records for that Id found in feed
        FirstHit - the earlier of the current value in log and the minimum value in feed for that Id
        LastHit - the later of the current value in log and the maximum value in feed for that Id

    It should be noted that many of the Ids in feed are already in log. The simple thing that worked is to create a temporary table and insert into it the union of both, as in:

        Select Id, Min(Timestamp) As FirstHit, MAX(Timestamp) as LastHit, Count(*) as Count
        FROM feed GROUP BY Id
        UNION ALL
        Select Id, FirstHit, LastHit, Count FROM log;

    From that temporary table I do a select that aggregates Min(FirstHit), Max(LastHit) and Sum(Count):

        Select Id, Min(FirstHit), Max(LastHit), Sum(Count) FROM @temp GROUP BY Id;

    and that gives me the end result. I could then delete everything from log and replace it with everything from temp, or craft an update for the common records and insert the new ones. However, I think both are highly inefficient. Is there a more efficient way of doing this, perhaps an update in place in the log table?
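
    A minimal sketch of an in-place upsert, assuming SQL Server 2008+ (the @temp table-variable syntax suggests T-SQL; on MySQL, INSERT ... ON DUPLICATE KEY UPDATE would be the analogue). MERGE folds the daily feed into log in one statement, with no intermediate table:

        -- Aggregate the feed once, then update matches and insert new Ids.
        MERGE log AS l
        USING (SELECT Id,
                      MIN(Timestamp) AS FirstHit,
                      MAX(Timestamp) AS LastHit,
                      COUNT(*)       AS Cnt
               FROM feed
               GROUP BY Id) AS f
            ON l.Id = f.Id
        WHEN MATCHED THEN UPDATE SET
            -- keep the earlier FirstHit and the later LastHit
            l.FirstHit = CASE WHEN f.FirstHit < l.FirstHit THEN f.FirstHit ELSE l.FirstHit END,
            l.LastHit  = CASE WHEN f.LastHit  > l.LastHit  THEN f.LastHit  ELSE l.LastHit  END,
            l.[Count]  = l.[Count] + f.Cnt
        WHEN NOT MATCHED THEN
            INSERT (Id, FirstHit, LastHit, [Count])
            VALUES (f.Id, f.FirstHit, f.LastHit, f.Cnt);

    Only the common Ids are touched in place, so indexes on log stay intact and untouched rows are never rewritten.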

    Read the article

  • Any way to make this PostgreSQL count query any faster?

    - by Ben Dauphinee
    I'm running a case-insensitive search on a table with 7.2 million rows, and I was wondering if there was any way to make this query any faster? Currently, it takes approx 11.6 seconds to execute, with just one search parameter, and I'm worried that as soon as I add more than one, this query will become massively slow.

        SELECT count(*) FROM "exif_parse" WHERE (description ~* 'canon')
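
    A hedged sketch, assuming PostgreSQL 9.3 or later: a sequential scan is unavoidable for ~* without a suitable index, but the pg_trgm extension lets a GIN trigram index serve case-insensitive regex matches (on 9.1-9.2 the same index helps ILIKE '%canon%' but not ~*):

        CREATE EXTENSION IF NOT EXISTS pg_trgm;

        CREATE INDEX exif_parse_description_trgm
            ON exif_parse USING gin (description gin_trgm_ops);

        -- The planner can now satisfy the pattern match from the index:
        SELECT count(*) FROM exif_parse WHERE description ~* 'canon';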

    Read the article

  • Set to null a parent record so that children are removed: howto?

    - by EugeneP
    How do I delete a child row (on delete cascade?) when setting a null value on the parent? Here's the db design:

        table A [id, b_id_1, b_id_2]
        table B [id, other fields...]

    b_id_1 and b_id_2 can be NULL. If either of them is null, it means there are NO B records for the corresponding FK (there are 2 of them), so (b_id_1, b_id_2) can be (null, null), (100, null), (null, 100_or_any_other_number), etc. How, in one SQL query, can I both set b_id_1 or b_id_2 to null and delete the corresponding rows from B? What FK design should be applied to the 2 tables? What foreign keys should be added? A - B (FK_1: A.b_id_1 references B.id, FK_2: A.b_id_2 references B.id), and also B - A (FK_3: B.id references A.b_id_1, FK_4: B.id references A.b_id_2)? But again, setting A's b_id_1 or A's b_id_2 to null - will it remove any of B's records? I don't think so. So how do I do that?
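
    A minimal sketch, assuming standard SQL foreign keys (constraint names are hypothetical): cascading actions only fire from the referenced side of a FK, so setting A.b_id_1 to NULL will never delete anything by itself. Inverting the operation gets both effects in one statement: declare the FKs with ON DELETE SET NULL, then delete the B row, which nulls out whichever A column referenced it:

        ALTER TABLE A ADD CONSTRAINT fk_a_b1
            FOREIGN KEY (b_id_1) REFERENCES B(id) ON DELETE SET NULL;
        ALTER TABLE A ADD CONSTRAINT fk_a_b2
            FOREIGN KEY (b_id_2) REFERENCES B(id) ON DELETE SET NULL;

        -- One statement: removes the B row and nulls the referencing A columns.
        DELETE FROM B WHERE id = 100;

    Only the two A-to-B constraints are needed; the circular B-to-A constraints add nothing and would make inserts painful.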

    Read the article

  • How to design model for multi-tiered data?

    - by Chris
    Say I have three types of object: Area, Subarea and Topic. I want to be able to display an Area, which is just a list of Subareas and the Topics contained in those Subareas. I never want to be able to display Subareas separately - they're just for breaking up the Topics. Topics can, however, appear in multiple Areas (but probably under the same Subarea). How would I design a model for this? I could use ForeignKey from Topic to Subarea and from Subarea to Area, but it seems unnecessarily complex given that I never want to interact with subareas themselves. Also, none of these objects are ever altered or added to by the user. They're just for me to represent information. Maybe there is a better way to represent it all?
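
    A relational sketch of one reading of this, in SQL rather than Django models (table and column names are hypothetical): since a Topic can sit in several Areas but its Subarea is only a grouping label within an Area, the link table can carry all three keys, and Subarea never needs to be queried on its own:

        CREATE TABLE area    (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
        CREATE TABLE subarea (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
        CREATE TABLE topic   (id INTEGER PRIMARY KEY, title TEXT NOT NULL);

        -- A topic appears in an area, filed under a subarea heading.
        CREATE TABLE area_topic (
            area_id    INTEGER NOT NULL REFERENCES area(id),
            subarea_id INTEGER NOT NULL REFERENCES subarea(id),
            topic_id   INTEGER NOT NULL REFERENCES topic(id),
            PRIMARY KEY (area_id, topic_id)
        );

        -- Displaying an Area: subareas with their topics, in one query.
        SELECT s.name AS subarea, t.title AS topic
        FROM area_topic at
        JOIN subarea s ON s.id = at.subarea_id
        JOIN topic   t ON t.id = at.topic_id
        WHERE at.area_id = 1
        ORDER BY s.name, t.title;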

    Read the article

  • Evaluate column value into rows

    - by Hugo Palma
    I have a column whose value is a JSON array. For example:

        [{"att1": "1", "att2": "2"}, {"att1": "3", "att2": "4"}, {"att1": "5", "att2": "6"}]

    What I would like is to provide a view where each element of the JSON array is transformed into a row, and the attributes of each JSON object into columns. Keep in mind that the JSON array doesn't have a fixed size. Any ideas on how I can achieve this?
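
    A minimal sketch, assuming PostgreSQL 9.4+ with the data in a jsonb column (the table and column names here are hypothetical): jsonb_array_elements unnests each array element into its own row, and ->> pulls the attributes out as text columns:

        CREATE VIEW t_expanded AS
        SELECT t.id,
               elem->>'att1' AS att1,
               elem->>'att2' AS att2
        FROM t
        CROSS JOIN LATERAL jsonb_array_elements(t.payload) AS elem;

    The array length doesn't matter: three elements yield three rows, thirty yield thirty.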

    Read the article

  • Three-way full outer join in SQLite

    - by Vince
    I have three tables with a common key field, and I need to join them on this key. Given SQLite doesn't have full outer or right joins, I've used the full outer join without right join technique on Wikipedia with much success. But I'm curious, how would one use this technique to join three tables by a common key? What are the efficiency impacts of this (the current query takes about ten minutes)? Thanks!
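
    A hedged sketch of the usual SQLite workaround, with hypothetical table and column names: build the complete key set once with UNION, then LEFT JOIN each table back to it. Each LEFT JOIN is a simple lookup, so with an index on every key column this tends to scale far better than chaining the two-table full-outer-join emulation:

        SELECT k.key, a.val1, b.val2, c.val3
        FROM (SELECT key FROM a
              UNION
              SELECT key FROM b
              UNION
              SELECT key FROM c) AS k
        LEFT JOIN a ON a.key = k.key
        LEFT JOIN b ON b.key = k.key
        LEFT JOIN c ON c.key = k.key;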

    Read the article

  • Is there better documentation than the original OpenAL docs? [closed]

    - by mystify
    OpenAL is such a huge thing, and the documentation doesn't say which values are acceptable for its properties. That's really bad. I'm using the document "OpenAL_Programmers_Guide.pdf". Whenever I look up a property, I'm left in the dark about what value might be OK. For example, take AL_PITCH. What values does it accept? Maybe someone has written a better guide? Or is there something like a wiki with more details?

    Read the article

  • DB management for Heroku apps

    - by zetarun
    Hi all, I'm fairly new to both Rails and Heroku, but I'm seriously thinking of using it as a platform to deploy my Ruby/Rails applications. I want to use all the power of Heroku, so I prefer the "embedded" PostgreSQL managed by Heroku over the Amazon RDS for MySQL add-on, but I'm not so confident without the ability to access my data in a SQL client... I know that in a well-made app you have no need to access the DB directly, but there are some situations (adding rows to a config table, seeing data not mapped in a view, updating some columns for debugging, performance monitoring, running queries for reporting, etc.) when this can be useful... How do you solve this problem? What's your experience in a real-life app powered by Heroku? Thanks!

    Read the article

  • How to create multiple inputs in yii + Q&A admin page

    - by user1764357
    In Yii, I am developing a page for creating a question with multiple options, using forms. So I have two tables:

        Question: QuestionId, Question, OptionId
        Option: OptionId, Option

    To create multiple options, the option field should come with an Add button, so that after clicking it a new textbox control is displayed to enter the next option. All these options are displayed in a gridview. Can you please help me? I am very new to Yii.

    Read the article

  • what does "out of range" mean?

    - by user329820
    Hi, I have checked these statements with MySQL and no error happens; the output is 0 rows. But my friend checked them and got an error for the SELECT because something is "out of range"!! Is he correct? Thanks.

        CREATE TABLE T1(A INTEGER NULL);
        SELECT * FROM T1;
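
    For what it's worth, a sketch of where MySQL actually raises that message, assuming strict SQL mode: "out of range" refers to a value that doesn't fit a column's type, which an INSERT or UPDATE can trigger but a plain SELECT on an empty table cannot:

        CREATE TABLE T1 (A INTEGER NULL);
        SELECT * FROM T1;                    -- valid: returns an empty set (0 rows)

        INSERT INTO T1 VALUES (2147483648);  -- one past INTEGER's maximum
        -- ERROR 1264 (22003): Out of range value for column 'A' at row 1
        -- (without strict mode, the value is clamped to 2147483647 with a warning)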

    Read the article

  • PHP - How to retrieve session in php

    - by Klaus Jasper
    I created a table that contains id - names - jobs, and a page that shows the names only; beside each name there is a Job button, and a session that contains the id. This is my code:

        $query = mysql_query("SELECT * FROM table");
        while($fetch = mysql_fetch_array($query)){   // pass the result resource, not a quoted string
            $name = $fetch['names'];
            $id = $fetch['id'];
            echo '</br>';
            echo $name;
            $_SESSION['name'] = $id;
            echo "<button>Job</button>";
        }

    I want that when the user clicks the Job button, he is redirected to a page that contains the job for that session. How can I do it?

    Read the article

  • help with delete where not in query

    - by kralco626
    I have a lookup table (##lookup). I know it's bad design because I'm duplicating data, but it speeds up my queries tremendously. I have a query that populates this table:

        insert into ##lookup select distinct col1, col2, ... from table1 ... join ... etc ...

    I would like to simulate this behavior:

        delete from ##lookup
        insert into ##lookup select distinct col1, col2, ... from table1 ... join ... etc ...

    This would clearly update the table correctly, but it is a lot of inserting and deleting. It messes with my indexes and locks up the table for selecting from. The table could also be updated by something like:

        delete from ##lookup where the row is not in (select distinct col1, col2, ... from table1 ... join ... etc ...)
        insert into ##lookup (select distinct col1, col2, ... from table1 ... join ... etc ...) except the rows already in the table

    The second way may take longer, but I can say "with no lock" and I will be able to select from the table. Any ideas on how to write the query the second way?
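
    A hedged T-SQL sketch of the differential version (col1 and col2 are placeholders for whatever the real source query returns):

        -- 1) Delete lookup rows the source query no longer produces.
        DELETE L
        FROM ##lookup AS L
        WHERE NOT EXISTS (
            SELECT 1
            FROM (SELECT DISTINCT col1, col2 FROM table1 /* ...joins... */) AS s
            WHERE s.col1 = L.col1 AND s.col2 = L.col2
        );

        -- 2) Insert source rows not yet in the lookup table.
        INSERT INTO ##lookup (col1, col2)
        SELECT DISTINCT col1, col2 FROM table1 /* ...joins... */
        EXCEPT
        SELECT col1, col2 FROM ##lookup;

    NOT EXISTS with explicit column comparisons sidesteps the NULL pitfalls of NOT IN, and EXCEPT (available since SQL Server 2005) already removes duplicates.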

    Read the article

  • manipulating 15+ million records in mysql with php?

    - by Nithish
    Hey, I've got a user table containing 15+ million records, and in the registration function I wish to check whether the username already exists. I indexed the username column, and when I run the query

        select count(uid) from users where username='webdev'

    it keeps loading a blank screen and finally hangs. I'm doing this on my localhost with PHP 5 and MySQL 5. So suggest me some technique to handle this situation. Is MongoDB a good alternative for handling this process on our local machine? Thanks, Nithish.
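
    A minimal sketch, assuming MySQL 5.x: for an existence check, counting every matching row is wasted work; EXISTS stops at the first hit, and a unique index makes that first hit a single B-tree probe (it also enforces that usernames can't collide):

        CREATE UNIQUE INDEX idx_users_username ON users (username);

        SELECT EXISTS(
            SELECT 1 FROM users WHERE username = 'webdev'
        ) AS name_taken;   -- 1 if taken, 0 if free

    If the indexed query still hangs, it's worth confirming with EXPLAIN that the index is actually being used.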

    Read the article

  • extract transform load

    - by mitch
    Wikipedia defines a 'typical' ETL cycle as:

        1. Cycle initiation
        2. Build reference data
        3. Extract (from sources)
        4. Validate
        5. Transform (clean, apply business rules, check for data integrity, create aggregates or disaggregates)
        6. Stage (load into staging tables, if used)
        7. Audit reports (for example, on compliance with business rules; also, in case of failure, helps to diagnose/repair)
        8. Publish (to target tables)
        9. Archive
        10. Clean up

    ...What is meant by 'Build reference data'?

    Read the article

  • About curse of dimensionality

    - by Dan
    My question is about a topic I've been reading about a bit. Basically, my understanding is that in higher dimensions all points end up being very close to each other. The doubt I have is whether this means that calculating distances the usual way (Euclidean, for instance) is valid or not. If it were still valid, it would mean that when comparing vectors in high dimensions, the two most similar wouldn't differ much from a third one, even when that third one could be completely unrelated. Is this correct? Then, in this case, how would you be able to tell whether you have a match or not?
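
    One way to make the intuition precise, as a hedged aside: the concentration result of Beyer et al. (1999) says that if the relative variance of distances vanishes as the dimension d grows, then the gap between the nearest and farthest neighbor becomes negligible:

        \lim_{d \to \infty} \operatorname{Var}\!\left(\frac{\lVert X_d \rVert}{\mathbb{E}\,\lVert X_d \rVert}\right) = 0
        \quad\Longrightarrow\quad
        \frac{D_{\max} - D_{\min}}{D_{\min}} \xrightarrow{\ p\ } 0

    Euclidean distance remains a valid metric; it just loses contrast. The premise fails for data with low intrinsic dimensionality or strong correlations, which is why nearest-neighbor matching can still work in practice.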

    Read the article

  • using a dummy row with NOT NULL to solve DEFAULT NULL

    - by Tony38
    I know having DEFAULT NULLs is not a good practice, but I have many optional lookup values which are FKs in the system, so to solve this issue here is what I am doing: I use NOT NULL for every FK / lookup value field, and I make the first row in every table, PK id = 1, a dummy row with just "none" in all the columns. This way I can use NOT NULL in my schema and, if needed, reference the "none" row for values which should be null. Is this a good design, or are there other workarounds?
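
    For contrast, a minimal sketch of the conventional alternative (table and column names are hypothetical): a genuinely nullable FK already expresses "no lookup value", and a LEFT JOIN keeps the rows whose lookup is absent, so no sentinel row has to be maintained in every table:

        CREATE TABLE color (id INT PRIMARY KEY, name VARCHAR(50) NOT NULL);
        CREATE TABLE item (
            id       INT PRIMARY KEY,
            color_id INT NULL REFERENCES color(id)   -- NULL simply means "none"
        );

        SELECT i.id, COALESCE(c.name, 'none') AS color
        FROM item i
        LEFT JOIN color c ON c.id = i.color_id;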

    Read the article

  • In a SQL XDL File, how do I read the waitresource attribute on process nodes which are deadlocking?

    - by skimania
    On SQL Server 2005, I'm getting a deadlock when updating two different keys in the same table. Note from below that these two waitresources have the same beginning part, but different ending parts:

        waitresource="KEY: 6:72057594090487808 (d900ed5a6cc6)"
        waitresource="KEY: 6:72057594090487808 (d900fb5261bb)"

    These two keys are locking, and I need to figure out why. The question: if the values in parentheses are different, why are the first halves of the keys the same?

        <deadlock-list>
         <deadlock victim="processffffffff8f5863e8">
          <process-list>
           <process id="processaf02f8" taskpriority="0" logused="0" waitresource="KEY: 6:72057594090487808 (d900fb5261bb)" waittime="2281" ownerId="1370264705" transactionname="user_transaction" lasttranstarted="2010-06-17T00:35:25.483" XDES="0x69453a70" lockMode="U" schedulerid="3" kpid="7624" status="suspended" spid="339" sbid="0" ecid="0" priority="0" transcount="2" lastbatchstarted="2010-06-17T00:35:25.483" lastbatchcompleted="2010-06-17T00:35:25.483" clientapp=".Net SqlClient Data Provider" hostname="RISKBBG_VM" hostpid="5848" loginname="RiskOpt" isolationlevel="read committed (2)" xactid="1370264705" currentdb="6" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128056">
            <executionStack>
             <frame procname="MKP_RISKDB.dbo.MarketDataCurrentRtUpload" line="14" stmtstart="840" stmtend="1220" sqlhandle="0x03000600005f9d24c8878f00849d00000100000000000000">
              UPDATE c WITH (ROWLOCK)
              SET LastUpdate = t.LastUpdate, Value = t.Value, Source = t.Source
              FROM MarketDataCurrent c
              INNER JOIN #TEMPTABLE2 t ON c.MDID = t.mdid;
              -- Insert new MDID
             </frame>
             <frame procname="adhoc" line="1" sqlhandle="0x010006004a58132228bf8d73000000000000000000000000">
              MarketDataCurrentBlbgRtUpload
             </frame>
            </executionStack>
            <inputbuf>
             MarketDataCurrentBlbgRtUpload
            </inputbuf>
           </process>
           <process id="processffffffff8f5863e8" taskpriority="0" logused="0" waitresource="KEY: 6:72057594090487808 (d900ed5a6cc6)" waittime="2281" ownerId="1370264646" transactionname="user_transaction" lasttranstarted="2010-06-17T00:35:25.450" XDES="0x1cb72be8" lockMode="U" schedulerid="5" kpid="1880" status="suspended" spid="287" sbid="0" ecid="0" priority="0" transcount="2" lastbatchstarted="2010-06-17T00:35:25.450" lastbatchcompleted="2010-06-17T00:35:25.450" clientapp=".Net SqlClient Data Provider" hostname="RISKAPPS_VM" hostpid="1424" loginname="RiskOpt" isolationlevel="read committed (2)" xactid="1370264646" currentdb="6" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128056">
            <executionStack>
             <frame procname="MKP_RISKDB.dbo.MarketDataCurrent_BulkUpload" line="28" stmtstart="1062" stmtend="1720" sqlhandle="0x03000600a28e5e4ef4fd8e00849d00000100000000000000">
              UPDATE c WITH (ROWLOCK)
              SET LastUpdate = getdate(), Value = t.Value, Source = @source
              FROM MarketDataCurrent c
              INNER JOIN #MDTUP t ON c.MDID = t.mdid
              WHERE c.lastUpdate &lt; @updateTime
                and c.mdid not in (select mdid from MarketData
                                   where BloombergTicker is not null
                                   and PriceSource like &apos;Live.%&apos;)
                and c.value &lt;&gt; t.value
             </frame>
             <frame procname="adhoc" line="1" stmtstart="88" sqlhandle="0x01000600c1653d0598706ca7000000000000000000000000">
              exec MarketDataCurrent_BulkUpload @clearBefore, @source
             </frame>
             <frame procname="unknown" line="1" sqlhandle="0x000000000000000000000000000000000000000000000000">
              unknown
             </frame>
            </executionStack>
            <inputbuf>
             (@clearBefore datetime,@source nvarchar(10))exec MarketDataCurrent_BulkUpload @clearBefore, @source
            </inputbuf>
           </process>
          </process-list>
          <resource-list>
           <keylock hobtid="72057594090487808" dbid="6" objectname="MKP_RISKDB.dbo.MarketDataCurrent" indexname="PK_MarketDataCurrent" id="lock64ac7940" mode="U" associatedObjectId="72057594090487808">
            <owner-list>
             <owner id="processffffffff8f5863e8" mode="U"/>
            </owner-list>
            <waiter-list>
             <waiter id="processaf02f8" mode="U" requestType="wait"/>
            </waiter-list>
           </keylock>
           <keylock hobtid="72057594090487808" dbid="6" objectname="MKP_RISKDB.dbo.MarketDataCurrent" indexname="PK_MarketDataCurrent" id="lockffffffffb8d2dd40" mode="U" associatedObjectId="72057594090487808">
            <owner-list>
             <owner id="processaf02f8" mode="U"/>
            </owner-list>
            <waiter-list>
             <waiter id="processffffffff8f5863e8" mode="U" requestType="wait"/>
            </waiter-list>
           </keylock>
          </resource-list>
         </deadlock>
        </deadlock-list>
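
    A hedged note on the format: in waitresource="KEY: 6:72057594090487808 (d900fb5261bb)", the 6 is the database id, 72057594090487808 is the hobt_id of the index partition (here PK_MarketDataCurrent, per the keylock elements), and the hash in parentheses identifies the individual row within that index. The first halves match because both rows live in the same index of the same table; only the row hashes differ. The undocumented %%lockres%% virtual column is commonly used to map a hash back to its row (a sketch, not a supported API, so verify before relying on it):

        SELECT %%lockres%% AS lock_hash, mdc.*
        FROM MarketDataCurrent mdc WITH (NOLOCK, INDEX(PK_MarketDataCurrent))
        WHERE %%lockres%% IN ('(d900ed5a6cc6)', '(d900fb5261bb)');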

    Read the article
