Search Results

Search found 34962 results on 1399 pages for 'drop database'.


  • How to efficiently store and update binary data in MongoDB?

    - by Rocketman
    I am storing a large binary array within a document. I wish to continually add bytes to this array and sometimes change the value of existing bytes. I was looking for some $append_bytes and $replace_bytes type of modifiers, but it appears that the best I can do is $push for arrays. It seems like this would be doable by performing seek-write type operations if I somehow had access to the underlying BSON on disk, but it does not appear to me that there is any way to do this in MongoDB (and probably for good reason).

    If I were instead to just query this binary array, edit or add to it, and then update the document by rewriting the entire field, how costly will this be? Each binary array will be on the order of 1-2 MB, and updates occur once every 5 minutes and across 1000s of documents. Worse yet, there is no easy way to spread these out in time, and they will usually be happening close to one another on the 5 minute intervals. Does anyone have a good feel for how disastrous this will be? It seems like it would be problematic.

    An alternative would be to store this binary data as separate files on disk, implement a thread pool to efficiently manipulate the files on disk, and reference the filenames from my MongoDB document. (I'm using Python and pymongo, so I was looking at pytables.) I'd prefer to avoid this though, if possible. Is there any other alternative that I am overlooking here? Thanks in advance.

    Read the article

  • Multi-variable indexes in postgres

    - by Jackson Davis
    I'm looking at an application where I will be doing quite a few SELECTs where I am trying to find column_a = x AND column_b = y. Is it correct to create the index with something like the following? CREATE INDEX index_name ON table (column_a, column_b)
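
    A minimal sketch of that composite index together with the query shape it serves (table and column names below are placeholders, not from the question). Column order matters: a query filtering only on column_a can still use this index, while one filtering only on column_b generally cannot.

        -- Hypothetical composite index and the matching query.
        CREATE INDEX idx_orders_a_b ON orders (column_a, column_b);

        SELECT *
        FROM orders
        WHERE column_a = 'x'
          AND column_b = 'y';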

    Read the article

  • Do non-clustered indexes slow down inserts?

    - by mikeinmadison
    I'm working in SQL Server 2005. I have an event log table that tracks user actions, and I want to make sure that inserts into the table are as fast as possible. Currently the table doesn't have any indexes. Does adding a single non-clustered index slow down inserts at all? Or is it only clustered indexes that slow down inserts? Or should I just add a clustered index and not worry about it?
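
    To make the trade-off concrete, here is a hedged sketch (all names invented, not from the question) of a common pattern for an append-only event log: a clustered index on an ever-increasing key, so inserts always land at the end of the table, plus one narrow non-clustered index that every insert must also maintain.

        -- Hypothetical schema: clustered on an identity key, one non-clustered index.
        CREATE TABLE dbo.EventLog (
            EventID     INT IDENTITY(1,1) NOT NULL,
            UserID      INT          NOT NULL,
            EventDate   DATETIME     NOT NULL,
            EventAction VARCHAR(100) NOT NULL,
            CONSTRAINT PK_EventLog PRIMARY KEY CLUSTERED (EventID)
        );

        -- Each insert also writes one extra row into this index; that extra write
        -- is the cost the question is asking about.
        CREATE NONCLUSTERED INDEX IX_EventLog_UserID
            ON dbo.EventLog (UserID, EventDate);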

    Read the article

  • What is a good DBMS for archiving?

    - by Thomas.Winsnes
    I've been stuck in an MS SQL/MySQL world now for a few years, and I've decided to spread my wings a little further. At the moment I'm researching which DBMS is good at the things needed when archiving data, e.g. lots of writes and few reads. I've seen the NoSQL crusade, but I have a very RDBMS mindset, so I'm a bit skeptical. Anyone have any suggestions? Or even any pointers to where there are some benchmarks etc. for this kind of thing? Thank you :) Thomas

    Read the article

  • How to implement a system to determine if a milestone has been reached

    - by Luc M
    I have a table named stats:

        player_id  team_id  match_date  goal  assist
        1          8        2010-01-01  1     1
        1          8        2010-01-01  2     0
        1          9        2010-01-01  0     5
        ...

    I would like to know when a player reaches a milestone (e.g. 100 goals, 100 assists, 500 goals...). I would also like to know when a team reaches a milestone, and which player or team reached 100 goals first, second, third...

    I thought of using triggers with tables to accumulate the totals. The player_accumulator (and team_accumulator) tables would be:

        player_id  total_goals  total_assists
        1          3            6

        team_id  total_goals  total_assists
        8        3            1
        9        0            5

    Each time a row is inserted into the stats table, a trigger would insert/update the player_accumulator and team_accumulator tables. This trigger could also verify whether a player or team has reached a milestone listed in a milestone table containing the numbers:

        milestone
        100
        500
        1000
        ...

    A player_milestone table would contain the milestones reached by each player:

        player_id  stat    milestone  date
        1          goal    100        2013-04-02
        1          assist  100        2012-11-19

    Is there a better way to implement a "milestone"? Is there an easier way, without triggers? I'm using PostgreSQL.
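
    For the "without triggers" part, here is a minimal sketch of a trigger-free alternative, assuming the stats table above and a hard-coded threshold of 100 goals purely for illustration: a running total per player, from which the date the threshold was first crossed can be read off.

        -- Hypothetical PostgreSQL query: date each player first reached 100 career goals.
        SELECT player_id, MIN(match_date) AS reached_on
        FROM (
            SELECT player_id,
                   match_date,
                   SUM(goal) OVER (PARTITION BY player_id
                                   ORDER BY match_date
                                   ROWS UNBOUNDED PRECEDING) AS running_goals
            FROM stats
        ) AS totals
        WHERE running_goals >= 100
        GROUP BY player_id
        ORDER BY reached_on;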

    Read the article

  • Cached database

    - by radi
    Hi, in my project I need two tables, each with about 2000 rows. I want my application to be fast, so my DB should load into memory (be cached) when the app starts, and before it closes the DB has to be saved to disk. I am using Java and I want to use SQL.

    Read the article

  • SQL SERVER - Detecting columns used in WHERE clauses but not indexed

    - by Vadi
    How do I detect a column that is used in a WHERE clause but not covered by an index? A little background: as long as the table has only a few records, things will be okay; once it has millions of records, an index should be created for any column that is used in WHERE clauses in stored procs, inline queries, etc. Since we have hundreds of stored procs and queries that often get changed by the devs, I wanted an automated way of identifying columns that are used in WHERE clauses but for which no index has been created. How can I do that in SQL Server 2008?
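
    One hedged starting point (not a full answer to scanning the stored-proc text itself) is SQL Server's missing-index DMVs, which record columns the optimizer wanted to seek on but could not. The DMV and column names below are the real ones available since SQL Server 2005; the ORDER BY expression is just an illustrative weighting.

        -- Columns the optimizer has flagged as missing from any index, per table.
        SELECT d.statement AS table_name,
               d.equality_columns,
               d.inequality_columns,
               d.included_columns,
               s.user_seeks,
               s.avg_user_impact
        FROM sys.dm_db_missing_index_details AS d
        JOIN sys.dm_db_missing_index_groups AS g
             ON g.index_handle = d.index_handle
        JOIN sys.dm_db_missing_index_group_stats AS s
             ON s.group_handle = g.index_group_handle
        ORDER BY s.user_seeks * s.avg_user_impact DESC;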

    Read the article

  • Web Shop Schema - Document Db

    - by Maxem
    I'd like to evaluate a document DB, probably MongoDB, in an ASP.NET MVC web shop. A little reasoning at the beginning: there are about 2 million products. The product model would be pretty bad for an RDBMS, as there'd be many different kinds of products with unique attributes. For example, there'd be books, which have ISBN, authors, title, pages etc., as well as DVDs with play time, directors, artists etc., and quite a few more types. In the end, I'd have about 9 different products with a combined column count (counting common columns like title only once) of about 70 to 100, whereas each individual product has 15 columns at most.

    The three commonly used ways in an RDBMS would be:

    1. EAV model, which would have pretty bad performance characteristics and would make it either impractical or perform even worse if I'd like to display the author of a book in a list of different products (think start page, recommended products etc.).
    2. Ignore the column count and put it all in the product table: although I deal with somewhat bigger databases (row-wise), I don't have any experience with tables of more than 20 columns as far as performance is concerned, but I guess 100 columns would have some implications.
    3. Create a table for each product type: I personally don't like this approach, as it complicates everything else.

    C# driver / classes: I'd like to use the NoRM driver, and so far I think I'll try to create a product DTO that contains all properties (grouped within detail classes like book details, except for those properties that should be displayed on list views etc.). In the app I'll use BookBehavior / DvdBehaviour, which are wrappers around a product DTO but only expose the relevant properties.

    My questions now: Are my performance concerns with the many-columns approach valid? Did I overlook something, and is there a much better way to do it in an RDBMS? Is MongoDB on Windows stable enough? Does my approach with different behaviour wrappers make sense?

    Read the article

  • YAHOO QUERY LANGUAGE BUG!

    - by Damiano
    Hello everybody! Today I've started with Yahoo Query Language. I would like to use it to retrieve stock details, so I'm talking about Yahoo Finance. I think there is a bug in this language. This is my query:

        select * from yahoo.finance.quoteslist where symbol='@^GSPC'

    I ALWAYS get 51 results! That's impossible; take a look at http://it.finance.yahoo.com/q/cp?s=^GSPC, where there are 500 results! I also tried some paging parameters:

        select * from yahoo.finance.quoteslist(50,30) where symbol='@^GSPC'   (to get from 50 to 80)
        select * from yahoo.finance.quoteslist(100) where symbol='@^GSPC'     (to get the first 100 results)
        select * from yahoo.finance.quoteslist where symbol='@^GSPC' limit 30 offset 50

    But the last stock is ALWAYS:

        <quote symbol="BBY">
          <Symbol>BBY</Symbol>
          <LastTradePriceOnly>41.03</LastTradePriceOnly>
          <LastTradeDate>5/7/2010</LastTradeDate>
          <LastTradeTime>4:00pm</LastTradeTime>
          <Change>-0.48</Change>
          <Open>41.35</Open>
          <DaysHigh>42.35</DaysHigh>
          <DaysLow>39.60</DaysLow>
          <Volume>14129531</Volume>
        </quote>

    Why do I have this kind of problem? Thank you so much for your support! (P.S. I've tested it on the Yahoo YQL console.)

    Read the article

  • How to model a mutually exclusive relationship in sql server

    - by littlechris
    Hi, I have to add functionality to an existing application, and I've run into a data situation that I'm not sure how to model. I am restricted to the creation of new tables and code. If I need to alter the existing structure, I think my client may reject the proposal... although if it's the only way to get it right, this is what I will have to do.

    I have an Item table that can be linked to any number of tables, and these tables may increase over time. The Item can only be linked to one other table, but the record in the other table may have many items linked to it. Examples of the tables/entities being linked to are "Person", "Vehicle", "Building", "Office". These are all separate tables. Examples of Items are "Pen", "Stapler", "Cushion", "Tyre", "A4 Paper", "Plastic Bag", "Poster", "Decoration". For instance, a "Poster" may be allocated to a "Person" or "Office" or "Building". In the future, if they add a "Conference Room" table, it may also be added to that.

    My initial thoughts are:

        Item { ID, Name }
        LinkedItem { ItemID, LinkedToTableName, LinkedToID }

    The LinkedToTableName field will then allow me to identify the correct table to link to in my code. I'm not overly happy with this solution, but I can't quite think of anything else. Please help! :) Thanks!
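
    As a point of comparison, here is a minimal sketch of the other common way to model this, the "exclusive arc": one nullable foreign key per target table plus a CHECK that exactly one of them is set. All table and column names are illustrative, not from the existing schema, and adding a new target table requires an ALTER rather than just a new row value.

        -- Hypothetical exclusive-arc link table (SQL Server style).
        CREATE TABLE ItemLink (
            ItemID     INT NOT NULL REFERENCES Item(ID),
            PersonID   INT NULL REFERENCES Person(ID),
            VehicleID  INT NULL REFERENCES Vehicle(ID),
            BuildingID INT NULL REFERENCES Building(ID),
            OfficeID   INT NULL REFERENCES Office(ID),
            -- Exactly one target column must be non-NULL.
            CONSTRAINT CK_ItemLink_OneTarget CHECK (
                (CASE WHEN PersonID   IS NOT NULL THEN 1 ELSE 0 END) +
                (CASE WHEN VehicleID  IS NOT NULL THEN 1 ELSE 0 END) +
                (CASE WHEN BuildingID IS NOT NULL THEN 1 ELSE 0 END) +
                (CASE WHEN OfficeID   IS NOT NULL THEN 1 ELSE 0 END) = 1
            )
        );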

    Read the article

  • Using memcache together with conventional cache

    - by Industrial
    Hi! Here's the deal. We would have taken the completely static HTML road to solve performance issues, but since the site will be partially dynamic, this won't work out for us. What we have thought of instead is using memcache + eAccelerator to speed up PHP and take care of caching for the most used data. Here are the two approaches we have thought of right now:

    1. Use memcache on all major queries and leave it alone to do what it does best.
    2. Use memcache for the most commonly retrieved data, and combine it with a standard hard-drive-stored cache for further usage.

    The major advantage of only using memcache is of course the performance, but as the number of users increases, the memory usage gets heavy. Combining the two sounds like a more natural approach to us, even though it is a theoretical compromise in performance. Memcached appears to have some replication features available as well, which may come in handy when it's time to increase the number of nodes.

    Which approach should we use? Is it stupid to compromise and combine the two methods? Should we instead focus on utilizing memcache, and simply upgrade the memory as the load increases with the number of users? Thanks a lot!

    Read the article

  • Non-Access or -base QBE tool?

    - by idbin.cath0br0
    Hello. I'm currently looking for a QBE tool that can execute queries on PostgreSQL or MySQL. The OS doesn't really matter. The reason is that we have to do QBE at school, but I don't want to use either Microsoft Access or OpenOffice.org Base (lack of features). Any help would be appreciated.

    Read the article

  • Is it possible to combine these 3 mySQL queries?

    - by Greenie
    I know the $downloadfile, and I want the $user_id. By trial and error I found that this does what I want, but it's 3 separate queries and 3 while loops. I have a feeling there is a better way. And yes, I only have a very little idea about what I'm doing :)

        $result = pod_query("SELECT ID FROM wp_posts WHERE guid LIKE '%/$downloadfile'");
        while ($row = mysql_fetch_assoc($result)) {
            $attachment = $row['ID'];
        }

        $result = pod_query("SELECT pod_id FROM wp_pods_rel WHERE tbl_row_id = '$attachment'");
        while ($row = mysql_fetch_assoc($result)) {
            $pod_id = $row['pod_id'];
        }

        $result = pod_query("SELECT tbl_row_id FROM wp_pods_rel WHERE tbl_row_id = '$pod_id' AND field_id = '28'");
        while ($row = mysql_fetch_assoc($result)) {
            $user_id = $row['pod_id'];
        }
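
    For comparison, here is a hedged sketch of the three lookups folded into one query with joins, using the table and column names from the code above. One assumption to flag: the third query selects tbl_row_id but the loop reads $row['pod_id'], so it is ambiguous which of the two is the real user id; the sketch returns both.

        -- Hypothetical single-query version; @downloadfile stands in for the PHP $downloadfile value.
        SELECT r2.tbl_row_id, r2.pod_id
        FROM wp_posts AS p
        JOIN wp_pods_rel AS r1 ON r1.tbl_row_id = p.ID
        JOIN wp_pods_rel AS r2 ON r2.tbl_row_id = r1.pod_id
                              AND r2.field_id = '28'
        WHERE p.guid LIKE CONCAT('%/', @downloadfile);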

    Read the article

  • Using a remote PHP service with Flex (Flash Builder) AIR Application?

    - by Chrisc
    Hello, I'm developing an Adobe AIR application using Flash Builder 4. This app needs to access a remote PHP service which is hosted on a remote web server. I am having trouble figuring out how to add a PHP data service that uses a remote service. I can add the PHP data service in Flash Builder as a service hosted on localhost, but that will not be feasible once the application is deployed. Does anyone know how to connect a Flash Builder (Flex) project to a remote PHP data service? Thanks, Chris

    Read the article

  • mysql subquery strangely slow

    - by aviv
    I have a query that selects from another sub-query select. While the two queries look almost the same, the second query (in this sample) runs much slower:

        SELECT user.id, user.first_name  -- user.*
        FROM user
        WHERE user.id IN (SELECT ref_id
                          FROM education
                          WHERE ref_type='user'
                            AND education.institute_id='58'
                            AND education.institute_type='1');

    This query takes 1.2s. EXPLAIN on this query gives:

        id  select_type         table      type            possible_keys                                            key         key_len  ref   rows    Extra
        1   PRIMARY             user       index                                                                    first_name  152            141192  Using where; Using index
        2   DEPENDENT SUBQUERY  education  index_subquery  ref_type,ref_id,institute_id,institute_type,ref_type_2  ref_id      4        func  1       Using where

    The second query:

        SELECT user.*  -- instead of user.id, user.first_name
        FROM user
        WHERE user.id IN (SELECT ref_id
                          FROM education
                          WHERE ref_type='user'
                            AND education.institute_id='58'
                            AND education.institute_type='1');

    takes 45s to run, with EXPLAIN:

        id  select_type         table      type            possible_keys                                            key     key_len  ref   rows    Extra
        1   PRIMARY             user       ALL                                                                                              141192  Using where
        2   DEPENDENT SUBQUERY  education  index_subquery  ref_type,ref_id,institute_id,institute_type,ref_type_2  ref_id  4        func  1       Using where

    Why is it slower if I query only indexed fields? Why do both queries scan the full length of the user table? Any ideas how to improve this? Thanks.
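
    As an illustrative rewrite (not from the question), the same filter expressed as a join on a derived table often lets older MySQL versions drive the plan from the education index instead of re-running a dependent subquery against every user row:

        -- Hypothetical join form of the same filter.
        SELECT u.*
        FROM user AS u
        JOIN (SELECT DISTINCT ref_id
              FROM education
              WHERE ref_type = 'user'
                AND institute_id = '58'
                AND institute_type = '1') AS e
          ON e.ref_id = u.id;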

    Read the article

  • Tool to compare the tables in two different databases

    - by user191124
    I am using Toad. I frequently need to compare tables in two different test environments. The tables present in them are the same, but the data differs. I just need to know what the differences are between the same tables in the two different databases. Are there any tools that can be installed on Windows and used to compare them? I'd much appreciate your help :)
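
    If plain SQL is acceptable as a fallback, here is a hedged sketch of the usual set-difference approach; it assumes an Oracle-style database (where Toad is most common) reachable over a database link, and both the table name and the link name are placeholders.

        -- Rows present in this environment but not the other.
        SELECT * FROM customers
        MINUS
        SELECT * FROM customers@test2_link;

        -- Rows present in the other environment but not this one.
        SELECT * FROM customers@test2_link
        MINUS
        SELECT * FROM customers;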

    Read the article

  • What's wrong with this SQL query?

    - by ThinkingInBits
    I have two tables: photographs and photograph_tags. photograph_tags contains a column called photograph_id (id in photographs). You can have many tags for one photograph. I have a photograph row related to three tags: boy, stream, and water. However, running the following query returns 0 rows:

        SELECT p.*
        FROM photographs p, photograph_tags c
        WHERE c.photograph_id = p.id
          AND (c.value IN ('dog', 'water', 'stream'))
        GROUP BY p.id
        HAVING COUNT( p.id ) = 3

    Is something wrong with this query?
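
    One thing a side-by-side sketch makes visible: the photo's tags are boy, stream, and water, but the IN list asks for dog, water, and stream, so at most two rows can match and HAVING COUNT(p.id) = 3 filters everything out. Below is an illustrative version with the tag list aligned to the example data and a DISTINCT guard against duplicate tag rows (treat it as a sketch, not a confirmed answer to the question).

        -- Same query with the IN list matching the tags that actually exist on the photo.
        SELECT p.*
        FROM photographs p
        JOIN photograph_tags c ON c.photograph_id = p.id
        WHERE c.value IN ('boy', 'water', 'stream')
        GROUP BY p.id
        HAVING COUNT(DISTINCT c.value) = 3;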

    Read the article

  • what is the question for the query?

    - by Kevinniceguy
    Sorry... I mean, what question (in plain English) does this query answer?

        SELECT SUM(price)
        FROM Room r, Hotel h
        WHERE r.hotelNo = h.hotelNo
          AND hotelName = 'Paris Hilton'
          AND roomNo NOT IN (SELECT roomNo
                             FROM Booking b, Hotel h
                             WHERE (dateFrom <= CURRENT_DATE AND dateTo >= CURRENT_DATE)
                               AND b.hotelNo = h.hotelNo
                               AND hotelName = 'Paris Hilton');

    Read the article

  • How to display SUM fields from a detail table in a master table

    - by max
    What is the best approach to display the summary of DETAIL fields in its master table? E.g. I have a master table called 'BILL' with all the bill-related data and a detail table ('BILL_DETAIL') with the bill-detail data, like NAME, PRICE, TAX, ... Now I want to list all BILLS, without the details, but with the sum of the PRICE and TAX stored in the detail table. Here is a simplified schema of those tables:

        TABLE BILL
        ----------
        - ID
        - NAME
        - ADDRESS
        - ...

        TABLE BILL_DETAIL
        -----------------
        - ID
        - BILLID
        - PRODUCT_NAME
        - PRICE
        - TAX
        - ...

    The retrieved table row should look like this:

        BILL.CUSTOMER_NAME, BILL.CUSTOMER_ADDRESS, sum(BILL_DETAIL.PRICE), sum(BILL_DETAIL.TAX), ...

    Any suggestions?
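
    A minimal sketch based on the simplified schema above (it uses the ID/NAME/ADDRESS columns as listed; the CUSTOMER_* names in the desired output are assumed to map onto them):

        -- One row per bill, with totals from the detail table; the LEFT JOIN keeps
        -- bills that have no detail rows, with totals of 0.
        SELECT b.ID,
               b.NAME,
               b.ADDRESS,
               COALESCE(SUM(d.PRICE), 0) AS total_price,
               COALESCE(SUM(d.TAX), 0)   AS total_tax
        FROM BILL b
        LEFT JOIN BILL_DETAIL d ON d.BILLID = b.ID
        GROUP BY b.ID, b.NAME, b.ADDRESS;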

    Read the article

  • How do I add a one-to-one relationship in MYSQL?

    - by alex
    These are my tables:

        +-------+-------------+------+-----+---------+-------+
        | Field | Type        | Null | Key | Default | Extra |
        +-------+-------------+------+-----+---------+-------+
        | pid   | varchar(99) | YES  |     | NULL    |       |
        +-------+-------------+------+-----+---------+-------+
        1 row in set (0.00 sec)

        +-------+---------------+------+-----+---------+-------+
        | Field | Type          | Null | Key | Default | Extra |
        +-------+---------------+------+-----+---------+-------+
        | pid   | varchar(2000) | YES  |     | NULL    |       |
        | recid | varchar(2000) | YES  |     | NULL    |       |
        +-------+---------------+------+-----+---------+-------+
        2 rows in set (0.00 sec)

    pid is just the id of the user. recid is a recommended song for that user. I hope to have a list of pids, and then the recommended songs for each person. Of course, in my 2nd table, (pid, recid) would be a unique key. How do I do a one-to-one query for this?
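
    Since one user can have many recommended songs, this is really a one-to-many relationship; a hedged sketch of the query you may be after is below, with the two tables named users and recommendations (the real table names are not given in the question).

        -- One row per user, with all recommended songs collected into one column.
        SELECT u.pid,
               GROUP_CONCAT(r.recid) AS recommended_songs
        FROM users u
        LEFT JOIN recommendations r ON r.pid = u.pid
        GROUP BY u.pid;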

    Read the article

  • Optimize a MySQL count each duplicate Query

    - by Onema
    I have the following query that gets the city name, city id, the region name, and a count of duplicate names for that record:

        SELECT Country_CA.City AS currentCity,
               Country_CA.CityID,
               globe_region.region_name,
               (SELECT count(Country_CA.City)
                FROM Country_CA
                WHERE City LIKE currentCity) AS counter
        FROM Country_CA
        LEFT JOIN globe_region
               ON globe_region.region_id = Country_CA.RegionID
              AND globe_region.country_code = Country_CA.CountryCode
        ORDER BY City

    This example is for Canada, and the cities will be displayed in a dropdown list. There are a few towns in Canada, and in other countries, that have the same names. Therefore, if there is more than one town with the same name, the region name will be appended to the town name. Region names are found in the globe_region table. Country_CA and globe_region look similar to this (I have changed a few things for visualization purposes):

        CREATE TABLE IF NOT EXISTS `Country_CA` (
          `City` varchar(75) NOT NULL DEFAULT '',
          `RegionID` varchar(10) NOT NULL DEFAULT '',
          `CountryCode` varchar(10) NOT NULL DEFAULT '',
          `CityID` int(11) NOT NULL DEFAULT '0',
          PRIMARY KEY (`City`,`RegionID`),
          KEY `CityID` (`CityID`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8;

        CREATE TABLE IF NOT EXISTS `globe_region` (
          `country_code` char(2) COLLATE utf8_unicode_ci NOT NULL,
          `region_code` char(2) COLLATE utf8_unicode_ci NOT NULL,
          `region_name` varchar(50) COLLATE utf8_unicode_ci NOT NULL,
          PRIMARY KEY (`country_code`,`region_code`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

    The query at the top does exactly what I want it to do, but it takes way too long to generate a list of 5000 records. I would like to know if there is a way to optimize the sub-query in order to obtain the same results faster. The results should look like this:

        City        CityID   region_name       counter
        sheraton    2349269  British Columbia  1
        sherbrooke  2349270  Quebec            2
        sherbrooke  2349271  Nova Scotia       2
        shere       2349273  British Columbia  1
        sherridon   2349274  Manitoba          1
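
    One illustrative rewrite (not from the question) is to compute the per-city counts once in a derived table and join it back, rather than running a correlated LIKE subquery for every row. Table and column names are taken from the query above, which joins on region_id even though the sample DDL names that column region_code, so adjust to whichever is the real name.

        -- Hypothetical optimization: duplicate counts computed once, then joined.
        SELECT c.City AS currentCity,
               c.CityID,
               g.region_name,
               dup.counter
        FROM Country_CA AS c
        JOIN (SELECT City, COUNT(*) AS counter
              FROM Country_CA
              GROUP BY City) AS dup ON dup.City = c.City
        LEFT JOIN globe_region AS g
               ON g.region_id = c.RegionID
              AND g.country_code = c.CountryCode
        ORDER BY c.City;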

    Read the article
