Search Results

Search found 33257 results on 1331 pages for 'django database'.

Page 61/1331 | < Previous Page | 57 58 59 60 61 62 63 64 65 66 67 68  | Next Page >

  • Database solutions for storing/searching EXIF data

    - by webdestroya
    I have thousands of photos on my site (each with a numeric PhotoID), and each photo has EXIF data; photos can carry different EXIF tags, and some have more tags than others. I want to store this data efficiently and be able to search it. For example, I want to run queries such as "select all photos that have a GPS location" or "select all photos taken with a specific camera". I can't use MySQL (I tried it and it didn't work for me). I thought about Cassandra, but I don't think it lets me query on fields. I looked at SimpleDB, but I would rather not pay for the system, and I want to be able to run more advanced queries on the data. Also, I use PHP and Linux, so it would be awesome if the solution interfaced nicely with PHP. Any ideas?
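
    One pattern that often comes up for this kind of storage is an attribute-value table keyed by photo, so arbitrary EXIF tags can be indexed and searched. A minimal sketch, assuming a relational store and purely illustrative table and column names:

        -- One row per (photo, EXIF tag); names are hypothetical.
        CREATE TABLE photo_exif (
            photo_id  INT          NOT NULL,
            tag_name  VARCHAR(64)  NOT NULL,
            tag_value VARCHAR(255) NULL,
            PRIMARY KEY (photo_id, tag_name),
            KEY idx_tag (tag_name, tag_value)
        );

        -- "All photos that have a GPS location"
        SELECT DISTINCT photo_id FROM photo_exif WHERE tag_name = 'GPSLatitude';

        -- "All photos taken with a specific camera"
        SELECT photo_id FROM photo_exif WHERE tag_name = 'Model' AND tag_value = 'Canon EOS 5D';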

    Read the article

  • Best way to auto-restore a db on an hourly basis

    - by aron
    Hello, I have a demo site where anyone can log in and test a management interface. Every hour I would like to flush all the data in the SQL Server 2008 database and restore it from the original. Red Gate has some awesome SQL tools for this, however they are beyond my budget right now. Could I simply make a backup copy of the database's data file, then have a C# console app that deletes it and copies over the original? Then I could have a Windows scheduled task run the .exe every hour. It's simple and free... would this work? I'm using SQL Server 2008 R2 Web Edition. I understand that Red Gate's tooling is technically better because it can analyze the db and only update the records that were altered, whereas the approach above is more of a sledgehammer.
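
    Rather than copying the raw data file, a native BACKUP/RESTORE cycle is the usual way to reset a database like this, and it can be driven from a scheduled task via sqlcmd. A hedged T-SQL sketch; the database and file names are illustrative:

        -- Taken once, while the demo data is in its pristine state.
        BACKUP DATABASE DemoDb TO DISK = N'C:\Backups\DemoDb_original.bak' WITH INIT;

        -- Run hourly by the scheduled task (from the master database) to reset the demo.
        USE master;
        ALTER DATABASE DemoDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;  -- drop open connections
        RESTORE DATABASE DemoDb FROM DISK = N'C:\Backups\DemoDb_original.bak' WITH REPLACE;
        ALTER DATABASE DemoDb SET MULTI_USER;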

    Read the article

  • How do I normalise this database design?

    - by Ian Roke
    I am creating a rowing reporting and statistics system for a client. At the moment I have a structure similar to the following:

        | ID | Team     | Coaches    | Rowers     | Event     | Position | Time     |
        |----|----------|------------|------------|-----------|----------|----------|
        | 18 | TeamName | CoachName1 | RowerName1 | EventName | 1        | 01:32:34 |
        |    |          | CoachName2 | RowerName2 |           |          |          |
        |    |          |            | RowerName3 |           |          |          |
        |    |          |            | RowerName4 |           |          |          |

    This is an example row of data. I would like to expand this out into a Rowers table, a Coaches table and so on, but I don't know how best to link those back to the Entries table (which is what this is). Has anybody got any words of wisdom they could share with me? Update: a Team can have any number of Coaches and Rowers, a Rower can be in many Teams (Team A, B, C etc.) and a Team can have many Coaches.
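
    A hedged sketch of one way this is commonly normalised, with junction tables for the many-to-many relationships described in the update (all names are illustrative):

        CREATE TABLE Teams   (TeamId  INT PRIMARY KEY, Name VARCHAR(100));
        CREATE TABLE Coaches (CoachId INT PRIMARY KEY, Name VARCHAR(100));
        CREATE TABLE Rowers  (RowerId INT PRIMARY KEY, Name VARCHAR(100));
        CREATE TABLE Events  (EventId INT PRIMARY KEY, Name VARCHAR(100));

        -- many-to-many: a team has many coaches, a coach may coach many teams
        CREATE TABLE TeamCoaches (
            TeamId  INT NOT NULL REFERENCES Teams(TeamId),
            CoachId INT NOT NULL REFERENCES Coaches(CoachId),
            PRIMARY KEY (TeamId, CoachId)
        );

        -- one entry = one team's result in one event
        CREATE TABLE Entries (
            EntryId  INT PRIMARY KEY,
            TeamId   INT NOT NULL REFERENCES Teams(TeamId),
            EventId  INT NOT NULL REFERENCES Events(EventId),
            Position INT,
            Time     TIME
        );

        -- which rowers were in the boat for that entry
        CREATE TABLE EntryRowers (
            EntryId INT NOT NULL REFERENCES Entries(EntryId),
            RowerId INT NOT NULL REFERENCES Rowers(RowerId),
            PRIMARY KEY (EntryId, RowerId)
        );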

    Read the article

  • Database model for saving random boolean expressions

    - by zarko.susnjar
    I have expressions like this: (cat OR cats OR kitten OR kitty) AND (dog OR dogs) NOT (pigeon OR firefly). Does anyone have an idea how to design tables to store these? Before I got the request to support brackets, I limited the use of operators to avoid ambiguous situations (only ANDs and NOTs, or only ORs) and saved them in this manner:

        operators
        id | name
        1  | AND
        2  | OR
        3  | NOT

        keywords
        id | keyword
        1  | cat
        2  | dog
        3  | firefly

        expressions
        id | operator | keywordId
        1  | 0        | 1
        1  | 1        | 2
        1  | 3        | 3

    which was: cat AND dog NOT firefly. But now, I'm really puzzled...
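
    Once brackets are allowed, each expression is really a tree, and one common way to persist a tree is an adjacency-list node table in which every node is either an operator or a keyword leaf. A minimal sketch under that assumption (table and column names are illustrative):

        -- Each node is either an operator (AND/OR/NOT) or a leaf pointing at a keyword.
        CREATE TABLE expression_nodes (
            node_id    INT PRIMARY KEY,
            expr_id    INT NOT NULL,      -- which stored expression this node belongs to
            parent_id  INT NULL,          -- NULL for the root node
            node_type  ENUM('AND','OR','NOT','KEYWORD') NOT NULL,
            keyword_id INT NULL,          -- only set when node_type = 'KEYWORD'
            FOREIGN KEY (parent_id)  REFERENCES expression_nodes(node_id),
            FOREIGN KEY (keyword_id) REFERENCES keywords(id)
        );

        -- (cat OR cats) AND (dog OR dogs) would be stored roughly as:
        --   node 1: AND, parent NULL
        --   node 2: OR, parent 1          node 3: OR, parent 1
        --   nodes 4,5: KEYWORD cat/cats under node 2; nodes 6,7: KEYWORD dog/dogs under node 3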

    Read the article

  • Rating System Database Structure

    - by Harsha M V
    I have two entity groups: Restaurants and Users. Restaurants can be rated (1-5) by users, and the rating from each user should be retrievable.

        Restaurant (id, name, ..., total_number_of_votes, total_voting_points)
        User (id, name, ...)
        Rating (id, restaurant_id, user_id, rating_value)

    Do I need to store the average value so that it need not be calculated every time? Which table is the best place to store avg_rating, total_no_of_votes and total_voting_points?
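
    Whether to denormalise at all is the first question: the aggregates can be computed on the fly from the Rating table, and cached on Restaurant only if that query proves too slow. A hedged sketch of the on-the-fly version, using the column names from the question:

        -- Vote count, total points and average rating per restaurant.
        SELECT restaurant_id,
               COUNT(*)          AS total_number_of_votes,
               SUM(rating_value) AS total_voting_points,
               AVG(rating_value) AS avg_rating
        FROM   Rating
        GROUP  BY restaurant_id;

        -- One user's rating for one restaurant (both ids are placeholders).
        SELECT rating_value FROM Rating WHERE restaurant_id = 1 AND user_id = 2;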

    Read the article

  • Database that consumes less disk space

    - by Hugo Palma
    I'm looking for a solution to store a massive quantity of information while consuming as little disk space as possible. The information structure is very simple and the queries will also be very simple. I've looked at solutions like Apache Cassandra and relational databases, but couldn't find a comparison that mentions disk usage. Any ideas on this would be great.

    Read the article

  • Need to work out database structure

    - by jim smith
    Hi, just need a little kickstart with this. I have MySQL/PHP, and I have 5,000 products and 30 companies. For each product I need to store some data for those 30 companies, namely: a) prices, b) stock quantity. I also need to store the data historically on a daily basis. So, the table... It makes sense that the records will be the products because there are 5,000 of them, and if I put the companies as the columns I can store the prices, but what about the stock quantities? I could create two columns for each company, one for price and one for quantity, then make the table name the date for that day... so there would be a new table for every day, each with 5,000 products in it. Is this the correct way? Some idea of how I'll be retrieving data: the top 5 lowest prices (and the company) by product for a certain date, and the price and stock changes in the past 7 days by product.
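
    The column-per-company / table-per-day layout is usually replaced by a single long table keyed by product, company and date. A hedged sketch of that shape (names are illustrative), including the "top 5 lowest prices for a date" query:

        CREATE TABLE daily_offers (
            product_id    INT           NOT NULL,
            company_id    INT           NOT NULL,
            snapshot_date DATE          NOT NULL,
            price         DECIMAL(10,2) NOT NULL,
            stock_qty     INT           NOT NULL,
            PRIMARY KEY (product_id, company_id, snapshot_date),
            KEY idx_product_date (product_id, snapshot_date, price)
        );

        -- Top 5 lowest prices (and the companies offering them) for one product on one date.
        SELECT company_id, price
        FROM   daily_offers
        WHERE  product_id = 123 AND snapshot_date = '2010-06-01'
        ORDER  BY price ASC
        LIMIT  5;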

    Read the article

  • Database Modelling - Conceptually different entities but with near identical fields

    - by Andrew Shepherd
    Suppose you have two sets of conceptual entities: MarketPriceDataSet, which has multiple ForwardPriceEntries, and PoolPriceForecastDataSet, which has multiple PoolPriceForecastEntry rows. The two child objects have near identical fields:

        ForwardPriceEntry has
            MarketPriceDataSetId (foreign key to parent table)
            StartDate
            EndDate
            SimulationItemId
            ForwardPrice

        PoolPriceForecastEntry has
            PoolPriceForecastDataSetId (foreign key to parent table)
            StartDate
            EndDate
            SimulationItemId
            ForecastPoolPrice

    If I modelled them as separate tables, the only differences would be the foreign key and the name of the price field. There has been a debate as to whether the two near identical tables should be merged into one. Options I've thought of to model this are: (1) just keep them as two independent, separate tables; (2) have both sets in one table with an additional "type" field and a parent_id acting as a foreign key to either parent table, which would sacrifice referential integrity checks; (3) have both sets in one table with an additional "type" field and create a complicated set of joining tables to maintain referential integrity. What do you think I should do, and why?
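
    One middle ground that is sometimes suggested: keep option 1's two physically separate tables (so each keeps a real foreign key), and paper over the duplication for read paths with a UNION view. A hedged sketch; the view name and column aliases are illustrative:

        CREATE VIEW PriceEntries AS
        SELECT 'Forward'  AS EntryType, MarketPriceDataSetId       AS ParentId,
               StartDate, EndDate, SimulationItemId, ForwardPrice      AS Price
        FROM   ForwardPriceEntry
        UNION ALL
        SELECT 'Forecast' AS EntryType, PoolPriceForecastDataSetId AS ParentId,
               StartDate, EndDate, SimulationItemId, ForecastPoolPrice AS Price
        FROM   PoolPriceForecastEntry;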

    Read the article

  • Why is it bad to use boolean flags in databases? And what should be used instead?

    - by David Chanin
    I've been reading through some guides on database optimization and best practices, and a lot of them suggest not using boolean flags at all in the DB schema (e.g. http://forge.mysql.com/wiki/Top10SQLPerformanceTips). However, they never provide any reason why this is bad. Is it a performance issue? Is it hard to index or query properly? Furthermore, if boolean flags are bad, what should you use to store boolean values in a database? Is it better to store boolean flags as an integer and use a bitmask? That seems like it would be less readable.
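
    For reference, this is roughly what the bitmask alternative looks like; a hedged sketch with made-up column and flag meanings. The readability cost the question anticipates shows up in the queries, which can no longer use a plain index on a single flag column:

        -- One integer column holds several flags; each flag is a power of two.
        -- 1 = active, 2 = verified, 4 = newsletter (hypothetical meanings)
        ALTER TABLE users ADD COLUMN flags INT UNSIGNED NOT NULL DEFAULT 0;

        UPDATE users SET flags = flags | 2  WHERE id = 42;    -- set "verified"
        UPDATE users SET flags = flags & ~2 WHERE id = 42;    -- clear "verified"

        SELECT id FROM users WHERE (flags & 2) <> 0;          -- all verified users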

    Read the article

  • Database: Storing Dates as Numeric Values

    - by Chin
    I'm considering storing some date values as ints, e.g. 201003150900. Apart from the fact that I lose any timezone information, is there anything else I should be concerned about with this solution? Any queries using this column would be simple "before or after" type lookups, e.g. where datefield is less than 201103000000 (before March next year). Currently the app is using MSSQL 2005. Any pointers to pitfalls are appreciated.
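
    For comparison, a hedged sketch of the same lookup against a native DATETIME column, which is the usual recommendation because it keeps validation, index-friendly range predicates and date arithmetic (the table and column names are illustrative):

        -- numeric-encoded column, as proposed in the question
        SELECT id FROM events WHERE datefield < 201103000000;

        -- native DATETIME column, the conventional alternative
        SELECT id FROM events WHERE event_date < '2011-03-01T00:00:00';

        -- date arithmetic only works naturally on the DATETIME version
        SELECT id FROM events WHERE event_date >= DATEADD(day, -7, GETDATE());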

    Read the article

  • Database is normalized, but the following is a problem - please help

    - by user287745
    The problem is that there are relationships which are so huge that, after normalizing, they have something like 20 columns in their primary keys (composite keys). These columns are really foreign keys, but they have to be declared as part of the primary key to identify the relationship uniquely. Is this correct? Please help. (I also apologize to the expert community for not accepting answers earlier; I was not aware that accepting was possible. Is the tick mark really that visible? :-))

    Read the article

  • How to map two tables in a database (phpMyAdmin)

    - by thegrede
    I'm working on a project where a user can save their own coupon codes on the website, and I want to know the best way to do that. Let's say I have one table for the users, like this:

        userId | firstName | lastName | codeId

    and then I have a table for the coupon codes, like this:

        codeId | codeNumber

    What I can do is connect the codeId to the userId, so when someone saves a coupon the codeId from the coupon table goes into the codeId column of the users table. But what do I do when a user has multiple coupons? How should they be connected to the user? I have two options: Option 1 is to save the codeIds from the coupons table into the codeId column of the users table as a list, like 1,2,3,4,5. Option 2 is to add a userId field to the coupons table and insert a new row connecting the user's userId to the code for each coupon the user adds. Which of the two options is better? Thank you guys.
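
    What Option 2 is describing is essentially a junction table for a many-to-many relationship (comma-separated id lists, as in Option 1, cannot be indexed or joined cleanly). A hedged sketch of that layout; the table names are illustrative:

        CREATE TABLE user_coupons (
            userId INT NOT NULL,
            codeId INT NOT NULL,
            PRIMARY KEY (userId, codeId),   -- a user can save each coupon only once
            FOREIGN KEY (userId) REFERENCES users(userId),
            FOREIGN KEY (codeId) REFERENCES coupons(codeId)
        );

        -- All coupon numbers saved by one user.
        SELECT c.codeNumber
        FROM   user_coupons uc
        JOIN   coupons c ON c.codeId = uc.codeId
        WHERE  uc.userId = 42;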

    Read the article

  • friendship database schema

    - by Daniel Hertz
    I'm creating a db schema that involves users who can be friends with each other, and I was wondering about the best way to model those friendships. Should it be its own table that simply has two columns, each referencing a user? Thanks!
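
    A hedged sketch of the two-column, self-referencing junction table the question suggests, with a constraint so each pair is stored only once (names are illustrative; note that older MySQL versions parse but do not enforce CHECK):

        CREATE TABLE friendships (
            user_id_a INT NOT NULL,
            user_id_b INT NOT NULL,
            PRIMARY KEY (user_id_a, user_id_b),
            FOREIGN KEY (user_id_a) REFERENCES users(id),
            FOREIGN KEY (user_id_b) REFERENCES users(id),
            CHECK (user_id_a < user_id_b)   -- store each pair once, smaller id first
        );

        -- All friends of user 7, whichever column they landed in.
        SELECT user_id_b AS friend_id FROM friendships WHERE user_id_a = 7
        UNION ALL
        SELECT user_id_a FROM friendships WHERE user_id_b = 7;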

    Read the article

  • Database table relationships: Always also relate to specified value (Linq to SQL in .NET Framework)

    - by sinni800
    I really can't describe my question better in the title; if anyone has suggestions, please tell! I use the Linq to SQL framework in .NET. I ran into something which could be easily solved if the framework supported it, but would otherwise mean a lot of extra coding. I have an n-to-n relation with a helper table in between. The tables are Items, Places, and the connection table which relates items to places and vice versa. One item can be found in many places, and one place can have many items. Of course there will be many items which should be in ALL places, and since places can always be added, I need a place ID which encompasses ALL places, always - something like a place ID of "0". If the helper table has a row with a place ID of zero, that item should be visible in all places. In SQL this would be a simple "WHERE [...] OR place-id = 0", but how do I do this with Linq relations? Also, a little side question: how could I manage "all but this place" kinds of exclusions?
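
    To pin down the SQL the question refers to, a hedged sketch of the wildcard-row query (table and column names are illustrative); the Linq side would need to express the same OR in its query rather than rely on the generated association property:

        -- Items visible in place 5: rows for that place plus the "all places" rows (place_id = 0).
        SELECT DISTINCT i.*
        FROM   items i
        JOIN   item_places ip ON ip.item_id = i.id
        WHERE  ip.place_id = 5 OR ip.place_id = 0;

    For the "all but this place" side question, one possible encoding (an assumption, not something from the question) is an exclusion flag on the helper table that the query subtracts with a NOT EXISTS clause.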

    Read the article

  • Database design - table relationship question

    - by iama
    I am designing the schema for a simple quiz application. It has two tables: "Question" and "Answer Choices". The Question table has 'question ID', 'question text' and 'answer ID' columns. The "Answer Choices" table has 'question ID', 'answer ID' and 'answer text' columns. With this simple schema it is obvious that a question can have multiple answer choices, hence the need for the Answer Choices table. However, a question can have only one correct answer, hence the 'answer ID' column in the Question table. The trouble is that this 'answer ID' column in the Question table gives the illusion that there can be multiple questions for a single answer, which is not correct. The alternative that eliminates this illusion is to have another table just for the correct answer, with only two columns, the question ID and the answer ID, and a 1-1 relationship between the two tables; however, I think this is redundant. Any recommendation on how best to design this, enforcing the rule that a question can have multiple answer choices but only one correct answer? Many thanks.
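
    One hedged sketch of how the one-correct-answer rule can be enforced declaratively: keep the correct-answer column on Question, but make it part of a composite foreign key so the chosen answer must belong to that same question. Table and column names below are illustrative:

        CREATE TABLE questions (
            question_id       INT PRIMARY KEY,
            question_text     VARCHAR(500) NOT NULL,
            correct_answer_id INT NULL     -- filled in once the choices exist
        );

        CREATE TABLE answer_choices (
            question_id INT NOT NULL,
            answer_id   INT NOT NULL,
            answer_text VARCHAR(500) NOT NULL,
            PRIMARY KEY (question_id, answer_id),   -- answers are numbered per question
            FOREIGN KEY (question_id) REFERENCES questions (question_id)
        );

        -- The correct answer must be one of *this* question's own choices.
        ALTER TABLE questions
            ADD FOREIGN KEY (question_id, correct_answer_id)
            REFERENCES answer_choices (question_id, answer_id);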

    Read the article

  • Generic version control strategy for select table data within a heavily normalized database

    - by leppie
    Hi, sorry for the long-winded title, but the requirement/problem is rather specific. With reference to the following sample (but very simplified) structure, in pseudo SQL, I hope to explain it a bit better.

        TABLE StructureName {
            Id GUID PK,
            Name varchar(50) NOT NULL
        }

        TABLE Structure {
            Id GUID PK,
            ParentId GUID (FK to Structure),
            NameId GUID (FK to StructureName) NOT NULL
        }

        TABLE Something {
            Id GUID PK,
            RootStructureId GUID (FK to Structure) NOT NULL
        }

    As one can see, Structure is a simple tree structure (ordering of children is not a concern for this problem). StructureName is a simplification of a translation system. Finally, 'Something' is simply something referencing the tree's root structure. This is just one of many tables that need to be versioned, but it serves as a good example for most cases. There is a requirement to version any changes to the name and/or the tree 'layout' of the Structure table, and previous versions should always be available. There seem to be a few possibilities for tackling this, like copying the entire structure, but most approaches cause one to 'lose' referential integrity: for example, with that approach one would have to duplicate the 'Something' record, given that the root structure would be a new record with a new ID. Other avenues are looking into how wikis handle this, or going a lot further and looking at how proper version control systems work. Currently I feel a bit clueless about how to proceed in a generic way. Any ideas will be greatly appreciated. Thanks, leppie
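
    One generic pattern that keeps IDs stable (so 'Something' never needs to be duplicated) is to split identity from versioned state: a Structure row keeps its GUID forever, and every change writes a new row into a version table stamped with a validity range. A speculative sketch only, using SQL Server style types to match the GUIDs in the question; all names are illustrative:

        -- Identity only; referenced by Something and by child structures, never rewritten.
        CREATE TABLE Structure (
            Id UNIQUEIDENTIFIER PRIMARY KEY
        );

        -- Versioned state: the parent link and the name can change over time.
        CREATE TABLE StructureVersion (
            StructureId UNIQUEIDENTIFIER NOT NULL REFERENCES Structure(Id),
            VersionNo   INT              NOT NULL,
            ParentId    UNIQUEIDENTIFIER NULL REFERENCES Structure(Id),
            NameId      UNIQUEIDENTIFIER NOT NULL REFERENCES StructureName(Id),
            ValidFrom   DATETIME         NOT NULL,
            ValidTo     DATETIME         NULL,    -- NULL = current version
            PRIMARY KEY (StructureId, VersionNo)
        );

        -- The current tree layout is the set of rows whose validity is still open.
        SELECT StructureId, ParentId, NameId
        FROM   StructureVersion
        WHERE  ValidTo IS NULL;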

    Read the article

  • Naming of boolean column in database table

    - by Space Cracker
    I have a 'Service' table and the following column descriptions:

        Is user verification required for the service?
        Is the user's email activation required for the service?
        Is the user's mobile activation required for the service?

    I hesitate between naming these columns IsVerificationRequired, IsEmailActivationRequired and IsMobileActivationRequired, or RequireVerification, RequireEmailActivation and RequireMobileActivation. I can't determine which way is best. Is one of the suggested sets of names the best, or is there a better alternative?

    Read the article

  • Upgrading from SQL2000 database to SQL Express 2008 R2

    - by itwb
    Hi, we have a web application which uses an MSSQL 2000 backend database. We are currently paying a ridiculous amount for shared hosting, with the database costs alone coming to $150 per month (an extra 100 MB of MSSQL space is $40 per month). Our database size is 896.38 MB. I am looking at getting a Virtual Private Server and upgrading the database to MSSQL 2008 Express. I am aware that the Express version is limited to a 10 GB database (with R2) and is constrained to a single CPU. I have also been offered SQL Server 2008 Web Edition for $19 per month, but I cannot find many details on the difference between Express and Web. Any suggestions here? What I would also like to know is: if we upgrade the database to MSSQL 2008, are there any issues with possible data transformations in the future? I.e. is it possible to download the database and mount it with SQL Server 2008 Standard Edition? I'm mostly concerned about how to get data in and out of the database through SQL management tools. Are there any other issues that I might face? Thanks, Mike

    Read the article

  • MySQL root user can't access database

    - by Ed Schofield
    Hi all, we have a MySQL database ('myhours') on a production database server that is accessible to one user ('edsf') only, but not to the root user. The command SHOW DATABASES as the root user does not list the 'myhours' database. The same command as the 'edsf' user lists the database:

        mysql> SHOW DATABASES;
        +--------------------+
        | Database           |
        +--------------------+
        | information_schema |
        | myhours            |
        +--------------------+
        2 rows in set (0.01 sec)

    Only the 'edsf' user can access the 'myhours' database with USE myhours. Neither user seems to have permission to grant further permissions for this database. My questions are:

    Q1. How is it that the root user does not have permission to use the database? How could this have come about? The output of SHOW GRANTS FOR 'root'@'localhost'; looks fine to me:

        GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' IDENTIFIED BY PASSWORD '*xxx' WITH GRANT OPTION

    Q2. How can I recover this situation to make this database visible to the MySQL root user and grant further permissions on it?

    Thanks in advance for any help! -- Ed
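
    The cause can't be confirmed from the excerpt, so the following is only a hedged diagnostic sketch of the checks usually run in this situation, looking at where MySQL records database-level grants (run as any account that can read the mysql schema):

        -- Database-level grants recorded for 'myhours': who was granted what, from which host.
        SELECT host, user, db, select_priv, grant_priv
        FROM   mysql.db
        WHERE  db = 'myhours';

        -- Compare the effective grants of the two accounts.
        SHOW GRANTS FOR 'edsf'@'localhost';
        SHOW GRANTS FOR 'root'@'localhost';

        -- After any direct edits to the grant tables, reload them.
        FLUSH PRIVILEGES;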

    Read the article

  • Getting started with Oracle Database In-Memory Part III - Querying The IM Column Store

    - by Maria Colgan
    In my previous blog posts, I described how to install, enable, and populate the In-Memory column store (IM column store). This week's post focuses on how data is accessed within the IM column store. Let's take a simple query: "What is the most expensive air-mail order we have received to date?"

        SELECT MAX(lo_ordtotalprice) most_expensive_order
        FROM   lineorder
        WHERE  lo_shipmode = 5;

    The LINEORDER table has been populated into the IM column store, and since we have no alternative access paths (indexes or views) the execution plan for this query is a full table scan of the LINEORDER table. You will notice that the execution plan has a new set of keywords, "IN MEMORY", in the access method description in the Operation column. These keywords indicate that the LINEORDER table has been marked INMEMORY and we may use the IM column store in this query. What do I mean by "may use"? There are a small number of cases where we won't use the IM column store even though the object has been marked INMEMORY. This is similar to how the keyword STORAGE is used in Exadata environments. You can confirm that the IM column store was actually used by examining the session-level statistics, but more on that later. For now let's focus on how the data is accessed in the IM column store and why, for analytical queries, it's faster to access the data in the new column format than in the buffer cache. There are four main reasons why accessing the data in the IM column store is more efficient.

    1. Access only the column data needed. The IM column store only has to scan two columns (lo_shipmode and lo_ordtotalprice) to execute this query, while the traditional row store or buffer cache has to scan all of the columns in each row of the LINEORDER table until it reaches both the lo_shipmode and the lo_ordtotalprice columns.

    2. Scan and filter data in its compressed format. When data is populated into the IM column store it is automatically compressed using a new set of compression algorithms that allow WHERE clause predicates to be applied against the compressed formats. This means the volume of data scanned in the IM column store for our query will be far less than for the same query in the buffer cache, where the data is scanned in its uncompressed form, which could be 20X larger.

    3. Prune out any unnecessary data within each column. The fastest read you can execute is the read you don't do. In the IM column store a further reduction in the amount of data accessed is possible thanks to the In-Memory Storage Indexes (IM storage indexes) that are automatically created and maintained on each of the columns in the IM column store. IM storage indexes allow data pruning to occur based on the filter predicates supplied in a SQL statement. An IM storage index keeps track of minimum and maximum values for each column in each In-Memory Compression Unit (IMCU). In our query the WHERE clause predicate is on the lo_shipmode column. The IM storage index on the lo_shipmode column is examined to determine whether our specified column value 5 exists in any IMCU, by comparing the value 5 to the minimum and maximum values maintained in the storage index. If the value 5 is outside the minimum and maximum range for an IMCU, the scan of that IMCU is avoided. For the IMCUs where the value 5 does fall within the min/max range, an additional level of data pruning is possible via the metadata dictionary created when dictionary-based compression is used on an IMCU. The dictionary contains a list of the unique column values within the IMCU. Since we have an equality predicate, we can easily determine whether 5 is one of the distinct column values or not. The combination of the IM storage index and dictionary-based pruning enables us to scan only the necessary IMCUs.

    4. Use SIMD to apply filter predicates. For the IMCUs that do need to be scanned, Oracle takes advantage of SIMD vector processing (Single Instruction processing Multiple Data values). Instead of evaluating each entry in the column one at a time, SIMD vector processing allows a set of column values to be evaluated together in a single CPU instruction. The column format used in the IM column store has been specifically designed to maximize the number of column entries that can be loaded into the vector registers on the CPU and evaluated in a single CPU instruction. SIMD vector processing enables Oracle Database In-Memory to scan billions of rows per second per core, versus the millions of rows per second per core scan rate that can be achieved in the buffer cache.

    I mentioned earlier in this post that in order to confirm the IM column store was used, we need to examine the session-level statistics. You can monitor the session-level statistics by querying the performance views v$mystat and v$statname. All of the statistics related to the In-Memory column store begin with IM. You can see the full list of these statistics by typing:

        column display_name format a30
        SELECT display_name
        FROM   v$statname
        WHERE  display_name LIKE 'IM%';

    If we check the session statistics after we execute our query, the results are as follows:

        SELECT MAX(lo_ordtotalprice) most_expensive_order
        FROM   lineorder
        WHERE  lo_shipmode = 5;

        SELECT display_name
        FROM   v$statname
        WHERE  display_name IN ('IM scan CUs columns accessed',
                                'IM scan segments minmax eligible',
                                'IM scan CUs pruned');

    As you can see, only 2 IMCUs were accessed during the scan, as the majority of the IMCUs (44) in the LINEORDER table were pruned out thanks to the storage index on the lo_shipmode column. In next week's post I will describe how you can control which queries use the IM column store and which don't. +Maria Colgan
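
    For completeness, a hedged sketch of how the session-level statistics themselves are usually pulled, joining v$mystat to v$statname (the view and column names come from the standard Oracle data dictionary; the exact query used for the post's results is not shown in the excerpt):

        -- In-Memory statistics for the current session that have a non-zero value.
        SELECT n.display_name, s.value
        FROM   v$mystat   s
        JOIN   v$statname n ON n.statistic# = s.statistic#
        WHERE  n.display_name LIKE 'IM%'
        AND    s.value > 0
        ORDER  BY n.display_name;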

    Read the article
