Search Results

Search found 30474 results on 1219 pages for 'relational database'.


  • A good SQL database for processing a lot of data?

    - by Dorian
    I have to process roughly 10-100 million records and deliver the data to the client when it is finished. The data is delivered as SQL statements to be executed against his database. He has a powerful server running MySQL, so I think it will be fast enough on his side. The issue is that my computer is not as powerful as his server, so I would like to use another SQL server that is compatible with MySQL (I export his database and import it on my machine) but more powerful. What should I use? Or am I doomed to use MySQL?

    Read the article

  • SQL Server database synchronization issues

    - by George2
    Hello everyone, I have two SQL Server 2008 Enterprise databases (on two machines); one is the master database and the other is the slave (the master is read/write, the slave is read-only). I want a daily update from the master to the slave, i.e. data inserted/updated/deleted in the master should be synchronized to the slave daily or on demand. I only need to sync several tables, not the entire database. Any solutions or documents? Thanks in advance, George
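
    One low-tech option, besides transactional replication or an SSIS package, is a scheduled job on the slave that runs a T-SQL MERGE per table. A minimal sketch, assuming a hypothetical linked server [MasterServer], database MasterDb and table dbo.Orders with an OrderID key; the real object and column names would differ:

        -- Run on the slave; pulls changes for one table from the master.
        MERGE INTO dbo.Orders AS target
        USING [MasterServer].MasterDb.dbo.Orders AS source
            ON target.OrderID = source.OrderID
        WHEN MATCHED THEN
            UPDATE SET target.Amount    = source.Amount,
                       target.UpdatedAt = source.UpdatedAt
        WHEN NOT MATCHED BY TARGET THEN
            INSERT (OrderID, Amount, UpdatedAt)
            VALUES (source.OrderID, source.Amount, source.UpdatedAt)
        WHEN NOT MATCHED BY SOURCE THEN
            DELETE;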

    Read the article

  • Suggestion on Database structure for relational data

    - by miccet
    Hi there. I've been wrestling with this problem for quite a while now and the automatic 'Slow Query' warning mails are still popping in. Basically, I have blogs with a corresponding table, as well as a table that keeps track of how many times each blog has been viewed. This last table has a huge number of records, since the page gets relatively high traffic and every hit is logged as an individual row. I have tried indexes on the fields included in the WHERE clause, but it doesn't seem to help. I have also tried to clean the table each week by removing old (> 1 week) records. So I'm asking you guys, how would you solve this? The query that I know is causing the slowness is generated by Rails and looks like this:

        SELECT count(*) AS count_all FROM blog_views
        WHERE (created_at >= '2010-01-01 00:00:01' AND blog_id = 1);

    The tables have the following structures:

        CREATE TABLE IF NOT EXISTS `blogs` (
          `id` int(11) NOT NULL auto_increment,
          `name` varchar(255) default NULL,
          `perma_name` varchar(255) default NULL,
          `author_id` int(11) default NULL,
          `created_at` datetime default NULL,
          `updated_at` datetime default NULL,
          `blog_picture_id` int(11) default NULL,
          `blog_picture2_id` int(11) default NULL,
          `page_id` int(11) default NULL,
          `blog_picture3_id` int(11) default NULL,
          `active` tinyint(1) default '1',
          PRIMARY KEY (`id`),
          KEY `index_blogs_on_author_id` (`author_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;

    And:

        CREATE TABLE IF NOT EXISTS `blog_views` (
          `id` int(11) NOT NULL auto_increment,
          `blog_id` int(11) default NULL,
          `ip` varchar(255) default NULL,
          `created_at` datetime default NULL,
          `updated_at` datetime default NULL,
          PRIMARY KEY (`id`),
          KEY `index_blog_views_on_blog_id` (`blog_id`),
          KEY `created_at` (`created_at`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;
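
    One commonly suggested remedy for this exact query shape is a composite index on (blog_id, created_at); MySQL will generally pick only one index per table here, so the two separate single-column keys do not combine well. A sketch, using the column names from the schema above:

        ALTER TABLE blog_views
            ADD INDEX index_blog_views_on_blog_id_and_created_at (blog_id, created_at);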

    Read the article

  • Nested and complicated select statement

    - by Selase
    What I want to do here is simple: display an investigator's ID and his corresponding name. That can easily be done from the users table by selecting based on the user type. However, I want to select only certain investigators. The idea is that investigators are assigned to exhibits to investigate, and one investigator can be assigned to a maximum of 3 cases only. When assigning investigators, I want to write a select statement that retrieves only the investigator IDs that have been assigned to 2 cases or fewer. I have included the exhibit and users tables with sample data below. I have a rough idea that I first have to pick out all the investigators by their ID from the users table, then filter them through the exhibit table by dropping those already assigned to 3 cases and keeping those with two or fewer, and finally use those IDs to select the investigators' names. The big question is: how do I write the statement?
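
    A sketch of one way to express this in a single statement, assuming hypothetical column names users(userID, userName, userType) and exhibits(exhibitID, investigatorID); adjust to the real schema:

        SELECT u.userID, u.userName
        FROM users u
        LEFT JOIN exhibits e ON e.investigatorID = u.userID
        WHERE u.userType = 'investigator'
        GROUP BY u.userID, u.userName
        HAVING COUNT(e.exhibitID) <= 2;
        -- LEFT JOIN keeps investigators with no assignments at all;
        -- COUNT(e.exhibitID) counts only matched rows, so they appear with a count of 0.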

    Read the article

  • How to coordinate the file system and the database?

    - by Lock up
    I am working on an online file management project. We store references in the database (SQL Server) and the file data on the file system. We are facing a coordination problem between the file system and the database when uploading a file, and also when deleting one. Either we create the reference in the database first or we store the file on the file system first. The problem is that if I create the reference in the database first and any kind of error occurs while storing the file on the file system, the reference for that file exists in the database but no file data exists on the file system. Please give me some solution for dealing with this situation; I badly need one. The same problem can happen when deleting a file.

    Read the article

  • MS Access: index optimisation

    - by Patrick Honorez
    Let's say we have a [Valuations] table containing several values per date and per fund: FundId, ValDate, Value1, Value2... The primary key is obviously FundId+ValDate. I have also indexed the ValDate field since I often query for values on a specific date. My question is: should I also create a specific index on FundId, or is MS Access clever enough to use the primary key when querying on a specific FundId?
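
    Since FundId is the leading column of the composite primary key, the index behind that key can normally serve queries that filter on FundId alone, so a separate FundId index is usually redundant (the extra ValDate index still helps date-only queries). A sketch of the table in Access/Jet DDL, with illustrative column types:

        CREATE TABLE Valuations (
            FundId   LONG     NOT NULL,
            ValDate  DATETIME NOT NULL,
            Value1   DOUBLE,
            Value2   DOUBLE,
            CONSTRAINT PK_Valuations PRIMARY KEY (FundId, ValDate)
        );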

    Read the article

  • All connections in pool are in use

    - by veljkoz
    We currently have a little situation on our hands - it seems that someone, somewhere forgot to close the connection in code. Result is that the pool of connections is relatively quickly exhausted. As a temporary patch we added Max Pool Size = 500; to our connection string on web service, and recycle pool when all connections are spent, until we figure this out. So far we have done this: SELECT SPId FROM MASTER..SysProcesses WHERE DBId = DB_ID('MyDb') and last_batch < DATEADD(MINUTE, -15, GETDATE()) to get SPID's that aren't used for 15 minutes. We're now trying to get the query that was executed last using that SPID with: DBCC INPUTBUFFER(61) but the queries displayed are various, meaning either something on base level regarding connection manipulation was broken, or our deduction is erroneous... Is there an error in our thinking here? Does the DBCC / sysprocesses give results we're expecting or is there some side-effect catch? (for example, connections in pool influence?) (please, stick to what we could find out using SQL since the guys that did the code are many and not all present right now)
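
    If the server is SQL Server 2005 or later, the dynamic management views can list the most recent statement per session, which is often easier to correlate than running DBCC INPUTBUFFER per SPID. A sketch, reusing the same 15-minute idle filter as above:

        SELECT s.session_id, s.login_name, s.host_name, s.program_name,
               s.last_request_end_time, t.text AS last_sql_text
        FROM sys.dm_exec_sessions s
        JOIN sys.dm_exec_connections c ON c.session_id = s.session_id
        CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) t
        WHERE s.is_user_process = 1
          AND s.last_request_end_time < DATEADD(MINUTE, -15, GETDATE());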

    Read the article

  • GridView edit problem if primary key is editable (design problem)

    - by Nassign
    I would like to ask about the design of a table based on its editability in a GridView. Let me explain. For example, I have a table named ProductCustomerRel.

    Method 1:

        CustomerCode varchar PK
        ProductCode  varchar PK
        StoreCode    varchar PK
        Quantity     int
        Note         text

    So the combination of CustomerCode, StoreCode and ProductCode must be unique. The record is displayed in a GridView. The requirement is that you can edit the customer, product and store code, but when the data is saved the PK constraint must still hold. The problem is that although it would be natural for the grid to edit the 3 primary key columns, you can only achieve the grid's update operation by first deleting the row and then inserting a row with the updated data. An alternative is to add a SeqNo and simply update the table, enforcing the uniqueness of the 3 columns when inserting and updating from the GridView.

    Method 2:

        SeqNo        int PK
        CustomerCode varchar
        ProductCode  varchar
        StoreCode    varchar
        Quantity     int
        Note         text

    My question is: which of the two methods is better? Or is there another way to do this?
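
    A sketch of Method 2 with the three-column rule kept as a unique constraint, assuming SQL Server syntax; the varchar lengths are placeholders:

        CREATE TABLE ProductCustomerRel (
            SeqNo        int IDENTITY(1,1) PRIMARY KEY,
            CustomerCode varchar(20) NOT NULL,
            ProductCode  varchar(20) NOT NULL,
            StoreCode    varchar(20) NOT NULL,
            Quantity     int NOT NULL,
            Note         text NULL,
            CONSTRAINT UQ_ProductCustomerRel
                UNIQUE (CustomerCode, ProductCode, StoreCode)
        );

    This way the grid edits ordinary columns, while the database still rejects duplicate combinations.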

    Read the article

  • How to create a better table structure?

    - by user160820
    For my website I have these tables:

        Category :: id | name
        Product  :: id | name | categoryid

    Each category may come in different sizes, so I have also created a table:

        Size :: id | name | categoryid | price

    The problem is that each category also has different ingredients that the customer can choose to add to the purchased product, and these ingredients have different prices for different sizes. For that I have a table like:

        Ingredient :: id | name | sizeid | categoryid | price

    I am not sure this structure is really normalized. Can someone please help me optimize it, and which indexes do I need for it?
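
    One way to normalize the ingredient pricing is to keep ingredients per category and move the size-dependent price into its own table. A sketch, assuming MySQL/InnoDB and the table names from the question; the IngredientPrice table is an illustrative addition:

        CREATE TABLE Ingredient (
            id         INT AUTO_INCREMENT PRIMARY KEY,
            name       VARCHAR(255) NOT NULL,
            categoryid INT NOT NULL,
            FOREIGN KEY (categoryid) REFERENCES Category(id)
        );

        CREATE TABLE IngredientPrice (
            ingredientid INT NOT NULL,
            sizeid       INT NOT NULL,
            price        DECIMAL(10,2) NOT NULL,
            PRIMARY KEY (ingredientid, sizeid),
            FOREIGN KEY (ingredientid) REFERENCES Ingredient(id),
            FOREIGN KEY (sizeid)       REFERENCES Size(id)
        );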

    Read the article

  • I Need Help Fixing My Small Time Sheet Table - Relational DB - SQL Server

    - by user327387
    I have a TimeSheet table like this:

        CREATE TABLE TimeSheet (
            timeSheetID
            employeeID
            setDate
            timeIn
            outToLunch
            returnFromLunch
            timeOut
        );

    Employees fill in their time sheets daily, and I want to ensure they don't cheat. What should I do? Should I create a column that captures the system date/time when an insert/update happens to the table, and then compare that against the times the employee specified? If so, I would have to create a date/time column for each of timeIn, outToLunch, returnFromLunch and timeOut. I don't know; what do you suggest? Note: I'm concerned with tracking these 4 columns: timeIn, outToLunch, returnFromLunch and timeOut.
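
    One sketch of that idea, assuming SQL Server: let the database itself record when each row is inserted or updated, independently of the times the employee types in, and compare the two afterwards. The column and trigger names here are hypothetical:

        ALTER TABLE TimeSheet
            ADD createdAt  datetime NOT NULL DEFAULT GETDATE(),
                modifiedAt datetime NULL;

        -- Run in its own batch: keeps modifiedAt in sync on every update.
        CREATE TRIGGER trg_TimeSheet_SetModifiedAt ON TimeSheet
        AFTER UPDATE AS
        BEGIN
            UPDATE ts
               SET modifiedAt = GETDATE()
              FROM TimeSheet ts
              JOIN inserted i ON i.timeSheetID = ts.timeSheetID;
        END;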

    Read the article

  • Which is better, creating a view or a new table?

    - by Carson
    I have some demanding MySQL queries that need to grab the same datasets from several MySQL tables. I am thinking of creating a table or a view to gather all the heavily queried columns from the other tables, so as to increase performance. If I create a table, I may need extra insert/update/delete operations each time the other tables are updated. If I create a view, I doubt performance will improve much, because the data in the other tables changes very frequently and the view's query would effectively be re-run every time before selecting from it. Any ideas? E.g. how to cache, or other measures I could take?
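
    For reference, a plain MySQL view is not materialized: the underlying query is re-executed every time you select from it, so a view will not by itself make expensive joins cheaper. A summary table refreshed periodically (from cron or a scheduled event) trades freshness for speed. A sketch with placeholder table and column names:

        -- View: convenient, but re-evaluated on every SELECT.
        CREATE VIEW demanding_data AS
        SELECT a.id, a.col1, b.col2, c.col3
        FROM table_a a
        JOIN table_b b ON b.a_id = a.id
        JOIN table_c c ON c.a_id = a.id;

        -- Summary table: fast to read, but must be rebuilt when the sources change.
        CREATE TABLE demanding_data_cache AS
        SELECT a.id, a.col1, b.col2, c.col3
        FROM table_a a
        JOIN table_b b ON b.a_id = a.id
        JOIN table_c c ON c.a_id = a.id;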

    Read the article

  • Too many columns to index - use MySQL partitions?

    - by Christopher Padfield
    We have an application with a table with 20+ columns that are all searchable. Building indexes for all these columns would make write queries very slow, and any really useful index would often have to span multiple columns, increasing the number of indexes needed. However, for 95% of these searches, only a small subset of the rows needs to be searched, and quite a small number - say 50,000 rows. So we have considered using MySQL partitioned tables - having a column that is basically isActive, which is what we divide the two partitions by. Most search queries would be run with isActive=1, so they would hit the small 50,000-row partition and be quick without other indexes. The only issue is that the set of rows where isActive=1 is not fixed; i.e. it is not based on the date of the row or anything like that; we will need to update isActive based on use of the data in that row. As I understand it that is no problem, though; the data would just be moved from one partition to another during the UPDATE query. We do have a PK on id for the row, though, and I am not sure if this is a problem; the manual seemed to suggest the partitioning had to be based on the primary key. This would be a huge problem for us because the primary key ID has no bearing on whether the row is active.
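
    For what it's worth, MySQL requires every unique key (including the primary key) to contain all columns used in the partitioning expression, so the usual workaround is to widen the primary key and partition by LIST on the flag. A sketch with illustrative names:

        CREATE TABLE records (
            id       INT NOT NULL AUTO_INCREMENT,
            isActive TINYINT NOT NULL DEFAULT 1,
            -- ...the 20+ searchable columns...
            PRIMARY KEY (id, isActive)
        )
        PARTITION BY LIST (isActive) (
            PARTITION p_active   VALUES IN (1),
            PARTITION p_inactive VALUES IN (0)
        );

    An UPDATE that flips isActive then moves the row between partitions, as the question expects.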

    Read the article

  • Best Embedded SQL DB for write performance?

    - by max.minimus
    Has anybody done any benchmarking/evaluation of the popular open-source embedded SQL DBs for performance, particularly write performance? I've seen some 1:1 comparisons for SQLite, Firebird Embedded, Derby and HSQLDB (are there others I am missing?) but no across-the-board comparisons... Also, I'd be interested in the overall developer experience with any of these (for a Java app).

    Read the article

  • Table in DB for generating primary keys?

    - by Sapphire
    Do you ever use a separate table for "generating" artificial primary keys for the DB (and why)? What I mean is having a table with two columns, table name and current ID, with which you can get a new "ID" for some table by simply locking the row for that table name, reading the current value of the key, incrementing it by one, and unlocking the row. Why would you prefer this over a standard integer identity column? P.S. The "idea" is from Fowler's Patterns of Enterprise Application Architecture, btw...
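
    A sketch of one classic implementation of that pattern, assuming MySQL; LAST_INSERT_ID(expr) makes the read-and-increment atomic without an explicit application-side lock. The 'orders' table name is hypothetical:

        CREATE TABLE id_generator (
            table_name VARCHAR(64) PRIMARY KEY,
            current_id BIGINT NOT NULL
        );

        -- Grab the next ID for the 'orders' table:
        UPDATE id_generator
           SET current_id = LAST_INSERT_ID(current_id + 1)
         WHERE table_name = 'orders';
        SELECT LAST_INSERT_ID();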

    Read the article

  • Easiest solution to sync an offline (local desktop application) database with a central server and multiple PCs?

    - by tyfius
    I have a desktop application which uses a local database. (This can be SQLite, SqlCe, PostgreSQL or any other database I can install locally; I haven't decided which one to use yet.) The plan is to achieve the following: a user can subscribe to some kind of cloud service. If he does, his local database should be synced with the online database (one for all users, or one per user, whichever is easiest) so he will be able to sync his local database data between multiple PCs and access his data online. (Much like Dropbox does for files.) What is the best, easiest (and preferably cheapest) solution to achieve this? I am looking into DataObjects.net but I can't find much documentation about their Sync feature. Or are there other alternatives? For example, I could start with some kind of cloud service that allows local caching and use the local cache for users who do not subscribe to the service. Any pointers, tips or experiences are welcome.

    Read the article

  • How to store data with many categories and many properties efficiently?

    - by Mickey Shine
    We have a large amount of data in many categories, each with many properties, e.g.:

        category 1: Book   properties: BookID, BookName, BookType, BookAuthor, BookPrice
        category 2: Fruit  properties: FruitID, FruitName, FruitShape, FruitColor, FruitPrice

    We have many categories like book and fruit. Obviously we could create many tables for them (in MySQL, for example), one table per category. But that creates too many tables and we would have to write many "adapters" to manipulate the data uniformly. The difficulties are: 1) every category has different properties, which results in a different data structure; 2) the properties of every category may have to be changed at any time; 3) it is hard to manipulate the data if every category has its own table (too many tables). How do you store this kind of data?
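
    The situation described is what the entity-attribute-value (EAV) approach tries to address: one row per property value instead of one table per category. It trades strict typing and simple queries for flexibility, so this is only a sketch of one option, with illustrative names:

        CREATE TABLE category (
            id   INT AUTO_INCREMENT PRIMARY KEY,
            name VARCHAR(255) NOT NULL
        );

        CREATE TABLE attribute (
            id          INT AUTO_INCREMENT PRIMARY KEY,
            category_id INT NOT NULL,
            name        VARCHAR(255) NOT NULL,
            FOREIGN KEY (category_id) REFERENCES category(id)
        );

        CREATE TABLE item (
            id          INT AUTO_INCREMENT PRIMARY KEY,
            category_id INT NOT NULL,
            FOREIGN KEY (category_id) REFERENCES category(id)
        );

        CREATE TABLE item_value (
            item_id      INT NOT NULL,
            attribute_id INT NOT NULL,
            value        VARCHAR(255),
            PRIMARY KEY (item_id, attribute_id),
            FOREIGN KEY (item_id)      REFERENCES item(id),
            FOREIGN KEY (attribute_id) REFERENCES attribute(id)
        );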

    Read the article

  • Foreign keys and NULL in mySQL

    - by Industrial
    Hi everyone, can I have a column in my values table (value) referenced as a foreign key to the knownValues table, and let it be NULL whenever needed, as in this example?

        Table: values

            product | type | value | freevalue
            --------+------+-------+----------
            0       | 1    | NULL  | 100
            1       | 2    | NULL  | 25
            3       | 3    | 1     | NULL

        Table: types

            id | name   | prefix
            ---+--------+-------
            0  | length | cm
            1  | weight | kg
            2  | fruit  | NULL

        Table: knownValues

            id | Type | name
            ---+------+-------
            0  | 2    | banana

    Note: the type columns in the values & knownValues tables are of course referenced to the types table. Thanks!
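
    To the question itself: yes, a nullable column can still carry a foreign key; rows where the column IS NULL are simply not checked against knownValues. A sketch, assuming MySQL/InnoDB, with illustrative column types:

        CREATE TABLE `values` (
            product   INT NOT NULL,
            type      INT NOT NULL,
            value     INT NULL,
            freevalue DECIMAL(10,2) NULL,
            FOREIGN KEY (type)  REFERENCES types(id),
            FOREIGN KEY (value) REFERENCES knownValues(id)
        );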

    Read the article
