Search Results

Search found 33242 results on 1330 pages for 'database optimization'.


  • Easiest solution to sync an offline (local desktop application) database with a central server and multiple PCs?

    - by tyfius
    I have a desktop application which uses a local database. (This can be SQLite, SqlCe, PostgreSQL or any other database I will be able to install locally; I haven't decided which one to use yet.) The plan is to achieve the following: a user can subscribe to some kind of cloud service. If he does, his local database should be synced with the online database (one for all users, or one per user, whichever is easiest) so that he can sync his local data between multiple PCs and access his data online. (Much like Dropbox does for files.) What is the best, easiest (and preferably cheapest) solution to achieve this? I am looking into DataObjects.net but I can't find much documentation about their Sync feature. Are there other alternatives? For example, I could start with some kind of cloud service which allows local caching and use the local caching for users who do not subscribe to the service. Any pointers, tips or experiences are welcome.

    Read the article

  • which is better, creating a view or a new table?

    - by Carson
    I have some demanding MySQL queries that need to grab the same datasets from several MySQL tables. I am thinking of creating a table or view that gathers all the demanding columns from the other tables, so as to increase performance. If I create that table, I may need to do extra insert / update / delete operations each time the other tables are updated. If I create a view, I doubt the performance will be greatly improved, because the data in the other tables changes very frequently; most likely the view would effectively have to be rebuilt every time before selecting from it. Any ideas? For example, how could I cache the results? Are there other extra measures I can take?
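
    For illustration, a rough sketch of the two approaches being weighed, in MySQL (table and column names are placeholders, not from the question):

        -- Approach 1: a plain view; always current, but the underlying joins run on every SELECT
        CREATE VIEW demanding_data AS
        SELECT a.id, a.col1, b.col2, c.col3
        FROM table_a a
        JOIN table_b b ON b.a_id = a.id
        JOIN table_c c ON c.a_id = a.id;

        -- Approach 2: a precomputed table; fast to read, but must be refreshed whenever the
        -- source tables change (e.g. by triggers, application code, or a periodic rebuild)
        CREATE TABLE demanding_data_cache AS
        SELECT a.id, a.col1, b.col2, c.col3
        FROM table_a a
        JOIN table_b b ON b.a_id = a.id
        JOIN table_c c ON c.a_id = a.id;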

    Read the article

  • Foreign keys and NULL in MySQL

    - by Industrial
    Hi everyone, Can I have a column in my values table (value) referenced as a foreign key to knownValues table, and let it be NULL whenever needed, like in the example:

    Table: values
        product  type  value  freevalue
        0        1     NULL   100
        1        2     NULL   25
        3        3     1      NULL

    Table: types
        id  name    prefix
        0   length  cm
        1   weight  kg
        2   fruit   NULL

    Table: knownValues
        id  Type  name
        0   2     banana

    Note: The types in the table values & knownValues are of course referenced into the types table. Thanks!
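
    As a side note, a minimal sketch of how such a nullable foreign key can be declared in MySQL/InnoDB, assuming the types and knownValues tables above already exist with integer primary keys (column types are assumptions): rows whose value column is NULL are simply not checked against knownValues.

        CREATE TABLE `values` (
          product   INT NOT NULL,
          type      INT NOT NULL,
          value     INT NULL,        -- NULL allowed; the FK is only enforced when a value is present
          freevalue INT NULL,
          FOREIGN KEY (type)  REFERENCES types (id),
          FOREIGN KEY (value) REFERENCES knownValues (id)
        );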

    Read the article

  • Running a sharded DB from a single machine

    - by ming yeow
    This sounds kinda dumb, but I have a sharded DB that I no longer think I need to run on 2 machines, and I would like to run it on one single machine instead. Any ideas on how that can be done? There are lots of resources on how I can achieve the converse, but very little on how this can be done.

    Read the article

  • One-to-one table relation - is it harmful to keep the relation in both tables?

    - by EBAGHAKI
    I have 2 tables whose rows have a one-to-one relation. To understand the situation, suppose there is one table with user information and another table that contains a very specific kind of information, and each user can link to only one record of that specific kind (think of the second table as characters). A character, in turn, can only be assigned to the user who grabs it. Is it against the rules of clean database design to hold the relation key in both tables?

    User table: user_id, name, age, character_id
    Character table: character_id, shape, user_id

    I have to do it for performance; what do you think about it?
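
    A minimal sketch of the two-table layout described above, in MySQL (column types, the UNIQUE constraint and the deferred ALTER are illustrative assumptions; the reciprocal key is exactly the duplication being asked about):

        CREATE TABLE users (
          user_id      INT PRIMARY KEY,
          name         VARCHAR(100),
          age          INT,
          character_id INT NULL                  -- relation stored here ...
        );

        CREATE TABLE characters (
          character_id INT PRIMARY KEY,
          shape        VARCHAR(100),
          user_id      INT NULL UNIQUE,          -- ... and duplicated here; UNIQUE keeps it one-to-one
          FOREIGN KEY (user_id) REFERENCES users (user_id)
        );

        -- the circular reference can only be added once both tables exist
        ALTER TABLE users
          ADD FOREIGN KEY (character_id) REFERENCES characters (character_id);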

    Read the article

  • How do I sync a subset of tables between two databases on the same MySQL server?

    - by Mike
    I would like to be able to sync a subset of tables between two MySQL databases that are running on the same server. One of the databases acts as the master, where inserts, updates and deletes can be made. The second database uses those same tables for read-only operations. I do not want to use federated tables to achieve this. The long-term goal is to separate the 2 databases onto multiple servers; the second database, which holds the subset of tables as read-only, may also be replicated a few times over and distributed geographically for load and performance purposes, each copy with unique data. Once that is achieved, I plan to use the binlog to replicate those specific tables on the secondary databases. In the meantime, I'd like to keep these tables in sync. Is there a more elegant way to do this than using a cron job and mysqldump?
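
    Since both databases currently live on the same MySQL server, one low-tech interim option is a plain cross-database copy in SQL (database and table names here are placeholders, and this assumes InnoDB tables and a read-only side that can tolerate a brief rebuild):

        -- refresh one read-only table from the master database on the same server
        START TRANSACTION;
        DELETE FROM readonly_db.customers;
        INSERT INTO readonly_db.customers
        SELECT * FROM master_db.customers;
        COMMIT;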

    Read the article

  • Is a payment table needed when you have an invoice table like this?

    - by EBAGHAKI
    This is my invoice table:

    Invoice table: invoice_id, creation_date, due_date, payment_date, status enum('not paid','paid','expired'), user_id, total_price

    I wonder if it is useful to have a payment table in order to record user payments for invoices. The payment table could look like this:

    Payment table: payment_id, payment_date, invoice_id, price_paid, status enum('successful','not successful')
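
    For reference, a minimal sketch of the payment table described above (column types and the referenced table name are assumptions); keeping payments in their own table is what allows several payment attempts, or partial payments, to be recorded against a single invoice:

        CREATE TABLE payments (
          payment_id   INT PRIMARY KEY AUTO_INCREMENT,
          payment_date DATETIME NOT NULL,
          invoice_id   INT NOT NULL,
          price_paid   DECIMAL(10,2) NOT NULL,
          status       ENUM('successful','not successful') NOT NULL,
          FOREIGN KEY (invoice_id) REFERENCES invoices (invoice_id)
        );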

    Read the article

  • Delete rows using the SQL 'LIKE' operator with data from another table

    - by Captastic
    Hi All, I am trying to delete rows from a table ("lovalarm") where a field ("pointid") is like any one of a number of strings. Currently I am entering them all manually; however, I need to be able to handle a list of over 100,000 options. My thought is to have a table ("lovdata") containing all the possible strings and to run a query that deletes rows where the field is LIKE any of the strings in the other table. Can anyone point me in the right direction as to whether and how I can use LIKE in this way? Many thanks, Cap
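
    A minimal sketch of the kind of statement being asked about, assuming MySQL (the question does not name the DBMS); the lovdata column name and the trailing wildcard are assumptions about how the strings should match:

        -- multi-table DELETE: remove every lovalarm row whose pointid matches any stored pattern
        DELETE a
        FROM lovalarm a
        JOIN lovdata d
          ON a.pointid LIKE CONCAT(d.pattern, '%');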

    Read the article

  • What does ON [PRIMARY] mean?

    - by Icono123
    I'm creating an SQL setup script and I'm using someone else's script as an example. Here's an example of the script:

        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE TABLE [dbo].[be_Categories](
            [CategoryID] [uniqueidentifier] ROWGUIDCOL NOT NULL
                CONSTRAINT [DF_be_Categories_CategoryID] DEFAULT (newid()),
            [CategoryName] [nvarchar](50) NULL,
            [Description] [nvarchar](200) NULL,
            [ParentID] [uniqueidentifier] NULL,
            CONSTRAINT [PK_be_Categories] PRIMARY KEY CLUSTERED
            (
                [CategoryID] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                    ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]
        GO

    Does anyone know what the ON [PRIMARY] clause does? Regards.

    Read the article

  • IntegrityError with Boolean fields and PostgreSQL

    - by xRobot
    I have this simple Blog model:

        class Blog(models.Model):
            title = models.CharField(_('title'), max_length=60, blank=True, null=True)
            body = models.TextField(_('body'))
            user = models.ForeignKey(User)
            is_public = models.BooleanField(_('is public'), default=True)

    When I insert a blog in the admin interface, I get this error:

        IntegrityError at /admin/blogs/blog/add/
        null value in column "is_public" violates not-null constraint

    Why?

    Read the article

  • What is the proper design of storing temporary users? [closed]

    - by Mendy
    On the SO site, both real users and temporary users can add new questions. I assume each user type has a different table. My question is: how can I attach the question to the right user? I assume the temp users have their own table for the following reasons:

    1. Temp users don't have all the data that real users have, like email, password, and all the other user details.
    2. On the other hand, there are a lot more temp users than real users, so it makes more sense to keep them in their own table.
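
    Purely as an illustration of the situation being described, and not as a recommendation, here is one way such a split is often modelled: a questions table carrying two nullable keys, exactly one of which is set per row. All names, types and the CHECK constraint are hypothetical (MySQL 8 style syntax assumed).

        CREATE TABLE questions (
          question_id  INT PRIMARY KEY,
          body         TEXT NOT NULL,
          user_id      INT NULL,       -- set when a registered user asked
          temp_user_id INT NULL,       -- set when a temporary user asked
          FOREIGN KEY (user_id)      REFERENCES users (user_id),
          FOREIGN KEY (temp_user_id) REFERENCES temp_users (temp_user_id),
          CHECK ((user_id IS NULL) <> (temp_user_id IS NULL))
        );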

    Read the article

  • which sql query is more efficient: select count(*) or select ... where key>value?

    - by davka
    I need to periodically update a local cache with new additions to some DB table. The table rows contain an auto-increment sequential number (SN) field. The cache keeps this number too, so basically I just need to fetch all rows with an SN larger than the highest I already have:

        SELECT * FROM table WHERE SN > <max_cached_SN>

    However, the majority of the attempts will bring no data (I just need to make sure that I have an absolutely up-to-date local copy). So I wonder if this will be more efficient:

        count = SELECT count(*) FROM table;
        if (count > <cache_size>)
            // fetch new rows as above

    I suppose that selecting by an indexed numeric field is quite efficient, so I wonder whether using count has any benefit. On the other hand, this test/update will be done quite frequently and by many clients, so there is a motivation to optimize it.
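
    A small, hedged aside: if a cheap probe is wanted at all, one common variant (assuming SN is indexed, which it is as an auto-increment key) asks for the maximum instead of the full count, since MAX on an indexed column can be answered from one end of the index while COUNT(*) may have to scan it; "table" and <max_cached_SN> are the same placeholders used above.

        SELECT MAX(SN) FROM table;   -- compare against <max_cached_SN>; fetch rows only if it is larger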

    Read the article

  • Are there actually lag times to remove an email address from "the system"? [closed]

    - by Alex Gosselin
    For example, you send an unsubscribe message to a legitimate company or a spammer, and they reply that they will remove you but that it may take up to 72 hours to take effect. I find it hard to believe anything that simple could take more than 3/4 of a second to take effect system-wide. Another example would be when you call the Visa activation line: there is a "delay" of several minutes while they try to sell you some kind of insurance. Usually, just as you get the point across that you don't want it, they will tell you your card has been activated and let you go. Are these delays real?

    Read the article

  • Getting deadlocks in MySQL

    - by at
    We're very frustratingly getting deadlocks in MySQL. It isn't because of exceeding a lock timeout, as the deadlocks happen instantly when they do happen. Here's the SQL that is executing on 2 separate threads (with 2 separate connections from the connection pool) and produces a deadlock:

        UPDATE Sequences SET Counter = LAST_INSERT_ID(Counter + 1) WHERE Sequence IS NULL

    The Sequences table has 2 columns: Sequence and Counter. The LAST_INSERT_ID allows us to retrieve this updated counter value, as per MySQL's recommendation. That works perfectly for us, but we get these deadlocks! Why are we getting them, and how can we avoid them? Thanks so much for any help with this.
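
    For context, a minimal sketch of the documented MySQL counter idiom being used here; the SELECT is the retrieval step implied above and must run on the same connection as the UPDATE:

        UPDATE Sequences
           SET Counter = LAST_INSERT_ID(Counter + 1)
         WHERE Sequence IS NULL;
        SELECT LAST_INSERT_ID();   -- returns the counter value just written, per connection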

    Read the article

  • replicating master tables mapping in transaction tables

    - by NoDisplay
    I have three master tables for location information:

    Country {ID, Name}
    State {ID, Name, CountryID}
    City {ID, Name, StateID}

    Now I have one transaction table called Person which holds the person's name and his location information. My question is: shall I have only CityID in the Person table, like this:

    Person {ID, Name, CityID}

    and use a view with a join query that gives me detail like Person {ID, Name, City, State, Country}, or shall I replicate the mapping:

    Person {ID, Name, CityID, StateID, CountryID}

    Please suggest which you feel should be selected and why. If there is any other option available, please suggest it. Thanks in advance.
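
    A minimal sketch of the view mentioned in the first option, built directly from the tables above (the view name is a placeholder):

        CREATE VIEW PersonLocation AS
        SELECT p.ID, p.Name,
               ci.Name AS City,
               s.Name  AS State,
               co.Name AS Country
        FROM Person  p
        JOIN City    ci ON ci.ID = p.CityID
        JOIN State   s  ON s.ID  = ci.StateID
        JOIN Country co ON co.ID = s.CountryID;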

    Read the article

  • SQL SERVER – Shrinking NDF and MDF Files – Readers’ Opinion

    - by pinaldave
    Previously, I had written a blog post about SQL SERVER – Shrinking NDF and MDF Files – A Safe Operation. After that, I wrote the following blog post that talks about the advantages and disadvantages of shrinking and why one should not be shrinking a file: SQL SERVER – SHRINKFILE and TRUNCATE Log File in SQL Server 2008. On this subject, SQL Server expert Imran Mohammed left an excellent comment. I just feel that his comment is worth a big article itself. For everybody to read his wonderful explanation, I am posting this blog post here. Thanks Imran!

    Shrinking a database always creates performance degradation and increases fragmentation in the database. I suggest that you keep that in mind before you start reading the following comment. If you are going to say Shrinking Database is bad and evil, here I am saying it first and loud. Now, the comment of Imran is written while keeping in mind only the process showing how the Shrinking Database operation works. Imran has already explained his understanding and requests further explanation. I have removed the Best Practices section from Imran's comments, as there are a few corrections.

    Comments from Imran -

    Before I explain to you the concept of Shrink Database, let us understand the concept of Database Files. When we create a new database inside SQL Server, it is typical that SQL Server creates two physical files in the Operating System: one with an .MDF extension, and another with an .LDF extension. .MDF is called the Primary Data File. .LDF is called the Transactional Log File. If you add one or more data files to a database, the physical file that will be created in the Operating System will have an extension of .NDF, which is called a Secondary Data File; whereas, when you add one or more log files to a database, the physical file that will be created in the Operating System will have the same extension, .LDF.

    The questions now are, "Why does a new data file have a different extension (.NDF)?", "Why is it called a secondary data file?" and, "Why is the .MDF file called a primary data file?"

    Answers: Note: The following explanation is based on my limited knowledge of SQL Server, so experts please do comment.

    A data file with an .MDF extension is called a Primary Data File, and the reason behind it is that it contains Database Catalogs. Catalogs mean Meta Data. Meta Data is "Data about Data". An example of Meta Data includes system objects that store information about other objects, except the data stored by the users:

    sysobjects stores information about all objects in that database.
    sysindexes stores information about all indexes and rows of every table in that database.
    syscolumns stores information about all columns that each table has in that database.
    sysusers stores how many users that database has.

    Although Meta Data stores information about other objects, it is not the transactional data that a user enters; rather, it is system data about the data. Because the Primary Data File (.MDF) contains important information about the database, it is treated as a special file. It is given the name Primary Data File because it contains the Database Catalogs. This file is present in the Primary File Group. You can always create additional objects (tables, indexes etc.) in the Primary Data File (this file is present in the Primary File Group), by mentioning that you want to create this object under the Primary File Group.
    Any additional data file that you add to the database will have only transactional data but no Meta Data, so that is why it is called a Secondary Data File. It is given the extension name .NDF so that the user can easily identify whether a specific data file is a Primary Data File or a Secondary Data File.

    There are many advantages to storing data in different files that are under different file groups. You can put your read-only tables in one file (file group) and read-write tables in another file (file group) and take a backup of only the file group that has the read-write data, so that you can avoid taking a backup of read-only data that cannot be altered. Creating additional files on different physical hard disks also improves I/O performance.

    A real-time scenario where we use files could be this one: let's say you have created a database called MYDB on the D drive, which has 50 GB of space. You also have 1 database file (.MDF) and 1 log file on the D drive, and suppose that all of that 50 GB of space has been used up and you do not have any free space left, but you still want to add additional space to the database. One easy option would be to add one more physical hard disk to the server, add a new data file to the MYDB database and create this new data file on the new hard disk, then move some of the objects from one file to another, and set the file group under which you added the new file as the default file group, so that any new object that is created goes into the new files, unless specified otherwise.

    Now that we have a basic idea of what data files are, what type of data they store and why they are named the way they are, let's move on to the next topic, Shrinking.

    First of all, I disagree with the Microsoft terminology for naming this feature "Shrinking". Shrinking, in regular terms, means to reduce the size of a file by means of compressing it. BUT in SQL Server, Shrinking DOES NOT mean compressing. Shrinking in SQL Server means to remove empty space from database files and release the empty space either to the Operating System or to SQL Server.

    Let's examine this through an example. Let's say you have a database "MYDB" with a size of 50 GB that has about 20 GB of free space, which means 30 GB in the database is filled with data and the 20 GB of space is free in the database because it is not currently utilized by the SQL Server (database); it is reserved and not yet in use. If you choose to shrink the database and release the empty space to the Operating System, then, MIND YOU, you can only shrink the database size to 30 GB (in our example). You cannot shrink the database to a size less than what is filled with data.

    So, if you have a database that is full and has no empty space in the data file and log file (and you don't have extra disk space to set the Auto Growth option ON), YOU CANNOT issue the SHRINK Database/File command, for two reasons:

    1. There is no empty space to be released, because the Shrink command does not compress the database; it only removes the empty space from the database files, and there is no empty space.
    2. Remember, the Shrink command is a logged operation. When we perform the Shrink operation, this information is logged in the log file. If there is no empty space in the log file, SQL Server cannot write to the log file and you cannot shrink a database.

    Now answering your questions:

    (1) Q: What are the USEDPAGES & ESTIMATEDPAGES that appear on the Results Pane after using DBCC SHRINKDATABASE (NorthWind, 10)?
    A: According to Books Online (for SQL Server 2000): UsedPages is the number of 8-KB pages currently used by the file. EstimatedPages is the number of 8-KB pages that SQL Server estimates the file could be shrunk down to.

    Important Note: Before asking any question, make sure you go through Books Online or search on Google once. Doing so has many advantages: 1. If someone else has already had this question, the chances that it is already answered are more than 50%. 2. This reduces your waiting time for the answer.

    (2) Q: What is the difference between shrinking the database using a DBCC command like the one above and shrinking it from the Enterprise Manager console by right-clicking the database, going to TASKS and then selecting the SHRINK option, in a SQL Server 2000 environment?

    A: As far as my knowledge goes, there is no difference; both will work the same way. One advantage of using this command from Query Analyzer is that your console won't freeze; you can perform your regular activities using Enterprise Manager.

    (3) Q: What is this .NDF file that is discussed above? I have never heard of it. What is it used for? Is it used by end-users, DBAs or the SERVER/SYSTEM itself?

    A: The .NDF file is a secondary data file. You never heard of it because when a database is created, SQL Server creates the database by default with only 1 data file (.MDF) and 1 log file (.LDF), or however your model database has been set up, because the model database is a template used every time you create a new database using the CREATE DATABASE command. Unless you have added an extra data file, you will not see it. This file is used by SQL Server to store the data that is saved by the users.

    Hope this information helps. I would like to ask the experts to please comment if what I understand is not what the Microsoft guys meant.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Readers Contribution, Readers Question, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
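
    As a quick reference for the commands discussed in this post, a minimal hedged sketch (the database name follows the NorthWind example above; the logical file name and target size are assumptions, so substitute your own, and keep in mind the post's warning that shrinking increases fragmentation):

        -- shrink the whole database, aiming to leave 10 percent free space
        DBCC SHRINKDATABASE (NorthWind, 10);

        -- or shrink a single data file to a target size in MB
        DBCC SHRINKFILE (NorthWind_Data, 200);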

    Read the article

  • SQL SERVER – NuoDB in Sixty Seconds – SQL in Sixty Seconds #053

    - by Pinal Dave
    Earlier this week, I did a five-part blog series on NuoDB, and it was very well received by the audience. NuoDB is an elastically scalable SQL database that can run on local host, datacenter and cloud-based resources. It is an operational NewSQL database built on a patented emergent architecture with full support for SQL and ACID guarantees. In this blog post, I will explore how one can download and install the NuoDB database. In this video I explain how one can install NuoDB in very few seconds and set up the entire environment in an additional few seconds. One can get going with the installation of NuoDB and a sample database in a total of less than 60 seconds. Let us see the same concept in the following SQL in Sixty Seconds video. You can download NuoDB and reproduce the same Sixty Seconds experience.

    Related Tips in SQL in Sixty Seconds:
    Part 1 – Install NuoDB in 90 Seconds
    Part 2 – Manage NuoDB Installation
    Part 3 – Explore NuoDB Database
    Part 4 – Migrate from SQL Server to NuoDB
    Part 5 – NuoDB and Third Party Explorer

    What would you like to see in the next SQL in Sixty Seconds video?

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Interview Questions and Answers, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video Tagged: Identity

    Read the article

  • Software Developer Interview Question - Fair or Unfair

    - by user607018
    I just phone-interviewed with a company for a graduate software developer position and was asked the following questions. I should add that the company concerned is not a database vendor.

    1. How does a query optimiser work?
    2. If a database was performing badly, how would you use the performance logs to find out the problem?

    I have asked whether they ask such questions of all candidate software developers (graduate or experienced) in a first phone interview. They replied that they like to test their candidates' knowledge of database development. I want to write to the company to say that these questions are unreasonable to ask at a software developer interview and to request that my interview be done over. I would like to check the reasonableness of the following assumptions:

    a) Those questions cannot be fairly classified as database development questions.
    b) I think the questions are appropriate for a DBA interview but wholly unreasonable for a software developer interview (experienced or not).
    c) The first question is only relevant to a database vendor.
    d) The second question is not fair because software developers typically don't deal with database performance logs, as that is the job of the DBA.

    Perhaps some of you will be kind enough to comment on my assumptions or may have any other suggestions before I write to the company.

    Read the article

  • Generic Repository with SQLite and SQL Compact Databases

    - by Andrew Petersen
    I am creating a project that has a mobile app (Xamarin.Android) using a SQLite database and a WPF application (Code First Entity Framework 5) using a SQL Compact database. This project will eventually have a SQL Server database as well. Because of this I am trying to create a generic repository, so that I can pass in the correct context depending on which application is making the request. The issue I ran into is that my DataContext for the SQL Compact database inherits from DbContext, while the SQLite database class inherits from SQLiteConnection. What is the best way to make this generic, so that it doesn't matter what kind of database is on the back end? This is what I have tried so far on the SQL Compact side:

        public interface IRepository<TEntity>
        {
            TEntity Add(TEntity entity);
        }

        public class Repository<TEntity, TContext> : IRepository<TEntity>, IDisposable
            where TEntity : class
            where TContext : DbContext
        {
            private readonly TContext _context;

            public Repository(DbContext dbContext)
            {
                _context = dbContext as TContext;
            }

            public virtual TEntity Add(TEntity entity)
            {
                return _context.Set<TEntity>().Add(entity);
            }

            // minimal Dispose added so the class actually satisfies IDisposable (not shown in the original snippet)
            public void Dispose()
            {
                if (_context != null)
                {
                    _context.Dispose();
                }
            }
        }

    And on the SQLite side:

        public class ElverDatabase : SQLiteConnection
        {
            static readonly object Locker = new object();

            public ElverDatabase(string path) : base(path)
            {
                CreateTable<Ticket>();
            }

            public int Add<T>(T item) where T : IBusinessEntity
            {
                lock (Locker)
                {
                    return Insert(item);
                }
            }
        }

    Read the article

  • Databases and the CI server

    - by mlk
    I have a CI server (Hudson) which merrily builds, runs unit tests and deploys to the development environment, but I'd now like to get it running the integration tests. The integration tests will hit a database, and that database will constantly be changed to contain the data relevant to the test in question. This, however, leads to a problem: how do I make sure the database is not being splatted with data for one test and then that data being overridden by a second project before the first set of tests completes? I am currently using the "hope" method, which is not working out too badly at the moment, but mostly because we only have a small number of integration tests set up on CI. As I see it, I have the following options:

    1. Test-local (in-memory) databases. I'm not sure if any in-memory databases handle all the scariness of Oracle's triggers and packages etc., and anything less I don't feel would be a worthwhile test.
    2. CI executor-local databases. A fair amount of work would be needed to set these up and keep 'em up to date, but definitely an option (most of the work is already done to keep the current CI database up-to-date).
    3. A single "integration test" executor. Likely the easiest to implement, but it would mean the integration tests could fall quite far behind.
    4. Locking the database (or a set of tables).

    I'm sure I've missed some ways (please add them). How do you run database-based integration tests on the CI server? What issues have you had and what method do you recommend? (Note: While I use Hudson, I'm happy to accept answers for any CI server; the ideas I'm sure will be portable, even if the details are not.) Cheers, Mlk

    Read the article

  • One codebase - lots of hosted services (similar to a basecamp style service) - planning structure

    - by RickM
    We have built a service (PHP-based) for a client, and are now looking to offer it to other clients as a hosted service. For this example, think of it like a hosted forum service, where a client signs up on our site and is given a subdomain or can use their own domain; the code picks up the domain, checks it against a 'master' users table, and then loads the content as needed. I'm trying to work out the best way of handling multiple clients. At the moment I can only think of two options that would work:

    Option 1 - Have 1 set of database tables, but on each table have a column called 'siteid'; this would mean every query has to check the siteid. This would effectively work with just 1 codebase and 1 database.

    Option 2 - Have 1 'master' database with all the core stuff such as the client details and their domain. Then, when the system checks the domain, it pulls the client's database details (username/password/dbname) from a table and loads a second database. The issue here is the security of the MySQL server details; however, it does have the benefit that each client is running their own database instead of sharing one.

    Which option would I be better taking here, and why? Ideally I want it to be fairly easy to convert the 'standalone' script to the 'multi-domain' script, as we're on a tight deadline.
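
    A minimal sketch of what the two options look like at query time (all table and column names besides 'siteid' are placeholders, not from the question):

        -- Option 1: shared tables, every query scoped by the tenant's siteid
        SELECT * FROM forum_posts WHERE siteid = 42 AND topic_id = 7;

        -- Option 2: a master lookup maps the incoming domain to per-client connection details,
        -- after which the code connects to that client's own database
        SELECT db_host, db_name, db_user FROM clients WHERE domain = 'forum.example.com';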

    Read the article
