Search Results

Search found 32538 results on 1302 pages for 'restore database'.

Page 335/1302

  • Need help with a DB query on SQL Server 2005

    - by Avinash
    We're seeing strange behavior when running two versions of a query on SQL Server 2005.

    Version A:

        SELECT otherattributes.*
        FROM listcontacts
        JOIN otherattributes ON listcontacts.contactId = otherattributes.contactId
        WHERE listcontacts.listid = 1234
        ORDER BY name ASC

    Version B:

        DECLARE @Id AS INT;
        SET @Id = 1234;
        SELECT otherattributes.*
        FROM listcontacts
        JOIN otherattributes ON listcontacts.contactId = otherattributes.contactId
        WHERE listcontacts.listid = @Id
        ORDER BY name ASC

    Both queries return 1000 rows; version A takes on average 15s, version B on average 4s. Could anyone help us understand the difference in execution times of these two versions of SQL? If we invoke this query via named parameters using NHibernate, we see the following query in SQL Server Profiler:

        EXEC sp_executesql N'SELECT otherattributes.* FROM listcontacts JOIN otherattributes ON listcontacts.contactId = otherattributes.contactId WHERE listcontacts.listid = @id ORDER BY name ASC', N'@id INT', @id=1234;

    ...and this tends to perform as badly as version A. Thanks in advance.
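
    If parameter sniffing is suspected (the parameterized form reusing a plan built for an unrepresentative @id), one commonly suggested diagnostic - a sketch only, not from the original question - is to rerun the parameterized statement with OPTION (RECOMPILE) and compare timings:

        EXEC sp_executesql
            N'SELECT otherattributes.*
              FROM listcontacts
              JOIN otherattributes ON listcontacts.contactId = otherattributes.contactId
              WHERE listcontacts.listid = @id
              ORDER BY name ASC
              OPTION (RECOMPILE)',   -- compile a fresh plan for this @id value
            N'@id INT', @id = 1234;

    If the recompiled form runs as fast as version B, the slow timings likely come from a cached plan rather than from the query text itself.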

    Read the article

  • mysql: storing arbitrary data

    - by Hailwood
    Background: I was asking a question on Stack Overflow about creating tables on the fly, where this conversation ensued:

        "This smells like a terrible idea! In fact, it smells just like this one. What in the world do you want to use this for?" – deceze

        "@deceze: Very true. However, how else would you store the contents of these CSV files? They must be stored in MySQL for indexing. The only solid fact about them is that they all have a mobile column with a standard format. The CSVs can have an arbitrary number of columns and an arbitrary number of rows; they can (with no exaggeration) range from a single-row, 35-column CSV to an 80k-row, single-column CSV. I am open to other ideas." – Hailwood

        "There are many solutions for this, from attribute-value schemas to JSON storage and NoSQL storage. Open a new question about it. Whatever you do though, don't dynamically create tables!" – deceze

    Question: What would you say is the best way to store this data? Do you agree with deceze about not creating dynamic tables?
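
    For reference, a minimal sketch of the attribute-value approach deceze mentions (the table and column names are illustrative, not from the original question): one row per CSV cell, keyed by file, row and column, so any number of columns can be stored without altering the schema.

        -- one row per uploaded CSV file
        CREATE TABLE csv_upload (
            upload_id   INT AUTO_INCREMENT PRIMARY KEY,
            uploaded_at DATETIME NOT NULL
        );

        -- one row per cell: (file, row, column) -> value
        CREATE TABLE csv_cell (
            upload_id   INT NOT NULL,
            row_num     INT NOT NULL,
            column_name VARCHAR(64) NOT NULL,
            cell_value  TEXT,
            PRIMARY KEY (upload_id, row_num, column_name),
            KEY idx_column_value (column_name, cell_value(100)),   -- prefix index for lookups by value
            FOREIGN KEY (upload_id) REFERENCES csv_upload (upload_id)
        );

    The always-present mobile column could be promoted to a dedicated, indexed column on a per-row table if it is the main lookup key.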

    Read the article

  • Has anyone produced an in-memory Git repository?

    - by Andrew Matthews
    I would like to take advantage of the benefits of Git (and its workflows), but without the cost of disk access - essentially, to leverage the distributed revision control capabilities of Git to produce something like a hybrid of memcached and Git (preferably in .NET). Is there such a beast out there?

    Read the article

  • How to test code built to save/restore Lifecycle of an Activity?

    - by Pentium10
    How can I test the code in all of the following methods? I want to play through scenarios in which each of them is triggered, to see whether my code handles the save/restore process of an activity. What should I do in the emulator to get all of these methods tested?

        public class Activity extends ApplicationContext {
            protected void onCreate(Bundle savedInstanceState);
            protected void onStart();
            protected void onRestart();
            protected void onResume();
            protected void onPause();
            protected void onStop();
            protected void onDestroy();
        }

    Read the article

  • Rails - Scalable calculation model

    - by H O
    I currently have a calculation structure in my Rails app with the models metric, operand and operation_type. Presently, the metric model has many operands and can perform calculations based on the operation_type (e.g. sum, multiply, etc.), and each operand is defined as being right or left (so that if the operation is division, the numerator and denominator can be identified). Presently, an operand is always an attribute of some model, e.g. @customer.sales.selling_price.sum. To make this scalable, I need to allow an operand to be either an attribute of some kind or the result of a previous operation, i.e. an operand can be a metric. I have included a diagram of how my models currently look. Can anyone assist me with the most elegant way of allowing an operand to be an actual operand or another metric? Thanks!

    EDIT: It seems based on the only answer so far that polymorphic associations are perhaps the way to go, but the answer is so brief I have no idea how they could be used in this way - can anyone elaborate?

    EDIT 2: OK, I think I'm getting somewhere - essentially I presently have a metric, which has_many operands, and an operand has_many metrics. I need a polymorphic self join, where a metric can also have many metrics - do I need to call this something else, perhaps calculated_metrics, so that the metric model can use itself? That would leave me with a situation where a metric has_many operands, and a metric has_many calculated_metrics.

    Read the article

  • Growing MS Access File Size problem

    - by user55886
    I have a large MS Access application with a lot of computations in VBA code. When I run it, it eventually crashes due to excessive file size. A lot of intermediate tables and queries are created and subsequently deleted, but Access does not reclaim the space. I have diligently closed all intermediate recordsets and set all temporary objects to Nothing, but nothing helps. The only way I can get my code to run is to run part of it, stop and compact/repair the file, then restart the code. Isn't there a better way? Thanks

    Read the article

  • "select * into table" Will it work for inserting data into existing table

    - by Shantanu Gupta
    I am trying to insert data from one of my existing tables into another existing table. Is it possible to insert data into an existing table using a SELECT * INTO query? I think it can be done using a union, but in that case I need to copy all data from my existing table into a temporary table, drop that table, and finally apply the union to insert all records into the same table, e.g.:

        SELECT * INTO #tblExisting FROM tblExisting
        DROP TABLE tblExisting
        SELECT * INTO tblExisting FROM #tblExisting UNION tblActualData

    Here tblExisting is the table where I actually want to store all the data, and tblActualData is the table from which data is to be appended to tblExisting. Is this the right method? Is there some other alternative?
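
    For reference, a minimal sketch of the usual alternative (column names are illustrative): SELECT ... INTO creates a new table, while appending rows to a table that already exists is normally done with INSERT INTO ... SELECT, with no temporary table or drop needed.

        -- append rows from tblActualData to the existing table tblExisting
        INSERT INTO tblExisting (col1, col2, col3)
        SELECT col1, col2, col3
        FROM tblActualData;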

    Read the article

  • Decimal rounding strategies in enterprise applications

    - by Sapphire
    I am wondering about rounding decimals and storing them in the DB. The problem is this: let's say we have a customer and an invoice. The invoice has a total price of $100.495 (due to a discount percentage that is not a whole number), but it is shown as $100.50 (rounded, just for printing on the invoice). It is stored in the DB as $100.495, which means that when the customer makes a deposit of $100.50 there is an extra $0.005 on the account. If this is rounded it will appear as $0, but after a couple of invoices it keeps accumulating, which looks wrong (although it actually is not). What is best to do in this case: store the value of $100.50, or leave everything as-is?
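
    One common convention (a sketch only; the table and column names are illustrative, and the question deliberately leaves the policy choice open) is to store the full-precision amount and round only at the presentation boundary:

        CREATE TABLE invoice (
            invoice_id INT PRIMARY KEY,
            total      DECIMAL(19, 4) NOT NULL   -- full internal precision, e.g. 100.4950
        );

        -- round only when presenting the amount on the printed invoice
        SELECT invoice_id, ROUND(total, 2) AS printed_total
        FROM invoice;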

    Read the article

  • iPhone Core Data - accessing deep attributes with to-many relationships

    - by ncohen
    Hi everyone. Let's say I have an entity user, which has a one-to-many relationship with the entity menu, which has a one-to-many relationship with the entity meal, which has a many-to-one relationship with the entity recipe, which has a one-to-many relationship with the entity element. What I would like to do is select the elements that belong to a particular user (username = myUsername) and to particular menus (minDate < menu.date < maxDate). Does anyone have an idea how to get them? Thanks

    Read the article

  • Challenge: merging CSV files intelligently!

    - by Evenz495
    We are in the middle of changing web store platforms and need to import product data from different sources. We currently have several different CSV files from different IT systems/databases, because each system is missing some information. Fortunately the product IDs are the same, so it's possible to relate the data using the IDs. We need to merge this data into one big CSV file so we can import it into our new e-commerce site. My question: is there a general approach when you need to merge CSV files with related data into one CSV file? Are there any applications or tools that help you out?

    Read the article

  • Porting Oracle Date Manipulation

    - by Grasper
    I need to port the following from Oracle syntax to PostgreSQL. Both FLO_END_DT and FLO_START_DATE are of type DATE in Oracle, and TIMESTAMP WITHOUT TIME ZONE in PostgreSQL:

        SELECT TRUNC(TO_CHAR(ROUND((FL.FLO_END_DT - FL.FLO_START_DT) * 24), '9999D99'), 2)
        FROM FLOWS FL

    I am not familiar enough with Oracle to know what it is trying to accomplish. Any ideas?
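
    The Oracle expression appears to compute the elapsed time in hours (subtracting two Oracle DATEs yields days, multiplied by 24, then rounded). Assuming that reading is right, a sketch of a PostgreSQL equivalent for TIMESTAMP columns (where subtraction yields an interval) might look like:

        SELECT ROUND(EXTRACT(EPOCH FROM (fl.flo_end_dt - fl.flo_start_dt)) / 3600)
        FROM flows fl;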

    Read the article

  • How to build a SQL statement when any combination of user input to the table is possible?

    - by Greg McNulty
    Example: the user fills in everything but the product name. I need to search on what is supplied, so in this case everything but productName=. This example could be for any combination of input. Is there a way to do this? Thanks.

        $name = $_POST['n'];
        $cat = $_POST['c'];
        $price = $_POST['p'];
        if( !($name) ) { $name = some character to select all? }
        $sql = "SELECT * FROM products WHERE productCategory='$cat' and productName='$name' and productPrice='$price' ";

    EDIT: The solution does not have to protect against attacks; I am specifically looking at the dynamic part of it.
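
    One common pattern for optional criteria (a sketch only, assuming an empty string means "not supplied" and using placeholders rather than string interpolation) is to let each condition pass automatically when its input is empty:

        SELECT *
        FROM products
        WHERE (productName = :name    OR :name  = '')
          AND (productCategory = :cat OR :cat   = '')
          AND (productPrice = :price  OR :price = '');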

    Read the article

  • Scalable way of doing a self join with a many-to-many table

    - by johnathan
    I have a table structure like the following:

        user: id, name
        profile_stat: id, name
        profile_stat_value: id, name
        user_profile: user_id, profile_stat_id, profile_stat_value_id

    My question is: how do I evaluate a query where I want to find all users with profile_stat_id and profile_stat_value_id for many stats? I've tried doing an inner self join, but that quickly gets crazy when searching for many stats. I've also tried doing a count on the actual user_profile table, and that's much better, but still slow. Is there some magic I'm missing? I have about 10 million rows in the user_profile table and want the query to take no longer than a few seconds. Is that possible?
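
    A commonly suggested alternative to chained self joins (a sketch only; the stat/value id pairs are hypothetical, and row-value IN lists require a DBMS such as MySQL or PostgreSQL) is a single pass over user_profile with a relational-division style HAVING clause:

        SELECT user_id
        FROM user_profile
        WHERE (profile_stat_id, profile_stat_value_id) IN ((1, 10), (2, 20), (3, 30))
        GROUP BY user_id
        HAVING COUNT(DISTINCT profile_stat_id) = 3;   -- must match all 3 requested stats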

    Read the article

  • PostgreSQL - select only when specific values appear together in a column

    - by Horse SMith
    I'm using PostgreSQL. I have a table with three fields: person, recipe and ingredient.

        person = creator of the recipe
        recipe = the recipe
        ingredient = one of the ingredients in the recipe

    I want to create a query that returns every person who, whenever they have added carrot to a recipe, has also added salt to the same recipe. More than one person can have created a recipe, in which case the person who added the ingredient is credited for adding it. Sometimes an ingredient is used more than once, even by the same person. If this is the table:

        person1, rec1, carrot
        person1, rec1, salt
        person1, rec1, salt
        person1, rec2, salt
        person1, rec2, pepper
        person2, rec1, carrot
        person2, rec1, salt
        person2, rec2, carrot
        person2, rec2, pepper
        person3, rec1, sugar
        person3, rec1, carrot

    then I want this result:

        person1

    because this person is the only one who, whenever they added carrot, also added salt.
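
    A sketch of one way this "carrot implies salt" condition is typically expressed (the table name recipe_ingredients is hypothetical, since the question does not name the table): take people who used carrot, and exclude anyone with a recipe where they added carrot without also adding salt.

        SELECT DISTINCT c.person
        FROM recipe_ingredients c
        WHERE c.ingredient = 'carrot'
          AND NOT EXISTS (
              SELECT 1
              FROM recipe_ingredients c2
              WHERE c2.person = c.person
                AND c2.ingredient = 'carrot'
                AND NOT EXISTS (
                    SELECT 1
                    FROM recipe_ingredients s
                    WHERE s.person = c2.person
                      AND s.recipe = c2.recipe
                      AND s.ingredient = 'salt'
                )
          );

    With the sample data above this returns only person1, matching the expected result.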

    Read the article

  • If I have multiple candidate keys, which one is the primary key, and how do I justify the choice?

    - by zahrani
    Given R = {Account, Analyst, Assets, Broker, Client, Commission, Company, Dividend, Exchange, Investment, Office, Profile, Return, Risk_profile, Stock, Volume} and a set of functional dependencies F = {fd1, fd2, fd3, fd4, fd5, fd6, fd7, fd8, fd9, fd10, fd11} where:

        fd1:  Client -> Office
        fd2:  Stock -> Exchange, Dividend
        fd3:  Broker -> Profile
        fd4:  Company -> Stock
        fd5:  Client -> Risk_profile, Analyst
        fd6:  Analyst -> Broker
        fd7:  Stock, Broker -> Investment, Volume
        fd8:  Stock -> Company
        fd9:  Investment, Commission -> Return
        fd10: Stock, Broker -> Client
        fd11: Account -> Assets

    These are the candidate keys:

        (Account, Commission, Analyst, Company)
        (Account, Commission, Analyst, Stock)
        (Account, Commission, Broker, Company)
        (Account, Commission, Broker, Stock)
        (Account, Commission, Client, Company)
        (Account, Commission, Client, Stock)

    (Q) Select a primary key and justify your choice. I selected (Account, Commission, Broker, Stock) as the primary key. I chose it because it has the most direct dependencies compared to the other ones, i.e. more attributes are functionally dependent on this key. Please check whether my answer is correct. Thank you.

    Read the article

  • About a SELECT statement

    - by user329820
    Hi, I have read that after SELECT we use column names, but I have found a statement like this:

        SELECT 'A' FROM T WHERE A = NULL;

    Would you please help me? Is 'A' a column name here? Thanks. My DBMS is MySQL.
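
    For reference, a short sketch of the two points this statement usually raises (not from the original question): the quoted 'A' is a string literal rather than a column reference, and a comparison with = NULL is never true, so NULL tests use IS NULL.

        -- returns the literal string 'A' once per matching row;
        -- "A = NULL" evaluates to NULL, so no rows ever match
        SELECT 'A' FROM T WHERE A = NULL;

        -- the usual way to test the column A for NULL
        SELECT 'A' FROM T WHERE A IS NULL;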

    Read the article

  • How to make notifications to group users with a read/unread flag?

    - by user2335065
    I am making a notification system so that when a user in a group performs an action, it notifies all the other users in the group. I want the notification to be marked "read" or "unread" for each user of the group, so that I can easily retrieve any unread notifications for a user and display them. I am thinking about creating a notification table with the following fields:

        +----------------------+
        | notification         |
        +----------------------+
        | id                   |
        | userid               |
        | content              |
        | status (read/unread) |
        | time                 |
        +----------------------+

    My questions are: Is this the correct way of building the system? It means that when there are 1,000 users in a group, I have to insert 1,000 rows into the table. If not, what is a better way of doing this? If it is the way to do this, how can I write the PHP/MySQL code to loop over the inserts? Thanks!
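
    One way the per-user rows are often created without a PHP loop (a sketch only; the group_members table, its columns, and the example ids are hypothetical) is a single INSERT ... SELECT that fans the notification out to every member of the group:

        INSERT INTO notification (userid, content, status, time)
        SELECT gm.userid, 'Someone in your group performed an action', 'unread', NOW()
        FROM group_members gm
        WHERE gm.group_id = 42
          AND gm.userid <> 7;   -- skip the user who performed the action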

    Read the article

  • backing up user data

    - by shani
    In my app the user saves data in an archive using Core Data and SQLite.

    1. Is there a way to give him the option to back up his data and restore it in the future?
    2. Is the user's data backed up with the regular iPhone backup?

    Thanks, shani

    Read the article

  • android SQLite vs Flat Files

    - by mixm
    Hi. I'm creating a game right now and I'm a bit stuck on how to implement storage of levels. I need to be able to download level files from the internet over the air (OTA). I'm not so familiar with transferring files OTA, but I have some experience with databases (MySQL). What would be the better way of storing the game's level data?

    Read the article

  • How to connect 2 MySQL tables with 2 connection strings

    - by denonth
    Hi all. I need to connect 2 tables from 2 MySQL databases that have 2 different connection strings, each on a different server. I have this query:

        cmd = new MySqlCommand(String.Format("INSERT INTO {0} (a,b,c,d) SELECT (a,b,c,d) FROM {1}",
            ConfigSettings.ReadSetting("main_table"),
            ConfigSettings.ReadSetting("main_table")), con);

    Both tables have the same columns; that's why I have only one ConfigSettings.ReadSetting("main_table") for both of them, as they are the same. I have 2 connection strings, each pointing to its own server:

        con.ConnectionString = ConfigurationManager.ConnectionStrings["con1"].ConnectionString;
        con2.ConnectionString = ConfigurationManager.ConnectionStrings["con2"].ConnectionString;

    How can I make this cmd work with 2 different connection strings and the same table name? The table name will change, which is why it is saved in the config.

    Read the article
