Search Results

Search found 49554 results on 1983 pages for 'database users'.

  • Fastest way to do a weighted tag search in SQL Server

    - by Hasan Khan
    My table is as follows:

        ObjectID  bigint
        Tag       nvarchar(50)
        Weight    float
        Type      tinyint

    I want to search for all objects that have the tags 'big' or 'large', with the object ids ordered by the sum of the weights (so objects having both tags will be on top):

        select objectid, row_number() over (order by sum(weight) desc) as rowid
        from tags
        where tag in ('big', 'large') and type = 0
        group by objectid

    The reason for row_number() is that I want paging over the results. The query in its current form is very slow: it takes a minute to execute over 16 million tags. I have a non-clustered index on (objectid, tag, type). What should I do to make it faster? Any suggestions?
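
    A hedged aside: the index above leads with objectid, so the tag IN (...) filter cannot seek on it. One common fix (a sketch under assumptions about the schema, not a tested answer) is an index that leads with the filtered columns and covers the rest, paired with OFFSET/FETCH paging:

        -- The index name is made up; the table and columns are from the question.
        CREATE NONCLUSTERED INDEX IX_tags_tag_type
            ON dbo.tags (tag, type)
            INCLUDE (objectid, weight);

        SELECT objectid, SUM(weight) AS total_weight
        FROM dbo.tags
        WHERE tag IN ('big', 'large') AND type = 0
        GROUP BY objectid
        ORDER BY total_weight DESC
        OFFSET 0 ROWS FETCH NEXT 20 ROWS ONLY;  -- OFFSET/FETCH needs SQL Server 2012+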

  • DB Design - Linking to a parent without circular reference issues

    - by zSysop
    Hi all, I'm having trouble coming up with a solution for the following issue. Let's say I have a db that looks something like the following:

        Issue Table
        Id | Details | CreateDate | ClosedDate

        Issue Notes Table
        Id | ObjectId | Notes | NoteDate

        Issue Assignment Table
        Id | ObjectId | AssignedToId | AssignedDate

    I'd like to allow the linking of an issue to another issue. I thought about adding a column to the Issue table called ParentIssueId, which would allow me to link issues, but I foresee circular references occurring within the Issue table if I go through with this implementation. Is there a better way to go about doing this, and if so, how? Thanks
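
    One sketch of an alternative (names assumed; the CHECK blocks only direct self-links, so longer cycles such as A -> B -> A still need a trigger or an application-side check): keep the Issue table untouched and move the relationship into a link table.

        CREATE TABLE IssueLink (
            ParentIssueId int NOT NULL REFERENCES Issue(Id),
            ChildIssueId  int NOT NULL REFERENCES Issue(Id),
            PRIMARY KEY (ParentIssueId, ChildIssueId),
            CHECK (ParentIssueId <> ChildIssueId)  -- no issue may link to itself
        );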

  • Designing a table to store EXIF data

    - by rafale
    I'm looking to get the best performance out of querying a table containing EXIF data. The queries in question will only search the EXIF data for specified strings and return the row index on a match. With that said, would it be better to store the EXIF data in a table with separate columns for each of the tags, or would storing all of the tags in a single column as one long delimited string suit me just as well? There are around 115 EXIF tags I'll be storing, and each record would be around 1500 to 2000 chars in length if concatenated into a single string.
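
    A third layout worth weighing (a sketch with made-up names and MySQL syntax assumed, since the question names no engine or schema): one row per tag keeps the search index narrow and avoids both a 115-column table and delimited-string scans.

        CREATE TABLE exif_values (
            photo_id  INT          NOT NULL,
            tag_name  VARCHAR(64)  NOT NULL,   -- e.g. 'Model', 'FNumber'
            tag_value VARCHAR(255) NOT NULL,
            PRIMARY KEY (photo_id, tag_name),
            INDEX idx_tag_value (tag_name, tag_value)
        );

        -- "search for the specified string and return the row on a match":
        SELECT photo_id FROM exif_values
        WHERE tag_name = 'Model' AND tag_value = 'Canon EOS 5D';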

  • Strange data swapping error occurs when I attempt to update rows in my table from another table in my database

    - by Wesley
    So I have a table of data that is 10,000 rows long. Several of the columns in the table simply describe information about one of the columns; meaning, only one column has the content, and the rest of the columns describe the location of the content (it's for a book). Right now, only 6,000 of the 10,000 rows have their content column filled in; rows 6,000-10,000's content column simply says null. I have another table in the db that has the content for rows 6,000-10,000, with the correct corresponding primary key, which would (seemingly) make it easy to update the 10,000-row table. I have been trying an update query such as the following:

        UPDATE table(10,000)
        SET content_column = (SELECT content
                              FROM table(6,000-10,000)
                              WHERE table(10,000).id = table(6,000-10,000).id)

    which kind of works: it pulls in the data from the second table just fine, but it replaces the existing content column with null. So rows 1-6,000's content column becomes null, and rows 6,000-10,000's content column has the correct values... Pretty strange, I thought, anyway. Does anybody have any thoughts about where I am going wrong? If you could show me a better sql query, I would appreciate it! Thanks
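
    What's happening is worth spelling out: the scalar subquery returns NULL for every row that has no match in the second table, and the UPDATE writes that NULL back over the existing content. A sketch of the usual fixes (table names are assumed, since the question uses placeholders):

        -- Portable: keep the subquery, but only touch rows that have a match.
        UPDATE main_table
        SET content_column = (SELECT content FROM second_table
                              WHERE second_table.id = main_table.id)
        WHERE EXISTS (SELECT 1 FROM second_table
                      WHERE second_table.id = main_table.id);

        -- SQL Server style, if that happens to be the engine in use:
        UPDATE t
        SET t.content_column = s.content
        FROM main_table AS t
        JOIN second_table AS s ON s.id = t.id;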

  • SQL: combine two subqueries

    - by Claudiu
    I have two tables. Table A has an id column. Table B has an Aid column and a type column. Example data:

        A:
        id
        --
        1
        2

        B:
        Aid | type
        ----+-----
          1 |  1
          1 |  1
          1 |  3
          1 |  1
          1 |  4
          1 |  5
          1 |  4
          2 |  2
          2 |  4
          2 |  3

    I want to get all the ids from table A where there is a certain number of type 1 and type 3 actions. My query looks like this:

        SELECT id FROM A
        WHERE (SELECT COUNT(type) FROM B WHERE B.Aid = A.id AND B.type = 1) = 3
          AND (SELECT COUNT(type) FROM B WHERE B.Aid = A.id AND B.type = 3) = 1

    so on the data above, just the id 1 should be returned. Can I combine the two subqueries somehow?
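
    A sketch of the combined form (standard SQL, using the tables above): one join plus conditional aggregation counts both types in a single pass over B.

        SELECT A.id
        FROM A
        JOIN B ON B.Aid = A.id
        GROUP BY A.id
        HAVING SUM(CASE WHEN B.type = 1 THEN 1 ELSE 0 END) = 3
           AND SUM(CASE WHEN B.type = 3 THEN 1 ELSE 0 END) = 1;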

  • make db connection persistent throughout zend framework

    - by kamikaze_pilot
    I'm using Zend Framework. Currently, every time I need to use the db, I go ahead and connect to it:

        function connect() {
            $connParams = array(
                "host"     => $host,
                "port"     => $port,
                "username" => $username,
                "password" => $password,
                "dbname"   => $dbname
            );
            $db = new Zend_Db_Adapter_Pdo_Mysql($connParams);
            return $db;
        }

    so I just call the connect() function every time I need the db. My question is: suppose I want to reuse $db everywhere in my site, connect only once in the very initial stage of the site load, and then close the connection right before the site gets sent to the user. What would be the best practice to accomplish this? Which file in Zend should I save $db in, what method should I use to save it (a global variable?), and which file should I do the connection closing in?
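
    A sketch of the usual ZF1 pattern (the Bootstrap class and the _initDb method name follow Zend_Application conventions; the credentials are placeholders): build the adapter once in the bootstrap and share it, rather than reconnecting per call.

        <?php
        // application/Bootstrap.php
        class Bootstrap extends Zend_Application_Bootstrap_Bootstrap
        {
            protected function _initDb()
            {
                $db = new Zend_Db_Adapter_Pdo_Mysql(array(
                    'host'     => 'localhost',
                    'username' => 'user',
                    'password' => 'secret',
                    'dbname'   => 'mydb',
                ));
                Zend_Db_Table::setDefaultAdapter($db);  // models pick it up implicitly
                Zend_Registry::set('db', $db);          // elsewhere: Zend_Registry::get('db')
                return $db;
            }
        }

    Note that the PDO adapter connects lazily on the first real query, and PHP tears the connection down when the request ends, so explicit closing is rarely needed.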

  • sqlite3 date operations when joining two tables in a view?

    - by duncan
    In short: how do I add minutes to a datetime, where the minutes come from an integer column in another table, in one select statement, by joining them?

    I have a table P (int id, ..., int minutes) and a table S (int id, int p_id, datetime start). I want to generate a view that gives me PS (S.id, P.id, S.start + P.minutes), joining on S.p_id = P.id.

    The problem is, if I were generating the query from the application, I could do stuff like:

        select datetime('2010-04-21 14:00', '+20 minutes');
        2010-04-21 14:20:00

    creating the string '+20 minutes' in the application and then passing it to sqlite. However, I can't find a way to create this string in the select itself:

        select p.*, datetime(s.start_at, formatstring('+%s minutes', p.minutes))
        from p, s where s.p_id = p.id;

    because sqlite, as far as the documentation tells, does not provide any string format function, nor can I see any alternative way of expressing the date modifiers.
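
    For what it's worth, SQLite does have a string concatenation operator (||), which is enough to build the modifier inline; a sketch using the schema above (the column is start in the schema but start_at in the attempted query; the view and alias names are made up):

        CREATE VIEW ps AS
        SELECT s.id  AS s_id,
               p.id  AS p_id,
               datetime(s.start, '+' || p.minutes || ' minutes') AS end_time
        FROM s JOIN p ON s.p_id = p.id;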

  • Can I raise a system error in SQL Server in a stored procedure?

    - by Shantanu Gupta
    I am writing a stored procedure where I'm using a try/catch block. Now, I have a unique column in a table; when I try to insert a duplicate value, it throws an exception with error number 2627. I want this to be done like this:

        IF EXISTS (SELECT * FROM tblABC WHERE col1 = 'value')
            RAISERROR(2627)  -- raise the system error that the insert
                             -- would have thrown for the duplicate value

    And which method would be better: using the insert query directly, or checking for a duplicate value before insertion using a SELECT query?
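
    One caveat worth stating plainly: RAISERROR cannot raise system error numbers such as 2627; user-defined message ids must be 50000 or above, though an ad-hoc message string works too. A sketch of both options (names from the question):

        -- Option 1: pre-check, then raise a user-defined error.
        IF EXISTS (SELECT 1 FROM tblABC WHERE col1 = 'value')
            RAISERROR('Duplicate value for col1.', 16, 1);  -- severity 16, state 1

        -- Option 2: just attempt the insert and handle 2627 in the CATCH block,
        -- which also avoids the race between the check and the insert.
        BEGIN TRY
            INSERT INTO tblABC (col1) VALUES ('value');
        END TRY
        BEGIN CATCH
            IF ERROR_NUMBER() = 2627
                PRINT 'Duplicate.';  -- handle however is appropriate
        END CATCH

    Option 2 is generally preferred: the unique constraint has to do the work anyway, and the pre-check costs an extra read without removing the possibility of the error.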

  • Can't create a MySQL query that generates 4 rows for each row in the table it references.

    - by UkraineTrain
    I need to create a MySQL query that generates 4 rows for each row in the table it references. I need some of the information in those rows to repeat and some to be different. In the table, each row stands for one day. I need to break the day up into 6-hour increments, hence the four rows for each entry. I need to create one column which, for each day, will have the values '12AM', '6AM', '12PM', and '6PM', and another column with the corresponding numeric values calculated for those entries. Thanks a lot in advance; I will really appreciate any help on this.
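
    The standard trick (a sketch; the table and value column names are assumptions, since the question does not give a schema) is to CROSS JOIN the day table against a fixed 4-row derived table, one row per 6-hour slot:

        SELECT d.day_date,
               q.slot,
               CASE q.slot
                   WHEN '12AM' THEN d.val_00_06
                   WHEN '6AM'  THEN d.val_06_12
                   WHEN '12PM' THEN d.val_12_18
                   WHEN '6PM'  THEN d.val_18_24
               END AS slot_value
        FROM days AS d
        CROSS JOIN (SELECT '12AM' AS slot UNION ALL SELECT '6AM'
                    UNION ALL SELECT '12PM' UNION ALL SELECT '6PM') AS q;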

  • what's wrong with this code?

    - by user329820
    Hi, this is my code, which will not work correctly! What is wrong with its data types? Thanks.

        CREATE TABLE T1 (A INTEGER NOT NULL);
        CREATE TABLE T3 (A SMALLINT NOT NULL);
        INSERT T1 VALUES (32768.5);
        SELECT * FROM T1;
        INSERT T3 SELECT * FROM T1;
        SELECT * FROM T3;
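
    A hint by way of the type's limits (a fact about SMALLINT, not part of the question): SMALLINT holds -32,768 through 32,767, so once T1 contains a value of 32,768 or more, the INSERT ... SELECT into T3 must fail with an arithmetic overflow.

        SELECT CAST(32767 AS SMALLINT);  -- fine: the top of the SMALLINT range
        SELECT CAST(32768 AS SMALLINT);  -- error: arithmetic overflow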

  • MySQL query returning mysql_error

    - by Sebastian
    This returns mysql_error:

        <?php
        $name = $_POST['inputName2'];
        $email = $_POST['inputEmail2'];
        $instruments = $_POST['instruments'];
        $city = $_POST['inputCity'];
        $country = $_POST['inputCountry'];
        $distance = $_POST['distance'];
        // ^^ These all echo properly ^^

        // CONNECT TO DB
        $dbhost = "xxx";
        $dbname = "xxx";
        $dbuser = "xxx";
        $dbpass = "xxx";
        $con = mysqli_connect("$dbhost", "$dbuser", "$dbpass", "$dbname");
        if (mysqli_connect_errno()) {
            echo "Failed to connect to MySQL: " . mysqli_connect_error();
        }

        $query = "INSERT INTO depfinder (name, email, instrument1, instrument2,
                      instrument3, instrument4, instrument5, city, country, max_distance)
                  VALUES ($name, $email, $instruments[0], $instruments[1], $instruments[2],
                      $instruments[3], $instruments[4], $city, $country, $max_distance)";
        $result = mysqli_query($con, $query) or die(mysqli_error($con)); // script fails here

        if (!$result) {
            echo "There was a problem with the signup process. Please try again later.";
        } else {
            echo "Success";
        }
        }
        ?>

    N.B. I'm not sure whether it's relevant, but the user may not choose five instruments, so some $instrument[] array values may be empty.

    Bonus question: is my script secure enough, or is there more I could do?
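
    Two likely culprits stand out (observations, not a tested fix): the string values in VALUES are not quoted, and $max_distance is never set ($distance holds the posted value). A sketch of the prepared-statement version, which fixes the quoting and the injection risk in one move:

        <?php
        $stmt = mysqli_prepare($con,
            "INSERT INTO depfinder (name, email, instrument1, instrument2, instrument3,
                                    instrument4, instrument5, city, country, max_distance)
             VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)");

        // Pad the array so missing instrument choices bind as empty strings.
        $instruments = array_pad($_POST['instruments'], 5, '');

        mysqli_stmt_bind_param($stmt, 'sssssssssi',
            $name, $email,
            $instruments[0], $instruments[1], $instruments[2],
            $instruments[3], $instruments[4],
            $city, $country, $distance);

        if (!mysqli_stmt_execute($stmt)) {
            echo "There was a problem with the signup process. Please try again later.";
        } else {
            echo "Success";
        }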

  • How would you structure your entity model for storing arbitrary key/value data with different data types?

    - by Nathan Ridley
    I keep coming across scenarios where it will be useful to store a set of arbitrary data in a table using a per-row key/value model, rather than a rigid column/field model. The problem is, I want to store the values with their correct data type rather than converting everything to a string. This means I have to choose either a single table with multiple nullable columns, one for each data type, or a set of value tables, one for each data type. I'm also unsure whether I should use full third normal form and separate the keys into their own table, referencing them via a foreign key from the value table(s), or whether it would be better to keep things simple, store the string keys in the value table(s), and accept the duplication of strings.

    Old/bad: this solution makes adding additional values a pain in a fluid environment, because the table needs to be modified regularly.

        MyTable
        =================================
        ID     Key1     Key2     Key3
        int    int      string   date
        ---------------------------------
        1      Value1   Value2   Value3
        2      Value4   Value5   Value6

    Single Table Solution: this solution allows simplicity via a single table. The querying code still needs to check for nulls to determine which data type the field is storing. A check constraint is probably also required to ensure only one of the value fields contains non-null data.

        DataValues
        =============================================================
        ID    RecordID    Key     IntValue    StringValue    DateValue
        int   int         string  int         string         date
        -------------------------------------------------------------
        1     1           Key1    Value1      NULL           NULL
        2     1           Key2    NULL        Value2         NULL
        3     1           Key3    NULL        NULL           Value3
        4     2           Key1    Value4      NULL           NULL
        5     2           Key2    NULL        Value5         NULL
        6     2           Key3    NULL        NULL           Value6

    Multiple-Table Solution: this solution allows for more concise purposing of each table, though the code needs to know the data type in advance, as it must query a different table for each type. Indexing is probably simpler and more efficient because there are fewer columns that need indexing.

        IntegerValues
        ===============================
        ID    RecordID    Key     Value
        int   int         string  int
        -------------------------------
        1     1           Key1    Value1
        2     2           Key1    Value4

        StringValues
        ===============================
        ID    RecordID    Key     Value
        int   int         string  string
        -------------------------------
        1     1           Key2    Value2
        2     2           Key2    Value5

        DateValues
        ===============================
        ID    RecordID    Key     Value
        int   int         string  date
        -------------------------------
        1     1           Key3    Value3
        2     2           Key3    Value6

    How do you approach this problem? Which solution is better? Also, should the key column be separated into its own table and referenced via a foreign key, or should it be kept in the value table and bulk-updated if for some reason the key name changes?
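
    For the single-table variant, the "only one value column is non-null" rule can be written as a sketch like this (standard SQL CHECK; Key is quoted because it is a reserved word in some engines):

        CREATE TABLE DataValues (
            ID          INT PRIMARY KEY,
            RecordID    INT          NOT NULL,
            "Key"       VARCHAR(100) NOT NULL,
            IntValue    INT,
            StringValue VARCHAR(400),
            DateValue   DATE,
            CHECK (
                (CASE WHEN IntValue    IS NOT NULL THEN 1 ELSE 0 END) +
                (CASE WHEN StringValue IS NOT NULL THEN 1 ELSE 0 END) +
                (CASE WHEN DateValue   IS NOT NULL THEN 1 ELSE 0 END) = 1
            )
        );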

  • Question about the BENCHMARK function in MySQL (incredible results)

    - by xRobot
    I have 2 tables: author, with 3 million rows, and book, with 20 thousand rows. So I have benchmarked this query with a join:

        SELECT BENCHMARK(100000000, 'SELECT book.title, author.name
                                     FROM `book`, `author`
                                     WHERE book.id = author.book_id')

    And this is the result:

        Query took 0.7438 sec

    ONLY 0.7438 seconds for 100 million queries with a join??? Did I make some mistake, or is this the right result?
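
    The explanation (documented MySQL behavior, not a guess about the data): BENCHMARK(count, expr) evaluates a scalar expression count times, and a quoted query is just a string constant, so the SELECT above is never actually executed. A sketch of what it does and does not measure:

        SELECT BENCHMARK(100000000, 'any string');   -- fast: evaluates a constant
        SELECT BENCHMARK(1000000, MD5('test'));      -- meaningful: times an expression

        -- To time the join itself, run it directly (or use profiling / the slow log):
        SELECT book.title, author.name
        FROM `book` JOIN `author` ON book.id = author.book_id;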

  • how to transform a local image into a web-accessible image

    - by hguser
    Hi: Generally, if we want to display an image in a web page, we give the URI of the image resource, like http://host:port/image/xxx.jpg. Now, there are some images in my file system, and I save their absolute paths in the db, like this:

        id    name    address    image
        1     xxxx    xxxx       C:/images/xxx.jpg

    Now, when the entity is retrieved, its image should be displayed in the page. How do I make that happen? What I thought of was to copy the image under the web server dir and then build its URL so the page can render it. But I wonder whether this is a good idea. Is there any other way?
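
    One common alternative to copying files (a sketch assuming a PHP front end, which the question does not specify; the lookup helper is hypothetical): a small handler streams the file from its stored path, and pages embed <img src="image.php?id=1">.

        <?php
        // image.php?id=1 -- serves the file whose absolute path is stored in the db.
        $id   = (int) $_GET['id'];
        $path = get_image_path_from_db($id);  // hypothetical db lookup helper

        if ($path === null || !is_file($path)) {
            header('HTTP/1.0 404 Not Found');
            exit;
        }

        header('Content-Type: image/jpeg');            // or detect with mime_content_type()
        header('Content-Length: ' . filesize($path));
        readfile($path);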

  • How to map a table to a lookup table using JPA?

    - by Sameer Malhotra
    Hi, I have two tables:

        1) Application (int appid, int statusid, String appname, String appcity, with getter and setter methods)
        2) App_Status (int statusid, String statusDescription, with setter and getter methods)

    I want to map the Application table to App_Status so that I don't have to query the App_Status table separately to get the statusDescription. One thing I have to be careful about: no matter what happens (insert, update, or delete) to the Application table, the App_Status table should be unaffected. It is a read-only table, maintained internally by the DBA and used only as a lookup table. I am using JPA annotations, so please suggest how to handle this.
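
    A sketch of one way to wire this up with JPA annotations (entity and getter names are assumptions based on the two tables): a read-only @ManyToOne whose join column can never be written through the association.

        import javax.persistence.*;

        @Entity
        @Table(name = "Application")
        public class Application {
            @Id
            @Column(name = "appid")
            private int appId;

            private String appName;
            private String appCity;

            // insertable/updatable = false: JPA reads the association but never
            // writes the join column (or App_Status) through it.
            @ManyToOne(fetch = FetchType.EAGER)
            @JoinColumn(name = "statusid", insertable = false, updatable = false)
            private AppStatus status;

            public String getStatusDescription() {
                return status.getStatusDescription();
            }
        }

        @Entity
        @Table(name = "App_Status")
        public class AppStatus {
            @Id
            @Column(name = "statusid")
            private int statusId;

            @Column(name = "statusDescription")
            private String statusDescription;

            public String getStatusDescription() { return statusDescription; }
        }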

  • Facebook style messaging system schema design

    - by Jamie
    Hi all, I'm looking to implement a Facebook-style messaging system (threaded messages) on a site of mine. Do you think this schema markup looks okay?

    Doctrine schema.yml:

        UserMessage:
          tableName: user_message
          actAs: [Timestampable]
          columns:
            id:          { type: integer(10), primary: true, autoincrement: true }
            sender_id:   { type: integer(10), notnull: true }
            sender_read: { type: boolean, default: 1 }
            subject:     { type: string(255), notnull: true }
            message:     { type: string(1000), notnull: true }
            hash:        { type: string(32), notnull: true }
          relations:
            UserMessageRecipient as Recipient:
              type: many
              local: id
              foreign: message_id
            UserMessageReply as Reply:
              type: many
              local: id
              foreign: message_id

        UserMessageReply:
          tableName: user_message_reply
          columns:
            id: { type: integer(10), primary: true, autoincrement: true }
            user_message_id as message_id: { type: integer(10), notnull: true }
            message:   { type: string(1000), notnull: true }
            sender_id: { type: integer(10), notnull: true }
          relations:
            UserMessage as Message:
              local: message_id
              foreign: id
              type: one

        UserMessageRecipient:
          tableName: user_message_recipient
          actAs: [Timestampable]
          columns:
            id: { type: integer(10), primary: true, autoincrement: true }
            user_message_id as message_id: { type: integer(10), notnull: true }
            recipient_id:   { type: integer(10), notnull: true }
            recipient_read: { type: boolean, default: 0 }

    When a new reply is made, I'll make sure the recipient_read boolean for each recipient is set to false, and of course I'll make sure sender_read is set to false too.

    I'm using a hash for the URL: http://example.com/user/messages/aadeb18f8bdaea49882ec4d2a8a3c062 (as the ids will start from 1, I don't wish to have http://example.com/user/messages/1; yeah, I could start incrementing from a bigger number, but I'd prefer to start at 1).

    Is this a good way to go about it? Your thoughts and suggestions would be hugely appreciated. Thanks guys!

  • What is the fastest way to get a DataTable into SQL Server?

    - by John Gietzen
    I have a DataTable in memory that I need to dump straight into a SQL Server temp table. After the data has been inserted, I transform it a little bit, and then insert a subset of those records into a permanent table. The most time-consuming part of this operation is getting the data into the temp table.

    Now, I have to use temp tables, because more than one copy of this app runs at once, and I need a layer of isolation until the actual insert into the permanent table happens.

    What is the fastest way to do a bulk insert from a C# DataTable into a SQL temp table? I can't use any 3rd-party tools for this, since I am transforming the data in memory. My current method is to create a parameterized SqlCommand:

        INSERT INTO #table (col1, col2, ... col200)
        VALUES (@col1, @col2, ... @col200)

    and then, for each row, clear and set the parameters and execute.

    There has to be a more efficient way. I'm able to read and write the records on disk in a matter of seconds...
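
    The usual answer here is SqlBulkCopy from System.Data.SqlClient, which accepts a DataTable directly; a sketch (the column definitions and names are placeholders, and the temp table must be created on the same open connection so it stays in scope):

        using System.Data;
        using System.Data.SqlClient;

        void BulkLoad(string connectionString, DataTable table)
        {
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();

                // #staging lives only as long as this connection stays open.
                using (var create = new SqlCommand(
                    "CREATE TABLE #staging (col1 INT, col2 NVARCHAR(50) /* ... col200 */)", conn))
                {
                    create.ExecuteNonQuery();
                }

                using (var bulk = new SqlBulkCopy(conn))
                {
                    bulk.DestinationTableName = "#staging";
                    bulk.BatchSize = 5000;        // tune for the workload
                    bulk.WriteToServer(table);    // streams the DataTable in bulk
                }

                // ... transform and insert the subset into the permanent table
                // here, while the connection (and #staging) is still alive.
            }
        }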
