Search Results

Search found 30858 results on 1235 pages for 'database tuning'.


  • Solr: What does this mean?

    - by Camran
    At the end of the README.txt file located in the example directory under Solr, I find this line: NOTE: This Solr example server references SolrCell jars outside of the server directory with statements in the solrconfig.xml. If you make a copy of this example server and wish to use the ExtractingRequestHandler (SolrCell), you will need to copy the required jars into solr/lib or update the paths to the jars in your solrconfig.xml. What does this mean? Do I have to make some adjustment before uploading Solr to my server? Also, if you know: what is the difference between Solr-nightly and regular Solr? The tutorial refers to "solr-nightly.zip", but I can't find it in the download section.

    Read the article

  • Need help with a simple SQL UPDATE statement

    - by Tony
    There's a field of type varchar that actually stores floating-point strings, like 2.0, 12.0, 34.5, 67.50... What I need is an UPDATE statement that removes the trailing zeros from values like 2.0 and 12.0, changing them to their integer representation (2, 12, ...), and leaves 3.45 and 67.50 unchanged. How should I do this? I am using Oracle 10.
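
    A sketch of one possible approach, assuming the hypothetical names my_table and my_col and that every row holds a valid numeric string (otherwise TO_NUMBER raises ORA-01722): convert to a number and rewrite only the rows whose value is already an integer, so strings like 3.45 and 67.50 are left alone.

        UPDATE my_table
           SET my_col = TO_CHAR(TRUNC(TO_NUMBER(my_col)))      -- 2.0 -> 2, 12.0 -> 12
         WHERE TO_NUMBER(my_col) = TRUNC(TO_NUMBER(my_col));   -- skips 3.45, 67.50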

    Read the article

  • Error: Too Many Arguments Specified when Inserting Values from ASP.NET to SQL Server

    - by SidC
    Good afternoon all, I have a wizard control that contains 20 textboxes for part numbers and another 20 for quantities. I want the part numbers and quantities loaded into the following table:

        USE [Diel_inventory]
        GO
        /****** Object: Table [dbo].[QUOTEDETAILPARTS] Script Date: 05/09/2010 16:26:54 ******/
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE TABLE [dbo].[QUOTEDETAILPARTS](
            [QuoteDetailPartID] [int] IDENTITY(1,1) NOT NULL,
            [QuoteDetailID] [int] NOT NULL,
            [PartNumber] [float] NULL,
            [Quantity] [int] NULL,
            CONSTRAINT [pkQuoteDetailPartID] PRIMARY KEY CLUSTERED ([QuoteDetailPartID] ASC)
                WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                      ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]
        GO
        ALTER TABLE [dbo].[QUOTEDETAILPARTS] WITH CHECK
            ADD CONSTRAINT [fkQuoteDetailID] FOREIGN KEY([QuoteDetailID])
            REFERENCES [dbo].[QUOTEDETAIL] ([ID])
            ON UPDATE CASCADE
            ON DELETE CASCADE
        GO

    Here's the snippet from my sproc for this insert:

        SET @ID = SCOPE_IDENTITY()
        INSERT INTO dbo.QuoteDetailParts (QuoteDetailPartID, QuoteDetailID, PartNumber, Quantity)
        VALUES (@ID, @QuoteDetailPartID, @PartNumber, @Quantity)

    When I run the ASPX page, I receive an error that there are too many arguments specified for my stored procedure. I understand why I'm getting the error, given the above table layout. However, I need help in structuring my insert syntax to look for values in all 20 PartNumber and Quantity field pairs. Thanks, Sid
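
    One hedged way around the parameter-count mismatch is to keep the stored procedure to a single part/quantity pair and call it once per non-empty textbox pair from the code-behind. The sketch below uses a hypothetical procedure name (AddQuoteDetailPart) and leaves the IDENTITY column out of the insert instead of supplying @ID for it:

        CREATE PROCEDURE dbo.AddQuoteDetailPart
            @QuoteDetailID int,
            @PartNumber    float,
            @Quantity      int
        AS
        BEGIN
            SET NOCOUNT ON;
            -- QuoteDetailPartID is IDENTITY(1,1), so it is not listed here
            INSERT INTO dbo.QUOTEDETAILPARTS (QuoteDetailID, PartNumber, Quantity)
            VALUES (@QuoteDetailID, @PartNumber, @Quantity);
        END

    The ASPX page would then loop over the 20 pairs and execute the procedure once for each pair that actually has a part number filled in.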

    Read the article

  • Better alternative to autonumber primary keys

    - by Comrad_Durandal
    I am looking for a better primary key than the autonumber data type, mainly because it's limited to a long integer, when I really just need the field to hold a number or text string that will never, ever repeat, no matter HOW many records are added to or deleted from the table. The problem is I am not sure how to implement something like turning the current date and time into a hexadecimal string and using that as a unique field I can use as a primary key. Am I just being too paranoid about running out of space?
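
    The post does not say which engine is in use; if it is SQL Server (or anything else with a native GUID type), one common alternative is a GUID key, which never repeats in practice no matter how many rows come and go. A minimal sketch with hypothetical table and column names:

        CREATE TABLE dbo.Widget (
            WidgetID uniqueidentifier NOT NULL
                CONSTRAINT DF_Widget_ID DEFAULT NEWID()   -- generated on insert
                CONSTRAINT PK_Widget PRIMARY KEY,
            Name     nvarchar(100) NOT NULL
        );

    In Access the equivalent is an AutoNumber field with its Field Size set to Replication ID. Note also that a 32-bit autonumber already allows roughly 2.1 billion values, so running out is rarely the real constraint.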

    Read the article

  • diffing two databases

    - by flybywire
    Is there a tool to find the difference between two databases? Both the schema and the actual data are pretty much the same, but not 100%. Do you know a tool that can succinctly describe the changes?
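
    For the schema part, a quick hand-rolled check is possible even without a dedicated tool. The sketch below assumes SQL Server with both databases on the same instance (db1 and db2 are placeholders) and compares column definitions in both directions:

        SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE FROM db1.INFORMATION_SCHEMA.COLUMNS
        EXCEPT
        SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE FROM db2.INFORMATION_SCHEMA.COLUMNS;

        SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE FROM db2.INFORMATION_SCHEMA.COLUMNS
        EXCEPT
        SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE FROM db1.INFORMATION_SCHEMA.COLUMNS;

    Diffing the data itself is better left to a dedicated comparison tool.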

    Read the article

  • How to use external triggers on Oracle 11g

    - by RBA
    Hi, I want to fire a trigger whenever an INSERT statement runs. The trigger will call a PL/SQL file that can change at any time, so the question is: if we design the trigger, how can we make sure this dynamic behaviour works? Inside a stored procedure it is not working. I think it should work with either 1) external procedures or 2) an EXECUTE statement; please correct me if I am wrong. I was working on external procedures, but I cannot find a way to call the external procedure from the trigger:

        CREATE OR REPLACE FUNCTION Plstojavafac_func (N NUMBER) RETURN NUMBER AS
        LANGUAGE JAVA
        NAME 'Factorial.J_calcFactorial(int) return int';
        /

        CREATE OR REPLACE TRIGGER student_after_insert
        AFTER INSERT
        ON student
        FOR EACH ROW

    How do I call the function from here? And are my interpretations right? Please suggest. Thanks.
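
    Once the call spec (Plstojavafac_func) compiles, the trigger body can call it like any other PL/SQL function. A sketch, where :NEW.marks stands in for whichever numeric column of student should be passed (the column name is hypothetical):

        CREATE OR REPLACE TRIGGER student_after_insert
        AFTER INSERT ON student
        FOR EACH ROW
        DECLARE
            v_fact NUMBER;
        BEGIN
            -- call the Java-backed function with a value from the inserted row
            v_fact := Plstojavafac_func(:NEW.marks);
            DBMS_OUTPUT.PUT_LINE('Factorial: ' || v_fact);
        END;
        /

    Note that the Java source itself must already be loaded into the database (for example with loadjava or CREATE JAVA SOURCE); the call spec alone does not pick up changes to an external .java file automatically.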

    Read the article

  • Inheritance in tables - structure problem

    - by Naor
    I have 3 types of users in my system, and each type has different information. I created the following tables:

        BaseUser(base_user_id, username, password, additional common data)
            base_user_id is PK and Identity
        UserType1(user_id, data related to type1 only)
            user_id is PK and FK to base_user_id
        UserType2(user_id, data related to type2 only)
            user_id is PK and FK to base_user_id
        UserType3(user_id, data related to type3 only)
            user_id is PK and FK to base_user_id

    Now I have a relation from each type of user to a warehouses table. Users of type1 and type2 should have only warehouse_id, and users of type3 should have warehouse_id and customer_id. I thought about this structure:

        WarehouseOfUser(base_user_id, warehouse_id)
            base_user_id is FK to base_user_id in BaseUser
        WarehouseOfType3User(base_user_id, warehouse_id, customer_id)
            base_user_id is FK to base_user_id in BaseUser

    The problem is that such a structure allows two things I want to prevent: 1. adding to WarehouseOfType3User data for a user of type2 or type1; 2. adding to WarehouseOfUser data for a user of type3. What is the best structure for such a case?
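
    One common way to enforce this, sketched below for a generic SQL dialect: put a user_type discriminator on BaseUser, make (base_user_id, user_type) unique, and have each warehouse table reference that pair with a CHECK on the allowed types. The column types and constraint names here are assumptions.

        ALTER TABLE BaseUser ADD user_type tinyint NOT NULL;
        ALTER TABLE BaseUser ADD CONSTRAINT UQ_BaseUser_TypedId UNIQUE (base_user_id, user_type);

        CREATE TABLE WarehouseOfUser (
            base_user_id int     NOT NULL,
            user_type    tinyint NOT NULL CHECK (user_type IN (1, 2)),   -- types 1 and 2 only
            warehouse_id int     NOT NULL,
            FOREIGN KEY (base_user_id, user_type)
                REFERENCES BaseUser (base_user_id, user_type)
        );

        CREATE TABLE WarehouseOfType3User (
            base_user_id int     NOT NULL,
            user_type    tinyint NOT NULL CHECK (user_type = 3),         -- type 3 only
            warehouse_id int     NOT NULL,
            customer_id  int     NOT NULL,
            FOREIGN KEY (base_user_id, user_type)
                REFERENCES BaseUser (base_user_id, user_type)
        );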

    Read the article

  • Walking through an SQLite Table

    - by galford13x
    I would like to implement or use functionality that allows stepping through a table in SQLite. If I have a table Products that has 100k rows, I would like to retrieve perhaps 10k rows at a time, something similar to how a webpage would list data and have a < Previous .. Next > link to walk through the data. Are there SELECT statements that can make this simple? I have seen and tried using ROWID in conjunction with LIMIT, which seems OK if not ordering the data:

        -- This seems to work if not ordering.
        SELECT * FROM Products WHERE ROWID BETWEEN x AND y;
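
    If the result needs an ORDER BY, LIMIT/OFFSET gives the same < Previous / Next > behaviour, and for large offsets keyset ("seek") paging on the ordering column is usually faster. A sketch, assuming a ProductID column (hypothetical name):

        -- page 3 of 10,000-row pages, ordered
        SELECT * FROM Products ORDER BY ProductID LIMIT 10000 OFFSET 20000;

        -- keyset paging: remember the last ProductID shown on the previous page
        SELECT * FROM Products WHERE ProductID > :last_seen_id ORDER BY ProductID LIMIT 10000;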

    Read the article

  • How to work with a CTE? There is an error related to the anchor.

    - by Shantanu Gupta
    I am creating a hierarchical representation of a column, but an error occurs. Details:

        Msg 240, Level 16, State 1, Line 1
        Types don't match between the anchor and the recursive part in column "DISPLAY" of recursive query "CTE".

    I know there is some typecasting error, but I don't know how to remove it. Please don't just sort out my error; I need an explanation of why and when this error occurs. I am trying to sort the table on the basis of a sort column that I am introducing. I want to add '-' at every level and sort accordingly. Please help.

        WITH CTE (PK_CATEGORY_ID, [DESCRIPTION], FK_CATEGORY_ID, DISPLAY, SORT, DEPTH)
        AS
        (
            SELECT PK_CATEGORY_ID, [DESCRIPTION], FK_CATEGORY_ID,
                   '-' AS DISPLAY, '--' AS SORT, 0 AS DEPTH
            FROM dbo.L_CATEGORY_TYPE
            WHERE FK_CATEGORY_ID IS NULL
            UNION ALL
            SELECT T.PK_CATEGORY_ID, T.[DESCRIPTION], T.FK_CATEGORY_ID,
                   CAST(DISPLAY + T.[DESCRIPTION] AS VARCHAR(1000)), '--' AS SORT, C.DEPTH + 1
            FROM dbo.L_CATEGORY_TYPE T JOIN CTE C ON C.PK_CATEGORY_ID = T.FK_CATEGORY_ID
            --SELECT T.PK_CATEGORY_ID, C.SORT + T.[DESCRIPTION], T.FK_CATEGORY_ID,
            --       CAST('--' + C.SORT AS VARCHAR(1000)) AS SORT, CAST(DEPTH + 1 AS INT) AS DEPTH
            --FROM dbo.L_CATEGORY_TYPE T JOIN CTE C ON C.FK_CATEGORY_ID = T.PK_CATEGORY_ID
        )
        SELECT PK_CATEGORY_ID, [DESCRIPTION], FK_CATEGORY_ID, DISPLAY, SORT, DEPTH
        FROM CTE
        ORDER BY SORT
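
    The usual fix for Msg 240 is to make the anchor columns the same type and length as the recursive ones, i.e. cast the string literals in the anchor member up to VARCHAR(1000) as well. A sketch of just the anchor member:

        SELECT PK_CATEGORY_ID, [DESCRIPTION], FK_CATEGORY_ID,
               CAST('-'  AS VARCHAR(1000)) AS DISPLAY,
               CAST('--' AS VARCHAR(1000)) AS SORT,
               0 AS DEPTH
        FROM dbo.L_CATEGORY_TYPE
        WHERE FK_CATEGORY_ID IS NULL

    The error appears because a string literal such as '-' is typed as VARCHAR(1) (and '--' as VARCHAR(2)), while the recursive member produces VARCHAR(1000); the UNION ALL in a recursive CTE requires the anchor and recursive members to agree exactly on column types and lengths.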

    Read the article

  • sql combine two subqueries

    - by Claudiu
    I have two tables. Table A has an id column. Table B has an Aid column and a type column. Example data:

        A:  id
            --
            1
            2

        B:  Aid | type
            ----+-----
              1 | 1
              1 | 1
              1 | 3
              1 | 1
              1 | 4
              1 | 5
              1 | 4
              2 | 2
              2 | 4
              2 | 3

    I want to get all the IDs from table A that have a certain number of type 1 and type 3 rows. My query looks like this:

        SELECT id FROM A
        WHERE (SELECT COUNT(type) FROM B WHERE B.Aid = A.id AND B.type = 1) = 3
          AND (SELECT COUNT(type) FROM B WHERE B.Aid = A.id AND B.type = 3) = 1

    So on the data above, just id 1 should be returned. Can I combine the two subqueries somehow?
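
    One way to fold the two correlated subqueries into a single pass over B is a GROUP BY with conditional counts in the HAVING clause; a sketch:

        SELECT A.id
        FROM A
        JOIN B ON B.Aid = A.id
        WHERE B.type IN (1, 3)
        GROUP BY A.id
        HAVING SUM(CASE WHEN B.type = 1 THEN 1 ELSE 0 END) = 3   -- exactly three type-1 rows
           AND SUM(CASE WHEN B.type = 3 THEN 1 ELSE 0 END) = 1;  -- exactly one type-3 row

    On the example data this returns only id 1.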

    Read the article

  • Fastest way to do a weighted tag search in SQL Server

    - by Hasan Khan
    My table is as follows:

        ObjectID  bigint
        Tag       nvarchar(50)
        Weight    float
        Type      tinyint

    I want to search for all objects that have the tags 'big' or 'large', and I want the ObjectIDs in order of the sum of weights (so objects having both tags will be on top):

        SELECT objectid,
               ROW_NUMBER() OVER (ORDER BY SUM(weight) DESC) AS rowid
        FROM tags
        WHERE tag IN ('big', 'large') AND type = 0
        GROUP BY objectid

    The reason for ROW_NUMBER() is that I want paging over the results. The query in its current form is very slow; it takes a minute to execute over 16 million tags. What should I do to make it faster? I have a non-clustered index on (objectid, tag, type). Any suggestions?
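
    Since the query filters on type and tag but the existing index leads with objectid, that index cannot be used as a seek for this predicate. An index that leads with the filtered columns and covers the aggregated ones is one thing worth trying (SQL Server syntax assumed from the column types; the index name is illustrative):

        CREATE NONCLUSTERED INDEX IX_tags_type_tag
            ON dbo.tags (type, tag)
            INCLUDE (objectid, weight);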

    Read the article

  • How to store MySQL query results in another Table?

    - by Taz
    How do I store the results of the following query in another table, assuming an appropriate table has already been created?

        SELECT labels.label, shortabstracts.ShortAbstract, images.LinkToImage, types.Type
        FROM ner.images, ner.labels, ner.shortabstracts, ner.types
        WHERE labels.Resource = images.Resource
          AND labels.Resource = shortabstracts.Resource
          AND labels.Resource = types.Resource;
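
    Assuming the target table (called ner.results here purely as a placeholder) already has matching columns, MySQL's INSERT ... SELECT does this directly:

        INSERT INTO ner.results (label, ShortAbstract, LinkToImage, Type)
        SELECT labels.label, shortabstracts.ShortAbstract, images.LinkToImage, types.Type
        FROM ner.images, ner.labels, ner.shortabstracts, ner.types
        WHERE labels.Resource = images.Resource
          AND labels.Resource = shortabstracts.Resource
          AND labels.Resource = types.Resource;

    (CREATE TABLE ner.results AS SELECT ... would create and fill the table in one step if it does not exist yet.)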

    Read the article

  • Designing a table to store EXIF data

    - by rafale
    I'm looking to get the best performance out of querying a table containing EXIF data. The queries in question will only search the EXIF data for specified strings and return the row index on a match. With that said, would it be better to store the EXIF data in a table with separate columns for each of the tags, or would storing all of the tags in a single column as one long delimited string suit me just as well? There are around 115 EXIF tags I'll be storing, and each record would be around 1,500 to 2,000 characters long if concatenated into a single string.
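
    A third option worth weighing against the 115-column table and the single delimited string is a narrow tag/value table, which keeps every tag individually searchable and indexable. A rough sketch with hypothetical names and types:

        CREATE TABLE ExifTag (
            PhotoID  bigint       NOT NULL,   -- row index of the photo record
            TagName  varchar(64)  NOT NULL,   -- e.g. 'FocalLength', 'ISOSpeedRatings'
            TagValue varchar(255) NULL,
            PRIMARY KEY (PhotoID, TagName)
        );
        CREATE INDEX IX_ExifTag_Name_Value ON ExifTag (TagName, TagValue);

    Searching a single delimited string, by contrast, forces a LIKE '%...%' scan of the whole column, which cannot use an ordinary index.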

    Read the article

  • minimal cover for functional dependencies

    - by user2975836
    I have the following problem:

        AB -> CD
        H  -> B
        G  -> DA
        CD -> EF
        A  -> HJ
        J  -> G

    I understand the first step (break down the right-hand side) and get the following result:

        AB -> C
        AB -> D
        H  -> B
        G  -> D
        G  -> A
        CD -> E
        CD -> F
        A  -> H
        A  -> J
        J  -> G

    I understand that A -> H and H -> B, therefore I can remove the B from AB -> C and AB -> D, to get:

        A  -> C
        A  -> D
        H  -> B
        G  -> D
        G  -> A
        CD -> E
        CD -> F
        A  -> H
        A  -> J
        J  -> G

    The step that follows (reduce the left-hand side) is what I can't compute. Any help will be greatly appreciated.
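
    A worked sketch of that left-reduction step, using the standard attribute-closure test: for every FD whose left side has more than one attribute, check whether a proper subset of the left side already determines the right side.

        The only composite left side remaining is CD (in CD -> E and CD -> F).
        C+ = {C}   (no FD in the set has C alone on its left side)
        D+ = {D}   (likewise for D)
        Neither closure contains E or F, so CD cannot be shrunk and the left sides are already minimal.

    The final step of the minimal-cover algorithm is then removing redundant FDs one at a time; for example, G -> D is implied by G -> A and A -> D, so it can be dropped.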

    Read the article

  • DB Design - Linking to a parent without circular reference issues

    - by zSysop
    Hi all, I'm having trouble coming up with a solution for the following issue. Let's say I have a db that looks something like the following:

        Issue table
        Id | Details | CreateDate | ClosedDate

        Issue Notes table
        Id | ObjectId | Notes | NoteDate

        Issue Assignment table
        Id | ObjectId | AssignedToId | AssignedDate

    I'd like to allow linking an issue to another issue. I thought about adding a column to the Issue table called ParentIssueId, which would let me link issues, but I foresee circular references occurring within the Issue table if I go through with this implementation. Is there a better way to go about doing this, and if so, how? Thanks
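
    One alternative to a ParentIssueId column is a separate link table, sketched below (SQL Server flavoured; the names are illustrative). The CHECK stops an issue from linking to itself; longer cycles would still need a check in application code or a trigger.

        CREATE TABLE IssueLink (
            ParentIssueId int      NOT NULL REFERENCES Issue (Id),
            ChildIssueId  int      NOT NULL REFERENCES Issue (Id),
            LinkedDate    datetime NOT NULL DEFAULT GETDATE(),
            PRIMARY KEY (ParentIssueId, ChildIssueId),
            CHECK (ParentIssueId <> ChildIssueId)   -- no self-links
        );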

    Read the article

  • Strange data swapping error occurs when I attempt to update rows in my table from another table in my database

    - by Wesley
    So I have a table of data that is 10,000 rows long. Several of the columns in the table simply describe information about one of the columns; that is, only one column has the content, and the rest of the columns describe the location of the content (it's for a book). Right now, only 6,000 of the 10,000 rows have their content column filled in; in rows 6,000-10,000 the content column simply says null. I have another table in the db that has the content for rows 6,000-10,000, with the correct corresponding primary key, which would (seemingly) make it easy to update the 10,000-row table. I have been trying an update query such as the following:

        UPDATE table(10,000)
        SET content_column = (SELECT content
                              FROM table(6,000-10,000)
                              WHERE table(10,000).id = table(6,000-10,000).id)

    Which kind of works; the only problem is that it pulls in the data from the second table just fine, but it replaces the existing content column with null. So rows 1-6,000's content column becomes null, and rows 6,000-10,000's content column has the correct values... pretty strange, I thought, anyway. Does anybody have any thoughts about where I am going wrong? If you could show me a better SQL query, I would appreciate it! Thanks
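
    The rows that have no match in the second table are still being updated, and for those rows the correlated subquery returns NULL, which is what wipes rows 1-6,000. Restricting the UPDATE to rows that actually have a match avoids that; a sketch with placeholder table names (big_table is the 10,000-row table, content_table holds the content for rows 6,000-10,000):

        UPDATE big_table
           SET content_column = (SELECT c.content
                                 FROM content_table c
                                 WHERE c.id = big_table.id)
         WHERE EXISTS (SELECT 1
                       FROM content_table c
                       WHERE c.id = big_table.id);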

    Read the article

  • 2 SELECTs or 1 JOIN query?

    - by xRobot
    I have 2 tables:

        book ( id, title, age )              ---- 100 million rows
        author ( id, book_id, name, born )   ---- 10 million rows

    Now, supposing I have a generic id of a book, I need to print this page:

        Title: mybook
        Authors: Tom, Graham, Luis, Clarke, George

    So... what is the best way to do this?

    1) A simple join like this:

        SELECT book.title, author.name
        FROM book, author
        WHERE (author.book_id = book.id) AND (book.id = 342)

    2) To avoid the join, I could run 2 simple queries:

        SELECT title FROM book WHERE id = 342
        SELECT name FROM author WHERE book_id = 342

    What is the most efficient way?
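
    With only a handful of authors per book, either form reduces to cheap indexed lookups as long as author.book_id is indexed (book.id is presumably the primary key already); the join then saves a round trip. A sketch of the index and the same join in explicit JOIN syntax:

        CREATE INDEX idx_author_book_id ON author (book_id);

        SELECT b.title, a.name
        FROM book b
        JOIN author a ON a.book_id = b.id
        WHERE b.id = 342;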

    Read the article

  • Import CSV to class structure as the user defines

    - by Assimilater
    I have a contact manager program and I would like to offer the feature to import CSV files. The problem is that different data sources order the fields in different ways. I thought of programming an interface for the user to tell it the field order and how to handle exceptions. Here is an example line in one of many possible field orders:

        "ID#","Name","Rank","Address1","Address2","City","State","Country","Zip","Phone#","Email","Join Date","Sponsor ID","Sponsor Name"
        "Z1234","Call, Anson","STU","1234 E. 6578 S.","","Somecity","TX","United States","012345","000-000-0000","[email protected]","5/24/2010","z12343","Quantum Independence"

    Notice that in one data field, "Name", there is a comma to separate last name and first name, and in another there is not. My plan is to have a line for each field (i.e. ID, Name, City, etc.) and an "import to" statement with a list box of options like: Don't Import, BusinessJoin Date, First Name, Zip — and the program recognizes those as properties of an object. I'd also like the user to be able to record preset field orders so they can re-use them for CSV files from the same download source. Then I also need it to check whether a record already exists (is there a record for Anson Call already?) and allow the user to tell it what to do if there is one (e.g. the mailing address may have changed, so if that field is filled, overwrite it; or this mailing address is invalid, so leave the current data untouched for this person and overwrite the rest). While I'm capable of coding this... I'm not very excited about it, and I'm wondering if there's a tool or set of tools out there that already performs most of this functionality... I hope this makes sense...

    Read the article

  • Can I raise a system error in SQL Server in a stored procedure?

    - by Shantanu Gupta
    I am writing a stored procedure where I am using a TRY...CATCH block. I have a unique column in a table; when I try to insert a duplicate value, it throws an exception with error number 2627. I want to do something like this:

        IF EXISTS (SELECT * FROM tblABC WHERE col1 = 'value')
            raiseError(2627)  -- raise the system error that would have been thrown
                              -- if I had used an INSERT query to insert the duplicate value

    And which method is better: using the INSERT query directly, or checking for a duplicate value before insertion with a SELECT query?
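
    A quick note plus a sketch: an ad-hoc RAISERROR surfaces as error number 50000 rather than 2627, so the procedure cannot fake the genuine system error. The usual choices are raising a custom error before the insert, or simply attempting the INSERT and handling error 2627 in the CATCH block (which also avoids a race between the check and the insert). The first option looks roughly like this:

        IF EXISTS (SELECT 1 FROM tblABC WHERE col1 = 'value')
        BEGIN
            RAISERROR('Duplicate value for col1.', 16, 1);  -- reported to the caller as error 50000
            RETURN;
        END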

    Read the article

  • Make a DB connection persistent throughout Zend Framework

    - by kamikaze_pilot
    I'm using Zend Framework. Currently, every time I need to use the DB, I go ahead and connect to it:

        function connect() {
            $connParams = array("host"     => $host,
                                "port"     => $port,
                                "username" => $username,
                                "password" => $password,
                                "dbname"   => $dbname);
            $db = new Zend_Db_Adapter_Pdo_Mysql($connParams);
            return $db;
        }

    So I just call the connect() function every time I need to use the DB. My question is: suppose I want to reuse $db everywhere in my site, connect only once at the very initial stage of the site load, and then close the connection right before the site gets sent to the user. What would be the best practice to accomplish this? Which file in Zend should I save $db in, what method should I use to save it (a global variable?), and which file should I do the connection closing in?

    Read the article
