Search Results

Search found 1805 results on 73 pages for 'varchar'.

  • Deleting Duplicates in MySQL

    - by elmaso
    The query was this: CREATE TABLE `query` ( `id` int(11) NOT NULL auto_increment, `searchquery` varchar(255) NOT NULL default '', `datetime` int(11) NOT NULL default '0', PRIMARY KEY (`id`) ) ENGINE=MyISAM First I want to drop the id column with ALTER TABLE `querynew` DROP `id`, and then delete the duplicate entries. I tried it with INSERT INTO `querynew` SELECT DISTINCT * FROM `query`, but with no success :( and also with ALTER TABLE query ADD UNIQUE ( searchquery ). Is it possible to save each query only once?
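
    A minimal sketch of one common approach, assuming the goal is to keep a single row per distinct searchquery: give the new table a UNIQUE key up front and load it with INSERT IGNORE, which silently skips rows that would violate the key.

        CREATE TABLE `querynew` (
          `searchquery` varchar(255) NOT NULL default '',
          `datetime` int(11) NOT NULL default '0',
          UNIQUE KEY (`searchquery`)
        ) ENGINE=MyISAM;

        -- rows with an already-seen searchquery are dropped silently
        INSERT IGNORE INTO `querynew` (`searchquery`, `datetime`)
        SELECT `searchquery`, `datetime` FROM `query`;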

  • MySQL: Which is faster — INSTR or LIKE?

    - by Grekker
    If your goal is to test whether a string exists in a MySQL column (of type 'varchar', 'text', 'blob', etc), which of the following is faster / more efficient / better to use, and why? Or is there some other method that tops either of these? INSTR( columnname, 'mystring' ) > 0 vs columnname LIKE '%mystring%'
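
    Neither form can use an ordinary B-tree index, since a substring match has no usable prefix, so both degrade to a full scan. A hedged sketch of the usual alternative, assuming the table can get a FULLTEXT index (supported by MyISAM, and by InnoDB from MySQL 5.6); mytable and the index name are hypothetical:

        ALTER TABLE mytable ADD FULLTEXT INDEX ft_columnname (columnname);

        SELECT * FROM mytable
        WHERE MATCH(columnname) AGAINST('mystring' IN BOOLEAN MODE);

    Note that FULLTEXT matches whole words rather than arbitrary substrings, so it is not a drop-in replacement for '%mystring%'.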

  • Concatenate SQL script from Powershell

    - by Jeff Meatball Yang
    I have a bunch of (50+) XML files in a directory that I would like to insert into a SQL Server 2008 table. How can I create a SQL script from the command prompt or PowerShell that will let me insert the files into a simple table with the following schema: XMLDataFiles ( xmlFileName varchar(255), content xml ) All I need is something to generate a script with a bunch of insert statements. Right now, I'm contemplating writing a silly little .NET console app to write the SQL script. Thanks.
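
    A sketch of the per-file statement such a script could emit, assuming the files are readable from the server under a hypothetical path; OPENROWSET ... SINGLE_BLOB reads the whole file and CONVERT turns it into the xml type. A short PowerShell loop over Get-ChildItem *.xml could stamp this template out once per file, which may be all the "silly little console app" needs to be.

        INSERT INTO XMLDataFiles (xmlFileName, content)
        SELECT 'file1.xml', CONVERT(xml, BulkColumn)
        FROM OPENROWSET(BULK 'C:\data\file1.xml', SINGLE_BLOB) AS x;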

  • Help with MySQL query

    - by Michael S.
    I have a table that contains the following columns: ip (varchar 255), index (bigint 20), time (timestamp). Each time something is inserted there, the time column gets the current timestamp. I want to run a query that returns all the rows that have been added in the last 24 hours. This is what I try to execute: SELECT ip, index FROM users WHERE ip = 'some ip' AND TIMESTAMPDIFF(HOURS,time,NOW()) < 24 And it doesn't work. Can someone help me out? Thanks :)
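
    A sketch of a likely fix, assuming two problems: index is a reserved word in MySQL and needs backticks, and TIMESTAMPDIFF takes the singular unit HOUR. Comparing against a cutoff instead of computing a difference per row also lets an index on the time column be used:

        SELECT ip, `index`
        FROM users
        WHERE ip = 'some ip'
          AND `time` >= NOW() - INTERVAL 24 HOUR;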

  • MySQL stored procedure WHERE clause

    - by Mneva skoko
    I am having a problem with this stored procedure: Delimiter // Create procedure(in varchar(50)) Begin Select * from employees where email = eml; End// Delimiter ; I don't get errors when I run this procedure, but when I call it from my PHP script it returns nothing.
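
    A sketch of what the definition was probably meant to be: as written it omits both the procedure name and the parameter name, so eml in the WHERE clause is never defined. The name get_employee_by_email is hypothetical.

        DELIMITER //
        CREATE PROCEDURE get_employee_by_email(IN eml VARCHAR(50))
        BEGIN
          SELECT * FROM employees WHERE email = eml;
        END//
        DELIMITER ;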

  • HTML tags in MySQL text field

    - by paracaudex
    I'm creating a database with what I anticipate will be a long attribute (perhaps several paragraphs for some tuples). I'm assigning it text instead of varchar. I have two questions: Should I give a maximum value for the text field? Is this necessary, and is it useful? Since the contents of this field will be displayed on a website in HTML, do I need to include paragraph tags for paragraph formatting when I enter records into MySQL?
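
    A small sketch of the size classes involved, assuming MySQL: text types take no user-defined maximum the way varchar(n) does; you pick a type whose cap is large enough (table and column names below are hypothetical).

        CREATE TABLE articles (
          id   INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
          body TEXT   -- up to 64KB; MEDIUMTEXT (16MB) or LONGTEXT (4GB) for more
        );

    As for formatting, a TEXT column stores exactly the characters you insert, so any <p> tags have to be written into the value itself, or added by the application when rendering.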

  • Large number of UPDATE queries slowing down page

    - by Bryan Lewis
    I am reading and validating large fixed-width text files (ranging from 10-50K lines) that are submitted via our ASP.net website (coded in VB.Net). I do an initial scan of the file to check for basic issues (line length, etc). Then I import each row into an MS SQL table. Each DB row basically consists of a record_ID (primary, auto-incrementing) and about 50 varchar fields.

    After the insert is done, I run a validation function on the file that checks each field in each row against a bunch of criteria (trimmed length, isnumeric, range checks, etc). If it finds an error in any field, it inserts a record into the Errors table, which has an error_ID, the record_ID and an error message. In addition, if the field fails in a particular way, I have to do a "reset" on that field. A reset might consist of blanking the entire field, or simply replacing the value with another value (e.g. replacing the string with a new one that has all illegal chars taken out).

    I have a 5,000 line test file. The upload, initial check, and import takes about 5-6 seconds. The detailed error check and insert into the Errors table takes about 5-8 seconds (this file has about 1,200 errors in it). However, the "resets" part takes about 40-45 seconds for the 750 fields that need to be reset. When I comment out the resets function (returning immediately without actually calling the UPDATE stored proc), the process is very fast. With the resets turned on, the page takes 50 seconds to return.

    My UPDATE stored proc uses some recommended code from http://sommarskog.se/dynamic_sql.html, whereby it uses CASE instead of dynamic SQL:

        UPDATE dbo.Records
        SET dbo.Records.file_ID = CASE @field_name WHEN 'file_ID' THEN @field_value ELSE file_ID END,
        . . . (all 50 varchar field CASE statements here)
        WHERE dbo.Records.record_ID = @record_ID

    Is there any way I can improve performance here? Can I somehow group all of these UPDATE calls into a single transaction? Should I be reworking the UPDATE query somehow? Or is it just the sheer quantity of 750+ UPDATEs, and things are just slow (it's a quad proc server with 8GB RAM)? Any suggestions appreciated.
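
    A sketch of a set-based alternative, assuming the 750 resets can be collected client-side and bulk-inserted into a staging table, then applied with a few joined UPDATEs instead of one proc call per field; #FieldResets and its columns are hypothetical names.

        CREATE TABLE #FieldResets (
            record_ID  int          NOT NULL,
            field_name varchar(50)  NOT NULL,
            new_value  varchar(255) NULL
        );

        -- one statement per column, each touching only rows that reset that column
        UPDATE r
        SET    r.file_ID = fr.new_value
        FROM   dbo.Records r
        JOIN   #FieldResets fr ON fr.record_ID = r.record_ID
        WHERE  fr.field_name = 'file_ID';
        -- ...repeat (or generate) for the remaining columns

    Even without restructuring, wrapping the existing calls in a single transaction on one connection usually helps, since each standalone UPDATE otherwise pays for its own commit and log flush.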

  • IP address numbers in MySQL subquery

    - by Iain Collins
    I have a problem with a subquery involving IPv4 addresses stored in MySQL (MySQL 5.0). The IP addresses are stored in two tables, both in network number format - e.g. the format output by MySQL's INET_ATON(). The first table ('events') contains lots of rows with IP addresses associated with them; the second table ('network_providers') contains a list of provider information for given netblocks.

    events table (~4,000,000 rows): event_id (int), event_name (varchar), ip_address (unsigned 4 byte int). network_providers table (~60,000 rows): ip_start (unsigned 4 byte int), ip_end (unsigned 4 byte int), provider_name (varchar).

    Simplified for the purposes of the problem I'm having, the goal is to create an export along the lines of: event_id, event_name, ip_address, provider_name

    If I do a query along the lines of either of the following, I get the result I expect:

        SELECT provider_name FROM network_providers
        WHERE INET_ATON('192.168.0.1') >= network_providers.ip_start
        ORDER BY network_providers.ip_start DESC LIMIT 1

        SELECT provider_name FROM network_providers
        WHERE 3232235521 >= network_providers.ip_start
        ORDER BY network_providers.ip_start DESC LIMIT 1

    That is to say, it returns the correct provider_name for whatever IP I look up (of course I'm not really using 192.168.0.1 in my queries). However, when performing this same query as a subquery, in the following manner, it doesn't yield the result I would expect:

        SELECT event.id, event.event_name,
               (SELECT provider_name FROM network_providers
                WHERE event.ip_address >= network_providers.ip_start
                ORDER BY network_providers.ip_start DESC LIMIT 1) as provider
        FROM events

    Instead, a different (incorrect) value for network_provider is returned - over 90% (but curiously not all) of the values returned in the provider column contain the wrong provider information for that IP. Using event.ip_address in a subquery just to echo out the value confirms it contains the value I'd expect and that the subquery can parse it. Replacing event.ip_address with an actual network number also works; it's just using it dynamically in the subquery in this manner that doesn't work for me. I suspect there is something fundamental and important about subqueries in MySQL that I don't get. I've worked with IP addresses like this in MySQL quite a bit before, but haven't previously done lookups for them using a subquery.

    The question: I'd really appreciate an example of how I could get the output I want, and, if someone here knows, some enlightenment as to why what I'm doing doesn't work so I can avoid making this mistake again.

    Notes: The actual real-world usage I'm trying to do is considerably more complicated (involving joining two or three tables); this is a simplified version, to avoid overly complicating the question. Additionally, I know I'm not using a BETWEEN on ip_start & ip_end - that's intentional (the DBs can be out of date, and in such cases the owner in the DB is almost always in the next specified range, so a 'best guess' is fine in this context). However, I'm grateful for any suggestions for improvement that relate to the question. Efficiency is always nice but in this case absolutely not essential - any help appreciated.
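
    A sketch of one workaround, assuming (as the standalone lookups above already do) that the right row is the one with the greatest ip_start not exceeding the address; expressing that with MAX instead of the correlated ORDER BY ... LIMIT 1 form may behave better under MySQL 5.0:

        SELECT e.event_id, e.event_name, e.ip_address, np.provider_name
        FROM events e
        JOIN network_providers np
          ON np.ip_start = (SELECT MAX(np2.ip_start)
                            FROM network_providers np2
                            WHERE np2.ip_start <= e.ip_address);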

  • MySQL: how to search for fields that hold values separated by commas?

    - by andufo
    Hi, I have 2 tables: tags (id_tag, name) and news (id, title, data, tags). The news.tags field is a varchar(255). I'm planning to put data like "1,7,34" in that field, meaning that a particular news row is linked to tags 1, 7 and 34 from the tags table. How can I then search for ALL news records that have the value 34 (among others) in the tags field? Is there a better way to do this?
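
    A sketch of both options. FIND_IN_SET matches a whole comma-separated element (a plain LIKE '%34%' would also hit 134 or 345), but it cannot use an index; the usual better design is a junction table (news_tags below is a hypothetical name).

        SELECT * FROM news WHERE FIND_IN_SET('34', tags);

        -- normalized alternative
        CREATE TABLE news_tags (
          id_news INT NOT NULL,
          id_tag  INT NOT NULL,
          PRIMARY KEY (id_news, id_tag)
        );

        SELECT n.*
        FROM news n
        JOIN news_tags nt ON nt.id_news = n.id
        WHERE nt.id_tag = 34;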

  • SQL SERVER - Understanding how MIN(text) works.

    - by tmercer
    I'm doing a little digging and looking for an explanation of how SQL Server evaluates MIN(varchar). I found this remark in BOL: 'MIN finds the lowest value in the collating sequence defined in the underlying database.' So if I have a table whose Data column holds the values AA, AB, AC, doing a SELECT MIN(Data) returns AA. I just want to understand the why behind this and understand the BOL remark a little better. Thanks!
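
    In short: for character types, MIN behaves like sorting the column under its collation and taking the first value; nothing numeric is involved. A small sketch showing that the collation, not the raw bytes alone, decides the winner (the table name t is hypothetical):

        SELECT MIN(Data) FROM t;                             -- 'AA' here
        SELECT MIN(Data COLLATE Latin1_General_BIN) FROM t;  -- binary order can differ for mixed case or accents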

  • Memory Allocation Error in MySQL

    - by Chinjoo
    I am using the MySQL ODBC driver with .NET 3.5. I have created a stored procedure in MySQL which accepts around 15 parameters with types like datetime, varchar, Int32, Int64, etc. When I run the SP from the query window with the arguments provided, it runs fine. But when I test it from the .NET application, it throws an exception with "Memory allocation error"; the MySQL native error code is 4001. Any help will be much appreciated.

  • How to convert a table column to another data type

    - by holden
    I have a column of type varchar in my Postgres database which I meant to be integers... and now I want to change them. Unfortunately this doesn't seem to work using my Rails migration: change_column :table1, :columnB, :integer So I tried doing this: execute 'ALTER TABLE "table1" ALTER COLUMN "columnB" TYPE integer USING CAST(columnB AS INTEGER)' but the cast doesn't work in this instance because some of the column's values are null... any ideas?
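
    A sketch of one possible fix. NULLs themselves cast cleanly, so the failures are more likely empty strings (an assumption); mapping those to NULL first, and quoting the mixed-case column name inside USING, lets the conversion go through:

        ALTER TABLE "table1"
          ALTER COLUMN "columnB" TYPE integer
          USING NULLIF(trim("columnB"), '')::integer;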

  • Optimal way to convert to date

    - by IMHO
    I have a legacy system where all date fields are maintained in YMD format. Example: 20101123 is the date 11/23/2010. I'm looking for the most efficient way to convert from a number to a date field. Here is what I came up with: declare @ymd int set @ymd = 20101122 select @ymd, convert(datetime, cast(@ymd as varchar(100)), 112) This is a pretty good solution, but I'm wondering if someone has a better way of doing it.
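
    A hedged sketch of an arithmetic alternative that skips the string round-trip; DATEFROMPARTS requires SQL Server 2012 or later, so on older versions the style-112 convert above remains the standard approach:

        declare @ymd int = 20101122;
        select datefromparts(@ymd / 10000, (@ymd / 100) % 100, @ymd % 100);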

  • IF statement error

    - by Jasl
    I have the following columns in TableA: Column1 varchar, Column2 int, Column3 bit. I am using this statement: IF Column3 = 0 SELECT Column1, Column2 FROM TableA WHERE Column2 > 200 ELSE SELECT Column1, Column2 FROM TableA WHERE Column2 < 200 But the statement does not compile. It says Invalid Column Name 'Column3'
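
    That error is expected: an IF tests a single scalar, and a bare column name has no row context outside a query. A sketch of one way to express the apparent intent, assuming the test is meant per row, by folding the condition into a single WHERE clause:

        SELECT Column1, Column2
        FROM TableA
        WHERE (Column3 = 0 AND Column2 > 200)
           OR (Column3 = 1 AND Column2 < 200);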

  • Generic MySQL stored procedure

    - by psu
    Hi, I have the following stored procedure: CREATE PROCEDURE `get`(IN tb VARCHAR(50), IN id INTEGER) BEGIN SELECT * FROM tb WHERE Indx = id; END// When I call get(user,1) I get the following: ERROR 1054 (42S22): Unknown column 'user' in 'field list'
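
    Two things appear to be going on: get(user,1) passes user as an identifier rather than the string 'user' (hence the unknown-column error), and even with CALL get('user', 1) a table name cannot be a plain variable in a SELECT. A sketch of the usual workaround with a prepared statement:

        DELIMITER //
        CREATE PROCEDURE `get`(IN tb VARCHAR(50), IN id INTEGER)
        BEGIN
          SET @sql = CONCAT('SELECT * FROM ', tb, ' WHERE Indx = ?');
          PREPARE stmt FROM @sql;
          SET @id = id;
          EXECUTE stmt USING @id;
          DEALLOCATE PREPARE stmt;
        END//
        DELIMITER ;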

  • What is causing this SQL 2005 Primary Key Deadlock between two real-time bulk upserts?

    - by skimania
    Here's the scenario: I've got a table called MarketDataCurrent (MDC) that has live updating stock prices. I've got one process called 'LiveFeed' which reads prices streaming from the wire, queues up inserts, and uses a 'bulk upload to temp table, then insert/update to MDC table' pattern (BulkUpsert). I've got another process which then reads this data, computes other data, and saves the results back into the same table, using a similar BulkUpsert stored proc. Thirdly, there is a multitude of users running a C# GUI polling the MDC table and reading updates from it.

    Now, during the day when the data is changing rapidly, things run pretty smoothly, but after market hours we've recently started seeing an increasing number of deadlock exceptions coming out of the database; nowadays we see 10-20 a day. The important thing to note here is that these happen when the values are NOT changing. Here's all the relevant info.

    Table def:

        CREATE TABLE [dbo].[MarketDataCurrent](
            [MDID] [int] NOT NULL,
            [LastUpdate] [datetime] NOT NULL,
            [Value] [float] NOT NULL,
            [Source] [varchar](20) NULL,
            CONSTRAINT [PK_MarketDataCurrent] PRIMARY KEY CLUSTERED ( [MDID] ASC )
            WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                  ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

    I've got a SQL Profiler trace running that catches the deadlocks. (Stack Overflow won't let me post images until my reputation goes up to 10, so until then the deadlock graphs are here: http://farm5.static.flickr.com/4049/4690759452_6b94ff7b34.jpg and http://farm5.static.flickr.com/4035/4690125231_78d84c9e15_b.jpg)

    Process 258 is calling the following 'BulkUpsert' stored proc repeatedly, while 73 is calling the next one:

        ALTER proc [dbo].[MarketDataCurrent_BulkUpload]
            @updateTime datetime,
            @source varchar(10)
        as
        begin transaction
        update c with (rowlock)
        set LastUpdate = getdate(), Value = t.Value, Source = @source
        from MarketDataCurrent c
        INNER JOIN #MDTUP t ON c.MDID = t.mdid
        where c.lastUpdate < @updateTime
          and c.mdid not in (select mdid from MarketData
                             where LiveFeedTicker is not null and PriceSource like 'LiveFeed.%')
          and c.value <> t.value

        insert into MarketDataCurrent with (rowlock)
        select MDID, getdate(), Value, @source from #MDTUP
        where mdid not in (select mdid from MarketDataCurrent with (nolock))
          and mdid not in (select mdid from MarketData
                           where LiveFeedTicker is not null and PriceSource like 'LiveFeed.%')
        commit

    And the other one:

        ALTER PROCEDURE [dbo].[MarketDataCurrent_LiveFeedUpload]
        AS
        begin transaction
        -- Update existing mdid
        UPDATE c WITH (ROWLOCK)
        SET LastUpdate = t.LastUpdate, Value = t.Value, Source = t.Source
        FROM MarketDataCurrent c
        INNER JOIN #TEMPTABLE2 t ON c.MDID = t.mdid;

        -- Insert new MDID
        INSERT INTO MarketDataCurrent with (ROWLOCK)
        SELECT * FROM #TEMPTABLE2
        WHERE MDID NOT IN (SELECT MDID FROM MarketDataCurrent with (NOLOCK))

        -- Clean up the temp table
        DELETE #TEMPTABLE2
        commit

    To clarify, those temp tables are being created by the C# code on the same connection and are populated using the C# SqlBulkCopy class. To me it looks like it's deadlocking on the PK of the table, so I tried removing that PK and switching to a unique constraint instead, but that increased the number of deadlocks tenfold. I'm totally lost as to what to do about this situation and am open to just about any suggestion. HELP!!
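
    A sketch of one commonly suggested mitigation, offered as an assumption rather than a confirmed fix: each upsert first reads (shared locks) and then writes (exclusive locks) the same clustered key range, a classic lock-conversion deadlock when two upserts interleave. Taking update locks up front serializes the two procs on the rows they touch:

        begin transaction

        update c with (updlock, rowlock)
        set    LastUpdate = t.LastUpdate, Value = t.Value, Source = t.Source
        from   MarketDataCurrent c
        inner join #TEMPTABLE2 t on c.MDID = t.mdid;

        insert into MarketDataCurrent with (rowlock)
        select * from #TEMPTABLE2
        where  MDID not in (select MDID from MarketDataCurrent with (updlock, holdlock));

        commit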

  • How to enter a manual time stamp with getdate()

    - by Arunachalam
    How can I enter a manual time stamp alongside getdate()? select convert(varchar(10),getdate(),120) returns 2010-06-07. Now I want to add my own time of day to this, like 2010-06-07 10.00.00.000. I am using this in: select * from sample table where time_stamp = '2010-06-07 10.00.00.000' Since I'm trying to automate this query I need the current date, but with a different time stamp. Can it be done?
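
    A sketch of one way, assuming time_stamp is a datetime column and sample_table stands in for the real table name: concatenate today's date with a fixed time-of-day string and let SQL Server convert the result when comparing.

        select *
        from   sample_table
        where  time_stamp = convert(varchar(10), getdate(), 120) + ' 10:00:00.000';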

  • how to design this relation in a DB schema

    - by raticulin
    I have a table Car in my DB; one of its columns is purchaseDate. I want to be able to tag every car with a number of policies (limited to 10 policies). Each policy has a time to live (ttl, a duration like '5 years', '10 months', etc), that is, for how long since the car's purchaseDate the policy can be applied. I need to perform the following actions: when inserting a Car, it will be set with a number of policies (at least one is set); sometimes a Car will be updated to add/remove a policy; and searches must be done taking date/policies into account, for example: 'select all cars that are not covered by any policy as of today'.

    My current design is (pol0..pol9 are the policies):

        CREATE TABLE Car (
          id int NOT NULL IDENTITY(1,1),
          purchaseDate datetime NOT NULL,
          //more stuff...
          pol0 smallint default NULL,
          pol1 smallint default NULL,
          pol2 smallint default NULL,
          pol3 smallint default NULL,
          pol4 smallint default NULL,
          pol5 smallint default NULL,
          pol6 smallint default NULL,
          pol7 smallint default NULL,
          pol8 smallint default NULL,
          pol9 smallint default NULL,
          PRIMARY KEY (id)
        )

        CREATE TABLE Policy (
          id smallint NOT NULL,
          name varchar(50) collate Latin1_General_BIN NOT NULL,
          ttl varchar(100) collate Latin1_General_BIN NOT NULL,
          PRIMARY KEY (id)
        )

    The problem I am facing is that the SQL to perform the query above is a nightmare to write: since I don't know which column each policy can be in, I have to check all columns for every policy, etc. So I am wondering whether it is worth changing this. My questions are:

    The smallint as Policy id was chosen instead of an 'int IDENTITY' in order to save some space, as there are going to be millions of Car records. It just adds complexity when creating a Policy, as we must handle the id ourselves, etc. Was it worth doing this?

    I am thinking that maybe there is a much better design. Obviously we could move the policy/car relation to its own table CarPolicy; the benefits would be: no limit of 10 policies per car; adding/removing etc. would be much easier; and when only the default policy is applied (when no others are applied, one called Default is applied), we could signal that by having no entry at all in CarPolicy, whereas now this is done by inserting the Default policy id in one of the columns. The cons are that we would need to change the DB, ORM classes, etc. What would you recommend? Maybe there is another smart way to implement this, without the CarPolicy table, that we are not aware of? (A sketch of the junction-table option follows.)
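
    A sketch of the junction-table design under discussion, with assumed names; the ttl comparison is left out because ttl is stored as free text ('5 years'), which cannot be added to purchaseDate directly in SQL:

        CREATE TABLE CarPolicy (
            carId    int      NOT NULL REFERENCES Car(id),
            policyId smallint NOT NULL REFERENCES Policy(id),
            PRIMARY KEY (carId, policyId)
        );

        -- 'all cars with no policy attached' becomes a simple anti-join
        SELECT c.*
        FROM Car c
        WHERE NOT EXISTS (SELECT 1 FROM CarPolicy cp WHERE cp.carId = c.id);

    Storing ttl as a number of months (a smallint) instead of free text would also make the 'covered as of today' check a plain DATEADD comparison against purchaseDate.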
