Search Results

Search found 28043 results on 1122 pages for 'sql replication'.

  • Regular Expression to parse SQL Structure

    - by user351429
    I am trying to parse the MySQL data types returned by "DESCRIBE [TABLE]". It returns strings like:

        int(11)
        float
        varchar(200)
        int(11) unsigned
        float(6,2)

    I've tried to do the job using regular expressions but it's not working. PHP code:

        $string = "int(11) numeric";
        $regex = '/(\w+)\s*(\w+)/';
        var_dump( preg_split($regex, $string) );
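
    If only the parsed pieces are needed, MySQL already stores them split apart in information_schema.columns, so the regex can be skipped entirely. A minimal sketch, assuming the table lives in a schema named 'mydb' (both names below are placeholders):

        -- Fetch the type components directly instead of parsing DESCRIBE output.
        SELECT column_name,
               data_type,                  -- e.g. 'int', 'varchar', 'float'
               character_maximum_length,   -- e.g. 200 for varchar(200)
               numeric_precision,          -- e.g. 6 for float(6,2)
               numeric_scale,              -- e.g. 2 for float(6,2)
               column_type                 -- full string, e.g. 'int(11) unsigned'
        FROM   information_schema.columns
        WHERE  table_schema = 'mydb'
          AND  table_name   = 'mytable';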

  • MySQL COUNT() multiple columns

    - by liam
    Hello, I'm trying to fetch the most popular tags from all videos in my database (ignoring blank tags). I also need the 'flv' for each tag. I have this working as I want if each video has one tag:

        SELECT tag_1, flv, COUNT(tag_1) AS tagcount
        FROM videos
        WHERE NOT tag_1=''
        GROUP BY tag_1
        ORDER BY tagcount DESC
        LIMIT 0, 10

    However in my database, each video is allowed three tags - tag_1, tag_2 and tag_3. Is there a way to get the most popular tags reading from multiple columns? The record structure is:

        +-------+--------------+------+-----+---------+----------------+
        | Field | Type         | Null | Key | Default | Extra          |
        +-------+--------------+------+-----+---------+----------------+
        | id    | int(11)      | NO   | PRI | NULL    | auto_increment |
        | flv   | varchar(150) | YES  |     | NULL    |                |
        | tag_1 | varchar(75)  | YES  |     | NULL    |                |
        | tag_2 | varchar(75)  | YES  |     | NULL    |                |
        | tag_3 | varchar(75)  | YES  |     | NULL    |                |
        +-------+--------------+------+-----+---------+----------------+
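
    A common approach, sketched here untested against the schema above, is to unpivot the three tag columns with UNION ALL and aggregate over the result; MIN(flv) is just one way to pick a representative flv when several videos share a tag:

        SELECT tag, MIN(flv) AS sample_flv, COUNT(*) AS tagcount
        FROM (
            SELECT flv, tag_1 AS tag FROM videos
            UNION ALL
            SELECT flv, tag_2 FROM videos
            UNION ALL
            SELECT flv, tag_3 FROM videos
        ) AS all_tags
        WHERE tag IS NOT NULL AND tag <> ''
        GROUP BY tag
        ORDER BY tagcount DESC
        LIMIT 0, 10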

  • multi-row update table with "different" data

    - by kralco626
    I think the best way to explain this is to tell you what I have. I have two tables, A and B; both have columns Field1 and Field2, but Field2 is not populated in table B. I want to populate Field2 of table B with Field2 of table A wherever Field1 of table A matches Field1 of table B. Something like:

        update tableB set Field2 = tableA.Field2 where tableA.Field1 = tableB.Field1

    The reason this may seem so odd and obscure is that I'm trying to do an initial data load from an old database to a new one. Please let me know if you need clarification.
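
    The question doesn't name the engine, and the join-update syntax differs between the common ones; two hedged variants using the table and column names above:

        -- SQL Server style (UPDATE ... FROM):
        UPDATE b
        SET    b.Field2 = a.Field2
        FROM   tableB b
        INNER JOIN tableA a ON a.Field1 = b.Field1;

        -- MySQL style (multi-table UPDATE):
        UPDATE tableB b
        INNER JOIN tableA a ON a.Field1 = b.Field1
        SET    b.Field2 = a.Field2;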

  • How do I get every nth row in a table, or how do I break up a subset of a table into sets or rows of

    - by Jherico
    I have a table of heterogeneous pieces of data identified by a primary key (ID) and a type identifier (TYPE_ID). I would like to be able to perform a query that returns me a set of ranges for a given type broken into even page sizes. For instance, if there are 10,000 records of type '1' and I specify a page size of 1000, I want 10 pairs of numbers back representing values I can use in a BETWEEN clause in subsequent queries to query the DB 1000 records at a time. My initial attempt was something like this:

        select id, rownum from CONTENT_TABLE
        where type_id = ? and mod(rownum, ?) = 0

    But this doesn't work.
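
    A likely culprit: ROWNUM is assigned as rows pass the WHERE clause, so the first candidate row gets ROWNUM 1, fails mod(1, 1000) = 0, and is discarded; the next candidate gets ROWNUM 1 again, and nothing ever qualifies. A sketch using an analytic row number instead (assuming Oracle, since ROWNUM suggests it, with the page size hard-coded to 1000 for illustration):

        -- Number the matching rows first, then keep every 1000th id;
        -- each returned id is a page boundary usable in a BETWEEN clause.
        SELECT id
        FROM (
            SELECT id, ROW_NUMBER() OVER (ORDER BY id) AS rn
            FROM   content_table
            WHERE  type_id = 1        -- placeholder type value
        )
        WHERE MOD(rn, 1000) = 0;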

  • Databinding in combo box

    - by muralekarthick
    Hi, I have two forms and a class; the query lives in a stored procedure.

    Stored procedure:

        ALTER PROCEDURE [dbo].[Payment_Join]
            @reference nvarchar(20)
        AS
        BEGIN
            -- SET NOCOUNT ON added to prevent extra result sets from
            -- interfering with SELECT statements.
            SET NOCOUNT ON;

            SELECT p.iPaymentID, p.nvReference, pt.nvPaymentType, p.iAmount,
                   m.nvMethod, u.nvUsers, p.tUpdateTime
            FROM Payment p, tblPaymentType pt, tblPaymentMethod m, tblUsers u
            WHERE p.nvReference = @reference
              AND p.iPaymentTypeID = pt.iPaymentTypeID
              AND p.iMethodID = m.iMethodID
              AND p.iUsersID = u.iUsersID
        END

    payment.cs:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Data;
        using System.Data.SqlClient;
        using System.Windows.Forms;

        namespace Finance
        {
            class payment
            {
                string connection = global::Finance.Properties.Settings.Default.PaymentConnectionString;

                #region Fields
                int _paymentid = 0;
                string _reference = string.Empty;
                string _paymenttype;
                double _amount = 0;
                string _paymentmethod;
                string _employeename;
                DateTime _updatetime = DateTime.Now;
                #endregion

                #region Properties
                public int paymentid { get { return _paymentid; } set { _paymentid = value; } }
                public string reference { get { return _reference; } set { _reference = value; } }
                public string paymenttype { get { return _paymenttype; } set { _paymenttype = value; } }
                public string paymentmethod { get { return _paymentmethod; } set { _paymentmethod = value; } }
                public double amount { get { return _amount; } set { _amount = value; } }
                public string employeename { get { return _employeename; } set { _employeename = value; } }
                public DateTime updatetime { get { return _updatetime; } set { _updatetime = value; } }
                #endregion

                #region Constructor
                public payment() { }

                public payment(string refer)
                {
                    reference = refer;
                }

                public payment(int paymentID, string Reference, string Paymenttype, double Amount,
                               string Paymentmethod, string Employeename, DateTime Time)
                {
                    paymentid = paymentID;
                    reference = Reference;
                    paymenttype = Paymenttype;
                    amount = Amount;
                    paymentmethod = Paymentmethod;
                    employeename = Employeename;
                    updatetime = Time;
                }
                #endregion

                #region Methods
                public void Save()
                {
                    try
                    {
                        SqlConnection connect = new SqlConnection(connection);
                        SqlCommand command = new SqlCommand("payment_create", connect);
                        command.CommandType = CommandType.StoredProcedure;
                        command.Parameters.Add(new SqlParameter("@reference", reference));
                        command.Parameters.Add(new SqlParameter("@paymenttype", paymenttype));
                        command.Parameters.Add(new SqlParameter("@amount", amount));
                        command.Parameters.Add(new SqlParameter("@paymentmethod", paymentmethod));
                        command.Parameters.Add(new SqlParameter("@employeename", employeename));
                        command.Parameters.Add(new SqlParameter("@updatetime", updatetime));
                        connect.Open();
                        command.ExecuteScalar();
                        connect.Close();
                    }
                    catch { }
                }

                public void Load(string reference)
                {
                    try
                    {
                        SqlConnection connect = new SqlConnection(connection);
                        SqlCommand command = new SqlCommand("Payment_Join", connect);
                        command.CommandType = CommandType.StoredProcedure;
                        command.Parameters.Add(new SqlParameter("@Reference", reference));
                        connect.Open();
                        SqlDataReader reader = command.ExecuteReader();
                        while (reader.Read())
                        {
                            this.reference = Convert.ToString(reader["nvReference"]);
                            this.paymenttype = Convert.ToString(reader["nvPaymentType"]);
                            this.amount = Convert.ToDouble(reader["iAmount"]);
                            this.paymentmethod = Convert.ToString(reader["nvMethod"]);
                            this.employeename = Convert.ToString(reader["nvUsers"]);
                            this.updatetime = Convert.ToDateTime(reader["tUpdateTime"]);
                        }
                        reader.Close();
                    }
                    catch (Exception ex)
                    {
                        MessageBox.Show("Check it again" + ex);
                    }
                }
                #endregion
            }
        }

    I have already bound the combo box items through the designer. When I run the application, the reference is populated in form 2 and the combo box is merely populated; it is not set to the particular value that was fetched. I'm new to C#, so help me get familiar.

  • Simple aggregating query very slow in PostgreSQL, any way to improve?

    - by Ash
    Hi, I have a table which holds files and their types, such as:

        CREATE TABLE files (
            id SERIAL PRIMARY KEY,
            name VARCHAR(255),
            filetype VARCHAR(255),
            ...
        );

    and another table for holding file properties, such as:

        CREATE TABLE properties (
            id SERIAL PRIMARY KEY,
            file_id INTEGER CONSTRAINT fk_files REFERENCES files(id),
            size INTEGER,
            ...  -- other property fields
        );

    The file_id field has an index. The files table has around 800k rows, and the properties table around 200k (not all files necessarily have/need properties). I want to do aggregating queries, for example find the average size and standard deviation for all file types. But it's very slow: around 70 seconds for the query below. I understand it needs a sequential scan, but still it seems too much. Here's the query:

        SELECT f.filetype, avg(size), stddev(size)
        FROM files as f, properties as pr
        WHERE f.id = pr.file_id
        GROUP BY f.filetype;

    and the EXPLAIN ANALYZE output:

        HashAggregate  (cost=140292.20..140293.94 rows=116 width=13) (actual time=74013.621..74013.954 rows=110 loops=1)
          ->  Hash Join  (cost=6780.19..138945.47 rows=179564 width=13) (actual time=1520.104..73156.531 rows=179499 loops=1)
                Hash Cond: (f.id = pr.file_id)
                ->  Seq Scan on files f  (cost=0.00..108365.41 rows=1140941 width=9) (actual time=0.998..62569.628 rows=805270 loops=1)
                ->  Hash  (cost=3658.64..3658.64 rows=179564 width=12) (actual time=1131.053..1131.053 rows=179499 loops=1)
                      ->  Seq Scan on properties pr  (cost=0.00..3658.64 rows=179564 width=12) (actual time=0.753..557.171 rows=179574 loops=1)
        Total runtime: 74014.520 ms

    Any ideas why it is so slow / how to make it faster?
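
    One thing stands out in that plan: the sequential scan on files accounts for roughly 62 of the 74 seconds, and the planner expects 1,140,941 rows where only 805,270 exist. That gap often points at dead tuples (table bloat), so a hedged first step, before restructuring anything, is:

        -- Reclaim dead row versions and refresh the planner's statistics;
        -- if the files table is bloated after heavy updates/deletes, the
        -- scan itself should become much cheaper.
        VACUUM ANALYZE files;
        VACUUM ANALYZE properties;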

  • Using Linq, how to separate a list in to grouped objects by name?

    - by Dr. Zim
    I have a table where a record looks like this:

        varchar(255) Name
        varchar(255) Text
        varchar(255) Value

    Name is the DDL name, Text is what is displayed, and Value is returned upon selection. There are between one and twenty options for each Name. Without iterating through each option like a cursor, is there any way to pull out a list of objects, one for each unique DDL Name, using LINQ and C#? A sample of the data:

        Beds   '4 (10)'   4
        Beds   '5 (1)'    5
        Beds   '7 (1)'    7
        Baths  'NA (13)'  NULL
        Baths  '0 (1)'    0
        Baths  '1 (13)'   1

    I was thinking about doing an outer select to get the unique Names, then an inner select to get the list of options for it, then return the set as a List of a set of Lists.

  • integrating jquery with AJAX using MVC for ddl/html.dropdownlist

    - by needhelp
    The situation: a user on the page in question selects a category from a dropdown, which then dynamically populates all the users of that category in a second dropdown beside it. All the data is being retrieved using LINQ to SQL, and I was wondering if this can be done a) using Html.DropDownList in a strongly typed view? b) using jQuery to trigger the AJAX request on selected index change instead of a 'populate' button trigger? Sorry I don't have code, as what I was trying really wasn't working at all. I am having trouble with how to do it conceptually and programmatically! Will appreciate any links to examples etc. greatly! Thanks in advance!

    EDIT: this is kind of what I was trying to achieve. First the ViewPage:

        <script type="text/javascript">
            $(document).ready
            function TypeSearch() {
                $.getJSON("/Home/Type", null, function(data) {
                    // don't know what to do here
                });
            }
        </script>
        <p>
            <label for="userType">userType:</label>
            <%= Html.DropDownList("userType") %>
            <%= Html.ValidationMessage("userType", "*") %>
            <input type="submit" runat="server" onclick="TypeSearch()" />
            <label for="accountNumber">accountNumber:</label>
            <%= Html.DropDownList("accountNumber") %>
            <%= Html.ValidationMessage("accountNumber", "*") %>
        </p>

    Then the Home controller action:

        public ActionResult Type()
        {
            string accountType = dropdownvalue;
            List<Account> accounts = userRep.GetAccountsByType(accountType).ToList();
            return Json(accounts);
        }

  • Solr autocommit and autooptimize?

    - by Camran
    I will be uploading my website to a VPS soon. It is a classifieds website which uses Solr integrated with MySQL. Solr is updated whenever a new classified is posted or deleted. I need a way to make the commit() and optimize() calls automated, for example once every 3 hours or so. How can I do this? (Details, please.) When is it ideal to optimize? Thanks

  • Should I Use GUID or IDENTITY as Thread Number?

    - by user311509
    offerID is the thread # which represents the thread posted. I see that in forums, posts are represented by random numbers. Is this achieved by IDENTITY? If not, please advise. nvarchar(max) will carry all kinds of text along with HTML tags.

        CREATE TABLE Offer (
            offerID int IDENTITY (4382,15) PRIMARY KEY,
            memberID int NOT NULL REFERENCES Member(memberID),
            title nvarchar(200) NOT NULL,
            thread nvarchar(max) NOT NULL,
            ...
        );
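
    For comparison, a hedged sketch of the GUID route (SQL Server syntax; 'OfferGuid' is an invented name). An IDENTITY with a seed and increment, as above, already yields non-obvious numbers, while NEWID() yields genuinely random ones at the cost of a 16-byte key and index fragmentation; NEWSEQUENTIALID() mitigates the fragmentation when used as a default:

        CREATE TABLE OfferGuid (
            offerID uniqueidentifier NOT NULL DEFAULT NEWID() PRIMARY KEY,
            memberID int NOT NULL REFERENCES Member(memberID),
            title nvarchar(200) NOT NULL,
            thread nvarchar(max) NOT NULL
        );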

  • Find all those columns which have only null values, in a MySQL table

    - by Robin v. G.
    The situation is as follows: I have a substantial number of tables, each with a substantial number of columns. I need to deal with this old and to-be-deprecated database for a new system, and I'm looking for a way to eliminate all columns that have apparently never been in use. I want to do this by filtering out all columns that have a value in any row, leaving me with the set of columns whose value is NULL in all rows. Of course I could manually sort every column descending, but that'd take too long, as I'm dealing with loads of tables and columns; I estimate 400 tables with up to 50 (!) columns per table. Is there any way I can get this information from information_schema? EDIT: Here's an example:

        column_a  column_b  column_c  column_d
        NULL      NULL      NULL      1
        NULL      1         NULL      1
        NULL      1         NULL      NULL
        NULL      NULL      NULL      NULL

    The output should be 'column_a' and 'column_c', these being the only columns without any filled-in values.
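
    information_schema can't count the NULLs itself, but it can generate one probe statement per column. COUNT(col) ignores NULLs, so a column qualifies exactly when COUNT(col) = 0 over the whole table. A sketch (the schema name 'mydb' is a placeholder) whose output is itself SQL, to be run as a second step:

        SELECT CONCAT(
            'SELECT ''', table_name, '.', column_name, ''' AS all_null_column ',
            'FROM `', table_name, '` ',
            'HAVING COUNT(`', column_name, '`) = 0;'
        ) AS probe_sql
        FROM information_schema.columns
        WHERE table_schema = 'mydb';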

  • MSSQL Sum query

    - by ldb
    Today my problem is this: I have 2 columns and I wish to check that the sum of the columns isn't higher than a value (485, for example), and if it is, run a query. I thought to do:

        SELECT * FROM table WHERE ColumnA+ColumnB<485

    But it isn't working... I've already tried:

        SELECT Sum(ColumnA)+Sum(ColumnB) AS Total FROM table

    but that gives me 1 column with the sum of all rows; I instead want a row for every sum. So how can I do it..? xD I hope you understood; if not, just ask and I'll try to explain it better! And thanks in advance to whoever wants to help me! EDIT: I found out XD: the problem was that the columns were smallint and the result of 1 or more rows was more than 32k, so it wasn't working! Thanks all!!
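
    For the record, the smallint overflow the poster found can be avoided without altering the columns by widening the operands in the expression; a T-SQL sketch:

        -- [table] stands in for the real table name (TABLE is a reserved word).
        -- CAST before adding so the per-row sum is computed as int, which
        -- cannot overflow for two smallint inputs:
        SELECT *
        FROM   [table]
        WHERE  CAST(ColumnA AS int) + CAST(ColumnB AS int) < 485;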

  • How can I design a DB where the user can define the fields and types of a detail table in an M-D relationship

    - by Simon
    My application has one table called 'events', and each event has approx 30 standard fields, but also user-defined fields that could be any name or type, in an 'eventdata' table. Users can define these event data tables by specifying x number of fields (either text/double/datetime/boolean) and the names of those fields. This 'eventdata' table can be different for each 'event'. My current approach is to create a lookup table for the definitions. So if I need to query all 'event' and 'eventdata' per record, I do so in an M-D relationship using two queries (i.e. select * from events, then for each record in 'events', select * from 'some table'). Is there a better approach to doing this? I have implemented this so far, but most of my queries require two distinct calls to the DB: I cannot simply join my master 'events' table with a different 'eventdata' table for each record in 'events'. I guess my main question is: can I join my master table with different detail tables for each record? E.g.

        SELECT E.*, E.Tablename
        FROM events E
        LEFT JOIN 'E.tablename' T ON E._ID = T.ID

    If not, is there a better way to design my database, considering I have no idea how many user-defined fields there may be or what type they will be?
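
    A plain join can't take its table name from a column value, which is what the two-query pattern is working around. One common alternative is a single entity-attribute-value detail table; a hedged sketch (MySQL-flavored, all names invented):

        CREATE TABLE eventdata (
            event_id       INT          NOT NULL,  -- FK to events
            field_name     VARCHAR(64)  NOT NULL,  -- user-defined field name
            field_type     VARCHAR(16)  NOT NULL,  -- 'text'|'double'|'datetime'|'boolean'
            value_text     TEXT         NULL,      -- exactly one value_* column
            value_double   DOUBLE       NULL,      --   is populated, chosen by
            value_datetime DATETIME     NULL,      --   field_type
            value_boolean  TINYINT(1)   NULL,
            PRIMARY KEY (event_id, field_name)
        );

        -- One join now fetches master and detail rows together:
        SELECT e.*, d.field_name, d.field_type,
               d.value_text, d.value_double, d.value_datetime, d.value_boolean
        FROM events e
        LEFT JOIN eventdata d ON d.event_id = e.id;

    The trade-off is one detail row per field per event instead of one row per event, which accommodates "any name or type" at the cost of pivoting when a flat result is needed.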

  • Getting the last element of a Postgres array, declaratively

    - by Wojciech Kaczmarek
    How do I obtain the last element of an array in Postgres? I need to do it declaratively, as I want to use it as an ORDER BY criterion. I wouldn't want to create a special PL/pgSQL function for it; the fewer changes to the database the better in this case. In fact, what I want to do is sort by the last word of a specific column containing multiple words. Changing the model is not an option here. In other words, I want to push Ruby's sort_by {|x| x.split[-1]} down to the database level. I can split a value into an array of words with Postgres' string_to_array or regexp_split_to_array functions; then how do I get the last element?
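
    Postgres arrays are 1-based and array_upper(arr, 1) returns the index of the last element, so the two can be combined inline; a sketch (table and column names invented):

        -- "items" and "label" are invented names; label holds multiple
        -- space-separated words. Order rows by the last word of label:
        SELECT *
        FROM   items
        ORDER BY (string_to_array(label, ' '))[
                     array_upper(string_to_array(label, ' '), 1)
                 ];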

  • Can a primary key be equal to a different column?

    - by eric
    I know that a primary key must be unique, but is it okay for a primary key to be equal to a different column in the same table by coincidence? For instance, I have 2 tables. One table is called person and holds information about a person (ID, email, telephone, address, name). The other table is staff (ID, pID (person ID), salary, position). In staff, the ID column is the primary key and is used to uniquely identify a staff member; the numbers run from 1 to 100. However, the pID (person ID) may be equal to the ID. For instance, the staff ID may be 1 and the pID that it references may be equal to 1. Is that okay?

  • Converting delimited string to multiple values in mysql

    - by epo
    I have a MySQL legacy table which contains a client identifier and a list of items, the latter as a comma-delimited string, e.g. "xyz001", "foo,bar,baz". This is legacy stuff and the user insists on being able to edit a comma-delimited string. They now have a requirement for a report table with the above broken into separate rows, e.g.

        "xyz001", "foo"
        "xyz001", "bar"
        "xyz001", "baz"

    Breaking the string into substrings is easily doable, and I have written a procedure to do this by creating a separate table, but that requires triggers to deal with deletes, updates and inserts. This query is required rarely (say once a month) but has to be absolutely up to date when it is run, so e.g. the overhead of triggers is not warranted and scheduled tasks to create the table might not be timely enough. Is there any way to write a function to return a table or a set, so that I can join the identifier with the individual items on demand?
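
    MySQL functions can't return result sets, but the split can live in a view, which is evaluated at query time and therefore always current, with no triggers or scheduled rebuilds. A sketch using the SUBSTRING_INDEX trick (all names invented; it assumes a helper table numbers(n) holding 1..N, where N is at least the largest possible item count per row):

        CREATE VIEW client_items AS
        SELECT t.client_id,
               -- inner call keeps the first n items; outer call takes the
               -- last of those, i.e. item number n
               SUBSTRING_INDEX(SUBSTRING_INDEX(t.items, ',', n.n), ',', -1) AS item
        FROM   legacy_table t
        JOIN   numbers n
          ON   n.n <= 1 + LENGTH(t.items) - LENGTH(REPLACE(t.items, ',', ''));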

  • How do I put data from multiple records into different columns?

    - by Bryan
    My two tables are titled analyzed and analyzedCopy3. I'm trying to put information from analyzedCopy3 into multiple columns in analyzed. Sample data from analyzedCopy3:

        readings_miu_id   OriginalCol      ColRSSIz
        110001366         Frederick Road   -108
        110001366         Steel Street
        110001366         Fifth Ave.
        110001508         Steel Street     -104

    What I want to do is put the top 3 OriginalCol, ColRSSIz combinations into columns that I have in the table analyzed. In analyzed there is only one record for each unique readings_miu_id. Any ideas? Thanks in advance.

    Additional info: by "top 3 OriginalCol, ColRSSIz combinations" I mean the first 3 combinations with the highest value in the ColRSSIz column. For any readings_miu_id there could be anywhere from 1 row of information to 6 rows of information, so at most I'm only wanting the top 3. If there are fewer than 3 rows for the readings_miu_id then the other columns need to be blank. The query that generates the table "analyzed":

        strSql4 = " SELECT readings_miu_id, Count(readings_miu_id) as NumberOfReads, " & _
                  "First(PercentSuccessz) as PercentSuccess, First(Readingz) as Reading, " & _
                  "First(MIUwindowz) as MIUwindow, First(SNz) as SN, First(Noisez) as Noise, " & _
                  "First(RSSIz) as RSSI, First(ColRSSIz) as ColRSSI, First(MIURSSIz) as MIURSSI, " & _
                  "First(Col1z) as Col1, First(Col1RSSIz) as Col1RSSI, " & _
                  "First(Col2z) as Col2, First(Col2RSSIz) as Col2RSSI, " & _
                  "First(Col3z) as Col3, First(Col3RSSIz) as Col3RSSI, " & _
                  "First(Firmwarez) as Firmware, First(CFGDatez) as CFGDate, " & _
                  "First(FreqCorrz) as FreqCorr, First(Activez) as Active, " & _
                  "First(MeterTypez) as MeterType, First(OriginColz) as OriginCol, " & _
                  "First(ColIDz) as ColID, First(Ownagez) as Ownage, First(SiteIDz) as SiteID, " & _
                  "First(PremIDz) as PremID, First(prem_group1z) as prem_group1, " & _
                  "First(prem_group2z) as prem_group2, First(ReadIDz) as ReadID, " & _
                  "First(prem_addr1z) as prem_addr1 " & _
                  "INTO analyzed " & _
                  "FROM analyzedCopy2 " & _
                  "GROUP BY readings_miu_id, PremIDz; "
        DoCmd.SetWarnings False
        DoCmd.RunSQL strSql4
        DoCmd.SetWarnings True

  • Data storage advice needed: Best way to store location + time data?

    - by sobedai
    I have a project in mind that will require the majority of queries to be keyed off of lat/long as well as date + time. Initially, I was thinking of a standard RDBMS where lat, long, and the datetime field are properly indexed. Then I began thinking of a document-based system where the document is essentially a timestamp and each document has lat/long within it. Each document could have n objects associated with it. I'm looking for advice on the best type of storage engine for this sort of thing: which of the above ideas would be better, or is there something else entirely that is the ideal solution? Thanks
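
    For the RDBMS route, the workable part is a composite index whose leading columns match the dominant query shape. A hedged sketch in generic SQL (all names invented; dedicated spatial types and indexes, such as PostGIS, are the heavier-duty option if radius searches matter, since a plain B-tree only range-scans its leading column):

        CREATE TABLE observations (
            id          BIGINT PRIMARY KEY,
            lat         DECIMAL(9,6) NOT NULL,   -- ~0.1 m of precision
            lon         DECIMAL(9,6) NOT NULL,
            observed_at TIMESTAMP    NOT NULL
        );

        -- Serves "bounding box first, then time" queries:
        CREATE INDEX idx_obs_geo_time ON observations (lat, lon, observed_at);

        -- Serves purely time-ranged queries:
        CREATE INDEX idx_obs_time ON observations (observed_at);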

  • mysql: inserting data and autoincrement

    - by every_answer_gets_a_point
    I am converting from Access to MySQL. I have a table in Access where one of the columns is an AutoNumber. When I transfer the data into the MySQL database (where I also have a column that is AUTO_INCREMENT), should I be transferring the AutoNumber data into the AUTO_INCREMENT column, or will it auto-increment itself? How do I ensure that, if I do not transfer the AutoNumber data from Access, it auto-increments properly?
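
    Either way works in MySQL: an AUTO_INCREMENT column accepts explicit values, and the counter then continues past the highest value inserted, so transferring the Access numbers is usually the safer choice if anything else references those IDs. A small sketch of the behavior (table and column names invented):

        CREATE TABLE customers (
            id   INT AUTO_INCREMENT PRIMARY KEY,
            name VARCHAR(100)
        );

        INSERT INTO customers (id, name) VALUES (17, 'migrated from Access');  -- keeps 17
        INSERT INTO customers (name)     VALUES ('new row');                   -- gets 18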

  • my output parameters are always null when i use BeginExecuteNonQuery

    - by CharlesO
    I have a stored procedure that returns a varchar(160) as an output parameter. Everything works fine when I use ExecuteNonQuery; I always get back the expected value. However, once I switch to BeginExecuteNonQuery, I get a null value for the output. I am using connString + "Asynchronous Processing=true;" in both cases. Sadly, BeginExecuteNonQuery is about 1.5 times faster in my case... but I really need the output parameter. Thanks!
