Search Results

Search found 27472 results on 1099 pages for 'sql humor'.


  • SQLAlchemy Duplicate Commit

    - by PythonWolf
    Good morning. I'm currently facing a problem in my CherryPy application, in my own custom session module: when performing session.add(), the exact same object gets updated twice.

        cherrypy.request.SessionManager.user_data = user
        try:
            db_session.add(cherrypy.request.SessionManager)
            db_session.commit()

    will produce:

        2011-06-21 09:16:48,991 INFO sqlalchemy.engine.base.Engine.0x...04cL BEGIN (implicit)
        2011-06-21 09:16:49,015 INFO sqlalchemy.engine.base.Engine.0x...04cL SELECT ..... FROM "Clients_Users" WHERE "Clients_Users".username = %(username_1)s AND "Clients_Users".password = %(password_1)s LIMIT 1 OFFSET 0
        2011-06-21 09:16:49,015 INFO sqlalchemy.engine.base.Engine.0x...04cL {'password_1': '123', 'username_1': u'1'}
        2011-06-21 09:16:49,047 INFO sqlalchemy.engine.base.Engine.0x...04cL UPDATE "SYS_Sessions" SET user_data=%(user_data)s WHERE "SYS_Sessions".id = %(SYS_Sessions_id)s
        2011-06-21 09:16:49,067 INFO sqlalchemy.engine.base.Engine.0x...04cL {'SYS_Sessions_id': 92L, 'user_data': }
        2011-06-21 09:16:49,071 INFO sqlalchemy.engine.base.Engine.0x...04cL COMMIT
        2011-06-21 09:16:49,093 INFO sqlalchemy.engine.base.Engine.0x...04cL BEGIN (implicit)
        2011-06-21 09:16:49,095 INFO sqlalchemy.engine.base.Engine.0x...04cL UPDATE "SYS_Sessions" SET user_data=%(user_data)s WHERE "SYS_Sessions".id = %(SYS_Sessions_id)s
        2011-06-21 09:16:49,095 INFO sqlalchemy.engine.base.Engine.0x...04cL {'SYS_Sessions_id': 92L, 'user_data': }
        2011-06-21 09:16:49,108 INFO sqlalchemy.engine.base.Engine.0x...04cL COMMIT

    Has anyone seen this before? P.S. This doesn't happen in the rest of the modules I have made.

    Read the article

  • Manual LINQ to SQL entity framework mapping

    - by kprobst
    I've been playing with the O/R designer in VS and I was wondering if someone could shed some light on this. I'm used to O/R mappers that are largely manual (homegrown and, e.g., NHibernate). I don't mind coding the entity classes myself, since they don't change all that often to begin with, and I have this irrational fear of designers and auto-generated code as it is. I have noticed that the generated entity classes contain a lot of boilerplate extensibility methods, e.g. On[Property]Changed() and so on, where [Property] is a mapped member of the class. These are placed in the setters of the property accessors. I assume it's OK if I don't include these when I do my hand coding, correct? They would be nice if I needed some sort of interception pattern, but that's certainly not the case. I guess I just need to know if any of those methods are required by the entity framework to keep track of changes to the mapped types, in order for things to work when updating the database. Thanks!

    Read the article

  • Simple aggregating query very slow in PostgreSql, any way to improve?

    - by Ash
    Hi, I have a table which holds files and their types, such as

        CREATE TABLE files (
            id SERIAL PRIMARY KEY,
            name VARCHAR(255),
            filetype VARCHAR(255),
            ...
        );

    and another table for holding file properties, such as

        CREATE TABLE properties (
            id SERIAL PRIMARY KEY,
            file_id INTEGER CONSTRAINT fk_files REFERENCES files(id),
            size INTEGER,
            ...  -- other property fields
        );

    The file_id field has an index. The files table has around 800k rows, and the properties table around 200k (not all files necessarily have/need properties). I want to do aggregating queries, for example find the average size and standard deviation for all file types. But it's very slow - around 70 seconds for the query below. I understand it needs a sequential scan, but it still seems too much. Here's the query

        SELECT f.filetype, avg(size), stddev(size)
        FROM files as f, properties as pr
        WHERE f.id = pr.file_id
        GROUP BY f.filetype;

    and the explain

        HashAggregate  (cost=140292.20..140293.94 rows=116 width=13) (actual time=74013.621..74013.954 rows=110 loops=1)
          ->  Hash Join  (cost=6780.19..138945.47 rows=179564 width=13) (actual time=1520.104..73156.531 rows=179499 loops=1)
                Hash Cond: (f.id = pr.file_id)
                ->  Seq Scan on files f  (cost=0.00..108365.41 rows=1140941 width=9) (actual time=0.998..62569.628 rows=805270 loops=1)
                ->  Hash  (cost=3658.64..3658.64 rows=179564 width=12) (actual time=1131.053..1131.053 rows=179499 loops=1)
                      ->  Seq Scan on properties pr  (cost=0.00..3658.64 rows=179564 width=12) (actual time=0.753..557.171 rows=179574 loops=1)
        Total runtime: 74014.520 ms

    Any ideas why it is so slow/how to make it faster?
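    One hedged observation from the plan above: the planner expects 1,140,941 rows from files but only 805,270 come back, which suggests stale statistics or table bloat. Refreshing both and writing the join explicitly is a cheap first experiment (a sketch, not a guaranteed fix):

        -- Reclaim dead rows and refresh planner statistics
        VACUUM ANALYZE files;
        VACUUM ANALYZE properties;

        -- Same aggregate, explicit-join form
        SELECT f.filetype, avg(pr.size), stddev(pr.size)
        FROM files AS f
        JOIN properties AS pr ON pr.file_id = f.id
        GROUP BY f.filetype;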

    Read the article

  • How to go about writing this classic ASP in ASP.NET

    - by Phil
    I am stuck converting this snippet to ASP.NET.

        set RSLinksCat = conn.Execute("select linkscat.id, linkscat.category from linkscat, contentlinks, links where contentlinks.linksid = links.id and contentlinks.contentid = " & contentid & " and links.linkscatid = linkscat.id order by linkscat.category")
        <%if not RSLinksCat.EOF then%><h1>Links</h1>
        <br />
        <%do while not RSLinksCat.EOF%>
        <%set RSLinks = conn.Execute("select * from links where linkscatid = " & RSLinksCat("id") & "")%>
        <strong><%=RSlinkscat("category")%><strong>
        <ul>
        <%do while not RSlinks.EOF%>
        <li>
        <a href="http://<%=RSLinks("url")%>" target="_blank"><%=RSlinks("description")%></a>
        </li>
        <%RSLinks.MoveNext
        loop%>
        </ul>
        <%RSLinksCat.MoveNext
        loop%>
        <br />
        <%end if%><%conn.close%>

    I'm not sure where to start. Can anyone recommend the correct approach, i.e. SqlDataReaders, repeaters, arrays, or something else? VB code samples most welcome. Thanks
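    Independent of which ASP.NET control renders the markup, the per-category query inside the loop (a classic N+1 pattern) can be collapsed into one parameterized statement first. A hedged sketch reusing the snippet's own table and column names (note one nuance: the original inner loop pulled every link in the category, not just those tied to contentid, so adjust the joins if that behavior was intentional):

        SELECT lc.id, lc.category, l.url, l.description
        FROM linkscat AS lc
        INNER JOIN links AS l ON l.linkscatid = lc.id
        INNER JOIN contentlinks AS cl ON cl.linksid = l.id
        WHERE cl.contentid = @contentid
        ORDER BY lc.category;

    One SqlDataReader over this result, watching for changes in category, can then feed a repeater or build the list markup directly.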

    Read the article

  • Why might one app connect to a SQL backend OK and a second app fail if they share the same connection string?

    - by hawbsl
    Trying to figure out a SQL connection error 26 in our app. We've got two closely related apps, Foo and FooAddIn. Foo is a WinForms app built in VS2010 and runs fine, connecting to our SQL Express back end without trouble. FooAddIn is an Outlook add-in which references Foo.exe and connects to the same SQL Express instance. Or rather, it doesn't connect, instead reporting:

        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified)

    Now, both apps share the same connection string, and we've verified they really do share the same connection string. At this stage we're just testing from within the same developer machine, so the apps are on the same machine, going via the same VS2010 IDE. So a lot of the advice online for this error doesn't apply, because the fact that Foo connects through to SQL Express tells us the database is there, available, and reachable. What else is there to check? One thing is that Foo and FooAddIn are running different runtime versions of System.Data (v2.0.50727 and v4.0.30319). Could that be a factor?

    Read the article

  • Databinding in combo box

    - by muralekarthick
    Hi, I have two forms and a class; the queries live in a stored procedure.

    Stored procedure:

        ALTER PROCEDURE [dbo].[Payment_Join]
            @reference nvarchar(20)
        AS
        BEGIN
            -- SET NOCOUNT ON added to prevent extra result sets from
            -- interfering with SELECT statements.
            SET NOCOUNT ON;

            -- Insert statements for procedure here
            SELECT p.iPaymentID, p.nvReference, pt.nvPaymentType, p.iAmount,
                   m.nvMethod, u.nvUsers, p.tUpdateTime
            FROM Payment p, tblPaymentType pt, tblPaymentMethod m, tblUsers u
            WHERE p.nvReference = @reference
              AND p.iPaymentTypeID = pt.iPaymentTypeID
              AND p.iMethodID = m.iMethodID
              AND p.iUsersID = u.iUsersID
        END

    payment.cs:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Data;
        using System.Data.SqlClient;
        using System.Windows.Forms;

        namespace Finance
        {
            class payment
            {
                string connection = global::Finance.Properties.Settings.Default.PaymentConnectionString;

                #region Fields
                int _paymentid = 0;
                string _reference = string.Empty;
                string _paymenttype;
                double _amount = 0;
                string _paymentmethod;
                string _employeename;
                DateTime _updatetime = DateTime.Now;
                #endregion

                #region Properties
                public int paymentid { get { return _paymentid; } set { _paymentid = value; } }
                public string reference { get { return _reference; } set { _reference = value; } }
                public string paymenttype { get { return _paymenttype; } set { _paymenttype = value; } }
                public string paymentmethod { get { return _paymentmethod; } set { _paymentmethod = value; } }
                public double amount { get { return _amount; } set { _amount = value; } }
                public string employeename { get { return _employeename; } set { _employeename = value; } }
                public DateTime updatetime { get { return _updatetime; } set { _updatetime = value; } }
                #endregion

                #region Constructor
                public payment() { }

                public payment(string refer)
                {
                    reference = refer;
                }

                public payment(int paymentID, string Reference, string Paymenttype, double Amount,
                               string Paymentmethod, string Employeename, DateTime Time)
                {
                    paymentid = paymentID;
                    reference = Reference;
                    paymenttype = Paymenttype;
                    amount = Amount;
                    paymentmethod = Paymentmethod;
                    employeename = Employeename;
                    updatetime = Time;
                }
                #endregion

                #region Methods
                public void Save()
                {
                    try
                    {
                        SqlConnection connect = new SqlConnection(connection);
                        SqlCommand command = new SqlCommand("payment_create", connect);
                        command.CommandType = CommandType.StoredProcedure;
                        command.Parameters.Add(new SqlParameter("@reference", reference));
                        command.Parameters.Add(new SqlParameter("@paymenttype", paymenttype));
                        command.Parameters.Add(new SqlParameter("@amount", amount));
                        command.Parameters.Add(new SqlParameter("@paymentmethod", paymentmethod));
                        command.Parameters.Add(new SqlParameter("@employeename", employeename));
                        command.Parameters.Add(new SqlParameter("@updatetime", updatetime));
                        connect.Open();
                        command.ExecuteScalar();
                        connect.Close();
                    }
                    catch { }
                }

                public void Load(string reference)
                {
                    try
                    {
                        SqlConnection connect = new SqlConnection(connection);
                        SqlCommand command = new SqlCommand("Payment_Join", connect);
                        command.CommandType = CommandType.StoredProcedure;
                        command.Parameters.Add(new SqlParameter("@Reference", reference));
                        //MessageBox.Show("ref = " + reference);
                        connect.Open();
                        SqlDataReader reader = command.ExecuteReader();
                        while (reader.Read())
                        {
                            this.reference = Convert.ToString(reader["nvReference"]);
                            //MessageBox.Show(reference);
                            //MessageBox.Show("here");
                            //MessageBox.Show("payment type id = " + reader["nvPaymentType"]);
                            //MessageBox.Show("here1");
                            this.paymenttype = Convert.ToString(reader["nvPaymentType"]);
                            //MessageBox.Show(paymenttype.ToString());
                            this.amount = Convert.ToDouble(reader["iAmount"]);
                            this.paymentmethod = Convert.ToString(reader["nvMethod"]);
                            this.employeename = Convert.ToString(reader["nvUsers"]);
                            this.updatetime = Convert.ToDateTime(reader["tUpdateTime"]);
                        }
                        reader.Close();
                    }
                    catch (Exception ex)
                    {
                        MessageBox.Show("Check it again" + ex);
                    }
                }
                #endregion
            }
        }

    I have already bound the combo box items through the designer. When I run the application, the reference is populated on form 2, but the combo box is just populated with its designer items - it doesn't select the particular value that was fetched. I'm new to C#, so help me get familiar with this.

    Read the article

  • Postgres subquery w/ derived column

    - by Wells
    The following query won't work, but it should be clear what I'm trying to do: split the value of t on space and use the last element of that array in the subquery (as it will match tl). Any ideas how to do this? Thanks!

        SELECT t, y, "type",
               regexp_split_to_array(t, ' ') AS t_array,
               sum(dr),
               (SELECT uz FROM f.tfa WHERE tl = t_array[-1]) AS uz,
               sc
        FROM padres.yd_fld
        WHERE y = 2010 AND pos <> 0
        GROUP BY t, y, "type", sc;
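    Two things block the query as written: Postgres arrays are 1-based (a [-1] subscript quietly returns NULL rather than the last element), and a SELECT-list alias like t_array can't be referenced elsewhere in the same SELECT list. A hedged sketch that repeats the expression instead (legal here since t is in the GROUP BY):

        SELECT t, y, "type",
               sum(dr),
               (SELECT uz
                FROM f.tfa
                WHERE tl = (regexp_split_to_array(t, ' '))[array_length(regexp_split_to_array(t, ' '), 1)]) AS uz,
               sc
        FROM padres.yd_fld
        WHERE y = 2010 AND pos <> 0
        GROUP BY t, y, "type", sc;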

    Read the article

  • Multi-row update of a table with "different" data

    - by kralco626
    I think the best way to explain this is to tell you what I have. I have two tables, A and B; both have columns Field1 and Field2, but Field2 is not populated in table B. I want to populate Field2 of table B with Field2 of table A where Field1 of table A matches Field1 of table B. Something like:

        update tableB set Field2 = tableA.field2 where tablea.field1 = tableb.field1

    The reason this may seem so odd and obscure is that I'm trying to do an initial data load from an old database to a new one. Please let me know if you need clarification.
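    A hedged sketch of both common spellings; which one applies depends on the engine. SQL Server accepts a joined UPDATE ... FROM, while the portable form uses a correlated subquery:

        -- T-SQL style joined update
        UPDATE b
        SET b.Field2 = a.Field2
        FROM tableB AS b
        INNER JOIN tableA AS a ON a.Field1 = b.Field1;

        -- Standard SQL, works almost everywhere
        UPDATE tableB
        SET Field2 = (SELECT a.Field2 FROM tableA AS a
                      WHERE a.Field1 = tableB.Field1)
        WHERE EXISTS (SELECT 1 FROM tableA AS a
                      WHERE a.Field1 = tableB.Field1);

    The correlated form assumes Field1 is unique in tableA; duplicates would make the subquery return more than one row.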

    Read the article

  • Regular Expression to parse SQL Structure

    - by user351429
    I am trying to parse the MySQL data types returned by "DESCRIBE [TABLE]". It returns strings like:

        int(11)
        float
        varchar(200)
        int(11) unsigned
        float(6,2)

    I've tried to do the job using regular expressions but it's not working. PHP code:

        $string = "int(11) numeric";
        $regex = '/(\w+)\s*(\w+)/';
        var_dump( preg_split($regex, $string) );
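    A hedged side note: the same pieces are available pre-split, so the regex can be skipped entirely by reading information_schema instead of DESCRIBE. A sketch (the schema and table names are placeholders):

        SELECT column_name,
               data_type,                  -- 'int', 'varchar', 'float', ...
               character_maximum_length,   -- the (200) of varchar(200)
               numeric_precision,
               numeric_scale,              -- the 6,2 of float(6,2)
               column_type                 -- full string, still shows 'unsigned'
        FROM information_schema.columns
        WHERE table_schema = 'your_schema'   -- placeholder
          AND table_name   = 'your_table';   -- placeholder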

    Read the article

  • Data storage advice needed: Best way to store location + time data?

    - by sobedai
    I have a project in mind that will require the majority of queries to be keyed off of lat/long as well as date + time. Initially, I was thinking of a standard RDBMS where lat, long, and the datetime field are properly indexed. Then I began thinking of a document-based system where the document is essentially a timestamp and each document has lat/long within it. Each document could have n objects associated with it. I'm looking for advice on what the best type of storage engine for this sort of thing would be - which of the above ideas would be better, or whether there is something else entirely that is the ideal solution. Thanks
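    For concreteness, a sketch of the "properly indexed" RDBMS route, with entirely hypothetical table and column names - one index for time-window scans and a composite one for bounding-box filters:

        CREATE TABLE observations (
            id          BIGINT PRIMARY KEY,
            lat         DOUBLE PRECISION NOT NULL,
            lon         DOUBLE PRECISION NOT NULL,
            observed_at TIMESTAMP NOT NULL
        );

        CREATE INDEX idx_obs_time    ON observations (observed_at);
        CREATE INDEX idx_obs_lat_lon ON observations (lat, lon);

    Plain B-tree indexes only go so far with two-dimensional range queries, which is where spatial extensions (e.g. PostGIS) usually enter the picture.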

    Read the article

  • How can I design a DB where the user can define the fields and types of a detail table in a M-D relationship?

    - by Simon
    My application has one table called 'events', and each event has approx 30 standard fields, but also user-defined fields that could be any name or type, in an 'eventdata' table. Users can define these event data tables by specifying x number of fields (either text/double/datetime/boolean) and the names of these fields. This 'eventdata' table can be different for each 'event'. My current approach is to create a lookup table for the definitions. So if I need to query all 'event' and 'eventdata' per record, I do so in a M-D relationship using two queries (i.e. select * from events, then for each record in 'events', select * from 'some table'). Is there a better approach to doing this? I have implemented this so far, but most of my queries require two distinct calls to the DB - I cannot simply join my master 'events' table with different 'eventdata' tables for each record in 'events'. I guess my main question is: can I join my master table with different detail tables for each record? E.g.

        SELECT E.*, E.Tablename
        FROM events E
        LEFT JOIN 'E.tablename' T ON E._ID = T.ID

    If not, is there a better way to design my database, considering I have no idea how many user-defined fields there may be or what type they will be?
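    Plain SQL can't join to a table whose name comes from a column value - identifiers aren't parameterizable, so that route needs dynamic SQL built per row. The usual alternative is to fold all user-defined values into a single typed key-value detail table (an EAV layout). A hedged sketch with assumed names and T-SQL-ish types:

        CREATE TABLE eventdata (
            event_id       INT          NOT NULL,  -- FK to events
            field_name     VARCHAR(100) NOT NULL,  -- user-defined field name
            field_type     VARCHAR(10)  NOT NULL,  -- 'text'|'double'|'datetime'|'bool'
            text_value     VARCHAR(255) NULL,
            double_value   FLOAT        NULL,
            datetime_value DATETIME     NULL,
            bool_value     BIT          NULL,
            PRIMARY KEY (event_id, field_name)
        );

        -- One join now covers every event, whatever its field layout:
        SELECT e.*, d.field_name, d.field_type,
               d.text_value, d.double_value, d.datetime_value, d.bool_value
        FROM events AS e
        LEFT JOIN eventdata AS d ON d.event_id = e.id;

    The trade-off is one row per field rather than one column per field, so the application pivots the values back into shape.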

    Read the article

  • Getting the last element of a Postgres array, declaratively

    - by Wojciech Kaczmarek
    How do I obtain the last element of an array in Postgres? I need to do it declaratively, as I want to use it as an ORDER BY criterion. I wouldn't want to create a special PL/pgSQL function for it; the fewer changes to the database the better in this case. In fact, what I want to do is to sort by the last word of a specific column containing multiple words. Changing the model is not an option here. In other words, I want to push Ruby's sort_by {|x| x.split[-1]} down to the database level. I can split a value into an array of words with Postgres's string_to_array or regexp_split_to_array functions - then how do I get the last element?
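    A hedged sketch with hypothetical table/column names. Postgres arrays are 1-based and out-of-range subscripts (including negative ones) return NULL, so the last element is the one at array_length(arr, 1):

        SELECT *
        FROM mytable   -- hypothetical
        ORDER BY (string_to_array(mycolumn, ' '))
                 [array_length(string_to_array(mycolumn, ' '), 1)];

    No function needs to be created; the expression lives entirely in the query.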

    Read the article

  • Find all those columns which have only null values, in a MySQL table

    - by Robin v. G.
    The situation is as follows: I have a substantial number of tables, each with a substantial number of columns. I need to deal with this old and to-be-deprecated database for a new system, and I'm looking for a way to eliminate all columns that have - apparently - never been in use. I want to do this by filtering out all columns that have a value in any row, leaving me with the set of columns where the value is NULL in all rows. Of course I could manually sort every column descending, but that'd take too long, as I'm dealing with loads of tables and columns - I estimate around 400 tables with up to 50 (!) columns per table. Is there any way I can get this information from the information_schema? EDIT: Here's an example:

        column_a   column_b   column_c   column_d
        NULL       NULL       NULL       1
        NULL       1          NULL       1
        NULL       1          NULL       NULL
        NULL       NULL       NULL       NULL

    The output should be 'column_a' and 'column_c', these being the only columns without any filled-in values.
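    information_schema can't inspect the data itself, but it can generate the per-column probes. A hedged two-step sketch: run this against a placeholder schema name, then execute the statements it prints - since COUNT(col) ignores NULLs, any probe that returns a row names an all-NULL column:

        SELECT CONCAT(
                 'SELECT ''', table_name, '.', column_name, ''' AS empty_column ',
                 'FROM `', table_name,
                 '` HAVING COUNT(`', column_name, '`) = 0;'
               ) AS probe
        FROM information_schema.columns
        WHERE table_schema = 'your_schema'   -- placeholder
          AND is_nullable = 'YES';           -- NOT NULL columns can't qualify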

    Read the article

  • Integrating jQuery with AJAX using MVC for a DDL/Html.DropDownList

    - by needhelp
    The situation: a user on the page in question selects a category from a dropdown, which then dynamically populates all the users of that category in a second dropdown beside it. All the data is being retrieved using LINQ to SQL, and I was wondering if this can be done a) using Html.DropDownList in a strongly typed view, and b) using jQuery to trigger the AJAX request on selected-index change instead of a 'populate' button trigger? Sorry I don't have code, as what I was trying really wasn't working at all. I am having trouble with how to do it conceptually and programmatically! Will appreciate any links to examples etc. greatly! Thanks in advance! EDIT: This is kind of what I was trying to achieve. First the view page:

        <script type="text/javascript">
            $(document).ready
            function TypeSearch() {
                $.getJSON("/Home/Type", null, function(data) {
                    //dont know what to do here
                });
            }
        </script>
        <p>
            <label for="userType">userType:</label>
            <%= Html.DropDownList("userType") %>
            <%= Html.ValidationMessage("userType", "*") %>
            <input type="submit" runat="server" onclick="TypeSearch()" />
            <label for="accountNumber">accountNumber:</label>
            <%= Html.DropDownList("accountNumber") %>
            <%= Html.ValidationMessage("accountNumber", "*") %>
        </p>

    Then the home controller action:

        public ActionResult Type()
        {
            string accountType = dropdownvalue;
            List<Account> accounts = userRep.GetAccountsByType(accountType).ToList();
            return Json(accounts);
        }

    Read the article

  • Should I Use GUID or IDENTITY as Thread Number?

    - by user311509
    offerID is the thread number, which represents the thread posted. I see that in forums posts are represented by random numbers. Is this achieved by IDENTITY? If not, please advise. nvarchar(max) will carry all kinds of text along with HTML tags.

        CREATE TABLE Offer
        (
            offerID int IDENTITY (4382,15) PRIMARY KEY,
            memberID int NOT NULL REFERENCES Member(memberID),
            title nvarchar(200) NOT NULL,
            thread nvarchar(max) NOT NULL,
            . . .
        );
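    For what it's worth, a hedged sketch of the distinction: IDENTITY(4382,15) only changes the seed and step - the values stay sequential and easy to guess, just not starting at 1. Genuinely random thread numbers would call for a GUID key instead, at the cost of a wider, more fragmentation-prone index:

        -- Hypothetical variant showing the GUID route (NEWSEQUENTIALID()
        -- trades the randomness back for better insert locality):
        CREATE TABLE OfferGuidDemo
        (
            offerID uniqueidentifier NOT NULL DEFAULT NEWID() PRIMARY KEY,
            title   nvarchar(200)    NOT NULL
        );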

    Read the article

  • Can a primary key be equal to a different column?

    - by eric
    I know that a primary key must be unique, but is it okay for a primary key to be equal to a different column in the same table by coincidence? For instance, I have 2 tables. One table is called person and holds information about a person (ID, email, telephone, address, name). The other table is staff (ID, pID (person ID), salary, position). In staff, the ID column is the primary key and is used to uniquely identify a staff member; the numbers run from 1 to 100. However, the pID (person ID) may be equal to the ID. For instance, the staff ID may be 1 and the pID that it references may also be equal to 1. Is that okay?

    Read the article

  • How do I put data from multiple records into different columns?

    - by Bryan
    My two tables are titled analyzed and analyzedCopy3. I'm trying to put information from analyzedCopy3 into multiple columns in analyzed. Sample data from analyzedCopy3:

        readings_miu_id   OriginalCol      ColRSSIz
        110001366         Frederick Road   -108
        110001366         Steel Street
        110001366         Fifth Ave.
        110001508         Steel Street     -104

    What I want to do is put the top 3 OriginalCol/ColRSSIz combinations into columns that I have in the table analyzed. In analyzed there is only one record for each unique readings_miu_id. Any ideas? Thanks in advance. Additional info: By "top 3 OriginalCol, ColRSSIz combinations" I mean the three combinations with the highest value in the ColRSSIz column. For any readings_miu_id there could be anywhere from 1 row to 6 rows of information, so at most I'm only wanting the top 3. If there are fewer than 3 rows for the readings_miu_id then the other columns need to be blank. Query that generates the table "analyzed":

        strSql4 = " SELECT readings_miu_id, Count(readings_miu_id) as NumberOfReads, " & _
                  " First(PercentSuccessz) as PercentSuccess, First(Readingz) as Reading, " & _
                  " First(MIUwindowz) as MIUwindow, First(SNz) as SN, First(Noisez) as Noise, " & _
                  " First(RSSIz) as RSSI, First(ColRSSIz) as ColRSSI, First(MIURSSIz) as MIURSSI, " & _
                  " First(Col1z) as Col1, First(Col1RSSIz) as Col1RSSI, First(Col2z) as Col2, " & _
                  " First(Col2RSSIz) as Col2RSSI, First(Col3z) as Col3, First(Col3RSSIz) as Col3RSSI, " & _
                  " First(Firmwarez) as Firmware, First(CFGDatez) as CFGDate, First(FreqCorrz) as FreqCorr, " & _
                  " First(Activez) as Active, First(MeterTypez) as MeterType, First(OriginColz) as OriginCol, " & _
                  " First(ColIDz) as ColID, First(Ownagez) as Ownage, First(SiteIDz) as SiteID, " & _
                  " First(PremIDz) as PremID, First(prem_group1z) as prem_group1, First(prem_group2z) as prem_group2, " & _
                  " First(ReadIDz) as ReadID, First(prem_addr1z) as prem_addr1 " & _
                  "INTO analyzed " & _
                  "FROM analyzedCopy2 " & _
                  "GROUP BY readings_miu_id, PremIDz; "

        DoCmd.SetWarnings False
        DoCmd.RunSQL strSql4
        DoCmd.SetWarnings True
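    A hedged sketch of the ranking step, in Access-flavored SQL (assuming ColRSSIz is numeric): a correlated count ranks each row within its readings_miu_id, and only ranks 1-3 survive the filter. The result can then be joined or cross-tabbed into analyzed.

        SELECT a.readings_miu_id, a.OriginalCol, a.ColRSSIz,
               (SELECT COUNT(*)
                FROM analyzedCopy3 AS b
                WHERE b.readings_miu_id = a.readings_miu_id
                  AND b.ColRSSIz > a.ColRSSIz) + 1 AS RankInGroup
        FROM analyzedCopy3 AS a
        WHERE (SELECT COUNT(*)
               FROM analyzedCopy3 AS b
               WHERE b.readings_miu_id = a.readings_miu_id
                 AND b.ColRSSIz > a.ColRSSIz) < 3;

    Caveats: ties share a rank, and rows whose ColRSSIz is NULL (as in the sample) float to rank 1 here; add "AND a.ColRSSIz IS NOT NULL" to the outer WHERE if they should be excluded.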

    Read the article

  • Using Linq, how to separate a list in to grouped objects by name?

    - by Dr. Zim
    I have a table where a record looks like this:

        varchar(255) Name
        varchar(255) Text
        varchar(255) Value

    Name is the DDL name, Text is what is displayed, and Value is returned upon selection. There are between one and twenty options for each Name. Without iterating through each option like a cursor, is there any way to pull out a list of objects, one for each unique DDL Name, using LINQ and C#? A sample of the data:

        Name    Text        Value
        Beds    '4 (10)'    4
        Beds    '5 (1)'     5
        Beds    '7 (1)'     7
        Baths   'NA (13)'   NULL
        Baths   '0 (1)'     0
        Baths   '1 (13)'    1

    I was thinking about doing an outer select to get the unique Names, then an inner select to get the list of options for each, then returning the set as a list of lists.

    Read the article

  • Converting delimited string to multiple values in mysql

    - by epo
    I have a MySQL legacy table which contains a client identifier and a list of items, the latter as a comma-delimited string, e.g. "xyz001", "foo,bar,baz". This is legacy stuff, and the user insists on being able to edit a comma-delimited string. They now have a requirement for a report table with the above broken into separate rows, e.g.

        "xyz001", "foo"
        "xyz001", "bar"
        "xyz001", "baz"

    Breaking the string into substrings is easily doable, and I have written a procedure to do this by creating a separate table, but that requires triggers to deal with deletes, updates and inserts. This query is required rarely (say once a month) but has to be absolutely up to date when it is run, so the overhead of triggers is not warranted and scheduled tasks to rebuild the table might not be timely enough. Is there any way to write a function to return a table or a set, so that I can join the identifier with the individual items on demand?
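    A MySQL stored function can only return a scalar, and a procedure's result set can't be joined against. One workaround is a view over a small helper table of integers, which is always exactly as fresh as the base table. A hedged sketch with assumed table/column names (clients(client_id, items)), good for up to 20 items per row:

        CREATE TABLE nums (n INT PRIMARY KEY);
        INSERT INTO nums VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),
                                (11),(12),(13),(14),(15),(16),(17),(18),(19),(20);

        -- n-th comma-separated item via nested SUBSTRING_INDEX; the join
        -- condition computes how many items each row actually has.
        CREATE VIEW client_items AS
        SELECT c.client_id,
               SUBSTRING_INDEX(SUBSTRING_INDEX(c.items, ',', n.n), ',', -1) AS item
        FROM clients AS c
        JOIN nums AS n
          ON n.n <= CHAR_LENGTH(c.items) - CHAR_LENGTH(REPLACE(c.items, ',', '')) + 1;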

    Read the article

  • MSSQL Sum query

    - by ldb
    Today my problem is this: I have two columns and I want to check that their sum isn't higher than some value (485, for example), and if it is, run a query... I thought to do

        SELECT * FROM table WHERE ColumnA+ColumnB<485

    but it isn't working... I've already tried

        SELECT Sum(ColumnA)+Sum(ColumnB) AS Total FROM table

    but that gives me one column with the sum of all rows; I instead want a row for every sum. So how can I do it? xD I hope you understood; if not, just ask and I'll try to explain it better! And thanks in advance to whoever wants to help me! EDIT: I found out xD the problem was that the columns were smallint and the result for one or more rows was more than 32k, so it wasn't working! Thanks all!!
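    To make the poster's own finding explicit: smallint tops out at 32,767, so if the engine performs the addition in smallint the per-row sum can overflow before the comparison ever happens. A hedged sketch of the usual fix - widen the operands first:

        SELECT *
        FROM [table]
        WHERE CAST(ColumnA AS int) + CAST(ColumnB AS int) < 485;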

    Read the article

  • Solr autocommit and autooptimize?

    - by Camran
    I will be uploading my website to a VPS soon. It is a classifieds website which uses Solr integrated with MySQL. Solr is updated whenever a classified is added or deleted. I need a way to make the commit() and optimize() calls automated, for example once every 3 hours or so. How can I do this? (Details, please.) And when is it ideal to optimize? Thanks

    Read the article
