Search Results

Search found 23955 results on 959 pages for 'insert query'.

  • MySQL INSERT Query

    - by mouthpiec
    Hi, I need a query to do the following: find the countryID where country = 'UK' in the Countries table, then use the found value in INSERT INTO towns (id, country_fk, name) VALUES (1, <value_found>, 'London'). Is this possible?
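
    One way to do this in a single statement is INSERT ... SELECT, which feeds the looked-up value straight into the insert. A minimal sketch, assuming the Countries table has columns countryID and country:

        -- The SELECT supplies the row to insert; countryID comes from the lookup
        INSERT INTO towns (id, country_fk, name)
        SELECT 1, countryID, 'London'
        FROM Countries
        WHERE country = 'UK';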

    Read the article

  • MySQL INSERT IGNORE not working

    - by gAMBOOKa
    Here's my table with some sample data:

        a_id | b_id
        -----------
        1    | 225
        2    | 494
        3    | 589

    When I run this query:

        INSERT IGNORE INTO table_name (a_id, b_id) VALUES ('4', '230'), ('2', '494')

    it inserts both of those rows when it's supposed to ignore the second value pair (2, 494). No indexes are defined, and neither of those columns is primary. What don't I know?
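
    INSERT IGNORE only skips rows that would violate a PRIMARY KEY or UNIQUE constraint; with no index on the pair, MySQL has no rule that makes (2, 494) a duplicate, so nothing gets ignored. A sketch of the likely fix (the index name uq_a_b is arbitrary):

        -- Declare the pair unique so INSERT IGNORE has something to check against
        ALTER TABLE table_name ADD UNIQUE KEY uq_a_b (a_id, b_id);

        -- Now the second tuple is silently skipped
        INSERT IGNORE INTO table_name (a_id, b_id) VALUES ('4', '230'), ('2', '494');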

    Read the article

  • Linq-to-sql Compiled Query is returning result from different DataContext

    - by Vladimir Kojic
    Compiled query:

        public static Func<OperationalDataContext, short, Machine> QueryMachineById =
            CompiledQuery.Compile((OperationalDataContext db, short machineID) =>
                db.Machines.Where(m => m.MachineID == machineID).SingleOrDefault());

    It looks like the compiled query is caching the Machine object and returning the same object even if the query is called from a new DataContext (I'm disposing the DataContext in the service, but I'm getting the Machine from the previous DataContext). I use POCOs and XML mapping.

    Revised: it looks like the compiled query is returning a result from the new data context and is not using the one that I passed into the compiled query. Therefore I cannot reuse the returned object and link it to another object obtained from the DataContext through non-compiled queries. I'm using the unit-of-work pattern:

        // First call
        Using (new DataContext)
        {
            Machine from DataContext.Table == machine from cached query
        }
        // Do some work

        // Second call is failing
        Using (new DataContext)
        {
            Machine from DataContext.Table <> machine from cached query
        }

    Read the article

  • insert and modify a record in an entity using Core Data

    - by aminfar
    I tried to find the answer to my question on the internet, but I could not. I have a simple entity in Core Data that has a Value attribute (an integer) and a Date attribute. I want to define two methods in my .m file. The first method is the add method: it takes two arguments, an integer value (entered by the user in the UI) and a date (the current date by default), and then inserts a record into the entity based on the arguments. The second method is like an increment method: it uses the Date as a key to find a record and then increments the integer value of that record. I don't know how to write these methods. (Assume that we have an array controller for the table in the xib file.)
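
    A minimal Objective-C sketch of both methods, assuming the entity is named "Entry", its attributes are named "value" and "date", and a managed object context is available through self.managedObjectContext (all of those names are placeholders for whatever the model actually uses):

        - (void)addValue:(NSInteger)value date:(NSDate *)date {
            // Insert a new record and fill in its two attributes
            NSManagedObject *entry = [NSEntityDescription
                insertNewObjectForEntityForName:@"Entry"
                         inManagedObjectContext:self.managedObjectContext];
            [entry setValue:[NSNumber numberWithInteger:value] forKey:@"value"];
            [entry setValue:date forKey:@"date"];

            NSError *error = nil;
            [self.managedObjectContext save:&error];
        }

        - (void)incrementValueForDate:(NSDate *)date {
            // Fetch the record whose date matches, then bump its value by one
            NSFetchRequest *request = [[[NSFetchRequest alloc] init] autorelease];
            [request setEntity:[NSEntityDescription entityForName:@"Entry"
                                           inManagedObjectContext:self.managedObjectContext]];
            [request setPredicate:[NSPredicate predicateWithFormat:@"date == %@", date]];

            NSError *error = nil;
            NSArray *matches = [self.managedObjectContext executeFetchRequest:request
                                                                        error:&error];
            if ([matches count] > 0) {
                NSManagedObject *entry = [matches objectAtIndex:0];
                NSInteger current = [[entry valueForKey:@"value"] integerValue];
                [entry setValue:[NSNumber numberWithInteger:current + 1] forKey:@"value"];
                [self.managedObjectContext save:&error];
            }
        }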

    Read the article

  • dynamic insert php mysql and performance

    - by Ross
    I have a folder/array of images; it may be 1, up to a maximum of 12. What I need to do is add them dynamically so the images are added to an images table. At the moment I have:

        $directory = "portfolio_images/$id/Thumbs/";
        $images = glob("" . $directory . "*.jpg");

        for ($i = 0; $i < count($images); $i++) {
            mysql_query("INSERT INTO project_images (image_name, project_id)
                         VALUES ('$images[$i]', '$id')") or die(mysql_error());
        }

    This is fine, but it does not feel right. How is this for performance? Is there a better way? The maximum number of images is only ever going to be 12. Thanks, Ross
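
    For a dozen rows the loop is fine performance-wise, but a single multi-row INSERT keeps it to one round trip and one statement. A sketch using the same legacy mysql_* API as the question (the escaping call is an addition, not in the original):

        // Build one VALUES list: ('a.jpg','5'),('b.jpg','5'),...
        $rows = array();
        foreach ($images as $image) {
            $rows[] = "('" . mysql_real_escape_string($image) . "', '" . intval($id) . "')";
        }

        if (count($rows) > 0) {
            mysql_query("INSERT INTO project_images (image_name, project_id) VALUES "
                . implode(',', $rows)) or die(mysql_error());
        }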

    Read the article

  • python MySQLdb got invalid syntax when trying to INSERT INTO table

    - by Michelle Jun Lee
    ## COMMENT OUT below just for reference
    ""
    cursor.execute("""
        CREATE TABLE yellowpages
        (
          business_id       BIGINT(20) NOT NULL AUTO_INCREMENT,
          categories_name   VARCHAR(255),
          business_name     VARCHAR(500) NOT NULL,
          business_address1 VARCHAR(500),
          business_city     VARCHAR(255),
          business_state    VARCHAR(255),
          business_zipcode  VARCHAR(255),
          phone_number1     VARCHAR(255),
          website1          VARCHAR(1000),
          website2          VARCHAR(1000),
          created_date      datetime,
          modified_date     datetime,
          PRIMARY KEY(business_id)
        )
    """)
    ""
    ## TOP COMMENT OUT (just for reference)

    ## code
    website1g = "http://www.triman.com"
    business_nameg = "Triman Sales Inc"
    business_address1g = "510 E Airline Way"
    business_cityg = "Gardena"
    business_stateg = "CA"
    business_zipcodeg = "90248"
    phone_number1g = "(310) 323-5410"
    phone_number2g = ""
    website2g = ""

    cursor.execute("""
        INSERT INTO yellowpages(categories_name, business_name, business_address1,
            business_city, business_state, business_zipcode, phone_number1,
            website1, website2)
        VALUES ('%s','%s','%s','%s','%s','%s','%s','%s','%s')
    """, (''gas-stations'', business_nameg, business_address1g, business_cityg,
          business_stateg, business_zipcodeg, phone_number1g, website1g, website2g))

    cursor.close()
    conn.close()

    I keep getting this error:

        File "testdb.py", line 51
            """, (''gas-stations'', business_nameg, business_address1g, business_cityg, business_stateg, business_zipcodeg, phone_number1g, website1g, website2g))
                                                                                                                                            ^
        SyntaxError: invalid syntax

    Any idea why? By the way, the up arrow is pointing to website1g (the b character). Thanks for the help in advance.
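
    The doubled quotes in ''gas-stations'' are the problem: Python parses '' as an empty string butted up against a bare name, which is invalid syntax inside the tuple. Also, with MySQLdb the placeholders should be bare %s without quotes around them, since the driver adds the quoting itself. A corrected sketch of the execute call:

        cursor.execute("""
            INSERT INTO yellowpages (categories_name, business_name, business_address1,
                business_city, business_state, business_zipcode, phone_number1,
                website1, website2)
            VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s)
        """, ('gas-stations',  # one pair of quotes, not ''gas-stations''
              business_nameg, business_address1g, business_cityg, business_stateg,
              business_zipcodeg, phone_number1g, website1g, website2g))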

    Read the article

  • MySQL Fast Insert dependent on flag from a separate table

    - by Stuart P
    Hi all. For work I'm dealing with a large database (160 million+ rows a year, 10 years of data) and have a quandary: a large percentage of the data we upload is null data and I'd like to stop it from being uploaded. The data in question is spatial in nature, so I have one table like so:

        idLocations  (auto-increment int, PK)
        X            (float)
        Y            (float)
        Alwaysignore (bool)

    which is used as a reference in a second table like so:

        idLocations (int, PK, "FK")
        idDates     (int, PK, "FK")
        DATA1       (float)
        DATA2       (float)
        ...
        DATA7       (float)

    So, ideally I'd like to find a method where I can do something like:

        INSERT INTO tblData (idLocations, idDates, DATA1, ..., DATA7)
        VALUES (...), ..., (...)
        WHERE VALUES(idLocations) NOT LIKE (SELECT FROM tblLocation WHERE alwaysignore=TRUE)
        ON DUPLICATE KEY UPDATE DATA1=VALUES(DATA1)

    So, for my large batch of input data (250 values in a block), ignore the inserts where the idLocations value matches an idLocations flagged with alwaysignore. Anyone have any suggestions? Cheers. -Stuart

    Other details: running MySQL on a semi-dedicated machine, MyISAM engine for the tables.
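
    A plain INSERT has no WHERE clause, but INSERT ... SELECT does, so one approach is to land each 250-value block in a staging table and copy it across with the filter applied. A sketch, where tblStaging is a hypothetical staging table mirroring tblData's columns:

        -- Load the raw block into tblStaging first, then:
        INSERT INTO tblData (idLocations, idDates, DATA1, DATA2, DATA3, DATA4, DATA5, DATA6, DATA7)
        SELECT s.idLocations, s.idDates, s.DATA1, s.DATA2, s.DATA3, s.DATA4, s.DATA5, s.DATA6, s.DATA7
        FROM tblStaging s
        WHERE s.idLocations NOT IN
              (SELECT idLocations FROM tblLocation WHERE Alwaysignore = TRUE)
        ON DUPLICATE KEY UPDATE
            DATA1 = VALUES(DATA1), DATA2 = VALUES(DATA2), DATA3 = VALUES(DATA3),
            DATA4 = VALUES(DATA4), DATA5 = VALUES(DATA5), DATA6 = VALUES(DATA6),
            DATA7 = VALUES(DATA7);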

    Read the article

  • mysqli insert problem

    - by Simon
    Hello, I have this error:

        Warning: mysqli_stmt::bind_param() [mysqli-stmt.bind-param]: Number of elements
        in type definition string doesn't match number of bind variables in
        E:\wamp\www\classes\UserLogin.php on line 31

    and I don't know what it is :/ Here is my code:

        function createUser($username, $password)
        {
            $mysql = connect();
            if ($stmt = $mysql->prepare('INSERT INTO users (username, password, alder, hood, fornavn, efternavn, city, ip, level, email) VALUES (?,?,?,?,?,?,?,?,?,?)')) {
                $stmt->bind_param($username, $password, $alder, $hood, $fornavn, $efternavn, $city, $ip, $level, $email);
                $stmt->execute();
                $stmt->close();
            } else {
                echo 'error: ' . $mysql->error;
            }
        }

    When my users create an account they only need to type a username and password, and later they can edit the profile and add email, alder, hood and the like :). Thanks for helping me.
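
    The first argument to bind_param() must be a type definition string with one character per placeholder (s for string, i for integer, d for double), which is exactly what the warning is complaining about: the call above passes $username where the type string belongs. A sketch of the fixed call, guessing that alder and level are integers and everything else is a string:

        // 10 type characters for 10 placeholders: s = string, i = integer
        $stmt->bind_param('ssisssssis',
            $username, $password, $alder, $hood, $fornavn,
            $efternavn, $city, $ip, $level, $email);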

    Read the article

  • Insert xelements using LINQ Select?

    - by Simon Woods
    I have a source piece of XML into which I want to insert multiple elements that are created depending on certain values found in the original XML. At present I have a sub which does this for me:

        <Extension()>
        Public Sub AddElements(ByVal xml As XElement, ByVal elementList As IEnumerable(Of XElement))
            For Each e In elementList
                xml.Add(e)
            Next
        End Sub

    And this is getting invoked in a routine as follows:

        Dim myElement = New XElement("NewElements")
        myElement.AddElements(
            xml.Descendants("TheElements").
                Where(Function(e) e.Attribute("FilterElement") IsNot Nothing).
                Select(Function(e) New XElement("NewElement",
                    New XAttribute("Text", e.Attribute("FilterElement").Value))))

    Is it possible to rewrite this using LINQ syntax so I don't need to call out to the sub AddElements but could do it all in-line? Many thx, Simon
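
    Yes -- the XElement constructor takes content as additional arguments and flattens any IEnumerable passed to it, so the query can be supplied directly when the element is built. A sketch:

        ' Functional construction: the Select query becomes the element's content
        Dim myElement = New XElement("NewElements",
            xml.Descendants("TheElements").
                Where(Function(e) e.Attribute("FilterElement") IsNot Nothing).
                Select(Function(e) New XElement("NewElement",
                    New XAttribute("Text", e.Attribute("FilterElement").Value))))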

    Read the article

  • IStatelessSession insert object with many-to-one

    - by Andrew Kalashnikov
    Hello guys. I've got a common mapping:

        <class name="NotSyncPrice, Portal.Core" table='Not_sync_price'>
          <id name="Id" unsaved-value="0">
            <column name="id" not-null="true"/>
            <generator class="native"/>
          </id>
          <many-to-one name="City" class="Clients.Core.Domains.City, Clients.Core"
                       column="city_id" cascade="none"></many-to-one>
          <!--<property name="City">
            <column name="city_id"/>
          </property>-->

    I want to use IStatelessSession for batch inserts. But when I set a City object on the NotSyncPrice object and call IStatelessSession, I get a strange exception:

        NHibernate.Impl.StatelessSessionImpl.get_Timestamp()

    When it's null or an int, all is OK. I've tried using both a real and a proxy City object, but no result. What's wrong? :( Please help.

    Read the article

  • getting mysql_insert_id() while using ON DUPLICATE KEY UPDATE with PHP

    - by julio
    Hi -- I've found a few answers for this using MySQL alone, but I was hoping someone could show me a way to get the ID of the last inserted or updated row of a MySQL DB when using PHP to handle the inserts/updates. Currently I have something like this, where column3 is a unique key, and there's also an id column that's an auto-incremented primary key:

        $query = "INSERT INTO TABLE (column1, column2, column3)
                  VALUES (value1, value2, value3)
                  ON DUPLICATE KEY UPDATE column1=value1, column2=value2, column3=value3";
        mysql_query($query);
        $my_id = mysql_insert_id();

    $my_id is correct on INSERT, but incorrect when it's updating a row (ON DUPLICATE KEY UPDATE). I have seen several posts advising that you use something like

        INSERT INTO table (a) VALUES (0) ON DUPLICATE KEY UPDATE id=LAST_INSERT_ID(id)

    to get a valid ID value when ON DUPLICATE KEY is invoked -- but will this return that valid ID to the PHP mysql_insert_id() function? Thanks for any advice.
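
    Yes: LAST_INSERT_ID(expr) sets the value that the connection's next LAST_INSERT_ID() -- and therefore PHP's mysql_insert_id() -- will return, so the trick carries through to PHP. A sketch against the question's own query:

        $query = "INSERT INTO TABLE (column1, column2, column3)
                  VALUES (value1, value2, value3)
                  ON DUPLICATE KEY UPDATE
                      id = LAST_INSERT_ID(id),   -- makes the updated row's id available
                      column1 = value1, column2 = value2";
        mysql_query($query);
        $my_id = mysql_insert_id();  // valid on both the insert and the update path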

    Read the article

  • how to run a progress bar through an insert query?

    - by Gold
    Hi, I have this insert query:

        try
        {
            Cmd.CommandText = @"INSERT INTO BarcodTbl SELECT * FROM [Text;DATABASE=" + PathI + @"\].[Tmp.txt];";
            Cmd.ExecuteNonQuery();
            Cmd.Dispose();
        }
        catch (Exception ex)
        {
            MessageBox.Show(ex.Message);
        }

    And I have two questions:

    1. How do I run a progress bar from the beginning to the end of the insert?
    2. If there is any error, I get the error exception and the action stops -- the query stops and BarcodTbl is empty. How can I see the error and have the query continue to fill the table?

    Thanks in advance.
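
    A set-based INSERT ... SELECT is a single statement, so the driver reports no progress and one bad line aborts the whole load with nothing inserted. The usual workaround is to read Tmp.txt yourself, insert line by line, advance the progress bar as you go, and collect per-row errors instead of stopping. A sketch under those assumptions (an open OleDbConnection named Conn, a single-column table layout, and a WinForms ProgressBar named progressBar1 are all placeholders):

        string[] lines = File.ReadAllLines(Path.Combine(PathI, "Tmp.txt"));
        progressBar1.Minimum = 0;
        progressBar1.Maximum = lines.Length;

        List<string> errors = new List<string>();
        for (int i = 0; i < lines.Length; i++)
        {
            try
            {
                using (OleDbCommand cmd = Conn.CreateCommand())
                {
                    // Hypothetical single "Barcode" column; adjust to the real layout
                    cmd.CommandText = "INSERT INTO BarcodTbl (Barcode) VALUES (?)";
                    cmd.Parameters.AddWithValue("@barcode", lines[i]);
                    cmd.ExecuteNonQuery();
                }
            }
            catch (Exception ex)
            {
                errors.Add("Line " + (i + 1) + ": " + ex.Message);  // log and keep going
            }
            progressBar1.Value = i + 1;  // one tick per line, beginning to end
        }
        if (errors.Count > 0)
            MessageBox.Show(string.Join(Environment.NewLine, errors.ToArray()));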

    Read the article

  • Adding a Line Break in mySQL INSERT INTO text

    - by james
    Hi. Could someone tell me how to add a new line in text that I enter into a MySQL table? I tried using '\n' in the line I entered with the INSERT INTO statement, but '\n' is shown as-is. Actually, I created a table in MS Access with some data, and MS Access adds a new line with '\n'. I am converting the MS Access table data into MySQL, but when I convert, the '\n' is ignored and all the text is shown on one single line when I display it from the MySQL table on a PHP form. Can anyone tell me how MySQL can add a new line in text? Awaiting response. Thanks!!
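
    MySQL itself does understand \n inside a quoted string literal (CHAR(10) works too); what usually goes wrong is the display side, because HTML collapses newlines, so a PHP page needs nl2br() when echoing the text. A sketch, with notes(body) as a hypothetical table:

        -- Both of these store a real newline between the two lines
        INSERT INTO notes (body) VALUES ('first line\nsecond line');
        INSERT INTO notes (body) VALUES (CONCAT('first line', CHAR(10), 'second line'));

    On the PHP side, something like echo nl2br($row['body']); turns the stored newlines into <br> tags so they survive in HTML.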

    Read the article

  • Insert value in SQL database

    - by littleBrain
    In SQL Server 2005, I'm trying to figure out how to fill in the following fields. Any kind of help will be highly appreciated.

        INSERT INTO [Fisharoo].[dbo].[Accounts]
               ([FirstName]
               ,[LastName]
               ,[Email]
               ,[EmailVerified]
               ,[Zip]
               ,[Username]
               ,[Password]
               ,[BirthDate]
               ,[CreateDate]
               ,[LastUpdateDate]
               ,[TermID]
               ,[AgreedToTermsDate])
        VALUES (<FirstName, varchar(30),>
               ,<LastName, varchar(30),>
               ,<Email, varchar(150),>
               ,<EmailVerified, bit,>
               ,<Zip, varchar(15),>
               ,<Username, varchar(30),>
               ,<Password, varchar(50),>
               ,<BirthDate, smalldatetime,>
               ,<CreateDate, smalldatetime,>
               ,<LastUpdateDate, smalldatetime,>
               ,<TermID, int,>
               ,<AgreedToTermsDate, smalldatetime,>)
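
    Each <name, type,> token is a placeholder generated by Management Studio's INSERT template; the whole token gets replaced by a literal of the stated type. A sketch with made-up sample values:

        INSERT INTO [Fisharoo].[dbo].[Accounts]
               ([FirstName], [LastName], [Email], [EmailVerified], [Zip],
                [Username], [Password], [BirthDate], [CreateDate],
                [LastUpdateDate], [TermID], [AgreedToTermsDate])
        VALUES ('Jane', 'Doe', 'jane.doe@example.com', 1, '90210',
                'janed', 'not-a-real-password-hash', '1985-04-12', GETDATE(),
                GETDATE(), 1, GETDATE());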

    Read the article

  • Proper code but can't insert to database

    - by Dchris
    I have a Visual Basic project using an Access database. I run a query, but I don't see any new data in my database table. I don't get any exception or error; instead, the success message box is shown. Here is my code:

        Dim ID As Integer = 2
        Dim TableNumber As Integer = 2
        Dim OrderDate As Date = Format(Now, "General Date")
        Dim TotalPrice As Double = 100.0
        Dim ConnectionString As String = "myconnectionstring"
        Dim con As New OleDb.OleDbConnection(ConnectionString)

        Try
            Dim InsertCMD As OleDb.OleDbCommand
            InsertCMD = New OleDb.OleDbCommand("INSERT INTO Orders([ID],[TableNumber],[OrderDate],[TotalPrice]) VALUES(@ID,@TableNumber,@OrderDate,@TotalPrice);", con)
            InsertCMD.Parameters.AddWithValue("@ID", ID)
            InsertCMD.Parameters.AddWithValue("@TableNumber", TableNumber)
            InsertCMD.Parameters.AddWithValue("@OrderDate", OrderDate)
            InsertCMD.Parameters.AddWithValue("@TotalPrice", TotalPrice)
            con.Open()
            InsertCMD.ExecuteNonQuery()
            MessageBox.Show("Successfully Added New Order", "Success", MessageBoxButtons.OK, MessageBoxIcon.Information)
            con.Close()
        Catch ex As Exception
            'Something went wrong
            MessageBox.Show(ex.ToString)
        Finally
            'Success or not, make sure it's closed
            If con.State <> ConnectionState.Closed Then con.Close()
        End Try

    What is the problem?

    Read the article

  • Light-weight, free, database query tool for Windows?

    - by NoCatharsis
    My question is very similar to the one here, except pertaining to a Windows tool. I am also referencing this table and what I found here with a Google search. However, I have no idea which tool would best meet my (very basic) purposes. I am currently using Excel with a basic ODBC connection string to query my database at work. However, Excel is pretty memory-heavy, and a basic query tends to throw my computer into a 30-second stall-a-thon. Is there a free tool out there that is lightweight and can serve the same purpose when provided an ODBC connection and a SQL query? I'd also prefer that it copy over to a spreadsheet easily as needed.

    Read the article

  • Visual Query Builder

    - by johnnyArt
    I've been using "dbForge Query Builder" lately, and I've gotten used to the ease of building and testing a query, especially for those complex ones with inner joins, aliases and multiple conditionals. The trial's expiry date is about to arrive, and while I want to remain on the legal side of this, I'd rather not pay the 50 USD it costs (although I must say it's pretty cheap for what it does). So my question would be: are there any free alternatives to replace this visual query builder? I've failed to find any and fear that my only two options are paying for it or going to the dark side.

    Read the article

  • Query optimization using composite indexes

    - by xmarch
    Many times, during the process of creating a new Coherence application, developers do not pay attention to the way cache queries are constructed; they only check that these queries comply with functional specs. Later, performance testing shows that they perform poorly, and it is then that developers start working on improvements until the non-functional performance requirements are met. This post describes the optimization process of a real-life scenario, where using a composite attribute index brought a radical improvement in query execution times. The execution times went down from 4 seconds to 2 milliseconds!

    E-commerce solution based on Oracle ATG - Endeca

    In the context of a new e-commerce solution based on Oracle ATG - Endeca, Oracle Coherence has been used to calculate and store SKU prices. In this architecture, a Coherence cache stores the final SKU prices used for Endeca baseline indexing. Each SKU price is calculated from a base SKU price and a series of calculations based on information from corporate global discounts. Corporate global discount information is stored in an auxiliary Coherence cache with over 800,000 entries. In particular, to obtain each price the process needs to execute six queries over the global discount cache.

    After the implementation was finished, we discovered that the most expensive step in the price calculation process was the global discounts cache query. This query has 10 parameters and is executed 6 times for each SKU price calculation. The steps taken to optimize this query are described below.

    Starting point

    The initial query was:

        String filter = "levelId = :iLevelId AND salesCompanyId = :iSalesCompanyId AND salesChannelId = :iSalesChannelId "+
                        "AND departmentId = :iDepartmentId AND familyId = :iFamilyId AND brand = :iBrand AND manufacturer = :iManufacturer "+
                        "AND areaId = :iAreaId AND endDate >= :iEndDate AND startDate <= :iStartDate";
        Map<String, Object> params = new HashMap<String, Object>(10);
        // Fill all parameters.
        params.put("iLevelId", xxxx);
        ...
        // Executing filter.
        Filter globalDiscountsFilter = QueryHelper.createFilter(filter, params);
        NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME);
        Set applicableDiscounts = globalDiscountsCache.entrySet(globalDiscountsFilter);

    With the small dataset used for development, the cache queries performed very well. However, when carrying out performance testing with a real-world sample size of 800,000 entries, each query execution was taking more than 4 seconds.

    First round of optimizations

    The first optimization step was the creation of a separate Coherence index for each of the 10 attributes used by the filter. This avoided object deserialization while executing the query. Each index was created as follows:

        globalDiscountsCache.addIndex(new ReflectionExtractor("getXXX"), false, null);

    After adding these indexes, the query execution time was reduced to between 450 ms and 1 s. However, these execution times were still not good enough.

    Second round of optimizations

    In this optimization phase, a Coherence query explain plan was used to identify how many entries each index reduced the result set by, along with the cost in ms of executing that part of the query. Though the explain plan showed that all the indexes for the query were being used, it also showed that the ordering of the query parameters was sub-optimal.

    Parameters associated with object attributes of high cardinality should appear at the beginning of the filter -- or, more specifically, the attributes that filter out the highest number of records should be placed at the beginning. But examining the corporate global discount data, we realized that depending on the values of the parameters used in the query, the "good" order for the attributes was different. In particular, if the attributes brand and family had specific values, it was more optimal to have a different query changing the order of the attributes. Ultimately, we ended up with three different optimal variants of the query, each used in its relevant case:

        String filter = "brand = :iBrand AND familyId = :iFamilyId AND departmentId = :iDepartmentId AND levelId = :iLevelId "+
                        "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "+
                        "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate";

        String filter = "familyId = :iFamilyId AND departmentId = :iDepartmentId AND levelId = :iLevelId AND brand = :iBrand "+
                        "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "+
                        "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate";

        String filter = "brand = :iBrand AND departmentId = :iDepartmentId AND familyId = :iFamilyId AND levelId = :iLevelId "+
                        "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "+
                        "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate";

    Using the appropriate query depending on the values of the brand and family parameters, the query execution time dropped to between 100 ms and 150 ms. But these execution times were still not good enough, and the solution was cumbersome.

    Third and last round of optimizations

    The third and final optimization was to introduce a composite index. However, this did mean that it was not possible to use the Coherence Query Language (CohQL), as composite indexes are not currently supported in CohQL. As the original query had 8 parameters using EqualsFilter, 1 using GreaterEqualsFilter and 1 using LessEqualsFilter, the composite index was built over the 8 attributes used with EqualsFilter. The final query had an EqualsFilter for the multiple extractor, plus a GreaterEqualsFilter and a LessEqualsFilter for the 2 remaining attributes. All individual indexes were dropped except the ones being used by LessEqualsFilter and GreaterEqualsFilter. We were now running in a scenario with an 8-attribute composite filter and 2 single-attribute filters.

    The composite index created was as follows:

        ValueExtractor[] ve = {
            new ReflectionExtractor("getSalesChannelId"),
            new ReflectionExtractor("getLevelId"),
            new ReflectionExtractor("getAreaId"),
            new ReflectionExtractor("getDepartmentId"),
            new ReflectionExtractor("getFamilyId"),
            new ReflectionExtractor("getManufacturer"),
            new ReflectionExtractor("getBrand"),
            new ReflectionExtractor("getSalesCompanyId")
        };
        MultiExtractor me = new MultiExtractor(ve);
        NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME);
        globalDiscountsCache.addIndex(me, false, null);

    And the final query was:

        MultiExtractor me = new MultiExtractor(ve);   // same extractor array as above
        // Fill composite parameters.
        String SalesCompanyId = xxxx;
        ...
        AndFilter composite = new AndFilter(
            new EqualsFilter(me, Arrays.asList(iSalesChannelId, iLevelId, iAreaId, iDepartmentId,
                                               iFamilyId, iManufacturer, iBrand, SalesCompanyId)),
            new GreaterEqualsFilter(new ReflectionExtractor("getEndDate"), iEndDate));
        AndFilter finalFilter = new AndFilter(composite,
            new LessEqualsFilter(new ReflectionExtractor("getStartDate"), iStartDate));
        Set applicableDiscounts = globalDiscountsCache.entrySet(finalFilter);

    Using this composite index, the query improved dramatically and the execution time dropped to between 2 ms and 4 ms. These execution times completely met the non-functional performance requirements. It should be noted that when using the composite index, the order of the attributes inside the ValueExtractor was not relevant.

    Read the article

  • Problem in HQL query

    - by Rupeshit
    I wrote a query in MySQL like this: select * from table_name order by col_name = 101 desc, which works perfectly fine in MySQL. But when I tried to convert this query into an HQL query, it threw an exception. Can anyone suggest how to write the HQL query for the above SQL query?
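
    HQL's order by clause is stricter than MySQL's about arbitrary boolean expressions, so one pragmatic way out is to keep the statement as native SQL through Hibernate's createSQLQuery. A sketch, with TableEntity standing in for whatever class table_name is mapped to:

        // Native SQL keeps MySQL's "order by col_name = 101 desc" trick intact;
        // TableEntity is a placeholder for the mapped entity class
        List results = session
            .createSQLQuery("select * from table_name order by col_name = 101 desc")
            .addEntity(TableEntity.class)
            .list();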

    Read the article

  • SQLAlchemy custom query column

    - by thrillerator
    I have a declarative table defined like this:

        class Transaction(Base):
            __tablename__ = "transactions"
            id = Column(Integer, primary_key=True)
            account_id = Column(Integer)
            transfer_account_id = Column(Integer)
            amount = Column(Numeric(12, 2))
            ...

    The query should be:

        SELECT id,
               (CASE WHEN transfer_account_id = 1 THEN -amount ELSE amount END) AS amount
        FROM transactions
        WHERE account_id = 1 OR transfer_account_id = 1

    My code is:

        query = Transaction.query.filter_by(account_id=1, transfer_account_id=1)
        query = query.add_column(func.case(...).label("amount"))

    But it doesn't replace the amount column. I've been trying to do this for hours and I don't want to use raw SQL.
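
    Two things stand out: case is a standalone SQLAlchemy construct (imported directly, not reached through func), and filter_by ANDs its keyword arguments, whereas the target SQL needs OR. A sketch of one way to build the exact query, using the list-of-(condition, value)-pairs case() signature and an assumed session:

        from sqlalchemy import case, or_

        # CASE WHEN transfer_account_id = 1 THEN -amount ELSE amount END AS amount
        amount_expr = case(
            [(Transaction.transfer_account_id == 1, -Transaction.amount)],
            else_=Transaction.amount,
        ).label("amount")

        rows = (
            session.query(Transaction.id, amount_expr)
            .filter(or_(Transaction.account_id == 1,
                        Transaction.transfer_account_id == 1))
            .all()
        )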

    Read the article
