Search Results

Search found 31931 results on 1278 pages for 'sql statement'.


  • alter mysqldump file before import

    - by julio
    Hi -- I have a mysqldump file created from an earlier version of a product that can't be imported into a new version of the product, since the db structure has changed slightly (mainly altering a column that was NOT NULL DEFAULT 0 to UNIQUE KEY DEFAULT NULL). If I just import the old dump file, it will error out, since the column that has default values of 0 now breaks the UNIQUE constraint. It would be easy enough to either manually alter the mysqldump file, or import into a temp table, change it, then copy to the new table. However, is there a way to do this programmatically, so it will be repeatable and not manual? (This will need to happen for many instances of this product.) I'm thinking something like disabling key constraints for the import, then setting all values that equal 0 to NULL, then re-enabling the key constraints. Is this possible? Any help appreciated.
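
    A minimal, repeatable sketch of the staging-table route, assuming a single affected table and column (all object names below are placeholders, not taken from the dump): restore the old dump into a copy of the table that has no UNIQUE key, convert the old 0 defaults to NULL, then move the rows into the real table. The dump file itself stays untouched, so the same script can run against every instance.

        -- Placeholder names: my_table / my_column stand in for the real objects.
        CREATE TABLE my_table_staging LIKE my_table;
        ALTER TABLE my_table_staging DROP INDEX my_column;   -- assumes the UNIQUE index is named after the column
        -- ... restore the old dump's rows into my_table_staging here ...
        UPDATE my_table_staging SET my_column = NULL WHERE my_column = 0;
        INSERT INTO my_table SELECT * FROM my_table_staging;
        DROP TABLE my_table_staging;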

    Read the article

  • Java: JPQL search -similar- strings

    - by bguiz
    What methods are there to get JPQL to match similar strings? By similar I mean:
    - Contains: the search string is found within the string of the matched entity
    - Case-insensitive
    - Small misspellings: e.g. "arow" matches "arrow"
    I suspect the first two will be easy; however, I would appreciate help with the last one. Thank you.
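
    For reference, a rough sketch in plain SQL (the table and column names are invented). The first two requirements map to LOWER plus LIKE, which JPQL supports directly; the misspelling case usually means dropping to a native query and using something database-specific such as MySQL's SOUNDEX:

        -- Hypothetical table "products" with a "name" column.
        SELECT *
        FROM products
        WHERE LOWER(name) LIKE LOWER(CONCAT('%', 'arow', '%'))   -- case-insensitive contains
           OR SOUNDEX(name) = SOUNDEX('arow');                   -- catches simple misspellings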

    Read the article

  • How to retrieve the last primary Id from an mdb table?

    - by William
    I have a table with the following columns: Id, Name, Age, Class. I am trying to insert a new row like this: INSERT INTO MyTable (Name, Age, Class) VALUES (@name, @age, @class) and I get an exception: "Index or primary key cannot contain a Null value." The question is how to add a new row without knowing the next primary Id, or maybe there is a way to get this Id from the table with the help of another query?
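
    The error suggests Id is a plain primary key rather than an AutoNumber column, so Access expects a value for it; the usual fix is to make Id an AutoNumber so the database assigns it. Failing that, one (race-prone) workaround sketch, using the names from the question, is to look the next value up first and pass it in as an extra parameter:

        -- Step 1: fetch the next Id (returns NULL on an empty table, so handle that in code).
        SELECT MAX(Id) + 1 AS NextId FROM MyTable;

        -- Step 2: insert with the explicit Id.
        INSERT INTO MyTable (Id, Name, Age, Class) VALUES (@id, @name, @age, @class);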

    Read the article

  • error in arabic script in mysql

    - by fusion
    I inserted data into a MySQL database which includes Arabic script. While the output displays Arabic correctly, the data in MySQL looks like garbage, something like this: '&#1589;&#1614;&#1608;&#1605;&#1615; &#1579;&#1614;&#1604;&#1575;&#1579;&#1614;&#1577;&#1616; &#1571;&#1610;&#1617;&#1575;&#1605;&#1613; &#1605;&#1616;&#1606; &#1603;&#1615;&#1604;&#1617;&#1616; &#1588;&#1614;&#1607;&#1585;&#1613; &#1600; &#1571;&#1585;&#1576;&#1614;&#1593;&#1575;&#1569;&#1615; &#1576;&#1614;&#1610;&#1606;&#1614; &#1582;&#1614; Should I be worried about this? If yes, how do I make it appear in proper Arabic script in MySQL? Thanks.
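
    If the stored values literally contain &#...; sequences, the text was HTML-encoded before the INSERT, and decoding has to happen in the application, not in MySQL. If instead the column holds real Arabic bytes that only look wrong in the client, it is a character-set mismatch; a sketch of the usual fix (the table name is a placeholder, and on servers older than MySQL 5.5.3 use utf8 instead of utf8mb4):

        -- Make the connection and the column storage both UTF-8.
        SET NAMES utf8mb4;
        ALTER TABLE my_table CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;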

    Read the article

  • Cannot add an entity that already exists.

    - by mazhar
    Code:
    public ActionResult Create(Group group)
    {
        if (ModelState.IsValid)
        {
            group.int_CreatedBy = 1;
            group.dtm_CreatedDate = DateTime.Now;
            var Groups = Request["Groups"];
            int GroupId = 0;
            GroupFeature GroupFeature = new GroupFeature();
            foreach (var GroupIdd in Groups)
            {
                // GroupId = int.Parse(GroupIdd.ToString());
            }
            var Features = Request["Features"];
            int FeatureId = 0;
            int t = 0;
            int ids = 0;
            string[] Feature = Features.Split(',').ToArray();
            //foreach (var FeatureIdd in Features)
            for (int i = 0; i < Feature.Length; i++)
            {
                if (int.TryParse(Feature[i].ToString(), out ids))
                {
                    GroupFeature.int_GroupId = 35;
                    GroupFeature.int_FeaturesId = ids;
                    if (ids != 0)
                    {
                        GroupFeatureRepository.Add(GroupFeature);
                        GroupFeatureRepository.Save();
                    }
                }
            }
            return RedirectToAction("Details", new { id = group.int_GroupId });
        }
        return View();
    }
    I am getting the error "Cannot add an entity that already exists." at this line: GroupFeatureRepository.Add(GroupFeature);

    Read the article

  • Is there an alternative to Microsoft.SqlServer.Management.Smo.SqlDataType that includes a value for rowversion?

    - by Daniel Schaffer
    The Microsoft.SqlServer.Management.Smo.SqlDataType enum has a value for the timestamp type but not rowversion. I'm looking for an updated version of the assembly or an alternate enum type that supports it. The existing enum has a value for Timestamp, but according to the rowversion documentation, timestamp is "deprecated and will be removed in a future version". I prefer to avoid using deprecated things :)
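
    Context for why the enum only lists Timestamp: in T-SQL, rowversion is a synonym for timestamp, so a column declared with either keyword comes back from the catalog as the same underlying type. A quick way to see this (the demo table name is made up):

        CREATE TABLE dbo.RowVerDemo (Id INT PRIMARY KEY, RowVer ROWVERSION);

        SELECT TYPE_NAME(system_type_id) AS type_name   -- returns 'timestamp'
        FROM sys.columns
        WHERE object_id = OBJECT_ID('dbo.RowVerDemo') AND name = 'RowVer';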

    Read the article

  • Why are joins bad when considering scalability?

    - by acidzombie24
    Why are joins bad or 'slow'? I know I've heard this more than once. I found this quote: "The problem is joins are relatively slow, especially over very large data sets, and if they are slow your website is slow. It takes a long time to get all those separate bits of information off disk and put them all together again." (source) I always thought they were fast, especially when looking up a PK. Why are they 'slow'?

    Read the article

  • Right way to implement an n-to-m relation

    - by ThreeFingerMark
    Hello, this is a part of my database structure:
    Table: Item       Columns: ItemID, Title, Content, Price
    Table: Tag        Columns: TagID, Title
    Table: ItemTag    Columns: ItemID, TagID
    Table: Image      Columns: ImageID, Path, Size, UploadDate
    Table: ItemImage  Columns: ItemID, ImageID
    Items can have more than one image, so I have an extra table "Image" and map these images to items. I now see a problem with this structure: before I can add images I must enter an item. My question is: is this structure a good way to solve my problem with many images / tags for one item? Thank you.
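
    For what it's worth, a sketch of one of the junction tables with a composite key and foreign keys (the column types are guessed); this is the standard way to model the many-to-many link, and the "image before item" ordering issue goes away if images are inserted on their own and only linked through ItemImage once the item exists:

        CREATE TABLE ItemImage (
            ItemID  INT NOT NULL,
            ImageID INT NOT NULL,
            PRIMARY KEY (ItemID, ImageID),                    -- one row per item/image pair
            FOREIGN KEY (ItemID)  REFERENCES Item (ItemID),
            FOREIGN KEY (ImageID) REFERENCES Image (ImageID)
        );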

    Read the article

  • MySQL: Efficient Blobbing?

    - by feklee
    I'm dealing with blobs of up to (I estimate) about 100 kilobytes in size. The data is compressed already. Storage engine: InnoDB on MySQL 5.1. Frontend: PHP (Symfony with Propel ORM). Some questions:
    1. I've read somewhere that it's not good to update blobs, because it leads to reallocation, fragmentation, and thus bad performance. Is that true? Any reference on this?
    2. Initially the blobs get constructed by appending data chunks. Each chunk is up to 16 kilobytes in size. Is it more efficient to use a separate chunk table instead, for example with fields as below?
       parent_id, position, chunk
       Then, to get the entire blob, one would do something like:
       SELECT GROUP_CONCAT(chunk ORDER BY position) FROM chunks WHERE parent_id = 187
       The result would be used in a PHP script.
    3. Is there any difference between the types of blobs, aside from the size needed for metadata, which should be negligible?
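
    A sketch of the chunk-table idea from question 2, with types guessed and the names taken from the question. One practical note: GROUP_CONCAT output is truncated at group_concat_max_len (1024 bytes by default), so it has to be raised well above the largest blob, and an explicit empty SEPARATOR avoids commas being inserted between chunks:

        CREATE TABLE chunks (
            parent_id INT NOT NULL,
            position  INT NOT NULL,
            chunk     BLOB NOT NULL,               -- up to 16 KB per chunk
            PRIMARY KEY (parent_id, position)      -- keeps a parent's chunks together
        ) ENGINE=InnoDB;

        -- Raise the concat limit for this session before reassembling:
        SET SESSION group_concat_max_len = 1048576;
        SELECT GROUP_CONCAT(chunk ORDER BY position SEPARATOR '') AS blob_data
        FROM chunks
        WHERE parent_id = 187;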

    Read the article

  • T-SQL Query, combine columns from multiple rows into single column

    - by Shayne
    I have seen some examples of what I am trying to do using COALESCE and FOR XML (FOR XML seems like the better solution); I just can't quite get the syntax right. Here is what I have (shortened to only the key fields):
    Table              Fields
    ------             -------------------------------
    Requisition        ID, Number
    IssuedPO           ID, Number
    Job                ID, Number
    Job_Activity       ID, JobID (fkey)
    RequisitionItems   ID, RequisitionID (fkey), IssuedPOID (fkey), Job_ActivityID (fkey)
    I need a query that will list ONE Requisition per line with its associated Jobs and IssuedPOs. (The requisition numbers start with "R-" and the job numbers start with "J-".) Example:
    R-123 | "PO1; PO2; PO3" | "J-12345; J-6780"
    Sure thing Adam! Here is a query that returns multiple rows. I have to use outer joins, since not all Requisitions have RequisitionItems that are assigned to Jobs and/or IssuedPOs (in that case their fkey IDs would just be null of course).
    SELECT DISTINCT Requisition.Number, IssuedPO.Number, Job.Number
    FROM Requisition
    INNER JOIN RequisitionItem ON RequisitionItem.RequisitionID = Requisition.ID
    LEFT OUTER JOIN Job_Activity ON RequisitionItem.JobActivityID = Job_Activity.ID
    LEFT OUTER JOIN Job ON Job_Activity.JobID = Job.ID
    LEFT OUTER JOIN IssuedPO ON RequisitionItem.IssuedPOID = IssuedPO.ID
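
    A hedged sketch of the FOR XML PATH('') approach, reusing the table and column names listed above (the fkey column is spelled Job_ActivityID per the table list; adjust if the actual name differs, and the Number columns are assumed to be character types):

        SELECT r.Number AS RequisitionNumber,
               STUFF((SELECT '; ' + p.Number
                      FROM RequisitionItems ri
                      INNER JOIN IssuedPO p ON p.ID = ri.IssuedPOID
                      WHERE ri.RequisitionID = r.ID
                      FOR XML PATH('')), 1, 2, '') AS IssuedPOs,   -- duplicates are not removed here
               STUFF((SELECT '; ' + j.Number
                      FROM RequisitionItems ri
                      INNER JOIN Job_Activity ja ON ja.ID = ri.Job_ActivityID
                      INNER JOIN Job j ON j.ID = ja.JobID
                      WHERE ri.RequisitionID = r.ID
                      FOR XML PATH('')), 1, 2, '') AS Jobs
        FROM Requisition r;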

    Read the article

  • Proper way to structure a Sync Framework DAL

    - by Refracted Paladin
    I am creating a WPF app that needs to allow users to work in a temporarily disconnected state, and I plan to use a local database cache. My questions are about my data access layer. Do you typically point the whole DAL at the cache, or at both and create a switching mechanism? Is Entity Framework a good way to go for my DAL against the cache? I am used to L2S, but my understanding is that I can't use that against SQLCE, correct? Thanks! PS: Any good resources out there for using Sync, LINQ, and WPF all together? Tutorials, videos, etc.?

    Read the article

  • oracle search word in string

    - by Atul
    I want to search for a word within a comma-separated string in Oracle. E.g. the string is 'MF1,MF2,MF3' and I want to check whether 'MF' exists in it or not. If I use instr('MF1,MF2,MF3','MF'), it gives the wrong result, since it also matches the 'MF' inside MF1, MF2, and MF3; I want to match the full 'MF' only.
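
    Two common ways (sketches) to match a complete comma-delimited element rather than a substring; both return a row only when 'MF' appears as a whole element of the list:

        -- Wrap both the list and the search term in delimiters:
        SELECT 'found' FROM dual
        WHERE ',' || 'MF1,MF2,MF3' || ',' LIKE '%,MF,%';

        -- Or use a regular expression anchored on the delimiters:
        SELECT 'found' FROM dual
        WHERE REGEXP_LIKE('MF1,MF2,MF3', '(^|,)MF(,|$)');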

    Read the article

  • Can MySQL reasonably perform queries on billions of rows?

    - by haxney
    I am planning on storing scans from a mass spectrometer in a MySQL database and would like to know whether storing and analyzing this amount of data is remotely feasible. I know performance varies wildly depending on the environment, but I'm looking for the rough order of magnitude: will queries take 5 days or 5 milliseconds?

    Input format: each input file contains a single run of the spectrometer; each run is comprised of a set of scans, and each scan has an ordered array of datapoints. There is a bit of metadata, but the majority of the file is comprised of arrays of 32- or 64-bit ints or floats.

    Host system:
    |----------------+-------------------------------|
    | OS             | Windows 2008 64-bit           |
    | MySQL version  | 5.5.24 (x86_64)               |
    | CPU            | 2x Xeon E5420 (8 cores total) |
    | RAM            | 8GB                           |
    | SSD filesystem | 500 GiB                       |
    | HDD RAID       | 12 TiB                        |
    |----------------+-------------------------------|
    There are some other services running on the server using negligible processor time.

    File statistics:
    |------------------+--------------|
    | number of files  | ~16,000      |
    | total size       | 1.3 TiB      |
    | min size         | 0 bytes      |
    | max size         | 12 GiB       |
    | mean             | 800 MiB      |
    | median           | 500 MiB      |
    | total datapoints | ~200 billion |
    |------------------+--------------|
    The total number of datapoints is a very rough estimate.

    Proposed schema: I'm planning on doing things "right" (i.e. normalizing the data like crazy) and so would have a runs table, a spectra table with a foreign key to runs, and a datapoints table with a foreign key to spectra.

    The 200 billion datapoint question: I am going to be analyzing across multiple spectra and possibly even multiple runs, resulting in queries which could touch millions of rows. Assuming I index everything properly (which is a topic for another question) and am not trying to shuffle hundreds of MiB across the network, is it remotely plausible for MySQL to handle this?

    UPDATE: additional info. The scan data will be coming from files in the XML-based mzML format. The meat of this format is in the <binaryDataArrayList> elements where the data is stored. Each scan produces >= 2 <binaryDataArray> elements which, taken together, form a 2-dimensional (or more) array of the form [[123.456, 234.567, ...], ...]. These data are write-once, so update performance and transaction safety are not concerns. My naïve plan for a database schema is:

    runs table:
    | column name | type        |
    |-------------+-------------|
    | id          | PRIMARY KEY |
    | start_time  | TIMESTAMP   |
    | name        | VARCHAR     |
    |-------------+-------------|

    spectra table:
    | column name    | type        |
    |----------------+-------------|
    | id             | PRIMARY KEY |
    | name           | VARCHAR     |
    | index          | INT         |
    | spectrum_type  | INT         |
    | representation | INT         |
    | run_id         | FOREIGN KEY |
    |----------------+-------------|

    datapoints table:
    | column name | type        |
    |-------------+-------------|
    | id          | PRIMARY KEY |
    | spectrum_id | FOREIGN KEY |
    | mz          | DOUBLE      |
    | num_counts  | DOUBLE      |
    | index       | INT         |
    |-------------+-------------|

    Is this reasonable?

    Read the article

  • How to use a database to generate multiple folder content pages?

    - by VenomVipes
    Scenario: I am trying to build a Mobile Entertainment Portal. It will enable users to download music and movies to their cell phones. Problem example: suppose I upload 100 folders of songs, each folder being one album. I want a way to generate a page with all the folder names (album names) on it. When users click an album on that page, they should be taken to a page listing all songs in the album. Clicking on any song name will let them download it. Can it be done in some way, or will I have to manually design each of the 3 pages for each album? If I do that, it's time-consuming and will also make it difficult to change anything like the footer, header...
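
    A minimal schema sketch (all names here are invented) under which the three pages become queries over data rather than hand-built pages per album; each page is one template rendered from the matching query:

        CREATE TABLE albums (
            album_id INT AUTO_INCREMENT PRIMARY KEY,
            title    VARCHAR(255) NOT NULL
        ) ENGINE=InnoDB;

        CREATE TABLE songs (
            song_id   INT AUTO_INCREMENT PRIMARY KEY,
            album_id  INT NOT NULL,
            title     VARCHAR(255) NOT NULL,
            file_path VARCHAR(500) NOT NULL,            -- where the uploaded file lives
            FOREIGN KEY (album_id) REFERENCES albums (album_id)
        ) ENGINE=InnoDB;

        -- Page 1: list all albums.
        SELECT album_id, title FROM albums ORDER BY title;
        -- Page 2: list the songs of the clicked album.
        SELECT song_id, title FROM songs WHERE album_id = ?;
        -- Page 3: resolve the clicked song to its downloadable file.
        SELECT file_path FROM songs WHERE song_id = ?;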

    Read the article

  • mysql replace matching but not changing

    - by alex
    I've used MySQL's UPDATE ... REPLACE() before, but even though I think I'm following the same syntax, I can't get this to work: it matches the rows, but doesn't replace. Here's what I'm trying to do:
    mysql> update contained_widgets set preference_values = replace(preference_values,
        '<li><a_href="/enewsletter"><span class="not-tc">eNewsletter</span></a></li>',
        '<li><a_href="/enewsletter"><span class="not-tc">eNewsletter</span></a></li> <li> <a_href="/projects"><span class="not-tc">Projects</span></a></li>');
    Query OK, 0 rows affected (0.00 sec)
    Rows matched: 77  Changed: 0  Warnings: 0
    I don't see what I'm missing. Any help is appreciated. (I edited "a " to "a_" because the site thinks I'm posting spam links otherwise.)
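
    Worth noting how that output reads: with no WHERE clause, "Rows matched: 77" just means the UPDATE visited all 77 rows, while "Changed: 0" means REPLACE() returned every value unchanged, i.e. the search string never occurs verbatim in preference_values (whitespace, attribute order, or quoting probably differ). A quick diagnostic sketch (the real markup uses "a href", not the "a_href" spelling forced by the spam filter):

        -- How many rows actually contain the exact search string?
        SELECT COUNT(*)
        FROM contained_widgets
        WHERE preference_values LIKE '%<li><a href="/enewsletter"><span class="not-tc">eNewsletter</span></a></li>%';

        -- Loosen the pattern to see what the stored markup really looks like:
        SELECT preference_values
        FROM contained_widgets
        WHERE preference_values LIKE '%/enewsletter%'
        LIMIT 5;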

    Read the article

  • Data historian queries

    - by Scott Dennis
    Hi, I have a table that contains data for electric motors. The format is:
    DATE (DateTime)         | TagName (VarChar(50)) | Val (Float)
    2009-11-03 17:44:13.000 | Motor_1               | 123.45
    2009-11-04 17:44:13.000 | Motor_1               | 124.45
    2009-11-05 17:44:13.000 | Motor_1               | 125.45
    2009-11-03 17:44:13.000 | Motor_2               | 223.45
    2009-11-04 17:44:13.000 | Motor_2               | 224.45
    Data for each motor is inserted daily, so there would be 31 Motor_1 rows and 31 Motor_2 rows, etc. We do this so we can trend it on our control system displays. I am using views to extract last month's max val and last month's min val, and the same for this month's data. Then I join the two and calculate the difference to get the actual run hours for that month. The "Val" is a non-resettable accumulation from a PLC (controller).
    This is my query for last month's max value:
    SELECT TagName, Val AS Hours
    FROM dbo.All_Data_From_Last_Mon AS cur
    WHERE (NOT EXISTS (SELECT TagName, Val
                       FROM dbo.All_Data_From_Last_Mon AS high
                       WHERE (TagName = cur.TagName) AND (Val > cur.Val)))
    This is my query for last month's min value:
    SELECT TagName, Val AS Hours
    FROM dbo.All_Data_From_Last_Mon AS cur
    WHERE (NOT EXISTS (SELECT TagName, Val
                       FROM dbo.All_Data_From_Last_Mon AS high
                       WHERE (TagName = cur.TagName) AND (Val < cur.Val)))
    This is the query that calculates the difference and runs a bit slow:
    SELECT dbo.Motors_Last_Mon_Max.TagName,
           STR(dbo.Motors_Last_Mon_Max.Hours - dbo.Motors_Last_Mon_Min.Hours, 12, 2) AS Hours
    FROM dbo.Motors_Last_Mon_Min
    RIGHT OUTER JOIN dbo.Motors_Last_Mon_Max
        ON dbo.Motors_Last_Mon_Min.TagName = dbo.Motors_Last_Mon_Max.TagName
    I know there is a better way. Ultimately I just need last month's total and this month's total. Any help would be appreciated. Thanks in advance.
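
    A simpler single-pass sketch: since Val only accumulates, last month's run hours per motor are just MAX(Val) - MIN(Val) grouped by TagName, which avoids the correlated NOT EXISTS subqueries and the extra join entirely (the same pattern works against the view holding this month's data):

        SELECT TagName,
               STR(MAX(Val) - MIN(Val), 12, 2) AS Hours
        FROM dbo.All_Data_From_Last_Mon
        GROUP BY TagName;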

    Read the article

  • Textbox is disabled after adding text dynamically in codebehind

    - by user1761348
    I can't quite work out what is happening here. The background is that I am dynamically adding table rows in a web page, and some of the cells hold controls such as dropdowns. One of the columns pulls a size into it. The column I am having issues with is the next one, which takes the text from the previous dropdown and shows the appropriate price. On doing this, however, the textbox being created appears to turn into a label, as I cannot select or adjust the text that has been put in there.
    var gotPrice = (from a in getPrice.Sizes
                    where a.Size1 == Size.SelectedValue
                    select a).First();
    TextBox Price = new TextBox();
    Price.Width = 100;
    PriceField.Controls.Add(Price);
    PriceField.Text = gotPrice.RackRate.ToString();
    I have then tried setting .Enabled, but the text box is still not editable. Any help appreciated.

    Read the article

  • Calculate open timeslots given availability and existing appointments - by day

    - by Andre
    Overview: I have a table which stores a person's "availability" for a given day, e.g.
    Mon - 8:00am - 11:30am
    Mon - 1:30pm - 6:00pm
    A second table stores appointments that this person already has for the same day, e.g.
    Mon - 8:30am - 11:00am
    Mon - 2:30pm - 4:00pm
    Desired result: doing the calculation, I'd like to end up with the following, e.g. "this person has availability on the given day":
    Mon - 8:00am - 8:30am
    Mon - 11:00am - 11:30am
    Mon - 1:30pm - 2:30pm
    Mon - 4:00pm - 6:00pm
    Any ideas on how to calculate the output given the two inputs (availability and existing appointments) would be greatly appreciated. Preferably I'd use JavaScript on the client to do the calculating, as I believe that doing it within the DB (I'm using MSSQL) would be slow for many records, persons, etc. Hope this is enough information to illustrate the problem at hand. Many thanks in advance.

    Read the article

  • Advanced queries in HBase

    - by Teflon Ted
    Given the following HBase schema scenario (from the official FAQ)...
    How would you design an HBase table for a many-to-many association between two entities, for example Student and Course? I would define two tables:
    Student:
    - student id
    - student data (name, address, ...)
    - courses (use course ids as column qualifiers here)
    Course:
    - course id
    - course data (name, syllabus, ...)
    - students (use student ids as column qualifiers here)
    This schema gives you fast access to the queries: show all classes for a student (student table, courses family), or all students for a class (courses table, students family).
    How would you satisfy the request: "Give me all the students that share at least two courses in common"? Can you build a "query" in HBase that will return that set, or do you have to retrieve all the pertinent data and crunch it yourself in code?

    Read the article

  • How to apply GROUP_CONCAT in mysql Query

    - by Query Master
    How do I apply GROUP_CONCAT to this query? If you have any idea or an alternate solution, please share it; help is definitely appreciated (see the query and the required result below).
    Query:
    SELECT WEEK(cpd.added_date) AS week_no,
           COUNT(cpd.result)    AS death_count
    FROM cron_players_data cpd
    WHERE cpd.player_id = 81
      AND cpd.result = 2
      AND cpd.status = 1
    GROUP BY WEEK(cpd.added_date);
    Query output: see the result screen (screenshot) in the original post.
    Result required:
    23,24,25 AS week_no
    2,3,1 AS death_count
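
    One way (a sketch) to collapse the per-week rows into the two comma-separated lists shown under "Result required" is to wrap the existing query in a derived table and GROUP_CONCAT both columns:

        SELECT GROUP_CONCAT(week_no     ORDER BY week_no) AS week_no,
               GROUP_CONCAT(death_count ORDER BY week_no) AS death_count
        FROM (
            SELECT WEEK(cpd.added_date) AS week_no,
                   COUNT(cpd.result)    AS death_count
            FROM cron_players_data cpd
            WHERE cpd.player_id = 81
              AND cpd.result = 2
              AND cpd.status = 1
            GROUP BY WEEK(cpd.added_date)
        ) AS weekly;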

    Read the article
