Search Results

Search found 21433 results on 858 pages for 'query execution plans'.

Page 325/858

  • Transaction issue in java with hibernate - latest entries not pulled from database

    - by Gearóid
    Hi, I'm having what seems to be a transactional issue in my application. I'm using Java 1.6 and Hibernate 3.2.5. My application runs a monthly process where it creates billing entries for every user in the database based on their monthly activity. These billing entries are then used to create a MonthlyBill object. The process is:

    1. Get users who have activity in the past month
    2. Create the relevant billing entries for each user
    3. Get the set of billing entries that we've just created
    4. Create a MonthlyBill based on these entries

    Everything works fine until step 3 above. The billing entries are correctly created (I can see them in the database if I add a breakpoint after the billing entry creation method), but they are not pulled out of the database. As a result, an incorrect MonthlyBill is generated. If I run the code again (without clearing out the database), new billing entries are created and step 3 pulls out the entries created in the first run (but not the second run). This, to me, is very confusing. My code looks like the following:

        for (User user : usersWithActivities) {
            createBillingEntriesForUser(user.getId());
            userBillingEntries = getLastMonthsBillingEntriesForUser(user.getId());
            createXMLBillForUser(user.getId(), userBillingEntries);
        }

    The methods called look like the following:

        @Transactional
        public void createBillingEntriesForUser(Long id) {
            UserManager userManager = ManagerFactory.getUserManager();
            User user = userManager.getUser(id);
            List<AccountEvent> events = getLastMonthsAccountEventsForUser(id);
            BillingEntry entry = new BillingEntry();
            if (null != events) {
                for (AccountEvent event : events) {
                    if (event.getEventType().equals(EventType.ENABLE)) {
                        Calendar cal = Calendar.getInstance();
                        Date eventDate = event.getTimestamp();
                        cal.setTime(eventDate);
                        double startDate = cal.get(Calendar.DATE);
                        double numOfDaysInMonth = cal.getActualMaximum(Calendar.DAY_OF_MONTH);
                        double numberOfDaysInUse = numOfDaysInMonth - startDate;
                        double fractionToCharge = numberOfDaysInUse / numOfDaysInMonth;
                        BigDecimal amount = BigDecimal.valueOf(fractionToCharge * Prices.MONTHLY_COST);
                        amount.scale();
                        entry.setAmount(amount);
                        entry.setUser(user);
                        entry.setTimestamp(eventDate);
                        userManager.saveOrUpdate(entry);
                    }
                }
            }
        }

        @Transactional
        public Collection<BillingEntry> getLastMonthsBillingEntriesForUser(Long id) {
            if (log.isDebugEnabled())
                log.debug("Getting all the billing entries for last month for user with ID " + id);
            // String queryString = "select billingEntry from BillingEntry as billingEntry where billingEntry>=:firstOfLastMonth and billingEntry.timestamp<:firstOfCurrentMonth and billingEntry.user=:user";
            String queryString = "select be from BillingEntry as be join be.user as user where user.id=:id and be.timestamp>=:firstOfLastMonth and be.timestamp<:firstOfCurrentMonth";
            // This parameter will be the start of the last month, i.e. the start of the billing cycle
            SearchParameter firstOfLastMonth = new SearchParameter();
            firstOfLastMonth.setTemporalType(TemporalType.DATE);
            // This parameter holds the start of the CURRENT month, i.e. the end of the billing cycle
            SearchParameter firstOfCurrentMonth = new SearchParameter();
            firstOfCurrentMonth.setTemporalType(TemporalType.DATE);
            Query query = super.entityManager.createQuery(queryString);
            query.setParameter("firstOfCurrentMonth", getFirstOfCurrentMonth());
            query.setParameter("firstOfLastMonth", getFirstOfLastMonth());
            query.setParameter("id", id);
            List<BillingEntry> entries = query.getResultList();
            return entries;
        }

        public MonthlyBill createXMLBillForUser(Long id, Collection<BillingEntry> billingEntries) {
            BillingHistoryManager manager = ManagerFactory.getBillingHistoryManager();
            UserManager userManager = ManagerFactory.getUserManager();
            MonthlyBill mb = new MonthlyBill();
            User user = userManager.getUser(id);
            mb.setUser(user);
            mb.setTimestamp(new Date());
            Set<BillingEntry> entries = new HashSet<BillingEntry>();
            entries.addAll(billingEntries);
            String xml = createXmlForMonthlyBill(user, entries);
            mb.setXmlBill(xml);
            mb.setBillingEntries(entries);
            MonthlyBill bill = (MonthlyBill) manager.saveOrUpdate(mb);
            return bill;
        }

    Help with this issue would be greatly appreciated, as it's been racking my brain for weeks now! Thanks in advance, Gearoid.
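
    For anyone hitting the same symptom, a sketch of one common remedy (an assumption on my part, not a confirmed fix): since each @Transactional method runs and commits separately, and self-invocation through `this` can bypass the Spring proxy entirely, wrapping the write and the read in one transaction and flushing the session before querying makes the new entries visible to the query:

        // Sketch: one transaction around the whole per-user billing run.
        // 'sessionFactory' is a hypothetical injected Hibernate SessionFactory.
        @Transactional
        public void billUser(Long id) {
            createBillingEntriesForUser(id);             // pending INSERTs in this session
            sessionFactory.getCurrentSession().flush();  // push them to the database
            Collection<BillingEntry> entries = getLastMonthsBillingEntriesForUser(id);
            createXMLBillForUser(id, entries);
        }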

    Read the article

  • Perform Grouping of Resultset in Code

    - by NinjaBomb
    Stackoverflowers, I have a resultset from a SQL query in the form of:

        Category  Column2  Column3
        A         2        3.50
        A         3        2
        B         3        2
        B         1        5
        ...

    I need to group the resultset based on the Category column and sum the values for Column2 and Column3. I have to do the grouping in code because the query that gets the data is too complex for me to add grouping to it (long story). The grouped data will then be displayed in a table. I have it working for a specific set of values in the Category column, but I would like a solution that handles any possible values that appear in the Category column. I know there has to be a straightforward, efficient way to do it, but I cannot wrap my head around it right now. How would you accomplish it?
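
    A minimal sketch of one way to do this with LINQ, assuming the resultset is loaded into a DataTable and the column names match the question (both assumptions mine):

        // Sketch: group the rows by Category and sum the two value columns.
        // Requires System.Linq and System.Data.DataSetExtensions.
        var grouped = table.AsEnumerable()
            .GroupBy(r => r.Field<string>("Category"))
            .Select(g => new
            {
                Category   = g.Key,
                Column2Sum = g.Sum(r => Convert.ToDecimal(r["Column2"])),
                Column3Sum = g.Sum(r => Convert.ToDecimal(r["Column3"]))
            });

    Because the grouping key comes from the data itself, this handles any set of category values without hard-coding them.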

    Read the article

LINQ LIKE or other construction

    - by Yauhen Kavalenka
    I have an Oracle DB in my solution, and I want this query to return some results. Query example:

        select * from doctor where doctor.name like '%IVANOV_A%';

    But when I try the equivalent in LINQ I cannot get any result:

        from p in repository.Doctor.Where(x => x.Name.ToLower().Contains(name))
        select p;

    where 'name' is a string parameter. The web layout may send the name as "Ivanov A" or "A Ivanov", and I'd like to let the user choose their own pattern for the query. How can I look up a doctor by name when the name consists of a first name and a last name, but the user doesn't know the doctor's full name?
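
    A sketch of one approach (my assumption, not a confirmed fix; it assumes repository.Doctor is an IQueryable): split the input into tokens and require each token to appear in the name, so "Ivanov A" and "A Ivanov" match the same rows:

        // Sketch: order-insensitive matching by chaining one Contains per token.
        var tokens = name.ToLower().Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
        IQueryable<Doctor> q = repository.Doctor;
        foreach (var t in tokens)
        {
            var token = t; // copy to avoid the captured-loop-variable pitfall
            q = q.Where(x => x.Name.ToLower().Contains(token));
        }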

    Read the article

How to group complex list objects using LINQ

    - by Daoming Yang
    I want to select and group the products, and rank them by the number of times they occur. For example, I have an OrderList; each Order object has an OrderProductVariantList (OrderLineList), each OrderProductVariant object has a ProductVariant, and the ProductVariant object has a Product object which contains the product information. A friend helped me with the following code. It compiles, but it does not return any value/result. When I used the watch window on the query it gave me "The name 'query' does not exist in the current context". Can anyone help me? Many thanks.

        var query = orderList
            .SelectMany( o => o.OrderLineList ) // results in IEnumerable<OrderProductVariant>
            .Select( opv => opv.ProductVariant )
            .Select( pv => p.Product )
            .GroupBy( p => p )
            .Select( g => new { Product = g.Key, Count = g.Count() });
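
    Two things stand out to a reader (assumptions on my part, not a confirmed answer): the third Select presumably wants pv.Product rather than p.Product, and grouping by an entity only behaves if Product has value equality, so grouping by its key is safer. A sketch, assuming Product exposes an Id:

        // Sketch: count how often each product occurs across all orders.
        var productCounts = orderList
            .SelectMany(o => o.OrderLineList)
            .Select(opv => opv.ProductVariant.Product)
            .GroupBy(p => p.Id) // group by key, not by object identity
            .Select(g => new { Product = g.First(), Count = g.Count() })
            .OrderByDescending(x => x.Count);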

    Read the article

  • SQL Access INSERT INTO Autonumber Field

    - by KrazyKash
    I'm trying to make a Visual Basic application which is connected to a Microsoft Access database using OLEDB. Inside my database I have a user table with the following layout:

        ID       - AutoNumber
        Username - Text
        Password - Text
        Email    - Text

    To insert data into the table I use the following query:

        INSERT INTO Users (Username, Password, Email)
        VALUES ('004606', 'Password', '[email protected]')

    However I seem to get an error with this statement, and according to VB it's a syntax error. But then I tried the following query:

        INSERT INTO Users (Username)
        VALUES ('004606')

    This query seemed to work absolutely fine... So the problem is I can insert into just one field but not all 3 (excluding the ID field because it's an AutoNumber). Any help would be appreciated. Thanks
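
    One likely culprit worth checking (a guess, not confirmed by the poster): Password is a reserved word in Access SQL, so as a column name it needs square brackets. A sketch, with a placeholder email value:

        INSERT INTO Users (Username, [Password], Email)
        VALUES ('004606', 'Password', 'user@example.com')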

    Read the article

  • How can I have MySQL write outfiles as a different user?

    - by David Locke
    I'm working with a MySQL query that writes into an outfile. I run this query once every day or two, and so I want to be able to remove the outfile without having to resort to su or sudo. The only way I can think of making that happen is to have the outfile written as owned by someone other than the mysql user. Is this possible?

    Edit: I am not redirecting output to a file; I am using the INTO OUTFILE part of a SELECT query to output to a file. If it helps:

        mysql --version
        mysql  Ver 14.12 Distrib 5.0.32, for pc-linux-gnu (x86_64) using readline 5.2
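
    A sketch of a workaround (my suggestion, relying on standard Unix semantics rather than any MySQL feature): the file is always created by the mysqld process owner and cannot be chowned from SQL, but deleting a file only requires write permission on its directory, not ownership of the file, so a shared directory does the job. Here 'mygroup' is a placeholder group containing your login, and the setup lines need root once:

        # Sketch: a directory mysqld can write into and your user can clean up.
        mkdir /var/tmp/reports
        chown mysql:mygroup /var/tmp/reports  # mysqld owns it; your group shares it
        chmod 775 /var/tmp/reports            # owner and group may write/delete entries
        # ... SELECT ... INTO OUTFILE '/var/tmp/reports/daily.csv' ...
        rm /var/tmp/reports/daily.csv         # works without sudo: directory write suffices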

    Read the article

  • How to pass parameters to report model in Reporting Services

    - by savras
    I'm developing a report in RS that shows the top N customers based on some criteria. It also allows selecting the number of customers and the period of time. Is it possible to do this using a report model? The thing that seems difficult is how to pass parameters determined by the user. Another thing that is, in my opinion, very disappointing is that I cannot use a SQL query as the dataset query, because the report model uses odd and elaborate XML, although report model items seem to map their fields to query or table fields. I'm considering report models because I need to provide a uniform data model (the same tables and fields) over more or less different database schemas. It would be very nice if somebody would explain what can and cannot be done with report models.

    Read the article

Get latest SQL rows based on latest date per user

    - by Umair
    I have the following table:

        RowId  UserId  Date
        1      1       1/1/01
        2      1       2/1/01
        3      2       5/1/01
        4      1       3/1/01
        5      2       9/1/01

    I want to get the latest record per UserId based on Date, but as part of the following query (for a reason I won't go into I cannot change this query, as it is auto-generated by a tool, but I can pass in anything starting with AND...):

        SELECT RowId, UserId, Date
        FROM MyTable
        WHERE 1 = 1
        AND (
            -- everything which needs to be done goes here
        )

    I have tried a similar query, but get an error: "Only one expression can be specified in the select list when the subquery is not introduced with EXISTS."
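
    A sketch of an AND clause that fits that constraint (my reading of the intent; ties on Date per user would need an extra tiebreaker). That error means the subquery selected more than one column, so correlate on UserId and return a single expression:

        AND Date = (SELECT MAX(t2.Date)
                    FROM MyTable t2
                    WHERE t2.UserId = MyTable.UserId)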

    Read the article

  • Left outer join null using VB.NET and LINQ

    - by jvcoach23
    I've got what I think is a working left outer join LINQ query, but I'm having problems with the select because of null values on the right-hand side of the join. Here is what I have so far:

        Dim Os = From e In oExcel
                 Group Join c In oClassIndexS
                     On c.tClassCode Equals Mid(e.ClassCode, 1, 4)
                     Into right1 = Group
                 From c In right1.DefaultIfEmpty

    I want to return all of e and one column from c called tClassCode, and I was wondering what the syntax would be. As you can see, I'm using VB.NET.

    Update... Here is the query doing the join where I get the error "Object reference not set to an instance of an object.":

        Dim Os = From e In oExcel
                 Group Join c In oClassIndexS
                     On c.tClassCode Equals Mid(e.ClassCode, 1, 4)
                     Into right1 = Group
                 From c In right1.DefaultIfEmpty
                 Select e, c.tClassCode

    If I remove c.tClassCode from the select, the query runs without error. So I thought perhaps I needed to do a select new, but I don't think I was doing that correctly either.
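
    A sketch of one way to guard the null side (my assumption of the fix): DefaultIfEmpty yields Nothing for unmatched rows, so the select must tolerate a Nothing c:

        ' Sketch: If(...) substitutes Nothing when there is no matching row;
        ' assumes tClassCode is a reference type such as String.
        Dim Os = From e In oExcel
                 Group Join c In oClassIndexS
                     On c.tClassCode Equals Mid(e.ClassCode, 1, 4)
                     Into right1 = Group
                 From c In right1.DefaultIfEmpty()
                 Select e, tClassCode = If(c Is Nothing, Nothing, c.tClassCode)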

    Read the article

  • ASP.NET and SQL server with huge data sets

    - by Jake Petroules
    I am developing a web application in ASP.NET and on one page I am using a ListView with paging. As a test I populated the table it draws from with 6 million rows. The table and a schema-bound view based off it have all the necessary indexes, and executing the query in SQL Server Management Studio with SELECT TOP 5 returned in under 1 second, as expected. But on the ASP.NET page, with the same query, it seems to be selecting all 6 million rows without any limit. Shouldn't the paging control limit the query to return only N rows rather than the entire data set? How can I use these ASP.NET controls to handle huge data sets with millions of records? Does SELECT [columns] FROM [tablename] quite literally mean that for the ListView, with no TOP <n> injected, so that all the pagination happens at the application level rather than the database level?
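
    For reference, a sketch of what database-level paging looks like (my illustration; SQL Server 2005+ syntax, and the table/column names and @first/@last parameters are placeholders a custom data source would supply):

        -- Sketch: fetch only one page of rows via ROW_NUMBER().
        SELECT *
        FROM (
            SELECT *, ROW_NUMBER() OVER (ORDER BY id) AS rn
            FROM tablename
        ) AS paged
        WHERE rn BETWEEN @first AND @last;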

    Read the article

How do I insert data into a table from the model in MVC?

    - by user54197
    I have data coming into my model; how do I set it up to insert the data into a table?

        public string Name { get; set; }
        public string Address { get; set; }
        public string City { get; set; }
        public string State { get; set; }
        public string Zip { get; set; }

        public Info()
        {
            using (SqlConnection connect = new SqlConnection(connections))
            {
                string query = "Insert Into Personnel_Data (Name, StreetAddress, City, State, Zip, HomePhone, WorkPhone)" +
                    "Values('" + Name + "','" + Address + "','" + City + "','" + State + "','" + Zip + "','" + ContactHPhone + "','" + ContactWPhone + "')";
                SqlCommand command = new SqlCommand(query, connect);
                connect.Open();
                command.ExecuteNonQuery();
            }
        }

    Name, Address, City, and so on are null when the query is run. How do I set this up?
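
    A sketch of one likely fix (my reading of the symptom: the INSERT runs in the constructor, before model binding has populated the properties): move the insert into a method the controller calls after binding, and use parameters rather than string concatenation:

        // Sketch: call Save() from the controller action after model binding.
        public void Save()
        {
            using (var connect = new SqlConnection(connections))
            using (var command = new SqlCommand(
                "INSERT INTO Personnel_Data (Name, StreetAddress, City, State, Zip, HomePhone, WorkPhone) " +
                "VALUES (@Name, @Address, @City, @State, @Zip, @HPhone, @WPhone)", connect))
            {
                command.Parameters.AddWithValue("@Name", Name);
                command.Parameters.AddWithValue("@Address", Address);
                command.Parameters.AddWithValue("@City", City);
                command.Parameters.AddWithValue("@State", State);
                command.Parameters.AddWithValue("@Zip", Zip);
                command.Parameters.AddWithValue("@HPhone", ContactHPhone);
                command.Parameters.AddWithValue("@WPhone", ContactWPhone);
                connect.Open();
                command.ExecuteNonQuery(); // parameters also close the injection hole
            }
        }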

    Read the article

  • Find model records by ID in the order the array of IDs were given

    - by defaye
    I have a query to get the IDs of people in a particular order, say:

        ids = [1, 3, 5, 9, 6, 2]

    I then want to fetch those people by:

        Person.find(ids)

    But they are always fetched in numerical order; I know this by performing:

        people = Person.find(ids).map(&:id)
        => [1, 2, 3, 5, 6, 9]

    How can I run this query so that the order is the same as the order of the ids array? To make this task more difficult, I want to perform the query that fetches the people only once, from the IDs given, so performing multiple queries is out of the question. I tried something like:

        ids.each do |i|
          person = people.where('id = ?', i)
        end

    But I don't think this works.
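
    A sketch of one way (my suggestion, assuming ActiveSupport is available for index_by): fetch once, then reorder in memory keyed by id:

        # Sketch: single fetch, then restore the caller's ordering.
        people_by_id = Person.find(ids).index_by(&:id)
        people = ids.map { |id| people_by_id[id] }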

    Read the article

  • What is the performance impact of tracing in C# and ASP.NET?

    - by SkippyFire
    I found this in some production login code I was looking at recently...

        HttpContext.Current.Trace.Write(query + ": " + username + ", " + password);

    ...where query is a short SQL query to grab matching users. Does this have any sort of performance impact? I assume it's very small. Also, what is the purpose of this exact type of trace, using the HTTP context? Where does this data get traced to? Thanks in advance!
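
    For the "where does it go" part, a sketch of the standard web.config switch (attribute values illustrative): when tracing is enabled, Trace.Write output is collected per request and viewable on the page or at /trace.axd; when disabled, the call is a cheap no-op apart from building the string argument.

        <!-- Sketch: enable ASP.NET request tracing; view collected output at /trace.axd -->
        <system.web>
          <trace enabled="true" pageOutput="false" requestLimit="40" localOnly="true" />
        </system.web>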

    Read the article

A Knight's Tale

    - by Phil Factor
    There are so many lessons to be learned from the story of Knight Capital losing nearly half a billion dollars as a result of a deployment gone wrong. The Knight Capital Group (KCG.N) was an American global financial services firm engaging in market making, electronic execution, and institutional sales and trading. According to the recent order (File No. 3-15570) against Knight Capital by the U.S. Securities and Exchange Commission, Knight had for many years used software which broke up incoming "parent" orders into smaller "child" orders that were then transmitted to various exchanges or trading venues for execution. A tracking 'cumulative quantity' function counted the number of 'child' orders and stopped the process once the total of child orders matched the 'parent', at which point the parent order had been completed.

    Back in the mists of time, some code had been added to it which was executed only if a particular flag was set. It was called 'Power Peg' and seems to have had a similar design and purpose, but, one guesses, would have shared the same tracking function. This code had been abandoned in 2003, but never deleted. In 2005, the tracking function was moved to an earlier point in the main process. It would seem from the account that, from that point on, had the flag ever been set, the old 'Power Peg' would have been executed like Godzilla bursting from the ice, making child orders without limit and without any tracking function. It wasn't, presumably because the software that set the flag was removed.

    In 2012, nearly a decade after 'Power Peg' was abandoned, Knight prepared a new module for their software to cope with the imminent Retail Liquidity Program (RLP) for the New York Stock Exchange. By this time the flag had remained unused, and someone made the fateful decision to reuse it and to replace the old 'Power Peg' code with the new RLP code. Had the two actions been done together in a single automated deployment, and the new deployment tested, all would have been well. It wasn't. To quote:

    "Beginning on July 27, 2012, Knight deployed the new RLP code in SMARS in stages by placing it on a limited number of servers in SMARS on successive days. During the deployment of the new code, however, one of Knight's technicians did not copy the new code to one of the eight SMARS computer servers. Knight did not have a second technician review this deployment and no one at Knight realized that the Power Peg code had not been removed from the eighth server, nor the new RLP code added. Knight had no written procedures that required such a review." (para 15)

    "On August 1, Knight received orders from broker-dealers whose customers were eligible to participate in the RLP. The seven servers that received the new code processed these orders correctly. However, orders sent with the repurposed flag to the eighth server triggered the defective Power Peg code still present on that server. As a result, this server began sending child orders to certain trading centers for execution. Because the cumulative quantity function had been moved, this server continuously sent child orders, in rapid sequence, for each incoming parent order without regard to the number of share executions Knight had already received from trading centers. Although one part of Knight's order handling system recognized that the parent orders had been filled, this information was not communicated to SMARS." (para 16)

    SMARS routed millions of orders into the market over a 45-minute period, and obtained over 4 million executions in 154 stocks for more than 397 million shares. By the time that Knight stopped sending the orders, Knight had assumed a net long position in 80 stocks of approximately $3.5 billion and a net short position in 74 stocks of approximately $3.15 billion. Knight's shares dropped more than 20% after traders saw extreme volume spikes in a number of stocks, including preferred shares of Wells Fargo (JWF) and semiconductor company Spansion (CODE). Both stocks, which see roughly 100,000 trades per day, had changed hands more than 4 million times by late morning. Ultimately, Knight lost over $460 million from this wild 45 minutes of trading.

    Obviously, I'm interested in all this because, at one time, I used to write trading systems for the City of London. The US SEC is in a far better position than any of us to work out the failings of Knight's IT department, and the report makes for painful reading. I can't help observing, though, that even with the breathtaking mistakes all along the way, a robust automated deployment process that was 'all-or-nothing', and tested from soup to nuts, would have prevented the disaster. The report reads like a Greek tragedy. All the way along one wants to shout 'No! Not that way!' and 'Aargh! Don't do it!'. As the tragedy unfolds, the audience weeps for the players, trapped by a cruel fate.

    All application development and deployment requires defense in depth. All IT goes wrong occasionally, but if there is a culture of defensive programming throughout, the consequences are usually containable. For financial systems, these defenses are required by statute, and ignored only by the foolish. Knight's mistakes weren't made by just one hapless sysadmin, but were progressive errors by an IT culture spanning at least ten years. One can spell these out, but I think they're obvious. One can only hope that the industry studies what happened in detail, learns from the mistakes, and draws the right conclusions.

    Read the article

  • How to save bytes to an image and access it from Bottle

    - by Graham Smith
    I'm working on an API wrapper for Snapchat using Python and Bottle, but in order to return the file (retrieved by the Python script) I have to save the bytes (returned by Snapchat) to a .jpg file. I'm not quite sure how I will do this and still be able to access the file so that it can be returned. Here's what I have so far, but it returns a 404.

        @route('/image')
        def image():
            username = request.query.username
            token = request.query.auth_token
            img_id = request.query.id
            return get_blob(username, token, img_id)

        def get_blob(usr, token, img_id):
            # Form URL and download encrypted "blob"
            blob_url = "https://feelinsonice.appspot.com/ph/blob?id={}".format(img_id)
            blob_url += "&username=" + usr + "&timestamp=" + str(timestamp()) + "&req_token=" + req_token(token)
            enc_blob = requests.get(blob_url).content
            # Save decrypted image
            FileUpload.save('/images/' + img_id + '.jpg')
            img = open('images/' + img_id + '.jpg', 'wb')
            img.write(decrypt(enc_blob))
            img.close()
            return static_file(img_id + '.jpg', root='/images/')
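
    A sketch of one possible cleanup (my reading of the 404: the paths are inconsistent, absolute '/images/' in two places but relative 'images/' in another, and the FileUpload.save call writes nothing useful). Write the decrypted bytes under one consistent root and serve from that same root; timestamp, req_token, and decrypt are the poster's own helpers:

        import os
        import requests
        from bottle import static_file

        IMAGE_ROOT = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'images')

        def get_blob(usr, token, img_id):
            # Download the encrypted blob (poster's URL scheme and helpers).
            blob_url = ("https://feelinsonice.appspot.com/ph/blob?id={}&username={}"
                        "&timestamp={}&req_token={}").format(img_id, usr, timestamp(), req_token(token))
            enc_blob = requests.get(blob_url).content
            # Decrypt and write under one consistent root.
            if not os.path.isdir(IMAGE_ROOT):
                os.makedirs(IMAGE_ROOT)
            with open(os.path.join(IMAGE_ROOT, img_id + '.jpg'), 'wb') as img:
                img.write(decrypt(enc_blob))
            # Serve from the same root we wrote to.
            return static_file(img_id + '.jpg', root=IMAGE_ROOT)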

    Read the article

  • Understanding the 'High Performance' meaning in Extreme Transaction Processing

    - by kyap
    Despite my previous blog entries on SOA/BPM and Identity Management, the domain I'm most passionate about is definitely Extreme Transaction Processing, commonly called XTP. I came across XTP back in 2007 while I was still FMW Product Manager in EMEA. At that time Oracle acquired a company called Tangosol, which owned a unique product called Coherence that we renamed to Oracle Coherence. Beside this innovative renaming of the product, to be honest, I didn't know much about it, except it being a "distributed in-memory cache for Extreme Transaction Processing"... not very helpful still.

    In general, when people don't fully understand a technology or a concept, they tend to find some shortcuts, either correct or not, to justify their lack of understanding... and of course I was part of this category of individuals. And the shortcut was "Oracle Coherence Cache helps to improve Performance". Excellent marketing slogan... but not very meaningful still. By chance I was able to get away quickly from that group in July 2007 at Thames Valley Park (UK), after I attended one of the most interesting workshops of my 10-year career in Oracle, delivered by Brian Oliver.

    The biggest mistake I made was to assume that performance improvement with Coherence was related to the response time. That could be considered legitimate at the time, because after all caches help to reduce latency on cached data access, and hence reduce the response time. But like all caches, you need to define caching and expiration policies, think about the cache-miss strategy, and most of the time you have to partially rewrite your application in order to work with the cache. As a result, the expected benefit vanishes... so, not very useful then?

    The key mistake was my perception, or obsession, about how performance improvement should be driven, but I strongly believe this is still a common problem for most developers. In fact we all know that the performance of a system is generally presented by the Capacity (or Throughput), with the two important dimensions Speed (response time) and Volume (load):

        Capacity (TPS) = Volume (T) / Speed (S)

    To increase the Capacity, we can either reduce the Speed (in terms of response time), or increase the Volume. However, we tend to focus only on reducing the Speed dimension, perhaps because it is more concrete and tangible to measure, and nicer to present to our management because there's a direct impact on the end-user experience. On the other hand, we assume the Volume can be addressed by the underlying hardware or software stack, so if we need more capacity (scale out), we just add more hardware or software. Unfortunately, the reality proves that IT is never as ideal as we assume...

    The challenge with the Speed-improvement approach is that it is generally difficult and costly to make things that are already fast... faster. And adding Coherence will not necessarily help either. Even if we manage to do so, the Capacity cannot increase forever, because the Speed can be influenced by the Volume. The performance picture is always the same: in any traditional system, increasing the Volume (transactions) will also increase the Speed (response time) at some point. The reason is simple: most of the time the application logic was not designed to scale. As an example, if you have a while-loop in your application, it is natural to expect that parsing 200 entries will require double the execution time of parsing 100 entries.

    If you need to speed up the execution, you can only upgrade your hardware (scale up) with a faster CPU and/or network to reduce latency, which is technically limited and economically inefficient. And this is exactly where XTP and Coherence kick in. The primary objective of XTP is designing applications which can scale out to increase the Volume, by applying coding techniques that keep the execution time as constant as possible, independently of the amount of runtime data being manipulated. It is actually not just about having an application run as fast as possible, but about having a much more predictable system, with constant response time and linear scaling, so we can easily increase throughput by adding more hardware in parallel. It is generally combined with the Low Latency Programming model, where we try to optimize network usage as much as possible, both from the programmatic angle (fewer network hops to complete a task) and from the hardware angle (faster network equipment).

    In this picture, Oracle Coherence can be considered a software-level XTP enabler, via the Distributed Cache, because it can guarantee:

    - Constant data-object access time, independently of the number of objects and the Coherence cluster size
    - Data-object distribution by affinity, for in-memory data grouping
    - In-place data processing, for parallel execution

    To summarize, Oracle Coherence is indeed useful for improving your application's performance, just not in the way we commonly think. It's not about the Speed itself, but about the overall Capacity under extreme load while keeping consistent Speed. In the future I will keep adding new blog entries around this topic, with some sample-code experiences I have captured over the last few years. In the meanwhile, if you want to know more about Oracle Coherence, I strongly suggest you start by checking how our worldwide customers are using Oracle Coherence, then start playing with the product through our tutorial. Have fun!

    Read the article

  • Querying datatable to get rows with a certain value in the first column

    - by user1776590
    I am currently using the code below, and am wondering if it would be possible to query based on an entry's value. At the moment it returns a fixed number of rows:

        var q = sqlData.AsEnumerable().Take(2);

    The data comes from a database into the DataTable, but at the moment I can only select the first two rows. I would like to query the DataTable so that I get the rows I need based on a value in the first column of the table (e.g. the table holds five rows and I want to query out only the ones matching a given index value).
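
    A sketch of a value-based filter (my illustration; the column position and its type are assumptions):

        // Sketch: keep rows whose first column equals the wanted value,
        // instead of taking the first two rows positionally.
        int wanted = 5; // hypothetical value to match
        var q = sqlData.AsEnumerable()
                       .Where(r => r.Field<int>(0) == wanted);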

    Read the article

count(*): is it really expensive?

    - by Anil Namde
    I have a page with 4 tabs displaying 4 different reports based on different tables. I get the row count of each table using a "select count(*) from table" query and display the number of rows available in each table on the tabs. With each page postback, 5 count(*) queries are executed (4 to get the counts and 1 for pagination), plus 1 query for getting the report. My question is: is a count(*) query really so expensive that I should keep the row counts (at least the ones displayed on the tabs) in the page's view state instead of querying each time? How expensive is it?
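
    If the counts only decorate tab headers, a sketch of a cheaper alternative (assuming SQL Server; this reads metadata instead of scanning, at the cost of being approximate under concurrent writes):

        -- Sketch: metadata-based row count, near-instant on large tables.
        SELECT SUM(p.rows) AS row_count
        FROM sys.partitions AS p
        WHERE p.object_id = OBJECT_ID('dbo.MyTable')
          AND p.index_id IN (0, 1);  -- heap or clustered index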

    Read the article

  • php push 2d array into mysql

    - by john
    Hey all, I can't seem to get my head around this despite the number of examples I've read. Basically I have a 2D array and want to insert it into MySQL. The array contains a few strings. I can't get the following to work...

        $value = addslashes(serialize($temp3)); // temp3 is my 2d array; do I need to use keys? (I am not at the moment)
        $query = "INSERT INTO table sip (id,keyword,data,flags) VALUES(\"$value\")";
        mysql_query($query) or die("Failed Query");

    Thanks guys,
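
    A sketch of what the statement likely needs to look like (assumptions on my part: the table is named sip, id is AUTO_INCREMENT, and the serialized array belongs in the data column). The word "table" must not appear before the table name, and the column list has to match the VALUES list:

        // Sketch: matching column/value counts, single quotes around the string.
        $value = mysql_real_escape_string(serialize($temp3));
        $query = "INSERT INTO sip (keyword, data, flags)
                  VALUES ('mykeyword', '$value', 0)";
        mysql_query($query) or die(mysql_error());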

    Read the article

  • Anchoring the Action URL of a Form

    - by John
    Hello, I am using a form that leads users to a file called "comments2.php":

        <form action="http://www...com/.../comments/comments2.php" method="post">

    In comments2.php, the data passed over from the form is inserted into MySQL:

        $query = sprintf("INSERT INTO comment VALUES (NULL, %d, %d, '%s', NULL)",
                         $uid, $subid, $comment);
        mysql_query($query) or die(mysql_error());

    Then, later in comments2.php, I use a query that loops through entries meeting certain criteria. The loop contains a row with the following information:

        echo '<td rowspan="3" class="commentname1" id="comment-' . $row["commentid"] . '">' . stripslashes($row["comment"]) . '</td>';

    For the form above, I would like the URL to be anchored by the highest value of commentid, matching the id="comment-..." attribute. How can this be done? Thanks in advance, John
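
    A sketch of one approach (my assumption: the comment table's first column is AUTO_INCREMENT): after the INSERT, mysql_insert_id() gives the newest commentid, which can be appended as the URL fragment:

        // Sketch: send the browser to the anchor of the row just inserted.
        mysql_query($query) or die(mysql_error());
        $newestId = mysql_insert_id();
        header('Location: comments2.php#comment-' . $newestId);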

    Read the article

How to find the login name, database user name, or roles of a SQL Server domain user who doesn't have their own login?

    - by Adam Butler
    I have created a login and database user called "MYDOMAIN\Domain Users". I need to find what roles a logged-on domain user has, but all the calls to get the current user return the domain user name, e.g. "MYDOMAIN\username", not the database user name, e.g. "MYDOMAIN\Domain Users". For example, this query returns "MYDOMAIN\username":

        select original_login(), suser_name(), suser_sname(), system_user, session_user, current_user, user_name()

    And this query returns 0:

        select USER_ID()

    I want the user name so I can query database_role_members. Is there any function that will return it, or any other way I can get the current user's roles?
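
    A sketch of one alternative (my suggestion): rather than resolving the mapped user name, IS_MEMBER answers the membership question directly for the current login, for both database roles and Windows groups:

        -- Sketch: returns 1 if the current user is a member, 0 if not,
        -- and NULL if the role/group name is not valid.
        SELECT IS_MEMBER('db_datareader');            -- database role
        SELECT IS_MEMBER('MYDOMAIN\Domain Users');    -- Windows group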

    Read the article

Safe to KILL a MySQL process REPLACEing records in a large MyISAM table?

    - by threecheeseopera
    I have a REPLACE query running for a few days now on a few MyISAM tables, the largest having 20+ million records. I need it to stop. It is, basically:

        REPLACE INTO really_large_table (a, b, c, d)
        SELECT e, f, g, h
        FROM big_table
        INNER JOIN huge_table
            ON big_table.x LIKE CONCAT('%', huge_table.y, '%');

    I need to KILL it, and I am worried that I may corrupt really_large_table. Because the subquery itself takes a significant amount of time, the REPLACEing probably occurs (relatively) infrequently; if this is true, does this make it less likely for the data to become corrupted? For the curious, here is the SO question asked about the query I am trying to kill.

    Read the article

  • MySQL: Column Contains Word From List of Words

    - by mellowsoon
    I have a list of words, let's say 'Apple', 'Orange', and 'Pear'. I have rows in the database like this:

        -------------------------------------------------
        | author_id | content                           |
        -------------------------------------------------
        | 54        | I ate an apple for breakfast.     |
        | 63        | Going to the store.               |
        | 12        | Should I wear the orange shirt?   |
        -------------------------------------------------

    I'm looking for a query on an InnoDB table that will return the 1st and 3rd rows, because the content column contains one or more words from my list. I know I could query the table once for each word in my list and use LIKE with the % wildcard character, but I'm wondering if there is a single-query method for such a thing?
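
    A sketch of a single-query approach (my illustration; the table name is assumed, and the word-boundary markers are MySQL 5.x regex syntax):

        -- Sketch: one REGEXP with alternation instead of one LIKE per word.
        -- [[:<:]] and [[:>:]] are word boundaries in MySQL 5.x regular expressions.
        SELECT author_id, content
        FROM my_table
        WHERE content REGEXP '[[:<:]](apple|orange|pear)[[:>:]]';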

    Read the article

Querying distinct values through a related model

    - by matheus.emm
    Hi! I have a simple one-to-many (models.ForeignKey) relationship between two of my model classes:

        class TeacherAssignment(models.Model):
            # ... some fields
            year = models.CharField(max_length=4)

        class LessonPlan(models.Model):
            teacher_assignment = models.ForeignKey(TeacherAssignment)
            # ... other fields

    I'd like to query my database for the set of distinct years of TeacherAssignments related to at least one LessonPlan. I'm able to get this set using the Django query API if I ignore the relation to LessonPlan:

        class TeacherAssignment(models.Model):
            # ... model's fields

            def get_years(self):
                year_values = self.objects.all().values_list('year').distinct().order_by('-year')
                return [yv[0] for yv in year_values if len(yv[0]) == 4]

    Unfortunately I don't know how to express the condition that the TeacherAssignment must be related to at least one LessonPlan. Any ideas how I'd be able to write the query? Thanks in advance.
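
    A sketch of one way (my suggestion, assuming the default reverse name lessonplan for the foreign key): filter on the reverse relation being non-null, which the ORM turns into an inner join:

        # Sketch: distinct years of assignments with at least one LessonPlan.
        years = (TeacherAssignment.objects
                 .filter(lessonplan__isnull=False)
                 .values_list('year', flat=True)
                 .distinct()
                 .order_by('-year'))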

    Read the article

Search SQL question between two related tables

    - by mTuran
    Hi, I am writing a kind of search engine for my web application and I have a problem. I have 2 tables; the first is the projects table:

        PROJECTS TABLE
        id                   int(11)       NO  PRI  NULL  auto_increment
        employer_id          int(11)       NO  MUL  NULL
        project_title        varchar(100)  NO  MUL  NULL
        project_description  text          NO       NULL
        project_budget       int(11)       NO       NULL
        project_allowedtime  int(11)       NO       NULL
        project_deadline     datetime      NO       NULL
        total_bids           int(11)       NO       NULL
        average_bid          int(11)       NO       NULL
        created              datetime      NO  MUL  NULL
        active               tinyint(1)    NO  MUL  NULL

        PROJECTS_SKILLS TABLE
        project_id  int(11)  NO  MUL  NULL
        skill_id    int(11)  NO  MUL  NULL

    For example, I want to ask the database this query:

    1. Skills are 5 and 7.
    2. Order results by created.
    3. The project title contains the word "php".
    4. Returned rows should contain projects.* columns.
    5. Projects should be distinct (I don't want the same project twice in the query results).

    Please write a SQL query that meets these conditions. Thank you.
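
    A sketch of a query meeting those conditions (my reading of point 1 is "has both skills"; drop the HAVING line if either skill should suffice):

        -- Sketch: distinct projects titled like 'php' having skills 5 and 7.
        SELECT p.*
        FROM projects p
        JOIN projects_skills ps ON ps.project_id = p.id
        WHERE ps.skill_id IN (5, 7)
          AND p.project_title LIKE '%php%'
        GROUP BY p.id                                -- one row per project
        HAVING COUNT(DISTINCT ps.skill_id) = 2       -- require both skills
        ORDER BY p.created DESC;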

    Read the article
