Search Results

Search found 15456 results on 619 pages for 'global temporary tables'.

Page 547/619

  • PostgreSQL: return select count(*) from old_ids;

    - by Alexander Farber
    Hello, please help me with one more PL/pgSQL question. I have a PHP script, run as a daily cronjob, that deletes old records from one main table and a few other tables referencing its "id" column: create or replace function quincytrack_clean() returns integer as $BODY$ begin create temp table old_ids (id varchar(20)) on commit drop; insert into old_ids select id from quincytrack where age(QDATETIME) > interval '30 days'; delete from hide_id where id in (select id from old_ids); delete from related_mks where id in (select id from old_ids); delete from related_cl where id in (select id from old_ids); delete from related_comment where id in (select id from old_ids); delete from quincytrack where id in (select id from old_ids); return select count(*) from old_ids; end; $BODY$ language plpgsql; And here is how I call it from the PHP script: $sth = $pg->prepare('select quincytrack_clean()'); $sth->execute(); if ($row = $sth->fetch(PDO::FETCH_ASSOC)) printf("removed %u old rows\n", $row['count']); Why do I get the following error? SQLSTATE[42601]: Syntax error: 7 ERROR: syntax error at or near "select" at character 9 QUERY: SELECT select count(*) from old_ids CONTEXT: SQL statement in PL/PgSQL function "quincytrack_clean" near line 23 Thank you! Alex
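
    A hedged sketch of the likely fix: in PL/pgSQL, RETURN takes an expression rather than a bare SELECT statement, so the scalar subquery needs its own parentheses (alternatively, GET DIAGNOSTICS can capture the row count of the final DELETE). This is only a sketch, not the poster's confirmed solution.

        -- Minimal sketch: wrap the subquery so RETURN receives an expression.
        return (select count(*) from old_ids);

        -- Alternative sketch using the row count of the last DELETE:
        --   declare removed integer;
        --   ...
        --   get diagnostics removed = row_count;
        --   return removed;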

    Read the article

  • Parse a large XML file with a script, or use the BioPython API?

    - by jeremy04
    Hey guys, this is my first question on here. I'm trying to make a local copy of the UniprotKB in SQL. The UniprotKB is 2.1 GB, and it comes in XML and a special text format used by SwissProt. Here are my options: 1) Use a SAX parser (XML) - I chose Ruby, and Nokogiri. I started writing the parser, but my initial reaction: how would I map the XML schema to the SAX parser? 2) BioPython - I already have BioSQL/Biopython installed, which literally created my SQL schema for me, and I was able to successfully insert one SwissProt/Uniprot txt file into the database. I'm running it right now (crosses fingers) on the entire 2.1 GB. Here is the code I'm running: from Bio import SeqIO from BioSQL import BioSeqDatabase from Bio import SwissProt server = BioSeqDatabase.open_database(driver = "MySQLdb", user = "root", passwd = "", host="localhost", db = "bioseqdb") db = server["uniprot"] iterator = SeqIO.parse(open("/path/to/uniprot_sprot.dat", "r"), "swiss") db.load(iterator) server.commit() Edit: it's now crashing because the transactions are getting locked (since the tables are InnoDB): Error Number: 1205 Lock wait timeout exceeded; try restarting transaction. I'm using MySQL version 5.1.43. Should I switch my database to PostgreSQL?

    Read the article

  • Select return dynamic columns

    - by Ascalonian
    I have two tables: Standards and Service Offerings. A Standard can have multiple Service Offerings. Each Standard can have a different number of Service Offerings associated to it. What I need to be able to do is write a view that will return some common data and then list the service offerings on one line. For example: Standard Id | Description | SO #1 | SO #2 | SO #3 | ... | SO #21 | SO Count 1 | One | A | B | C | ... | G | 21 2 | Two | A | | | ... | | 1 3 | Three | B | D | E | ... | | 3 I have no idea how to write this. The number of SO columns is set to a specific number (21 in this case), so we cannot exceed past that. Any ideas on how to approach this? A place I started is below. It just returned multiple rows for each Service Offering, when they need to be on one row. SELECT * FROM SERVICE_OFFERINGS WHERE STANDARD_KEY IN (SELECT STANDARD_KEY FROM STANDARDS)
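
    One hedged way to approach this, sketched below (the SO_NAME and DESCRIPTION columns are assumptions, not taken from the question): number each standard's offerings with ROW_NUMBER() and pivot the fixed maximum of 21 slots with MAX(CASE ...), assuming a database with window functions such as SQL Server.

        -- Hedged sketch, hypothetical SO_NAME/DESCRIPTION columns.
        WITH numbered AS (
            SELECT STANDARD_KEY,
                   SO_NAME,
                   ROW_NUMBER() OVER (PARTITION BY STANDARD_KEY
                                      ORDER BY SO_NAME) AS rn
            FROM   SERVICE_OFFERINGS
        )
        SELECT s.STANDARD_KEY,
               s.DESCRIPTION,
               MAX(CASE WHEN n.rn = 1  THEN n.SO_NAME END) AS SO_1,
               MAX(CASE WHEN n.rn = 2  THEN n.SO_NAME END) AS SO_2,
               -- ... repeat up to rn = 21 ...
               MAX(CASE WHEN n.rn = 21 THEN n.SO_NAME END) AS SO_21,
               COUNT(n.SO_NAME)                            AS SO_COUNT
        FROM   STANDARDS s
        LEFT JOIN numbered n ON n.STANDARD_KEY = s.STANDARD_KEY
        GROUP BY s.STANDARD_KEY, s.DESCRIPTION;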

    Read the article

  • UITableView cell appearance update not working - iPhone

    - by Jack Nutkins
    I have this piece of code which I'm using to set the alpha and accessibility of one of my tables cells dependent on a value stored in user defaults: - (void) viewDidAppear:(BOOL)animated{ [self reloadTableData]; } - (void) reloadTableData { if ([[userDefaults objectForKey:@"canDeleteReceipts"] isEqualToString:@"0"]) { NSIndexPath *path = [NSIndexPath indexPathForRow:0 inSection:8]; UITableViewCell *cell = [self.tableView cellForRowAtIndexPath:path]; cell.contentView.alpha = 0.2; cell.userInteractionEnabled = NO; } else { NSIndexPath *path = [NSIndexPath indexPathForRow:0 inSection:8]; UITableViewCell *cell = [self.tableView cellForRowAtIndexPath:path]; cell.contentView.alpha = 1; cell.userInteractionEnabled = YES; } if ([[userDefaults objectForKey:@"canDeleteMileages"] isEqualToString:@"0"]) { NSIndexPath *path = [NSIndexPath indexPathForRow:1 inSection:8]; UITableViewCell *cell = [self.tableView cellForRowAtIndexPath:path]; cell.contentView.alpha = 0.2; cell.userInteractionEnabled = NO; } else { NSIndexPath *path = [NSIndexPath indexPathForRow:1 inSection:8]; UITableViewCell *cell = [self.tableView cellForRowAtIndexPath:path]; cell.contentView.alpha = 1; cell.userInteractionEnabled = YES; } if ([[userDefaults objectForKey:@"canDeleteAll"] isEqualToString:@"0"]) { NSIndexPath *path = [NSIndexPath indexPathForRow:2 inSection:8]; UITableViewCell *cell = [self.tableView cellForRowAtIndexPath:path]; cell.contentView.alpha = 0.2; cell.userInteractionEnabled = NO; NSLog(@"In Here"); } else { NSIndexPath *path = [NSIndexPath indexPathForRow:2 inSection:8]; UITableViewCell *cell = [self.tableView cellForRowAtIndexPath:path]; cell.contentView.alpha = 1; cell.userInteractionEnabled = YES; } } The value stored in userDefaults is '0' so the cell in section 8 row 2 should be greyed out and disabled, however this doesn't happen and the cell is still selectable with an alpha of 1... The log statement 'In Here' is called so it seems to be executing the right code, but that code doesn't seem to be doing anything. Can anyone explain what I've done wrong? Thanks, Jack

    Read the article

  • Foreign key relationships in Entity Framework

    - by Anders Svensson
    I'm trying to add an object created from Entity Data Model classes. I have a table called Users, which has turned into a User EDM class. And I also have a table Pages, which has become a Page EDM class. These tables have a foreign key relationship, so that each page is associated with many users. Now I want to be able to add a page, but I can't get how to do it. I get a nullreference exception on Users below. I'm still rather confused by all this, so I'm sure it's a simple error, but I just can't see how to do it. Also, by the way, the compiler requires that I set PageID in the object initializer, even though this field is set to be an automatic id in the table. Am I doing it right just setting it to 0, expecting it to be updated automatically in the table when saved, or how should I do that? Any help appreciated! The method in question: private Page GetPage(User currentUser) { string url = _request.ServerVariables["url"].ToLower(); var userPages = from p in _context.PageSet where p.Users.UserID == currentUser.UserID select p; var existingPage = userPages.FirstOrDefault(e => e.Url == url); //Could be combined with above, but hard to read? if (existingPage != null) return existingPage; Page page = new Page() { Count = 0, Url = _request.ServerVariables["url"].ToLower(), PageID = 0, //Only initial value, changed later? }; _context.AddToPageSet(page); page.Users.UserID = currentUser.UserID; //Here's the problem... return page; }

    Read the article

  • How to create a single HTML file to be viewed in Excel with multiple sheets?

    - by Dmitry Nelepov
    I want to know whether it is possible to create a single HTML file which (after changing its extension to .xls) will be parsed into multiple sheets when opened in Excel. A little example: I have a file test.xls with the contents <html> <body> <table> <tr> <td> 1 </td> <td> 2 </td> <td> 3 </td> <td> =sum(A1:C1) </td> </tr> </table> </body> </html> When I open this file in Excel I get one sheet with the calculated sum of cells A1 to C1 in A4=6. I wonder whether it is possible to create an HTML file which contains multiple tables that will be parsed as multiple sheets in Excel. Here is the Excel view of this file.

    Read the article

  • Why is using a common-lookup table to restrict the status of entity wrong?

    - by FreshCode
    According to Five Simple Database Design Errors You Should Avoid by Anith Sen, using a common-lookup table to store the possible statuses for an entity is a common mistake. Why is this wrong? I disagree that it's wrong, citing the example of jobs at a repair service with many possible statuses that generally have a natural flow, eg.: Booked In Assigned to Technician Diagnosing problem Waiting for Client Confirmation Repaired & Ready for Pickup Repaired & Couriered Irreparable & Ready for Pickup Quote Rejected Arguably, some of these statuses can be normalised to tables like Couriered Items, Completed Jobs and Quotes (with Pending/Accepted/Rejected statuses), but that feels like unnecessary schema complication. Another common example would be order statuses that restrict the status of an order, eg: Pending Completed Shipped Cancelled Refunded The status titles and descriptions are in one place for editing and are easy to scaffold as a drop-down with a foreign key for dynamic data applications. This has worked well for me in the past. If the business rules dictate the creation of a new order status, I can just add it to OrderStatus table, without rebuilding my code.
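
    For reference, a minimal sketch of the common-lookup pattern being defended (table and column names here are hypothetical, not taken from the article):

        -- Hedged sketch: one row per status, referenced by a foreign key.
        CREATE TABLE OrderStatus (
            StatusId    INT          NOT NULL PRIMARY KEY,
            Title       VARCHAR(50)  NOT NULL,
            Description VARCHAR(255) NULL
        );

        CREATE TABLE Orders (
            OrderId  INT NOT NULL PRIMARY KEY,
            StatusId INT NOT NULL REFERENCES OrderStatus (StatusId)
        );

    Adding a new status is then a data change (an INSERT into OrderStatus) rather than a schema or code change, which is exactly the flexibility argued for above.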

    Read the article

  • iBatis how to solve a more complex N+1 problem

    - by Alvin
    I have a database that is similar to the following: create table Store(storeId) create table Staff(storeId_fk, staff_id, staffName) create table Item(storeId_fk, item_id, itemName) The Store table is large, and I have created the following Java beans: public class Store { List<Staff> myStaff List<Item> myItem .... } public class Staff { ... } public class Item { ... } My question is: how can I use iBatis's result map to EFFICIENTLY map from the tables to the Java objects? I tried: <resultMap id="storemap" class="my.example.Store"> <result property="myStaff" resultMap="staffMap"/> <result property="myItem" result="itemMap"/> </resultMap> (other maps omitted) But it's way too slow since the Store table is VERY VERY large. I tried to follow the example in Clinton's developer guide for the N+1 solution, but I cannot wrap my head around how to use the "groupBy" for an object with two lists... Any help is appreciated!

    Read the article

  • How does MySQL implement GROUP BY?

    - by user188916
    I read in the MySQL Reference Manual that when GROUP BY can make use of an index it just does an index scan; otherwise it creates tmp tables and does things like filesort. I also read in another article that the GROUP BY result is sorted by the grouping columns by default, and that if an "order by null" clause is added it won't do the filesort. The difference can be seen in the EXPLAIN output. So my question is: what is the difference between a GROUP BY query with "order by null" and one without it? I tried to use profiling to see what MySQL does in the background, and only see results like this. Result for the GROUP BY query without ORDER BY NULL: | preparing | 0.000016 | | Creating tmp table | 0.000048 | | executing | 0.000009 | | Copying to tmp table | 0.000109 | | Sorting result | 0.000023 | | Sending data | 0.000027 | Result for the query with "order by null": | preparing | 0.000016 | | Creating tmp table | 0.000052 | | executing | 0.000009 | | Copying to tmp table | 0.000114 | | Sending data | 0.000028 | So my guess is that when ORDER BY NULL is added, MySQL does not use the filesort algorithm; maybe when it creates the tmp table it uses an index as well, then uses that index to do the GROUP BY operation, and when that completes it just reads the result from the table rows without sorting it. But my original assumption was that MySQL would use quicksort to sort the items and then do the GROUP BY, so the result would be sorted as well. Any opinions appreciated, thanks.
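
    A hedged sketch of the comparison being described (table t and column category are hypothetical; the behaviour matches MySQL 5.x, where GROUP BY implied a sort of the result):

        -- Hedged sketch, hypothetical table/column names.
        EXPLAIN SELECT category, COUNT(*) FROM t GROUP BY category;
        -- Extra: Using temporary; Using filesort   (result sorted by category)

        EXPLAIN SELECT category, COUNT(*) FROM t GROUP BY category ORDER BY NULL;
        -- Extra: Using temporary                   (final sort step skipped)

    The grouping work itself (the temporary table, or an index scan when one is usable) is the same in both cases; ORDER BY NULL only tells the server not to sort the grouped result afterwards, which is why the profile above loses the "Sorting result" step and nothing else.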

    Read the article

  • C# sqlite syntax in ASP.NET?

    - by acidzombie24
    -edit- This is no longer relevant and the question doesn't make sense to me anymore. I think I wanted to know how to create tables, or whether the syntax is the same from WinForms to ASP.NET. I am very used to SQLite http://sqlite.phxsoftware.com/ and would like to create a DB in a similar style. How do I do this? It doesn't need to be the same, just similar enough for me to enjoy. An example of the syntax: connection = new SQLiteConnection("Data Source=" + sz +";Version=3"); command = new SQLiteCommand(connection); connection.Open(); command.CommandText = "CREATE TABLE if not exists miscInfo(key TEXT, " + "value TEXT, UNIQUE (key));"; command.ExecuteNonQuery(); The @name is a symbol, and command.Parameters.Add("@userDesc", DbType.String).Value = d.userDesc; replaces the symbol with escaped values/text/blobs: command.CommandText = "INSERT INTO discData(rootfolderID, time, volumeName, discLabel, " + "userTitle, userDesc) "+ "VALUES(@rootfolderID, @time, @volumeName, @discLabel, " + "@userTitle, @userDesc); " + "SELECT last_insert_rowid() AS RecordID;"; command.Parameters.Add("@rootfolderID", DbType.Int64).Value = d.rootfolderID; command.Parameters.Add("@time", DbType.Int64).Value = d.time; command.Parameters.Add("@volumeName", DbType.String).Value = d.volumeName; command.Parameters.Add("@discLabel", DbType.String).Value = d.discLabel; command.Parameters.Add("@userTitle", DbType.String).Value = d.userTitle; command.Parameters.Add("@userDesc", DbType.String).Value = d.userDesc; d.discID = (long)command.ExecuteScalar();

    Read the article

  • JOIN (SELECT DISTINCT [..] substitute

    - by FRKT
    Hello, I'd like to find a substitute for using SELECT DISTINCT in a derived table. Let's say I have three tables: CREATE TABLE `trades` ( `tradeID` int(11) unsigned NOT NULL AUTO_INCREMENT, `employeeID` int(11) unsigned NOT NULL, `corporationID` int(11) unsigned NOT NULL, `profit` int(11) NOT NULL, KEY `tradeID` (`tradeID`), KEY `employeeID` (`employeeID`), KEY `corporationID` (`corporationID`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 CREATE TABLE `corporations` ( `corporationID` int(11) unsigned NOT NULL AUTO_INCREMENT, `name` varchar(255) NOT NULL, PRIMARY KEY (`corporationID`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 CREATE TABLE `employees` ( `employeeID` int(11) unsigned NOT NULL AUTO_INCREMENT, `name` varchar(255) NOT NULL, PRIMARY KEY (`employeeID`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 Let's say I'd like to find out how much profit a specific employee has generated. Simple: SELECT SUM(profit) FROM trades JOIN employees ON trades.employeeID = employees.employeeID AND employees.employeeID = 1; It gets trickier if I'd like to query how much revenue a specific corporation has, however. I cannot simply replicate the aforementioned query, because two or more employees from the same company might be involved in the same trade. This query should do the trick: SELECT SUM(profit) FROM trades JOIN (SELECT DISTINCT tradeID FROM trades WHERE trades.corporationID = 1) ... unfortunately, DISTINCT JOINs seem crazy ineffective. Is there any alternative I can use to determine how much revenue a corporation has, taking into account that a corporation might be listed several times with the same tradeID?
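
    One hedged alternative, assuming every row sharing a tradeID carries the same profit figure: collapse the duplicates with GROUP BY in the derived table instead of DISTINCT plus a join back to trades.

        -- Hedged sketch: one row per trade for corporation 1, then sum.
        SELECT SUM(per_trade.profit)
        FROM (
            SELECT tradeID, MAX(profit) AS profit   -- assumes profit repeats per trade row
            FROM   trades
            WHERE  corporationID = 1
            GROUP  BY tradeID
        ) AS per_trade;

    If profit can differ between the rows of one trade, the per-trade aggregate (SUM, MAX, or something else) would need to be decided first.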

    Read the article

  • Generic Class Vb.net

    - by KoolKabin
    Hi guys, I am stuck with a problem about generic classes. I am confused about how to call the constructor with parameters. My interface: Public Interface IDBObject Sub [Get](ByRef DataRow As DataRow) Property UIN() As Integer End Interface My child class: Public Class User Implements IDBObject Public Sub [Get](ByRef DataRow As System.Data.DataRow) Implements IDBObject.Get End Sub Public Property UIN() As Integer Implements IDBObject.UIN Get End Get Set(ByVal value As Integer) End Set End Property End Class My next class: Public Class Users Inherits DBLayer(Of User) #Region " Standard Methods " #End Region End Class My DBLayer class: Public Class DBLayer(Of DBObject As {New, IDBObject}) Public Shared Function GetData() As List(Of DBObject) Dim QueryString As String = "SELECT * ***;" Dim Dataset As DataSet = New DataSet() Dim DataList As List(Of DBObject) = New List(Of DBObject) Try Dataset = Query(QueryString) For Each DataRow As DataRow In Dataset.Tables(0).Rows **DataList.Add(New DBObject(DataRow))** Next Catch ex As Exception DataList = Nothing End Try Return DataList End Function End Class I get an error in the starred area of the DBLayer class. What might be the reason, and what can I do to fix it? I even wanted to add New(ByVal someVal As DataType) to the IDBObject interface as an overloaded constructor, but it also gives an error. How can I do it? Adding Sub New(ByVal DataRow As DataRow) to IDBObject produces the following error: 'Sub New' cannot be declared in an interface. The error produced in the DBLayer class at the line DataList.Add(New DBObject(DataRow)) is: Arguments cannot be passed to a 'New' used on a type parameter.

    Read the article

  • How to do an additional search on an archive in Rails if a record is not found, by extending the model?

    - by Nick Gorbikoff
    Hello, I was wondering if somebody knows an elegant solution to the following: Suppose I have a table that holds orders, with a bunch of data. So I'm at 1M records, and searches begin to take time. So I want to speed it up by archiving some data that is more than 3 years old - saving it into a table called orders-archive, and then purging it from the orders table. So if we need to research something or a customer wants to pull older information - they still can, but 99% of the lookups are done on orders no older than a year and a half - so there is no reason to keep looking through older data all the time. These move & purge operations can then be run from cron on a weekly basis. I already did some tests and I know that I will cut my search times by about a factor of 4. So far so good, right? However I was thinking about how to implement the older archival lookups, and the only reasonable thing I can think of is some sort of if-else: if not found in orders, do a search in orders-archive. However - I have about 20 tables that I want to archive, and god knows how many searches / finds are done throughout the code that I don't want to modify. So I was wondering if there is an elegant Rails-way solution to this problem, by extending a model somehow? Has anyone dealt with a similar case before? Thank you.
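
    A database-side alternative (a hedged sketch only, not the Rails-level answer being asked for): give the rare look-everywhere queries a view that unions the live and archive tables, so the hot code paths and the models stay untouched.

        -- Hedged sketch, hypothetical names; assumes orders_archive keeps
        -- the same column list as orders.
        CREATE VIEW orders_with_archive AS
            SELECT * FROM orders
            UNION ALL
            SELECT * FROM orders_archive;

    A read-only model (or scope) pointed at the view could then serve the occasional historical lookups without modifying the existing finders.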

    Read the article

  • How do I rescue a small portion of data from a SQL Server database backup?

    - by Greg
    I have a live database that had some data deleted from it and I need that data back. I have a very recent copy of that database that has already been restored on another machine. Unrelated changes have been made to the live database since the backup, so I do not want to wipe out the live database with a full restore. The data I need is small - just a dozen rows - but those dozen rows each have a couple rows from other tables with foreign keys to it, and those couple rows have god knows how many rows with foreign keys pointing to them, so it would be complicated to restore by hand. Ideally I'd be able to tell the backup copy of the database to select the dozen rows I need, and the transitive closure of everything that they depend on, and everything that depends on them, and export just that data, which I can then import into the live database without touching anything else. What's the best approach to take here? Thanks. Everyone has mentioned sp_generate_inserts. When using this, how do you prevent Identity columns from messing everything up? Do you just turn IDENTITY INSERT on?
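
    On the identity question, a hedged sketch of the usual pattern when replaying generated INSERT scripts (the table and columns here are hypothetical): IDENTITY_INSERT is switched on for one table at a time and the identity column is listed explicitly.

        -- Hedged sketch (T-SQL, hypothetical table/columns).
        SET IDENTITY_INSERT dbo.Orders ON;

        INSERT INTO dbo.Orders (OrderId, CustomerId, Total)   -- explicit column list required
        VALUES (1042, 7, 99.95);

        SET IDENTITY_INSERT dbo.Orders OFF;

    Keeping the original key values this way is what lets the dependent rows' foreign keys still line up when they are inserted afterwards.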

    Read the article

  • nHibernate storage of an object with self referencing many children and many parents

    - by AdamC
    I have an object called MyItem that references children in the same item. How do I set up an nhibernate mapping file to store this item. public class MyItem { public virtual string Id {get;set;} public virtual string Name {get;set;} public virtual string Version {get;set;} public virtual IList<MyItem> Children {get;set;} } So roughly the hbm.xml would be: <class name="MyItem" table="tb_myitem"> <id name="Id" column="id" type="String" length="32"> <generator class="uuid.hex" /> </id> <property name="Name" column="name" /> <property name="Version" column="version" /> <bag name="Children" cascade="all-delete-orphan" lazy="false"> <key column="children_id" /> <one-to-many class="MyItem" not-found="ignore"/> </bag> </class> This wouldn't work I don't think. Perhaps I need to create another class, say MyItemChildren and use that as the Children member and then do the mapping in that class? This would mean having two tables. One table holds the MyItem and the other table holds references from my item. NOTE: A child item could have many parents.

    Read the article

  • MS Excel automation without macros in the generated reports. Any thoughts?

    - by ezeki77
    Hello! I know that the web is full of questions like this one, but I still haven't been able to apply the answers I can find to my situation. I realize there is VBA, but I always disliked having the program/macro living inside the Excel file, with the resulting bloat, security warnings, etc. I'm thinking along the lines of a VBScript that works on a set of Excel files while leaving them macro-free. Now, I've been able to "paint the first column blue" for all files in a directory following this approach, but I need to do more complex operations (charts, pivot tables, etc.), which would be much harder (impossible?) with VBScript than with VBA. For this specific example knowing how to remove all macros from all files after processing would be enough, but all suggestions are welcome. Any good references? Any advice on how to best approach external batch processing of Excel files will be appreciated. Thanks! PS: I eagerly tried Mark Hammond's great PyWin32 package, but the lack of documentation and interpreter feedback discouraged me.

    Read the article

  • Dynamic evaluation of a table column within an insert before trigger

    - by Tim Garver
    Hi all, I have 3 tables: main, types and linked. main has an id column and 32 type columns. types has id, type. linked has id, main_id, type_id. I want to create a before-insert trigger on the main table. It needs to compare its 32 type columns to the values in the types table and, if a main table column has an 'X' for its value, insert the main_id and type_id into the linked table. I have done a lot of searching, and it looks like a prepared statement would be the way to go, but I wanted to ask the experts. The issue is I don't want to write 32 IF statements, and even if I did, I would need to query the types table to get the ID for each type, which seems like a huge waste of resources. Ideally I want to do this inside my trigger: BEGIN DECLARE @types results_set -- (not sure if this is a valid type); -- (I am sure my loop syntax is all wrong here)... SET @types = (select * from types) for i=0;i<types.records;i++ { IF NEW.[i.type] = 'X' THEN insert into linked (main_id,type_id) values (new.ID, i.id); END IF; } END; Anyway, this is what I was hoping to do; maybe there is a way to dynamically set the field name inside a results loop, but I can't find a good example of this. Thanks in advance, Tim
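
    A hedged sketch of one way to avoid both the 32 IF blocks and the per-type lookups (the column names type1, type2, ... are placeholders): dynamic SQL is not allowed inside MySQL triggers, so each column still has to be named once, but the branches and lookups collapse into a single INSERT ... SELECT. An AFTER INSERT trigger is used so NEW.id is already populated when main.id is AUTO_INCREMENT.

        -- Hedged sketch: one INSERT ... SELECT against the types table.
        CREATE TRIGGER main_ai AFTER INSERT ON main
        FOR EACH ROW
            INSERT INTO linked (main_id, type_id)
            SELECT NEW.id, t.id
            FROM   types t
            WHERE (t.type = 'type1' AND NEW.type1 = 'X')
               OR (t.type = 'type2' AND NEW.type2 = 'X')
               -- ... one OR term per remaining type column ...
               OR (t.type = 'type32' AND NEW.type32 = 'X');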

    Read the article

  • How to map a 0..1 to 1 relationship in Entity Framework 3

    - by sako73
    I have two tables, Users and Address. The user table has a field that maps to the primary key of the address table. This field can be null. In plain English, an Address exists independently of other objects, and a user may be associated with one address. In the database, I have this set up as a foreign key relationship. I am attempting to map this relationship in the Entity Framework. I am getting errors on the following code: <Association Name="fk_UserAddress"> <End Role="User" Type="GenesisEntityModel.Store.User" Multiplicity="1"/> <End Role="Address" Type="GenesisEntityModel.Store.Address" Multiplicity="0..1" /> <ReferentialConstraint> <Principal Role="Address"> <PropertyRef Name="addressId"/> </Principal> <Dependent Role="User"> <PropertyRef Name="addressId"/> </Dependent> </ReferentialConstraint> </Association> It is giving a "The Lower Bound of the multiplicity must be 0" error. I would appreciate it if anyone could explain the error, and the best way to solve it. Thanks for any help.

    Read the article

  • Entity Framework POCO objects

    - by Dan
    I'm struggling with understanding Entity Framework and POCO objects. Here's what I'm trying to achieve. 1) Separate the DAL from the Business Layer by having my business layer use an interface to my DAL. Maybe use Unity to create my context. 2) Use Entity Framework inside my DAL. I have a Domain model with objects that reside in my business layer. I also have a database full of tables that doesn't really represent my domain model. I setup Entity Framework and generated POCO objects by using the ADO.NET POCO Generator extension. This gave me an object for each table in my database. Now I want to be able to say context.GetAll<User>(); and have it return a list of my User objects. The User object is in my business layer. Is that possible? Does that make sense or am I totally off and should start over? I'm guessing I need to use the repository pattern to achieve this, but I'm not sure. Can anyone help?

    Read the article

  • Which class should store the lookup table?

    - by max
    The world contains agents at different locations, with only a single agent at any location. Each agent knows where he's at, but I also need to quickly check if there's an agent at a given location. Hence, I also maintain a map from locations to agents. I have a problem deciding where this map belongs to: class World, class Agent (as a class attribute) or elsewhere. In the following I put the lookup table, agent_locations, in class World. But now agents have to call world.update_agent_location every time they move. This is very annoying; what if I decide later to track other things about the agents, apart from their locations - would I need to add calls back to the world object all across the Agent code? class World: def __init__(self, n_agents): # ... self.agents = {} self.agent_locations = {} for id in range(n_agents): x, y = self.find_location() agent = Agent(self,x,y) self.agents.append(agent) self.agent_locations[x,y] = agent def update_agent_location(self, agent, x, y): del self.agent_locations[agent.x, agent.y] self.agent_locations[x, y] = agent def update(self): # next step in the simulation for agent in self.agents: agent.update() # next step for this agent # ... class Agent: def __init__(self, world, x, y): self.world = world self.x, self.y = x, y def move(self, x1, y1): self.world.update_agent_location(self, x1, y1) self.x, self.y = x1, y1 def update(): # find a good location that is not occupied and move there for x, y in self.valid_locations(): if not self.location_is_good(x, y): continue if self.world.agent_locations[x, y]: # location occupied continue self.move(x, y) I can instead put agent_locations in class Agent as a class attribute. But that only works when I have a single World object. If I later decide to instantiate multiple World objects, the lookup tables would need to be world-specific. I am sure there's a better solution... EDIT: I added a few lines to the code to show how agent_locations is used. Note that it's only used from inside Agent objects, but I don't know if that would remain the case forever.

    Read the article

  • Selecting random top 3 listings per shop for a range of active advertising shops

    - by GraGra33
    I’m trying to display a list of shops that are actively advertising, each with 3 random items from their shop if they have 3 or more listings. I have 3 tables: one for the shops – “Shops”, one for the listings – “Listings” and one that tracks active advertisers – “AdShops”. Using the below statement, the listings returned are random; however, I’m not getting exactly 3 listings (rows) returned per shop. SELECT AdShops.ID, Shops.url, Shops.image_url, Shops.user_name AS shop_name, Shops.title, L.listing_id AS listing_id, L.title AS listing_title, L.price as price, L.image_url AS listing_image_url, L.url AS listing_url FROM AdShops INNER JOIN Shops ON AdShops.user_id = Shops.user_id INNER JOIN Listings AS L ON Shops.user_id = L.user_id WHERE (Shops.is_vacation = 0 AND Shops.listing_count > 2 AND L.listing_id IN (SELECT TOP 3 L2.listing_id FROM Listings AS L2 WHERE L2.listing_id IN (SELECT TOP 100 PERCENT L3.listing_id FROM Listings AS L3 WHERE (L3.user_id = L.user_id) ) ORDER BY NEWID() ) ) ORDER BY Shops.shop_name I’m stumped. Anyone have any ideas on how to fix it? The ideal solution would be one record per store with the 3 listings (and associated data) in columns and not rows – is this possible?
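
    A hedged alternative sketch, assuming SQL Server 2005 or later: CROSS APPLY a TOP 3 ... ORDER BY NEWID() subquery per shop, which guarantees at most three random rows come back for each shop.

        -- Hedged sketch built from the columns in the original query.
        SELECT a.ID, s.url, s.image_url, s.user_name AS shop_name, s.title,
               x.listing_id, x.title AS listing_title, x.price,
               x.image_url AS listing_image_url, x.url AS listing_url
        FROM   AdShops a
        INNER JOIN Shops s ON a.user_id = s.user_id
        CROSS APPLY (
            SELECT TOP 3 l.listing_id, l.title, l.price, l.image_url, l.url
            FROM   Listings l
            WHERE  l.user_id = s.user_id
            ORDER  BY NEWID()          -- a fresh random 3 per shop
        ) AS x
        WHERE  s.is_vacation = 0
          AND  s.listing_count > 2
        ORDER  BY s.user_name;

    Getting the three listings into columns on a single row per shop would then be a separate step, e.g. MAX(CASE ...) over a per-shop ROW_NUMBER().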

    Read the article

  • A question about the basics of how LINQ to SQL works

    - by Alex
    I just started learning LINQ to SQL, and so far I'm impressed with the ease of use and good performance. I used to think that when doing LINQ queries like from Customer in DB.Customers where Customer.Age > 30 select Customer it would get all customers from the database ("SELECT * FROM Customers"), move them to a Customers array and then search that array using .NET methods. This would be very inefficient: what if there are hundreds of thousands of customers in the database? Making such big SELECT queries would kill the web application. Now, after experiencing how fast LINQ to SQL actually is, I am starting to suspect that when doing the query I just wrote, LINQ somehow converts it to a SQL query string SELECT * FROM Customers WHERE Age > 30 and only runs the query when necessary. So my question is: am I right? And when is the query actually run? The reason why I'm asking is not only because I want to understand how it works in order to build well-optimized applications, but because I came across the following problem. I have 2 tables, one of them is Books, the other has information on how many books were sold on certain days. My goal is to select books that had at least 50 sales/day in the past 10 days. It's done with this simple query: from Book in DB.Books where (from Sale in DB.Sales where Sale.SalesAmount >= 50 and Sale.DateOfSale >= DateTime.Now.AddDays(-10) select Sale.BookID).Contains(Book.ID) select Book The point is, I have to use the checking part in several queries, and I decided to create an array with the IDs of all popular books: var popularBooksIDs = from Sale in DB.Sales where Sale.SalesAmount >= 50 and Sale.DateOfSale >= DateTime.Now.AddDays(-10) select Sale.BookID; BUT when I try to do the query now: from Book in DB.Books where popularBooksIDs.Contains(Book.ID) select Book It doesn't work! That's why I think that we can't use these kinds of shortcuts in LINQ to SQL queries, like we can't use them in real SQL. We have to create straightforward queries, am I right?

    Read the article

  • How to use MySQL geospatial extensions with spherical geometries

    - by Joshua
    Hi Everyone, I would like to store thousands of latitude/longitude points in a MySQL db. I was successful at setting up the tables and adding the data using the geospatial extensions where the column 'coord' is a Point(lat, lng). Problem: I want to quickly find the 'N' closest entries to latitude 'X' degrees and longitude 'Y' degrees. Since the Distance() function has not yet been implemented, I used GLength() function to calculate the distance between (X,Y) and each of the entries, sorting by ascending distance, and limiting to 'N' results. The problem is that this is not calculating shortest distance with spherical geometry. Which means if Y = 179.9 degrees, the list of closest entries will only include longitudes of starting at 179.9 and decreasing even though closer entries exist with longitudes increasing from -179.9. How does one typically handle the discontinuity in longitude when working with spherical geometries in databases? There has to be an easy solution to this, but I must just be searching for the wrong thing because I have not found anything helpful. Should I just forget the GLength() function and create my own function for calculating angular separation? If I do this, will it still be fast and take advantage of the geospatial extensions? Thanks! josh UPDATE: This is exactly what I am describing above. However, it is only for SQL Server. Apparently SQL Server has a Geometry and Geography datatypes. The geography does exactly what I need. Is there something similar in MySQL?
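
    A common workaround, sketched below with a hypothetical points(id, lat, lng) table holding coordinates in degrees: compute the great-circle (haversine) distance directly, which handles the ±180° longitude wrap that a planar GLength() comparison gets wrong; later MySQL releases also added ST_Distance_Sphere for this.

        -- Hedged sketch: N closest points to (40.0, 179.9), Earth radius ~6371 km.
        -- Note this cannot use the spatial index, so a bounding-box pre-filter
        -- on lat/lng is the usual companion optimisation.
        SELECT id,
               6371 * 2 * ASIN(SQRT(
                   POW(SIN(RADIANS(lat - 40.0) / 2), 2) +
                   COS(RADIANS(40.0)) * COS(RADIANS(lat)) *
                   POW(SIN(RADIANS(lng - 179.9) / 2), 2)
               )) AS distance_km
        FROM   points
        ORDER  BY distance_km
        LIMIT  10;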

    Read the article

  • design pattern for related inputs

    - by curiousMo
    My question is a design question: let's say I have a data entry web page with 4 drop-down lists, each depending on the previous one, and a bunch of text boxes: ddlCountry (DropDownList) ddlState (DropDownList) ddlCity (DropDownList) ddlBoro (DropDownList) txtAddress (TxtBox) txtZipcode (TxtBox) and an object that represents a data row with a value for each: countrySeqid stateSeqid citySeqid boroSeqid address zipCode Naturally the country, state, city and boro values will be primary key values from some lookup tables. When the user chooses to edit that record, I would load it from the database into the page. The issue that I have is how to streamline loading the DropDownLists. I have some code that grabs the object, looks through its values and moves them to their corresponding input controls in one shot, but in this case I will have to load ddlCountry with its possible values, then assign the selected value, then do the same thing for the rest of the DDLs. I guess I am looking for an elegant solution. I am using ASP.NET, but I think that is irrelevant to the question; I am looking more for a design pattern.

    Read the article

  • EF4 POCO Not Updating Navigation Property On Save

    - by Gavin Draper
    I'm using EF4 with POCO objects. The 2 tables are as follows: Service ServiceID, Name, StatusID Status StatusID, Name The POCO objects look like this: Service ServiceID, Status, Name Status StatusID, Name with Status on the Service object being a navigation property of type Status. In my service repository I have a save method that takes a service object, attaches it to the context and calls save. This works fine for the service, but if the status for that service has been changed it does not get updated. My Save method looks like this: public static void SaveService(Service service) { using (var ctx = Context.CreateContext()) { ctx.AttachModify("Services", service); ctx.AttachTo("Statuses",service.Status); ctx.SaveChanges(); } } The AttachModify method attaches an object to the context and sets it to modified; it looks like this: public void AttachModify(string entitySetName, object entity) { if (entity != null) { AttachTo(entitySetName, entity); SetModified(entity); } } public void SetModified(object entity) { ObjectStateManager.ChangeObjectState(entity, EntityState.Modified); } If I look at a SQL profile it's not even including the navigation property in the update for the service table; it never touches the StatusID. It's driving me crazy. Any idea what I need to do to force the navigation property to update?

    Read the article
