Search Results

Search found 6276 results on 252 pages for 'join in'.

  • Rows in their own columns depending on their date and symbolized by 'x'

    - by Chandradyani
    Dear All, please help me since I'm a newbie in SQL Server. I have a select query that currently produces the following results:

    DoctorName  Team  Visit date
    dr. As      A     5
    dr. Sc      A     4
    dr. Gh      B     6
    dr. Nd      C     31
    dr. As      A     7

    Using the following query:

    SELECT d.DoctorName, t.TeamName, ca.VisitDate
    FROM cActivity AS ca
    INNER JOIN doctor AS d ON ca.DoctorId = d.Id
    INNER JOIN team AS t ON ca.TeamId = t.Id
    WHERE ca.VisitDate BETWEEN '1/1/2010' AND '1/31/2010'

    I want to produce the following:

    DoctorName  Team  1  2  3  4  5  6  7  ...  31  Visited
    dr. As      A                 x     x  ...      2 times
    dr. Sc      A              x           ...      1 times
    dr. Gh      B                    x     ...      1 times
    dr. Nd      C                          ...  x   1 times
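
    A common way to get that layout is one CASE expression per day of the month; a minimal sketch, assuming SQL Server and the tables above (columns [4] through [30] follow the same pattern and are elided here):

    SELECT d.DoctorName,
           t.TeamName,
           MAX(CASE WHEN DAY(ca.VisitDate) = 1  THEN 'x' ELSE '' END) AS [1],
           MAX(CASE WHEN DAY(ca.VisitDate) = 2  THEN 'x' ELSE '' END) AS [2],
           MAX(CASE WHEN DAY(ca.VisitDate) = 3  THEN 'x' ELSE '' END) AS [3],
           -- ... one column per remaining day ...
           MAX(CASE WHEN DAY(ca.VisitDate) = 31 THEN 'x' ELSE '' END) AS [31],
           CAST(COUNT(*) AS varchar(10)) + ' times' AS Visited
    FROM cActivity AS ca
    INNER JOIN doctor AS d ON ca.DoctorId = d.Id
    INNER JOIN team AS t ON ca.TeamId = t.Id
    WHERE ca.VisitDate BETWEEN '1/1/2010' AND '1/31/2010'
    GROUP BY d.DoctorName, t.TeamName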

    Read the article

  • File Uploads with Turbogears 2

    - by William Chambers
    I've been trying to work out the 'best practices' way to manage file uploads with TurboGears 2 and have thus far not really found any examples. I've figured out a way to actually upload the file, but I'm not sure how reliable it is. Also, what would be a good way to get the uploaded file's name?

    file = request.POST['file']
    permanent_file = open(os.path.join(asset_dirname, file.filename.lstrip(os.sep)), 'w')
    shutil.copyfileobj(file.file, permanent_file)
    file.file.close()
    this_file = self.request.params["file"].filename
    permanent_file.close()

    So assuming I'm understanding correctly, would something like this avoid the core 'naming' problem?

    id = str(uuid.uuid4())
    file = request.POST['file']
    permanent_file = open(os.path.join(asset_dirname, id.lstrip(os.sep)), 'w')
    shutil.copyfileobj(file.file, permanent_file)
    file.file.close()
    this_file = file.filename
    permanent_file.close()
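
    A minimal sketch of that UUID approach, assuming a TurboGears 2 controller with an asset_dirname defined elsewhere: the original filename is kept separately (for display or a database record) while the file on disk gets a collision-resistant generated name; note 'wb' rather than 'w', since uploads are binary data:

    import os
    import shutil
    import uuid

    upload = request.POST['file']            # FieldStorage-like upload object
    original_name = upload.filename          # keep for display/DB purposes
    stored_name = uuid.uuid4().hex           # collision-resistant on-disk name
    permanent_file = open(os.path.join(asset_dirname, stored_name), 'wb')
    try:
        shutil.copyfileobj(upload.file, permanent_file)
    finally:
        upload.file.close()
        permanent_file.close()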

    Read the article

  • PyML 0.7.2 - How to prevent accuracy from dropping after storing/loading a classifier?

    - by Michael Aaron Safyan
    This is a followup from "Save PyML.classifiers.multi.OneAgainstRest(SVM()) object?". The solution to that question was close, but not quite right (the SparseDataSet is broken, so attempting to save/load with that dataset container type will fail no matter what; also, PyML is inconsistent about whether labels should be numbers or strings: it turns out that the oneAgainstRest function is actually not good enough, because the labels need to be strings and simultaneously convertible to floats, since in some places the label is assumed to be a string and elsewhere it is converted to float), and so after a great deal of hacking I was finally able to figure out a way to save and load my multi-class classifier without it blowing up with an error. However, although it is no longer giving me an error message, it is still not quite right: the accuracy of the classifier drops significantly when it is saved and then reloaded, so I'm still missing a piece of the puzzle. I am currently using the following custom multi-class classifier for training, saving, and loading:

    class SVM(object):
        def __init__(self, features_or_filename, labels=None, kernel=None):
            if isinstance(features_or_filename, str):
                filename = features_or_filename
                if labels != None:
                    raise ValueError, "Labels must be None if loading from a file."
                with open(os.path.join(filename, "uniquelabels.list"), "rb") as uniquelabelsfile:
                    self.uniquelabels = sorted(list(set(pickle.load(uniquelabelsfile))))
                self.labeltoindex = {}
                for idx, label in enumerate(self.uniquelabels):
                    self.labeltoindex[label] = idx
                self.classifiers = []
                for classidx, classname in enumerate(self.uniquelabels):
                    self.classifiers.append(PyML.classifiers.svm.loadSVM(
                        os.path.join(filename, str(classname) + ".pyml.svm"),
                        datasetClass=PyML.VectorDataSet))
            else:
                features = features_or_filename
                if labels == None:
                    raise ValueError, "Labels must not be None when training."
                self.uniquelabels = sorted(list(set(labels)))
                self.labeltoindex = {}
                for idx, label in enumerate(self.uniquelabels):
                    self.labeltoindex[label] = idx
                points = [[float(xij) for xij in xi] for xi in features]
                self.classifiers = [PyML.SVM(kernel) for label in self.uniquelabels]
                for i in xrange(len(self.uniquelabels)):
                    currentlabel = self.uniquelabels[i]
                    currentlabels = ['+1' if k == currentlabel else '-1' for k in labels]
                    currentdataset = PyML.VectorDataSet(points, L=currentlabels, positiveClass='+1')
                    self.classifiers[i].train(currentdataset, saveSpace=False)

        def accuracy(self, pts, labels):
            logger = logging.getLogger("ml")
            correct = 0
            total = 0
            classindexes = [self.labeltoindex[label] for label in labels]
            h = self.hypotheses(pts)
            for idx in xrange(len(pts)):
                if h[idx] == classindexes[idx]:
                    logger.info("RIGHT: Actual \"%s\" == Predicted \"%s\"" % (self.uniquelabels[classindexes[idx]], self.uniquelabels[h[idx]]))
                    correct += 1
                else:
                    logger.info("WRONG: Actual \"%s\" != Predicted \"%s\"" % (self.uniquelabels[classindexes[idx]], self.uniquelabels[h[idx]]))
                total += 1
            return float(correct) / float(total)

        def prediction(self, pt):
            h = self.hypothesis(pt)
            if h != None:
                return self.uniquelabels[h]
            return h

        def predictions(self, pts):
            h = self.hypotheses(pts)
            return [self.uniquelabels[x] if x != None else None for x in h]

        def hypothesis(self, pt):
            bestvalue = None
            bestclass = None
            dataset = PyML.VectorDataSet([pt])
            for classidx, classifier in enumerate(self.classifiers):
                val = classifier.decisionFunc(dataset, 0)
                if (bestvalue == None) or (val > bestvalue):
                    bestvalue = val
                    bestclass = classidx
            return bestclass

        def hypotheses(self, pts):
            bestvalues = [None for pt in pts]
            bestclasses = [None for pt in pts]
            dataset = PyML.VectorDataSet(pts)
            for classidx, classifier in enumerate(self.classifiers):
                for ptidx in xrange(len(pts)):
                    val = classifier.decisionFunc(dataset, ptidx)
                    if (bestvalues[ptidx] == None) or (val > bestvalues[ptidx]):
                        bestvalues[ptidx] = val
                        bestclasses[ptidx] = classidx
            return bestclasses

        def save(self, filename):
            if not os.path.exists(filename):
                os.makedirs(filename)
            with open(os.path.join(filename, "uniquelabels.list"), "wb") as uniquelabelsfile:
                pickle.dump(self.uniquelabels, uniquelabelsfile, pickle.HIGHEST_PROTOCOL)
            for classidx, classname in enumerate(self.uniquelabels):
                self.classifiers[classidx].save(os.path.join(filename, str(classname) + ".pyml.svm"))

    I am using the latest version of PyML (0.7.2, although PyML.__version__ is 0.7.0). When I construct the classifier with a training dataset, the reported accuracy is ~0.87. When I then save it and reload it, the accuracy is less than 0.001. So there is something here that I am clearly not persisting correctly, although what that may be is completely non-obvious to me. Would you happen to know what that is?
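
    Not an answer, but a harness along these lines can localize the loss by comparing raw decision values before and after the round trip; a sketch, assuming the SVM class above plus some held-out test_pts/test_labels:

    svm = SVM(features, labels, kernel)
    print "before save:", svm.accuracy(test_pts, test_labels)
    svm.save("/tmp/svm_check")
    svm2 = SVM("/tmp/svm_check")
    assert svm2.uniquelabels == svm.uniquelabels  # label order must survive the round trip
    # probe with one representative point (floats, as used in training)
    dataset = PyML.VectorDataSet([[float(x) for x in test_pts[0]]])
    for classidx in xrange(len(svm.classifiers)):
        before = svm.classifiers[classidx].decisionFunc(dataset, 0)
        after = svm2.classifiers[classidx].decisionFunc(dataset, 0)
        print classidx, before, after  # per-class drift pinpoints which classifier breaks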

    Read the article

  • How to list directory hierarchy in PyGTK treeview widget?

    - by lyrae
    I am trying to generate a hierarchical directory listing in PyGTK. Currently, I have the following directory tree:

    /root
        folderA
            subdirA
                subA.py
            a.py
        folderB
            b.py

    I have written a function that almost seems to work:

    def go(root, piter=None):
        for filename in os.listdir(root):
            isdir = os.path.isdir(os.path.join(root, filename))
            piter = self.treestore.append(piter, [filename])
            if isdir == True:
                go(os.path.join(root, filename), piter)

    This is what I get when I run the app: [screenshot missing from the original post]. I also think my function is inefficient and that I should be using os.walk(), since it already exists for this purpose. How can I, and what is the proper/most efficient way of, generating a directory tree with PyGTK?
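
    For the os.walk() route, one workable shape (a sketch, assuming a gtk.TreeStore with a single text column as above) keeps a dict from each directory path to its row iter, so children are appended under the right parent and sibling rows no longer nest under each other:

    import os

    def populate(treestore, root):
        # map each visited directory to the iter of its row in the tree
        parents = {root: None}
        for dirpath, dirnames, filenames in os.walk(root):
            parent = parents[dirpath]
            for dirname in sorted(dirnames):
                full = os.path.join(dirpath, dirname)
                parents[full] = treestore.append(parent, [dirname])
            for filename in sorted(filenames):
                treestore.append(parent, [filename])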

    Read the article

  • mysql select from a table depending on which table the data is in

    - by user253530
    I have 3 tables holding products for a restaurant: products that reside in the bar, food and ingredients. I use PHP and MySQL. I have another table that holds information about the orders that have been made so far. Its two most important fields hold the id of the product and the type (from the bar, from the kitchen or from the ingredients). I was thinking of writing the SQL query like the one below, to use either the table for bar products, kitchen or ingredients, but it doesn't work. Basically the second table in the join must be either "bar", "produse" or "stoc".

    SELECT K.nume, COUNT(K.cantitate) as cantitate, SUM(K.pret) as pret, P.nume as NumeProduse
    FROM `clienti_fideli` as K
    JOIN if(P.tip, bar, produse) AS P ON K.produs = P.id_prod
    WHERE K.masa = 18 and K.nume LIKE 'livrari-la-domiciliu'
    GROUP BY NumeProduse
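
    SQL cannot pick the joined table per row, but the same effect comes from joining a UNION ALL of the three tables, each tagged with its type; a sketch, assuming each product table exposes id_prod and nume, and that clienti_fideli has a type column (called tip here, an assumed name):

    SELECT K.nume, COUNT(K.cantitate) AS cantitate, SUM(K.pret) AS pret, P.nume AS NumeProduse
    FROM `clienti_fideli` AS K
    JOIN (
        SELECT 'bar' AS tip, id_prod, nume FROM bar
        UNION ALL
        SELECT 'produse' AS tip, id_prod, nume FROM produse
        UNION ALL
        SELECT 'stoc' AS tip, id_prod, nume FROM stoc
    ) AS P ON P.tip = K.tip AND P.id_prod = K.produs
    WHERE K.masa = 18 AND K.nume LIKE 'livrari-la-domiciliu'
    GROUP BY NumeProduse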

    Read the article

  • INSERT OR IGNORE in a trigger

    - by dan04
    I have a database (for tracking email statistics) that has grown to hundreds of megabytes, and I've been looking for ways to reduce it. It seems that the main reason for the large file size is that the same strings tend to be repeated in thousands of rows. To avoid this problem, I plan to create another table for a string pool, like so:

    CREATE TABLE AddressLookup (
        ID INTEGER PRIMARY KEY AUTOINCREMENT,
        Address TEXT UNIQUE
    );
    CREATE TABLE EmailInfo (
        MessageID INTEGER PRIMARY KEY AUTOINCREMENT,
        ToAddrRef INTEGER REFERENCES AddressLookup(ID),
        FromAddrRef INTEGER REFERENCES AddressLookup(ID)
        /* Additional columns omitted for brevity. */
    );

    And for convenience, a view to join these tables:

    CREATE VIEW EmailView AS
    SELECT MessageID, A1.Address AS ToAddr, A2.Address AS FromAddr
    FROM EmailInfo
    LEFT JOIN AddressLookup A1 ON (ToAddrRef = A1.ID)
    LEFT JOIN AddressLookup A2 ON (FromAddrRef = A2.ID);

    In order to be able to use this view as if it were a regular table, I've made some triggers:

    CREATE TRIGGER trg_id_EmailView INSTEAD OF DELETE ON EmailView
    BEGIN
        DELETE FROM EmailInfo WHERE MessageID = OLD.MessageID;
    END;

    CREATE TRIGGER trg_ii_EmailView INSTEAD OF INSERT ON EmailView
    BEGIN
        INSERT OR IGNORE INTO AddressLookup(Address) VALUES (NEW.ToAddr);
        INSERT OR IGNORE INTO AddressLookup(Address) VALUES (NEW.FromAddr);
        INSERT INTO EmailInfo
        SELECT NEW.MessageID, A1.ID, A2.ID
        FROM AddressLookup A1, AddressLookup A2
        WHERE A1.Address = NEW.ToAddr AND A2.Address = NEW.FromAddr;
    END;

    CREATE TRIGGER trg_iu_EmailView INSTEAD OF UPDATE ON EmailView
    BEGIN
        UPDATE EmailInfo SET MessageID = NEW.MessageID WHERE MessageID = OLD.MessageID;
        REPLACE INTO EmailView
        SELECT NEW.MessageID, NEW.ToAddr, NEW.FromAddr;
    END;

    The problem: after

    INSERT OR REPLACE INTO EmailView VALUES (1, '[email protected]', '[email protected]');
    INSERT OR REPLACE INTO EmailView VALUES (2, '[email protected]', '[email protected]');

    the updated rows contain:

    MessageID  ToAddr             FromAddr
    ---------  ------             --------
    1          NULL               [email protected]
    2          [email protected]    [email protected]

    There's a NULL that shouldn't be there. The corresponding cell in the EmailInfo table contains an orphaned ToAddrRef value. If you do the INSERTs one at a time, you'll see that Alice's ID in the AddressLookup table changes! It appears that this behavior is documented: "An ON CONFLICT clause may be specified as part of an UPDATE or INSERT action within the body of the trigger. However if an ON CONFLICT clause is specified as part of the statement causing the trigger to fire, then conflict handling policy of the outer statement is used instead." So the "REPLACE" in the top-level "INSERT OR REPLACE" statement is overriding the critical "INSERT OR IGNORE" in the trigger program. Is there a way I can make it work the way that I wanted?
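
    Since the outer statement's conflict policy overrides the trigger's, one workaround is to avoid depending on a conflict clause at all: insert the address only when it is not already present, so no conflict ever arises for the outer REPLACE to take over. A sketch of the INSERT trigger rewritten along those lines:

    CREATE TRIGGER trg_ii_EmailView INSTEAD OF INSERT ON EmailView
    BEGIN
        -- no conflict clause: the guard makes the insert a no-op for known addresses
        INSERT INTO AddressLookup(Address)
            SELECT NEW.ToAddr
            WHERE NOT EXISTS (SELECT 1 FROM AddressLookup WHERE Address = NEW.ToAddr);
        INSERT INTO AddressLookup(Address)
            SELECT NEW.FromAddr
            WHERE NOT EXISTS (SELECT 1 FROM AddressLookup WHERE Address = NEW.FromAddr);
        INSERT INTO EmailInfo
        SELECT NEW.MessageID, A1.ID, A2.ID
        FROM AddressLookup A1, AddressLookup A2
        WHERE A1.Address = NEW.ToAddr AND A2.Address = NEW.FromAddr;
    END;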

    Read the article

  • Can I access Spring session-scoped beans from application-scoped beans? How?

    - by Corvus
    I'm trying to make a 2-player web game application using Spring MVC. I have a session-scoped bean Player and an application-scoped bean GameList, which creates and stores Game instances and passes them to Players. One player creates a game and gets its ID from GameList, and the other player sends the ID to GameList and gets the Game instance. The Game instance has its players as attributes. The problem is that each player sees only himself instead of the other one. Example of what each player sees:

    First player (Alice) creates a game: Creator: Alice, Joiner: Empty
    Second player (Bob) joins the game: Creator: Bob, Joiner: Bob
    First player refreshes her browser: Creator: Alice, Joiner: Alice

    What I want them to see is Creator: Alice, Joiner: Bob. An easy way to achieve this would be saving information about the players instead of references to them, but the game object needs to call methods on its player objects, so this is not a solution. I think it's because of the aop:scoped-proxy on the session-scoped Player bean. If I understand this correctly, the Game object has a reference to a proxy, which refers to the Player object of the current session. Can a Game instance save/access the other Player objects somehow?

    Beans in dispatcher-servlet.xml:

    <bean id="userDao" class="authorization.UserDaoFakeImpl" />
    <bean id="gameList" class="model.GameList" />
    <bean name="/init/*" class="controller.InitController" >
        <property name="gameList" ref="gameList" />
        <property name="game" ref="game" />
        <property name="player" ref="player" />
    </bean>
    <bean id="game" class="model.GameContainer" scope="session">
        <aop:scoped-proxy/>
    </bean>
    <bean id="player" class="beans.Player" scope="session">
        <aop:scoped-proxy/>
    </bean>

    Methods in controller.InitController:

    private GameList gameList;
    private GameContainer game;
    private Player player;

    public ModelAndView create(HttpServletRequest request, HttpServletResponse response) throws Exception {
        game.setGame(gameList.create(player));
        return new ModelAndView("redirect:game");
    }

    public ModelAndView join(HttpServletRequest request, HttpServletResponse response, GameId gameId) throws Exception {
        game.setGame(gameList.join(player, gameId.getId()));
        return new ModelAndView("redirect:game");
    }

    Called methods in model.GameList:

    public Game create(Player creator) {
        Integer code = generateCode();
        Game game = new Game(creator, code);
        games.put(code, game);
        return game;
    }

    public Game join(Player joiner, Integer code) {
        Game game = games.get(code);
        if (game != null) {
            game.setJoiner(joiner);
        }
        return game;
    }
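
    One way out, sketched below as an assumption rather than a tested fix: never hand the scoped proxy itself to a long-lived object, since the proxy re-resolves to whoever's session is current. The controller can copy the current session's player state into a plain object before storing it in the Game (the setName/getName accessors on beans.Player are hypothetical):

    public ModelAndView join(HttpServletRequest request, HttpServletResponse response, GameId gameId) throws Exception {
        // 'player' is the session-scoped proxy; snapshot it into a real object
        // before it gets stored anywhere that outlives this session's requests
        Player joiner = new Player();
        joiner.setName(player.getName());  // hypothetical accessor
        game.setGame(gameList.join(joiner, gameId.getId()));
        return new ModelAndView("redirect:game");
    }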

    Read the article

  • What should I do with syncobj in SQL Server

    - by hgulyan
    Hi. I run this script to search for particular text in sys.columns, and I get a lot of results like "dbo.syncobj_0x3934443438443332".

    SELECT c.name, s.name + '.' + o.name
    FROM sys.columns c
    INNER JOIN sys.objects o ON c.object_id = o.object_id
    INNER JOIN sys.schemas s ON o.schema_id = s.schema_id
    WHERE c.name LIKE '%text%'

    If I get it right, they are replication objects. Is that so? Can I just exclude them from my query with something like o.name NOT LIKE '%syncobj%', or is there another way? Thank you.
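
    If these are replication sync objects, as the name suggests, filtering by name works; a slightly more robust sketch also skips objects created by internal SQL Server components via sys.objects.is_ms_shipped (assuming the replication objects are flagged that way on this server):

    SELECT c.name, s.name + '.' + o.name
    FROM sys.columns c
    INNER JOIN sys.objects o ON c.object_id = o.object_id
    INNER JOIN sys.schemas s ON o.schema_id = s.schema_id
    WHERE c.name LIKE '%text%'
      AND o.is_ms_shipped = 0           -- skip internally created objects
      AND o.name NOT LIKE 'syncobj%';   -- belt and braces for sync views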

    Read the article

  • Can this sql query be simplified?

    - by Bas
    I have the following tables:

    Person {"Id", "Name", "LastName"}
    Sports {"Id", "Name", "Type"}
    SportsPerPerson {"Id", "PersonId", "SportsId"}

    For my query I want to get all the Persons that exercise a specific Sport, when I only have the Sports "Name" attribute at my disposal. To retrieve the correct rows I've figured out the following queries:

    SELECT * FROM Person
    WHERE Person.Id IN (
        SELECT SportsPerPerson.PersonId FROM SportsPerPerson
        INNER JOIN Sports ON SportsPerPerson.SportsId = Sports.Id
        WHERE Sports.Name = 'Tennis'
    )
    AND Person.Id IN (
        SELECT SportsPerPerson.PersonId FROM SportsPerPerson
        INNER JOIN Sports ON SportsPerPerson.SportsId = Sports.Id
        WHERE Sports.Name = 'Soccer'
    )

    OR

    SELECT * FROM Person
    WHERE Id IN (SELECT PersonId FROM SportsPerPerson WHERE SportsId IN (SELECT Id FROM Sports WHERE Name = 'Tennis'))
    AND Id IN (SELECT PersonId FROM SportsPerPerson WHERE SportsId IN (SELECT Id FROM Sports WHERE Name = 'Soccer'))

    Now my question is: isn't there an easier way to write this query? Using just OR won't work, because I need the persons who play 'Tennis' AND 'Soccer'. But using AND also doesn't work, because the values aren't on the same row.
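
    A common shortening is to group the join and demand that both sports be present; a sketch, counting distinct names so duplicate SportsPerPerson rows cannot inflate the count:

    SELECT p.Id, p.Name, p.LastName
    FROM Person p
    INNER JOIN SportsPerPerson spp ON spp.PersonId = p.Id
    INNER JOIN Sports s ON s.Id = spp.SportsId
    WHERE s.Name IN ('Tennis', 'Soccer')
    GROUP BY p.Id, p.Name, p.LastName
    HAVING COUNT(DISTINCT s.Name) = 2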

    Read the article

  • using joins or multiple queries in php/mysql

    - by askkirati
    Here I need help with joins. I have two tables, say articles and users. While displaying articles I also need to display user info like the username, etc. So would it be better to just use a join to fetch the user info while displaying articles, like below?

    SELECT a.*, u.username, u.id
    FROM articles a
    JOIN users u ON u.id = a.user_id

    Or I can do this in PHP: first I get the articles with the SQL below,

    SELECT * FROM articles

    then after I get the articles array I loop through it and get the user info inside each loop, like below:

    SELECT username, id FROM users WHERE id='".$articles->user_id."'

    Which is better? Can I have an explanation of why, too? Thank you for any reply or views.

    Read the article

  • Implementing a "special" Access database to Expression Web

    - by Newbie
    I have just got an answer to my question about combining the results of a SQL query. This was done with "ConcatRelated". Now I want to implement this in Expression Web 3. The SQL I used in Access:

    SELECT land.id, land.official_name, vaksiner.vaksiner
    FROM land
    INNER JOIN (vaksiner INNER JOIN land_sykdom ON vaksiner.id = land_sykdom.sykdom)
        ON land.kort = land_sykdom.land
    ORDER BY land.official_name;

    and

    SELECT DISTINCT id, official_name, ConcatRelated("vaksiner","qryVaksinerRaw","id = " & [id]) AS vaksiner
    FROM qryVaksinerRaw;

    The last one is saved as vaksine_query. This is the SQL that I want to add in Expression Web:

    SELECT vaksine_query.id, vaksine_query.official_name, vaksine_query.vaksiner
    FROM vaksine_query
    WHERE vaksine_query.id = "?";

    Expression Web gives me the error message "Undefined function 'ContactRelated' in expression."

    Read the article

  • UPDATE REGEX MYSQL

    - by Simon
    I have a table of contacts and a table of postcode data. I need to match on the first part of the postcode and then join that with the postcode table... and then perform an update. I want to do something like this:

    UPDATE `contacts`
    LEFT JOIN `postcodes`
        ON PREG_GREP("/^[A-Z]{1,2}[0-9][0-9A-Z]{0,1}/", `contacts`.`postcode`) = `postcodes`.`postcode`
    SET `contacts`.`lat` = `postcodes`.`lat`,
        `contacts`.`lng` = `postcodes`.`lng`

    Is it possible? Or do I need to use an external script? Many thanks.
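
    MySQL has no PREG_GREP; REGEXP only works as a boolean predicate, so there is no regex capture to join on. If the outward code is simply everything before the first space, SUBSTRING_INDEX is enough; a sketch under that assumption:

    UPDATE `contacts`
    LEFT JOIN `postcodes`
        ON SUBSTRING_INDEX(`contacts`.`postcode`, ' ', 1) = `postcodes`.`postcode`
    SET `contacts`.`lat` = `postcodes`.`lat`,
        `contacts`.`lng` = `postcodes`.`lng`;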

    Read the article

  • Oracle analytic functions for "the attribute from the row with the max date"

    - by tpdi
    I'm refactoring a colleague's code, and I have several cases where he's using a cursor to get "the latest row that matches some predicate": his technique is to write the join as a cursor, order it by the date field descending, open the cursor, get the first row, and close the cursor. This requires calling a cursor for each row of the result set that drives this, which is costly for many rows. I'd prefer to join, but want something cheaper than a correlated subquery:

    SELECT a.id_shared_by_several_rows, a.foo
    FROM audit_trail a
    WHERE a.entry_date = (
        SELECT MAX(b.entry_date)
        FROM audit_trail b
        WHERE b.id_shared_by_several_rows = a.id_shared_by_several_rows
    );

    I'm guessing that since this is a common need, there's an Oracle analytic function that does this?
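
    ROW_NUMBER does this in a single pass; a sketch, assuming one row per id is wanted and ties on entry_date may be broken arbitrarily:

    SELECT id_shared_by_several_rows, foo
    FROM (
        SELECT a.id_shared_by_several_rows,
               a.foo,
               ROW_NUMBER() OVER (PARTITION BY a.id_shared_by_several_rows
                                  ORDER BY a.entry_date DESC) AS rn
        FROM audit_trail a
    )
    WHERE rn = 1;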

    Read the article

  • Optimize SQL with Interbase

    - by Roland Bengtsson
    I was inspired by the good answers to my previous question about SQL. Now this SQL is run on a DB with InterBase 2009. It is about 21 GB in size.

    SELECT DistanceAsMeters, Bold_Id, Created, AddressFrom.CityName_CO as FromCity, AddressTo.CityName_CO as ToCity
    FROM AddrDistance
    LEFT JOIN Address AddressFrom ON AddrDistance.FromAddress = AddressFrom.Bold_Id
    LEFT JOIN Address AddressTo ON AddrDistance.ToAddress = AddressTo.Bold_Id
    WHERE DistanceAsMeters = 0 and PseudoDistanceAsCostKm = 0
        and not AddrDistance.bold_id in (select bold_id from DistanceQueryTask)
    ORDER BY Created DESC

    There are 840,000 rows in AddrDistance, 190,000 rows in Address, and 4 in DistanceQueryTask. The question is: can this be done faster? I guess the same subquery, select bold_id from DistanceQueryTask, is run many times. Note that I'm not interested in stored procedures, just plain SQL :)
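
    One common rewrite that often helps older optimizers is replacing NOT IN (subquery) with a LEFT JOIN ... IS NULL anti-join, so the subquery is not re-evaluated per row; a sketch, assuming DistanceQueryTask.bold_id is indexed:

    SELECT DistanceAsMeters, AddrDistance.Bold_Id, Created,
           AddressFrom.CityName_CO AS FromCity, AddressTo.CityName_CO AS ToCity
    FROM AddrDistance
    LEFT JOIN Address AddressFrom ON AddrDistance.FromAddress = AddressFrom.Bold_Id
    LEFT JOIN Address AddressTo ON AddrDistance.ToAddress = AddressTo.Bold_Id
    LEFT JOIN DistanceQueryTask dqt ON dqt.bold_id = AddrDistance.Bold_Id
    WHERE DistanceAsMeters = 0 AND PseudoDistanceAsCostKm = 0
      AND dqt.bold_id IS NULL
    ORDER BY Created DESC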

    Read the article

  • linq2sql and multiple joins

    - by zerkms
    Is it possible to do multiple joins:

    from g in dataContext.Groups
    join ug in dataContext.UsersGroups on g.Id equals ug.GroupId
    join u in dataContext.Users on u.
    where ug.UserId == user.Id
    select GroupRepository.ToEntity(g);

    In the sample above all is fine until I press "." at the end of the 3rd line. There I expect to get IntelliSense and write u.Id == ug.UserId, but it doesn't appear, and of course the code doesn't compile afterwards. What did I do wrong?
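
    In query syntax the key expression before equals may only use the earlier range variables, and the one after equals only the new variable, so u is simply not in scope where the cursor sits, which is why IntelliSense stays silent. A sketch of the same query with the keys the required way round:

    from g in dataContext.Groups
    join ug in dataContext.UsersGroups on g.Id equals ug.GroupId
    join u in dataContext.Users on ug.UserId equals u.Id  // u may only appear after 'equals'
    where ug.UserId == user.Id
    select GroupRepository.ToEntity(g);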

    Read the article

  • Strange: planner takes the decision with the lower cost, but (very) long query runtime

    - by S38
    Facts:
    - PGSQL 8.4.2, Linux
    - I make use of table inheritance
    - Each table contains 3 million rows
    - Indexes on joining columns are set
    - Table statistics (analyze, vacuum analyze) are up-to-date
    - Only used table is "node", with various partitioned sub-tables
    - Recursive query (pg >= 8.4)

    Now here is the explained query:

    WITH RECURSIVE rows AS (
        SELECT * FROM (
            SELECT r.id, r.set, r.parent, r.masterid
            FROM d_storage.node_dataset r
            WHERE masterid = 3533933
        ) q
        UNION ALL
        SELECT * FROM (
            SELECT c.id, c.set, c.parent, r.masterid
            FROM rows r
            JOIN a_storage.node c ON c.parent = r.id
        ) q
    )
    SELECT r.masterid, r.id AS nodeid
    FROM rows r

    QUERY PLAN
    CTE Scan on rows r  (cost=2742105.92..2862119.94 rows=6000701 width=16) (actual time=0.033..172111.204 rows=4 loops=1)
      CTE rows
        ->  Recursive Union  (cost=0.00..2742105.92 rows=6000701 width=28) (actual time=0.029..172111.183 rows=4 loops=1)
              ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.025..0.027 rows=1 loops=1)
                    Index Cond: (masterid = 3533933)
              ->  Hash Join  (cost=0.33..262208.33 rows=600070 width=28) (actual time=40628.371..57370.361 rows=1 loops=3)
                    Hash Cond: (c.parent = r.id)
                    ->  Append  (cost=0.00..211202.04 rows=12001404 width=20) (actual time=0.011..46365.669 rows=12000004 loops=3)
                          ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.002..0.002 rows=0 loops=3)
                          ->  Seq Scan on node_dataset c  (cost=0.00..55001.01 rows=3000001 width=20) (actual time=0.007..3426.593 rows=3000001 loops=3)
                          ->  Seq Scan on node_stammdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=0.008..9049.189 rows=3000001 loops=3)
                          ->  Seq Scan on node_stammdaten_adresse c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=3.455..8381.725 rows=3000001 loops=3)
                          ->  Seq Scan on node_testdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=1.810..5259.178 rows=3000001 loops=3)
                    ->  Hash  (cost=0.20..0.20 rows=10 width=16) (actual time=0.010..0.010 rows=1 loops=3)
                          ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.002..0.004 rows=1 loops=3)
    Total runtime: 172111.371 ms
    (16 rows)

    So far so bad: the planner decides to use hash joins (good) but no indexes (bad). Now, after doing the following:

    SET enable_hashjoin TO false;

    the explained query looks like this:

    QUERY PLAN
    CTE Scan on rows r  (cost=15198247.00..15318261.02 rows=6000701 width=16) (actual time=0.038..49.221 rows=4 loops=1)
      CTE rows
        ->  Recursive Union  (cost=0.00..15198247.00 rows=6000701 width=28) (actual time=0.032..49.201 rows=4 loops=1)
              ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.028..0.031 rows=1 loops=1)
                    Index Cond: (masterid = 3533933)
              ->  Nested Loop  (cost=0.00..1507822.44 rows=600070 width=28) (actual time=10.384..16.382 rows=1 loops=3)
                    Join Filter: (r.id = c.parent)
                    ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.001..0.003 rows=1 loops=3)
                    ->  Append  (cost=0.00..113264.67 rows=3001404 width=20) (actual time=8.546..12.268 rows=1 loops=4)
                          ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.001..0.001 rows=0 loops=4)
                          ->  Bitmap Heap Scan on node_dataset c  (cost=58213.87..113214.88 rows=3000001 width=20) (actual time=1.906..1.906 rows=0 loops=4)
                                Recheck Cond: (c.parent = r.id)
                                ->  Bitmap Index Scan on node_dataset_parent  (cost=0.00..57463.87 rows=3000001 width=0) (actual time=1.903..1.903 rows=0 loops=4)
                                      Index Cond: (c.parent = r.id)
                          ->  Index Scan using node_stammdaten_parent on node_stammdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=3.272..3.273 rows=0 loops=4)
                                Index Cond: (c.parent = r.id)
                          ->  Index Scan using node_stammdaten_adresse_parent on node_stammdaten_adresse c  (cost=0.00..8.60 rows=1 width=20) (actual time=4.333..4.333 rows=0 loops=4)
                                Index Cond: (c.parent = r.id)
                          ->  Index Scan using node_testdaten_parent on node_testdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=2.745..2.746 rows=0 loops=4)
                                Index Cond: (c.parent = r.id)
    Total runtime: 49.349 ms
    (21 rows)

    Incredibly faster, because indexes were used. Notice that the cost of the second query is somewhat higher than that of the first query. So the main question is: why does the planner make the first decision instead of the second? Also interesting: via SET enable_seqscan TO false; I temporarily disabled seq scans. Then the planner used indexes and hash joins, and the query was still slow. So the problem seems to be the hash join. Maybe someone can help in this confusing situation? thx, R.
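
    The row estimates hint at the cause: the recursive term is costed for 600,070 rows per iteration while only one shows up, so a hash join over a full Append scan looks cheap on paper. The planner cannot gather statistics through a recursive CTE, but making random index access look cheaper is a common stopgap; a sketch of session-level knobs to experiment with (the values are guesses to tune, not recommendations):

    -- try per session before touching postgresql.conf
    SET random_page_cost = 2.0;        -- default 4.0 penalizes index (random) access
    SET effective_cache_size = '4GB';  -- how much OS cache the planner may assume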

    Read the article

  • Error preparing statement in Jasper

    - by Augusto
    Hi, I'm trying to create a report with Jasper, but I keep getting this exception when running from my app (it runs OK from iReport):

    net.sf.jasperreports.engine.JRException: Error preparing statement for executing the report query

    Here's the query I'm using:

    SELECT produtos.`Descricao` AS produtos_Descricao,
           saidas.`Quantidade` AS saidas_Quantidade,
           saidas.`Data` AS saidas_Data,
           motivossaidas.`Motivo` AS motivossaidas_Motivo
    FROM `produtos` produtos
    INNER JOIN `saidas` saidas ON produtos.`Id` = saidas.`Id_Produto`
    INNER JOIN `motivossaidas` motivossaidas ON saidas.`Id_MotivoSaida` = motivossaidas.`id`
    WHERE motivossaidas.`motivo` = $P{MOTIVO}

    and the parameter definition:

    <parameter name="MOTIVO" class="java.lang.String"/>

    The exception occurs when I do

    JasperPrint jasperPrint = JasperFillManager.fillReport(relatorio, parametros);

    where relatorio is a JasperReport object loaded with JRLoader.loadObject(URL) and parametros is a HashMap with the following key/values: REPORT_CONNECTION = MySQL JDBC connection, MOTIVO = "Venda". I really don't know what to do here. I keep getting the exception even if I use a query without any parameters. Why do I get this exception? What should I do here? Thanks!
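
    A missing or unusable connection at fill time produces exactly this "error preparing statement" failure, so one thing worth trying first, as a sketch, is handing the JDBC connection to fillReport directly as its third argument instead of tucking it into the parameter map (conexao is an assumed open java.sql.Connection):

    Map<String, Object> parametros = new HashMap<String, Object>();
    parametros.put("MOTIVO", "Venda");
    // pass the connection explicitly rather than via a REPORT_CONNECTION entry
    JasperPrint jasperPrint = JasperFillManager.fillReport(relatorio, parametros, conexao);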

    Read the article

  • SELECT a list of elements and 5 tags for each one

    - by Vittorio Vittori
    Hi, I'm trying to query a set of buildings listed in a table; these buildings are linked with tags. I'm able to do it, but my problem is how to limit the number of tags shown:

    table buildings
    id  building_name  style
    1   Pompidou       bla
    2   Alcatraz       bla
    3   etc.           etc.

    table tags (there can be 50 or more per building)
    id  tag_name
    1   minimal
    2   gothic
    3   classical
    4   modern
    5   etc.

    table buildings_tags
    id  building_id  tag_id

    I thought about doing something like this to retrieve the list, but it isn't complete:

    SELECT DISTINCT(tag), building_name
    FROM buildings
    INNER JOIN buildings_tags ON buildings.id = buildings_tags.building_id
    INNER JOIN tags ON tags.id = buildings_tags.tag_id
    LIMIT 0, 20

    This query loads the buildings, but it loads all the tags linked to each building, not only 5 of them:

    building  tag
    Pompidou  great
    Pompidou  france
    Pompidou  paris
    Pompidou  industrial
    Pompidou  renzo piano   <= How to stop at the 5th result?
    Pompidou  hi-tech
    Pompidou  famous place
    Pompidou  wtf
    etc.      etc.
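
    One way in plain MySQL is a correlated subquery that keeps only the 5 lowest tag ids per building (any other "top 5" ordering can be substituted); a sketch, assuming each (building_id, tag_id) pair appears only once in buildings_tags:

    SELECT b.building_name, t.tag_name
    FROM buildings b
    INNER JOIN buildings_tags bt ON bt.building_id = b.id
    INNER JOIN tags t ON t.id = bt.tag_id
    WHERE (
        SELECT COUNT(*)
        FROM buildings_tags bt2
        WHERE bt2.building_id = bt.building_id
          AND bt2.tag_id <= bt.tag_id
    ) <= 5;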

    Read the article

  • Error Creating View From Importing MysqlDump

    - by Joshua
    I don't speak SQL... Please, can anybody help me? What does this mean?

    Error SQL query:

    /*!50001 CREATE ALGORITHM=UNDEFINED */
    /*!50001 VIEW `v_sr_videntity` AS
    select `t`.`c_id` AS `ID`,
           `User`.`c_id` AS `UserID`,
           `videntityfingerprint`.`ID` AS `VIdentityFingerPrintID`,
           `videntityfingerprint`.`FingerPrintID` AS `FingerPrintID`,
           `videntityfingerprint`.`FingerPrintFingerPrint` AS `FingerPrintFingerPrint`
    from ((`t_SR_u_Identity` `t`
        join `t_SR_u_User` `User` on((`User`.`c_r_Identity` = `t`.`c_id`)))
        join `vi_sr_videntity_0` `VIdentityFingerPrint` on((`videntityfingerprint`.`c_r_Identity` = `t`.`c_id`))) */;

    MySQL said:
    #1054 - Unknown column 'videntityfingerprint.ID' in 'field list'

    What does this mean? What is it expecting? How do I fix it? This file was created by mysqldump, so why are there errors when I import it?
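
    One plausible reading, assuming the dump came from a case-insensitive server (Windows/macOS) and is being imported on Linux: the alias is declared as `VIdentityFingerPrint` but referenced as `videntityfingerprint`, and table aliases are case-sensitive on Unix by default, so the references no longer resolve. A sketch of the statement with the alias case made consistent and nothing else changed:

    CREATE ALGORITHM=UNDEFINED VIEW `v_sr_videntity` AS
    select `t`.`c_id` AS `ID`,
           `User`.`c_id` AS `UserID`,
           `VIdentityFingerPrint`.`ID` AS `VIdentityFingerPrintID`,
           `VIdentityFingerPrint`.`FingerPrintID` AS `FingerPrintID`,
           `VIdentityFingerPrint`.`FingerPrintFingerPrint` AS `FingerPrintFingerPrint`
    from ((`t_SR_u_Identity` `t`
        join `t_SR_u_User` `User` on((`User`.`c_r_Identity` = `t`.`c_id`)))
        join `vi_sr_videntity_0` `VIdentityFingerPrint`
            on((`VIdentityFingerPrint`.`c_r_Identity` = `t`.`c_id`)));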

    Read the article

  • Progress Dialog in Android doesn't Show?

    - by Tyler
    Okay... I am doing something similar to the below:

    private void onCreate() {
        final ProgressDialog dialog = ProgressDialog.show(this, "Please wait..", "Doing stuff..", true);
        Thread t = new Thread() {
            public void run() {
                // do some serious stuff...
                dialog.dismiss();
            }
        };
        t.start();
        t.join();
        stepTwo();
    }

    However, what I am finding is that my progress dialog never even shows up. My app stalls for a moment, so I know it is chugging along inside of thread t, but why doesn't my dialog appear? If I remove the line

    t.join();

    then what I find happens is that the progress dialog does show up, but my app starts stepTwo() before the work in the thread is complete. Any ideas?
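
    t.join() blocks the UI thread, so the dialog never gets a chance to draw (and dismiss() is being touched off the UI thread as well). A sketch of the usual shape, assuming this code lives in an Activity: drop the join and let the background thread post the dismissal and the follow-up work back to the main thread:

    final ProgressDialog dialog = ProgressDialog.show(this, "Please wait..", "Doing stuff..", true);
    Thread t = new Thread() {
        public void run() {
            // do some serious stuff...
            runOnUiThread(new Runnable() {
                public void run() {
                    dialog.dismiss();  // touch the dialog only on the UI thread
                    stepTwo();         // runs only after the background work is done
                }
            });
        }
    };
    t.start();  // no join(): onCreate() returns and the dialog can render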

    Read the article

  • Where clause in joins vs Where clause in Sub Query

    - by Kanavi
    DDL:

    create table t (
        id int Identity(1,1),
        nam varchar(100)
    )
    create table t1 (
        id int Identity(1,1),
        nam varchar(100)
    )

    DML:

    Insert into t( nam) values( 'a')
    Insert into t( nam) values( 'b')
    Insert into t( nam) values( 'c')
    Insert into t( nam) values( 'd')
    Insert into t( nam) values( 'e')
    Insert into t( nam) values( 'f')
    Insert into t1( nam) values( 'aa')
    Insert into t1( nam) values( 'bb')
    Insert into t1( nam) values( 'cc')
    Insert into t1( nam) values( 'dd')
    Insert into t1( nam) values( 'ee')
    Insert into t1( nam) values( 'ff')

    Query 1:

    Select t.*, t1.*
    From t t
    Inner join t1 t1 on t.id = t1.id
    Where t.id = 1

    Query 1 SQL Profiler result: Reads = 56, Duration = 4

    Query 2:

    Select T1.*, K.*
    from (
        Select id, nam
        from t
        Where id = 1
    ) K
    Inner Join t1 T1 on T1.id = K.id

    Query 2 SQL Profiler result: Reads = 262, Duration = 2

    You can also see my SQLFiddle. Which query should be used, and why?
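
    Single runs are noisy (plan caching, cold vs. warm buffers), so a fairer comparison, sketched below for SQL Server, is to run both forms in one batch with statistics on and compare the logical reads and the actual plans; the optimizer normally flattens the derived table, so both forms usually end up with the same plan:

    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    -- form 1: predicate in the WHERE clause of the join
    Select t.*, t1.* From t t Inner join t1 t1 on t.id = t1.id Where t.id = 1;

    -- form 2: predicate pushed into a derived table
    Select T1.*, K.* from (Select id, nam from t Where id = 1) K Inner Join t1 T1 on T1.id = K.id;

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;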

    Read the article
