Search Results

Search found 6276 results on 252 pages for 'join'.

  • SQLCMD.EXE generates ugly report. How to format it?

    - by Juri Bogdanov
    I wrote a batch file that runs a SQL query like this:

      use [AxDWH_Central_Reporting]
      GO
      EXEC sp_spaceused @updateusage = N'TRUE'
      GO

    It displays two result sets and produces an ugly report littered with unwanted 'P' characters. See below:

      Changed database context to 'AxDWH_Central_Reporting'.
      database_name Pdatabase_size Punallocated space
      --------------------------------------------------------------------------------P------------------P------------------
      AxDWH_Central_Reporting P10485.69 MB P7436.85 MB
      reserved Pdata Pindex_size Punused
      ------------------P------------------P------------------P------------------
      3121176 KB P3111728 KB P7744 KB P1704 KB
      ----------------------------------------------------------------

    I also tried to generate a single result set from this procedure with the following query:

      declare @dbname sysname, @dbsize bigint, @logsize bigint, @reservedpages bigint

      select @reservedpages = sum(a.total_pages)
      from sys.partitions p
      join sys.allocation_units a on p.partition_id = a.container_id
      left join sys.internal_tables it on p.object_id = it.object_id

      select @dbsize  = sum(convert(bigint, case when status & 64 = 0  then size else 0 end)),
             @logsize = sum(convert(bigint, case when status & 64 <> 0 then size else 0 end))
      from dbo.sysfiles

      select 'database name' = db_name(),
             'database size' = ltrim(str((convert(dec(15,2), @dbsize) + convert(dec(15,2), @logsize)) * 8192 / 1048576, 15, 2) + ' MB'),
             'unallocated space' = ltrim(str((case when @dbsize >= @reservedpages
                                                   then (convert(dec(15,2), @dbsize) - convert(dec(15,2), @reservedpages)) * 8192 / 1048576
                                                   else 0 end), 15, 2) + ' MB')

    But I got a similarly ugly report:

      database name Pdatabase size Punallocated space
      --------------------------------------------------------------------------------P------------------P------------------
      master P5.75 MB P1.52 MB

      (1 rows affected)

    Is it possible to change the layout formatting of the report to make it more readable?
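
    A minimal sketch of one way to tame the output, on the assumption that the stray 'P' characters are sqlcmd's column separator (the -s switch sets it, and -W trims the padding) and that a rough size report is enough; the query below is illustrative, not taken from the question:

      -- Narrow, fixed-width columns leave sqlcmd nothing to pad;
      -- SET NOCOUNT ON suppresses the "(1 rows affected)" trailer.
      SET NOCOUNT ON;
      SELECT CAST(DB_NAME() AS varchar(30)) AS database_name,
             CAST(LTRIM(STR(SUM(size) * 8192.0 / 1048576, 15, 2)) + ' MB' AS varchar(20)) AS database_size
      FROM sys.database_files;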

  • LINQ self referencing query

    - by Chris
    I have the following SQL query:

      select p1.[id], p1.[useraccountid], p1.[subject], p1.[message], p1.[views], p1.[parentid],
             case when p2.[created] is null then p1.[created] else p2.[created] end as LastUpdate
      from forumposts p1
      left join (
          select parentid, max(created) as [created]
          from forumposts
          group by parentid
      ) p2 on p2.parentid = p1.id
      where p1.[parentid] is null
      order by LastUpdate desc

    Using the following class:

      public class ForumPost : PersistedObject
      {
          public int Views { get; set; }
          public string Message { get; set; }
          public string Subject { get; set; }
          public ForumPost Parent { get; set; }
          public UserAccount UserAccount { get; set; }
          public IList<ForumPost> Replies { get; set; }
      }

    How would I replicate such a query in LINQ? I've tried several variations, but I can't seem to get the join syntax right. Is this simply a query that is too complicated for LINQ? Can it be done using nested queries somehow? The purpose of the query is to find the most recently updated posts, i.e. replying to a post bumps it to the top of the list. Replies are identified by the ParentID column, which is self-referencing.

  • SQL Syntax to count unique users completing a task

    - by Belliez
    I have the following code, which shows me which users have completed tickets; it lists each user once for every ticket they close, i.e.:

      Paul
      Matt
      Matt
      Bob
      Matt
      Paul
      Matt
      Matt

    At the moment I manually count each user myself to see their totals for the day. EDIT: changed the expected output to columns instead of rows. What I have been trying to do is get SQL Server to do this for me, i.e. make the final result look like:

      Paul | 2
      Matt | 5
      Bob  | 1

    The code I am currently using is below, and I would be grateful if someone could help me change it so it outputs something similar to the above.

      DECLARE @StartDate DateTime;
      DECLARE @EndDate DateTime;

      -- Date format: YYYY-MM-DD
      SET @StartDate = '2013-11-06 00:00:00'
      SET @EndDate = GETDATE() -- Today

      SELECT (select Username from Membership where UserId = Ticket.CompletedBy) as TicketStatusChangedBy
      FROM Ticket
      INNER JOIN TicketStatus ON Ticket.TicketStatusID = TicketStatus.TicketStatusID
      INNER JOIN Membership ON Ticket.CheckedInBy = Membership.UserId
      WHERE TicketStatus.TicketStatusName = 'Completed'
        and Ticket.ClosedDate >= @StartDate --(GETDATE() - 1)
        and Ticket.ClosedDate <= @EndDate --(GETDATE()-0)
      ORDER BY Ticket.CompletedBy ASC, Ticket.ClosedDate ASC

    Thank you for your help and time.
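
    A minimal sketch of the grouping the question describes, reusing the question's own tables and filters; whether CompletedBy or CheckedInBy identifies the closing user is an assumption, and the query is untested:

      -- Count closed tickets per user by grouping on the username
      -- instead of returning one row per ticket.
      SELECT m.Username AS TicketStatusChangedBy,
             COUNT(*)   AS TicketsCompleted
      FROM Ticket t
      INNER JOIN TicketStatus ts ON t.TicketStatusID = ts.TicketStatusID
      INNER JOIN Membership m    ON t.CompletedBy    = m.UserId
      WHERE ts.TicketStatusName = 'Completed'
        AND t.ClosedDate >= @StartDate
        AND t.ClosedDate <= @EndDate
      GROUP BY m.Username
      ORDER BY TicketsCompleted DESC;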

  • Help Converting T-SQL Query to LINQ Query

    - by campbelt
    I am new to LINQ, and so I'm struggling with some queries that I'm sure are pretty simple. In any case, I have been hitting my head against this for a while, but I'm stumped. Can anyone here help me convert this T-SQL query into a LINQ query? Once I see how it is done, I'm sure I'll have some questions about the syntax:

      SELECT BlogTitle
      FROM Blogs b
      JOIN BlogComments bc ON b.BlogID = bc.BlogID
      WHERE b.Deleted = 0
        AND b.Draft = 0
        AND b.[Default] = 0
        AND bc.Deleted = 0
      GROUP BY BlogTitle
      ORDER BY MAX([bc].[Timestamp]) DESC

    Just to show that I have tried to solve this on my own, here is what I've come up with so far, though it doesn't compile, let alone work...

      var iqueryable = from blog in db.Blogs
                       join blogComment in db.BlogComments on blog.BlogID equals blogComment.BlogID
                       where blog.Deleted == false && blog.Draft == false
                             && blog.Default == false && blogComment.Deleted == false
                       group blogComment by blog.BlogID into blogGroup
                       orderby blogGroup.Max(blogComment => blogComment.Timestamp)
                       select blogGroup;

  • XML/PHP : Content is not allowed in prolog

    - by Tristan
    Hello, I get this error message and I don't know where the problem comes from:

      <?php
      include "DBconnection.class.php";
      $sql = DBConnection::getInstance();

      $requete = "SELECT g.siteweb, g.offreDedie, g.coupon, g.only_dedi, g.transparence, g.abonnement, s.GSP_nom as nom, COUNT(s.GSP_nom) as nb_votes, TRUNCATE(AVG(vote), 2) as qualite, TRUNCATE(AVG(prix), 2) as rapport, TRUNCATE(AVG(serviceClient), 2) as serviceCli, TRUNCATE(AVG(interface), 2) as interface, TRUNCATE(AVG(services), 2) as services FROM votes_serveur AS v INNER JOIN serveur AS s ON v.idServ = s.idServ INNER JOIN gsp AS g ON s.GSP_nom = g.nom WHERE s.valide = 1 GROUP BY s.GSP_nom";
      $sql->query($requete);

      $xml  = '<?xml version="1.0" encoding="UTF-8" ?>';
      $xml .= '<GamerCertified>';
      while($row = $sql->fetchArray()){
          $moyenne_services = ($row['services'] + $row['serviceCli'] + $row['interface']) / 3;
          $moyenne_services = round($moyenne_services, 2);
          $moyenne_ge = ($row['services'] + $row['serviceCli'] + $row['interface'] + $row['qualite'] + $row['rapport']) / 5;
          $moyenne_ge = round($moyenne_ge, 2);
          $xml .= '<GSP>';
          $xml .= '<nom>'.$row["nom"].'</nom>';
          $xml .= '<nombre-votes>'.$row["nb_votes"].'</nombre-votes>';
          $xml .= '<services>'.$moyenne_services.'</services>';
          $xml .= '<qualite>'.$row["qualite"].'</qualite>';
          $xml .= '<prix>'.$row["rapport"].'</prix>';
          $xml .= '<label-transparence>'.$row["transparence"].'</label-transparence>';
          $xml .= '<moyenne-generale>'.$moyenne_ge.'</moyenne-generale>';
          $xml .= '<serveurs-dedies>'.$row["offreDedie"].'</serveurs-dedies>';
          $xml .= '</GSP>';
      }
      $xml .= '</GamerCertified>';
      echo $xml;

    Thanks

  • Make Python Socket Server More Efficient

    - by BenMills
    I have very little experience working with sockets and multithreaded programming so to learn more I decided to see if I could hack together a little python socket server to power a chat room. I ended up getting it working pretty well but then I noticed my server's CPU usage spiked up over 100% when I had it running in the background. Here is my code in full: http://gist.github.com/332132 I know this is a pretty open ended question so besides just helping with my code are there any good articles I could read that could help me learn more about this? My full code:

      import select
      import socket
      import sys
      import threading
      from daemon import Daemon

      class Server:
          def __init__(self):
              self.host = ''
              self.port = 9998
              self.backlog = 5
              self.size = 1024
              self.server = None
              self.threads = []
              self.send_count = 0

          def open_socket(self):
              try:
                  self.server = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
                  self.server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
                  self.server.bind((self.host, self.port))
                  self.server.listen(5)
                  print "Server Started..."
              except socket.error, (value, message):
                  if self.server:
                      self.server.close()
                  print "Could not open socket: " + message
                  sys.exit(1)

          def remove_thread(self, t):
              t.join()

          def send_to_children(self, msg):
              self.send_count = 0
              for t in self.threads:
                  t.send_msg(msg)
              print 'Sent to '+str(self.send_count)+" of "+str(len(self.threads))

          def run(self):
              self.open_socket()
              input = [self.server, sys.stdin]
              running = 1
              while running:
                  inputready, outputready, exceptready = select.select(input, [], [])
                  for s in inputready:
                      if s == self.server:
                          # handle the server socket
                          c = Client(self.server.accept(), self)
                          c.start()
                          self.threads.append(c)
                          print "Num of clients: "+str(len(self.threads))
              self.server.close()
              for c in self.threads:
                  c.join()

      class Client(threading.Thread):
          def __init__(self, (client, address), server):
              threading.Thread.__init__(self)
              self.client = client
              self.address = address
              self.size = 1024
              self.server = server
              self.running = True

          def send_msg(self, msg):
              if self.running:
                  self.client.send(msg)
                  self.server.send_count += 1

          def run(self):
              while self.running:
                  data = self.client.recv(self.size)
                  if data:
                      print data
                      self.server.send_to_children(data)
                  else:
                      self.running = False
                      self.server.threads.remove(self)
              self.client.close()

      """ Run Server """
      class DaemonServer(Daemon):
          def run(self):
              s = Server()
              s.run()

      if __name__ == "__main__":
          d = DaemonServer('/var/servers/fserver.pid')
          if len(sys.argv) == 2:
              if 'start' == sys.argv[1]:
                  d.start()
              elif 'stop' == sys.argv[1]:
                  d.stop()
              elif 'restart' == sys.argv[1]:
                  d.restart()
              else:
                  print "Unknown command"
                  sys.exit(2)
              sys.exit(0)
          else:
              print "usage: %s start|stop|restart" % sys.argv[0]
              sys.exit(2)

  • Hibernate limitations on using variables in queries

    - by sammichy
    I had asked the following question: I have the following table structure for a Player table:

      Player {
          Long playerID;
          Long points;
          Long rank;
      }

    Assuming that the playerID and the points have valid values, can I update the rank for all the players based on the number of points in a single query? If two people have the same number of points, they should tie for the rank. I received this answer from Daniel Vassalo (thank you):

      UPDATE player
      JOIN (SELECT p.playerID,
                   IF(@lastPoint <> p.points, @curRank := @curRank + 1, @curRank) AS rank,
                   IF(@lastPoint = p.points, @curRank := @curRank + 1, @curRank),
                   @lastPoint := p.points
            FROM player p
            JOIN (SELECT @curRank := 0, @lastPoint := 0) r
            ORDER BY p.points DESC
           ) ranks ON (ranks.playerID = player.playerID)
      SET player.rank = ranks.rank;

    When I try to execute this as a native query in Hibernate, the following exception is thrown:

      java.lang.IllegalArgumentException: org.hibernate.QueryException:
          Space is not allowed after parameter prefix ':'

    Apparently this has been an open issue for the last couple of years. I want to know whether the ranking query can be made to work either without using any variables in the SQL query, or using some workaround for Hibernate.
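
    A minimal sketch of a variable-free alternative, assuming MySQL and the question's column names: the ranks are computed with a correlated subquery into a temporary table and then joined back, so Hibernate's parameter parser never sees ':='. Each statement would need to run as its own native query; untested:

      -- Dense rank without user variables: a player's rank is the number of
      -- distinct point totals greater than or equal to their own (ties share a rank).
      CREATE TEMPORARY TABLE player_ranks AS
          SELECT p.playerID,
                 (SELECT COUNT(DISTINCT p2.points)
                  FROM player p2
                  WHERE p2.points >= p.points) AS new_rank
          FROM player p;

      UPDATE player p
      JOIN player_ranks r ON r.playerID = p.playerID
      SET p.`rank` = r.new_rank;

      DROP TEMPORARY TABLE player_ranks;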

  • Use the Django ORM in a standalone script (again)

    - by Rishabh Manocha
    I'm trying to use the Django ORM in some standalone screen-scraping scripts. I know this question has been asked before, but I'm unable to figure out a good solution for my particular problem. I have a Django project with defined models. What I would like to do is use these models and the ORM in my scraping script. My directory structure is something like this:

      project
          scrape              # scraping scripts
              ...
              test.py
          web
              django_project
                  settings.py
                  ...         # Django files

    I tried doing the following in project/scrape/test.py:

      print os.path.join(os.path.abspath('..'), 'web', 'django_project')
      sys.path.append(os.path.join(os.path.abspath('..'), 'web', 'django_project'))
      print sys.path
      print "-------"
      os.environ['DJANGO_SETTINGS_MODULE'] = 'django_project.settings'
      #print os.environ
      from django_project.myapp.models import MyModel
      print MyModel.objects.count()

    However, I get an ImportError when I try to run test.py:

      Traceback (most recent call last):
        File "test.py", line 12, in <module>
          from django_project.myapp.models import MyModel
      ImportError: No module named django_project.myapp.models

    One solution I found to work around this problem is to create a symbolic link to ../web/govcheck in the scrape folder:

      :scrape rmanocha$ ln -s ../web/govcheck ./govcheck

    With this, I can then run test.py just fine. However, this seems like a hack, and more importantly, it is not very portable (I will have to create this symbolic link everywhere I run this code). So I was wondering if anyone has any better solutions for my problem?

  • Complex query in mysql

    - by Satish
    I have two tables, reports and holidays:

      reports:  (username varchar(30), activity varchar(30), hours int(3), report_date date)
      holidays: (holiday_name varchar(30), holiday_date date)

    select * from reports gives:

      +----------+-----------+---------+------------+
      | username | activity  | hours   | date       |
      +----------+-----------+---------+------------+
      | prasoon  | testing   | 3       | 2009-01-01 |
      | prasoon  | coding    | 4       | 2009-01-03 |
      | gautam   | coding    | 1       | 2009-01-05 |
      | prasoon  | coding    | 4       | 2009-01-06 |
      | prasoon  | coding    | 4       | 2009-01-10 |
      | gautam   | coding    | 4       | 2009-01-10 |
      +----------+-----------+---------+------------+

    select * from holidays gives:

      +--------------+---------------+
      | holiday_name | holiday_date  |
      +--------------+---------------+
      | Diwali       | 2009-01-02    |
      | Holi         | 2009-01-05    |
      +--------------+---------------+

    When I used the following query:

      SELECT dates.date AS date,
             CASE WHEN holiday_name IS NULL THEN COALESCE(reports.activity, 'Absent')
                  WHEN holiday_name IS NOT NULL and reports.activity IS NOT NULL THEN reports.activity
                  ELSE '' END AS activity,
             CASE WHEN holiday_name IS NULL THEN COALESCE(reports.hours, 'Absent')
                  WHEN holiday_name IS NOT NULL and reports.hours IS NOT NULL THEN reports.hours
                  ELSE '' END AS hours,
             CASE WHEN holiday_name IS NULL THEN COALESCE(holidays.holiday_name, '')
                  ELSE holidays.holiday_name END AS holiday_name
      FROM dates
      LEFT OUTER JOIN reports ON dates.date = reports.date
      LEFT OUTER JOIN holidays ON dates.date = holidays.holiday_date
      where reports.username='gautam'
        and dates.date>='2009-01-01' and dates.date<='2009-01-10';

    I got the following output:

      +------------+-----------+---------+------------+
      | date       | activity  | hours   | holiday    |
      +------------+-----------+---------+------------+
      | 2009-01-05 | coding    | 1       | Holi       |
      | 2009-01-10 | coding    | 4       |            |
      +------------+-----------+---------+------------+

    but I expected this:

      +------------+-----------+---------+------------+
      | date       | activity  | hours   | holiday    |
      +------------+-----------+---------+------------+
      | 2009-01-01 | Absent    | Absent  |            |
      | 2009-01-02 |           |         | Diwali     |
      | 2009-01-03 | Absent    | Absent  |            |
      | 2009-01-04 | Absent    | Absent  |            |
      | 2009-01-05 | Coding    | 1       | Holi       |
      | 2009-01-06 | Absent    | Absent  |            |
      | 2009-01-07 | Absent    | Absent  |            |
      | 2009-01-08 | Absent    | Absent  |            |
      | 2009-01-09 | Absent    | Absent  |            |
      | 2009-01-10 | Coding    | 4       |            |
      +------------+-----------+---------+------------+

    How can I modify the above query to get the desired output (for a particular user, gautam in this case)?
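
    A minimal sketch of the usual fix, assuming the same tables: filtering on reports.username in the WHERE clause turns the LEFT JOIN back into an inner join, so the condition has to move into the join condition itself (untested):

      SELECT dates.date AS date,
             CASE WHEN holidays.holiday_name IS NOT NULL THEN COALESCE(reports.activity, '')
                  ELSE COALESCE(reports.activity, 'Absent') END AS activity,
             CASE WHEN holidays.holiday_name IS NOT NULL THEN COALESCE(reports.hours, '')
                  ELSE COALESCE(reports.hours, 'Absent') END AS hours,
             COALESCE(holidays.holiday_name, '') AS holiday
      FROM dates
      LEFT OUTER JOIN reports
             ON dates.date = reports.date
            AND reports.username = 'gautam'   -- filter inside the join, not in WHERE
      LEFT OUTER JOIN holidays
             ON dates.date = holidays.holiday_date
      WHERE dates.date >= '2009-01-01'
        AND dates.date <= '2009-01-10';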

  • Can't get any speedup from parallelizing Quicksort using Pthreads

    - by Murat Ayfer
    I'm using Pthreads to create a new thread for each partition after the list is split into the right and left halves (less than and greater than the pivot). I do this recursively until I reach the maximum number of allowed threads. When I use printfs to follow what goes on in the program, I clearly see that each thread is doing its delegated work in parallel. However, using a single process is always the fastest. As soon as I try to use more threads, the time it takes to finish almost doubles, and it keeps increasing with the number of threads. I am allowed to use up to 16 processors on the server I am running it on. The algorithm goes like this: split the array into right and left halves by comparing the elements to the pivot; start a new thread for each half and wait until the threads join back; if there are more available threads, they can create more recursively; each thread waits for its children to join. Everything makes sense to me, and sorting works perfectly well, but more threads make it slow down immensely. I tried setting a minimum number of elements per partition for a thread to be started (e.g. 50000). I tried an approach where, when a thread is done, it allows another thread to be started, which led to hundreds of threads starting and finishing throughout. I think the overhead was way too much, so I got rid of that, and if a thread was done executing, no new thread was created. I got a little more speedup but it was still a lot slower than a single process. The code I used is here: http://pastebin.com/UaGsjcq2 Does anybody have any clue as to what I could be doing wrong?

  • scraping text from multiple html files into a single csv file

    - by Lulu
    I have just over 1500 HTML pages (1.html to 1500.html). I have written code using Beautiful Soup that extracts most of the data I need, but it "misses" some of the data within the table.

    My input: e.g. file 1500.html

    My code:

      #!/usr/bin/env python
      import glob
      import codecs
      from BeautifulSoup import BeautifulSoup

      with codecs.open('dump2.csv', "w", encoding="utf-8") as csvfile:
          for file in glob.glob('*html*'):
              print 'Processing', file
              soup = BeautifulSoup(open(file).read())
              rows = soup.findAll('tr')
              for tr in rows:
                  cols = tr.findAll('td')
                  #print >> csvfile, "#".join(col.string for col in cols)
                  #print >> csvfile, "#".join(td.find(text=True))
                  for col in cols:
                      print >> csvfile, col.string
                  print >> csvfile, "==="
              print >> csvfile, "***"

    Output: one CSV file with 1500 lines of text and columns of data. For some reason my code does not pull out all the required data, but "misses" some of it; e.g. the Address1 and Address2 data at the start of the table do not come out. I modified the code to put in * and === separators, and I then use Perl to turn the dump into a clean CSV file. Unfortunately I'm not sure how to rework my code to get all the data I'm looking for!

  • My AJAX is only firing once,

    - by sea_1987
    Hi there, I have some AJAX that is fired when a checkbox is clicked; it essentially sends a query string to a PHP script and then returns the relevant HTML. However, if I select one checkbox it works fine, but if I then select another checkbox as well as the previous one I get no activity whatsoever, not even any errors in Firebug. It is very curious. Does anyone have any ideas?

      //Location AJAX
      //var dataObject = new Object();
      var selected = new Array();
      //alert(selected);
      $('#areas input.radio').change(function(){ // will trigger when the checked status changes
          var checked = $(this).attr("checked"); // will return "checked" or false I think.
          // Do whatever request you like with the checked status
          if(checked == true) {
              //selected.join('&');
              selected = $('input:checked').map(function() {
                  return $(this).attr('name')+"="+$(this).val();
              }).get();
              getQuery = selected.join('&')+"&location_submit=Next";
              alert(getQuery);
              $.ajax({
                  type:"POST",
                  url:"/search/location",
                  data: getQuery,
                  success:function(data){
                      //alert(getQuery);
                      //console.log(data);
                      $('body.secEmp').html(data);
                  }
              });
          } else {
              //do something to remove the content here
              alert($(this).attr('name'));
          }
      });

  • SSRS - Oracle DB, Passing Date parameter

    - by davidl98
    I am using SSRS with an Oracle database. I need to prompt the user, when running the report, to enter a date for it. What is the best way to add the parameter to my SSRS report? I'm having trouble finding the right date format. Under the "Report Parameter" menu, I have set up the report parameter using the DateTime data type, but I keep getting this error: "ORA-01843: Not a Valid Month". Thank you for your help.

      Select a.OPR_Name, a.OPR, a.Trans_Desc, a.Trans_Start_Date,
             Cast(a.S_Date as date) as S_Date,
             Sum(a.Duration) as T
      From (
          Select US_F.OPR_Name, ITH_F.OPR, ITH_F.ITH_RID, ITH_F.TRANSACT,
                 Transact.DESC_1 as Trans_Desc,
                 To_CHAR(ITH_F.Start_Time,'DD-Mon-YY') as Trans_Start_Date,
                 To_CHAR(ITH_F.Start_Time,'MM/DD/YYYY') as S_Date,
                 Substr(To_CHAR(ITH_F.Start_Time,'HH24:MI'),1,6) as Start_Time,
                 To_CHAR(ITH_F.End_Time,'DD-Mon-YY') as Trans_End_Date,
                 Substr(To_CHAR(ITH_F.End_Time,'HH24:MI'),1,6) as End_Time,
                 Cast(Case When To_CHAR(ITH_F.Start_Time,'DD-Mon-YY') = To_CHAR(ITH_F.End_Time,'DD-Mon-YY')
                           Then (((To_CHAR(ITH_F.End_Time,'SSSSS') - To_CHAR(ITH_F.Start_Time,'SSSSS')) / 60))/60
                           Else ((86399 - (To_CHAR(ITH_F.Start_Time,'SSSSS')) + To_CHAR(ITH_F.End_Time,'SSSSS'))/60)/60
                      End as Decimal(3,1)) as Duration
          from Elite_76_W1.ITH_F
          Left Join Elite_76_W1.Transact on Transact.Transact = ITH_F.Transact
          Left Join Elite_76_W1.US_F on US_F.OPR = ITH_F.OPR
          Where ITH_F.TRANSACT not in ('ASN','QC','LGOT')
      ) a
      Where a.S_Date = @Event_Date
      Having Sum(a.Duration) < 0
      Group By a.OPR_Name, a.OPR, a.Trans_Desc, a.Trans_Start_Date, a.S_Date
      Order by a.OPR_Name
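
    A minimal sketch of a comparison that usually avoids ORA-01843, assuming the inner query keeps Start_Time as a DATE instead of a character string and that the Oracle provider binds the report parameter as :Event_Date (Oracle named parameters use a colon rather than '@'); the parameter name is an assumption:

      -- Compare dates as dates: TRUNC strips the time portion, so no
      -- character-format conversion (and no ORA-01843) is involved.
      SELECT OPR, TRUNC(Start_Time) AS S_Date
      FROM   Elite_76_W1.ITH_F
      WHERE  TRUNC(Start_Time) = :Event_Date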

  • SQL characteristic function for avg dates

    - by holden
    I have a query which I use to grab specific dates and a price for each date, but now I'd like to use something similar to grab the average prices for particular days of the week. Here's my current query, which works for specific dates, pulling from a table called availables:

      SELECT rooms.name, rooms.roomtype, rooms.id, max(availables.updated_at),
             MAX(IF(to_days(availables.bookdate) - to_days('2009-12-10') = 0, (availables.price*0.66795805223432), '')) AS day1,
             MAX(IF(to_days(availables.bookdate) - to_days('2009-12-10') = 1, (availables.price*0.66795805223432), '')) AS day2,
             MAX(IF(to_days(availables.bookdate) - to_days('2009-12-10') = 2, (availables.price*0.66795805223432), '')) AS day3,
             MAX(IF(to_days(availables.bookdate) - to_days('2009-12-10') = 3, (availables.price*0.66795805223432), '')) AS day4,
             MAX(IF(to_days(availables.bookdate) - to_days('2009-12-10') = 4, (availables.price*0.66795805223432), '')) AS day5,
             MAX(IF(to_days(availables.bookdate) - to_days('2009-12-10') = 5, (availables.price*0.66795805223432), '')) AS day6,
             MAX(IF(to_days(availables.bookdate) - to_days('2009-12-10') = 6, (availables.price*0.66795805223432), '')) AS day7,
             MIN(spots) as spots
      FROM `availables`
      INNER JOIN rooms ON availables.room_id=rooms.id
      WHERE rooms.hotel_id = '5064'
        AND bookdate BETWEEN '2009-12-10' AND DATE_ADD('2009-12-10', INTERVAL 6 DAY)
      GROUP BY rooms.name
      ORDER BY rooms.ppl

    My first stab, which doesn't work, probably because DAYOFWEEK behaves quite differently from TO_DAYS:

      SELECT rooms.id, rooms.name,
             MAX(IF(DAYOFWEEK(availables.bookdate) - DAYOFWEEK('2009-12-10') = 0, (availables.price*0.66795805223432), '')) AS day1,
             MAX(IF(DAYOFWEEK(availables.bookdate) - DAYOFWEEK('2009-12-10') = 1, (availables.price*0.66795805223432), '')) AS day2,
             MAX(IF(DAYOFWEEK(availables.bookdate) - DAYOFWEEK('2009-12-10') = 2, (availables.price*0.66795805223432), '')) AS day3,
             MAX(IF(DAYOFWEEK(availables.bookdate) - DAYOFWEEK('2009-12-10') = 3, (availables.price*0.66795805223432), '')) AS day4,
             MAX(IF(DAYOFWEEK(availables.bookdate) - DAYOFWEEK('2009-12-10') = 4, (availables.price*0.66795805223432), '')) AS day5,
             MAX(IF(DAYOFWEEK(availables.bookdate) - DAYOFWEEK('2009-12-10') = 5, (availables.price*0.66795805223432), '')) AS day6,
             MAX(IF(DAYOFWEEK(availables.bookdate) - DAYOFWEEK('2009-12-10') = 6, (availables.price*0.66795805223432), '')) AS day7,
             rooms.ppl AS spots
      FROM `availables`
      INNER JOIN `rooms` ON `rooms`.id = `availables`.room_id
      WHERE (rooms.hotel_id = 5064 AND rooms.ppl > 3 AND availables.price > 0 AND availables.spots > 1)
      GROUP BY rooms.name
      ORDER BY rooms.ppl

    Maybe I'm making this crazy hard and someone knows a much simpler way. It takes data that looks like this:

      #Availables
      id  room_id  price  spots  bookdate
      1   26       $5     5      2009-10-20
      2   26       $6     5      2009-10-21

    to:

      +----+-------+--------------------+---------------------+---------------------+---------------------+------+------+------+------+
      | id | spots | name               | day1                | day2                | day3                | day4 | day5 | day6 | day7 |
      +----+-------+--------------------+---------------------+---------------------+---------------------+------+------+------+------+
      | 25 | 4     | Blue Room          | 14.9889786921381408 | 14.9889786921381408 | 14.9889786921381408 |      |      |      |      |
      | 26 | 6     | Whatever           | 13.7398971344599624 | 13.7398971344599624 | 13.7398971344599624 |      |      |      |      |
      | 27 | 8     | Some name          | 11.2417340191036056 | 11.2417340191036056 | 11.2417340191036056 |      |      |      |      |
      | 28 | 8     | Another            | 9.9926524614254272  | 9.9926524614254272  | 9.9926524614254272  |      |      |      |      |
      | 29 | 10    | Stuff              | 7.4944893460690704  | 7.4944893460690704  | 7.4944893460690704  |      |      |      |      |
      +----+-------+--------------------+---------------------+---------------------+---------------------+------+------+------+------+
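
    A minimal sketch of averaging per weekday rather than pivoting on a fixed start date, assuming the same tables and that MySQL's DAYOFWEEK numbering (1 = Sunday through 7 = Saturday) is acceptable; the 0.66795805223432 factor is kept from the question, and the query is untested:

      -- AVG ignores NULLs, so each column averages only the prices
      -- booked on that weekday.
      SELECT rooms.id, rooms.name,
             AVG(IF(DAYOFWEEK(availables.bookdate) = 2, availables.price * 0.66795805223432, NULL)) AS mon,
             AVG(IF(DAYOFWEEK(availables.bookdate) = 3, availables.price * 0.66795805223432, NULL)) AS tue,
             AVG(IF(DAYOFWEEK(availables.bookdate) = 4, availables.price * 0.66795805223432, NULL)) AS wed,
             AVG(IF(DAYOFWEEK(availables.bookdate) = 5, availables.price * 0.66795805223432, NULL)) AS thu,
             AVG(IF(DAYOFWEEK(availables.bookdate) = 6, availables.price * 0.66795805223432, NULL)) AS fri,
             AVG(IF(DAYOFWEEK(availables.bookdate) = 7, availables.price * 0.66795805223432, NULL)) AS sat,
             AVG(IF(DAYOFWEEK(availables.bookdate) = 1, availables.price * 0.66795805223432, NULL)) AS sun,
             MIN(availables.spots) AS spots
      FROM availables
      INNER JOIN rooms ON availables.room_id = rooms.id
      WHERE rooms.hotel_id = '5064'
      GROUP BY rooms.id, rooms.name
      ORDER BY rooms.ppl;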

  • Query Optimizing Request

    - by mithilatw
    I am very sorry if this question is not structured in a very helpful manner, or if the question itself is not a very good one! I need to update an MSSQL table called component every 10 minutes based on information from another table called materials_progress. I have nearly 60000 records in component and more than 10000 records in materials_progress. I wrote an update query to do the job, but it takes longer than 4 minutes to execute! Here is the query:

      UPDATE component
      SET stage_id = CASE WHEN t.required_quantity <= t.total_received THEN 27
                          WHEN t.total_ordered < t.total_received THEN 18
                          ELSE 18
                     END
      FROM (
          SELECT mp.job_id, mp.line_no, mp.component, l.quantity AS line_quantity,
                 CASE WHEN mp.component_name_id = 2 THEN l.quantity*2 ELSE l.quantity END AS required_quantity,
                 SUM(ordered) AS total_ordered,
                 SUM(received) AS total_received,
                 c.component_id
          FROM line l
          LEFT JOIN component c ON c.line_id = l.line_id
          LEFT JOIN materials_progress mp ON l.job_id = mp.job_id
                                         AND l.line_no = mp.line_no
                                         AND c.component_name_id = mp.component_name_id
          WHERE mp.job_id IS NOT NULL
            AND (mp.cancelled IS NULL OR mp.cancelled = 0)
            AND (mp.manual_override IS NULL OR mp.manual_override = 0)
            AND c.stage_id = 18
          GROUP BY mp.job_id, mp.line_no, mp.component, l.quantity, mp.component_name_id, component_id
      ) AS t
      WHERE component.component_id = t.component_id

    I am not going to explain the scenario as it is too complex. Could somebody please tell me what makes this query so expensive and suggest a way around it? Thank you very much in advance!
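
    A minimal sketch of supporting indexes the join and filter columns suggest, assuming SQL Server 2005 or later, that the ordered/received columns live on materials_progress, and that nothing similar already exists; the index names are made up:

      -- Let the inner SELECT seek on the join keys and the stage_id filter
      -- instead of scanning both tables on every run.
      CREATE INDEX IX_component_line_stage
          ON component (line_id, stage_id, component_name_id)
          INCLUDE (component_id);

      CREATE INDEX IX_matprog_job_line_name
          ON materials_progress (job_id, line_no, component_name_id)
          INCLUDE (component, ordered, received, cancelled, manual_override);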

  • Spring-hibernate mapping problem

    - by James
    I have a Spring/Hibernate application which is failing to map an object properly. Basically I have two domain objects, a Post and a User. The semantics are that every Post has one corresponding User. The Post domain object looks roughly as follows:

      class Post {
          private int pId;
          private String attribute;
          ...
          private User user;
          //getters and setters here
      }

    As you can see, Post contains a reference to User. When I load a Post object, I want the corresponding User object to be loaded (lazily, only when it's needed). My mapping looks as follows:

      <class name="com...Post" table="post">
          <id name="pId" column="PostId" />
          <property name="attribute" column="Attribute" type="java.lang.String" />
          <one-to-one name="User" fetch="join" class="com...User"></one-to-one>
      </class>

    And of course I have a basic mapping for User set up. As far as my table schema is concerned, I have a table called post with a foreign key UserId which links to the user table. I thought this setup should work, BUT when I load a page that forces the lazy loading of the User object, I notice the following Hibernate query being generated:

      Select ... from post this_ left outer join user user2_ on this.PostId=user2_.UserId ...

    Obviously this is wrong: it should be joining UserId from post with UserId from user, but instead it incorrectly joins PostId from post (its primary key) with UserId from user. Any ideas? Thanks!

  • Hibernate - query caching/second level cache does not work by value object containing subitems

    - by Zoltan Hamori
    Hi! I have been struggling with the following problem: I have a value object containing different panels. Each panel has a list of fields. Mapping:

      <class name="com.aviseurope.core.application.RACountryPanels" table="CTRY" schema="DBDEV1A" where="PEARL_CTRY='Y'" lazy="join">
          <cache usage="read-only"/>
          <id name="ctryCode">
              <column name="CTRY_CD_ID" sql-type="VARCHAR2(2)" not-null="true"/>
          </id>
          <bag name="panelPE" table="RA_COUNTRY_MAPPING" fetch="join" where="MANDATORY_FLAG!='N'">
              <key column="COUNTRY_LOCATION_ID"/>
              <many-to-many class="com.aviseurope.core.application.RAFieldVO" column="RA_FIELD_MID" where="PANEL_ID='PE'"/>
          </bag>
      </class>

    I use the following criteria to get the value object:

      Session m_Session = HibernateUtil.currentSession();
      m_Criteria = m_Session.createCriteria(RACountryPanels.class);
      m_Criteria.add(Expression.eq("ctryCode", p_Country));
      m_Criteria.setCacheable(true);

    As far as I can see, the query cache contains only the main select, something like:

      select * from CTRY where ctry_cd_id=?

    Both RACountryPanels and RAFieldVO are second-level cached. If I check the second-level cache content I can see that it contains the RAFields and the RACountryPanels as well, and I can see the select from CTRY where ctry_cd_id=... in the query cache region as well. When I call the servlet it seems that it is using the cache, but the second time it does not. If I check the content of the cache using JMX, everything seems to be OK, but when I measure the object access time, it seems that it does not always use the cache. Cheers, Zoltan

  • SQL 2005 indexed queries slower than unindexed queries

    - by uos??
    Adding a seemingly perfect index is having an unexpectedly adverse effect on query performance...

      -- [Data] has a predictable structure and a simple clustered index on the primary key:
      ALTER TABLE [dbo].[Data] ADD PRIMARY KEY CLUSTERED ( [ID] )

      -- My query joins the table on itself, looking for a certain kind of "overlapping" records
      SELECT DISTINCT [Data].ID AS [ID]
      FROM dbo.[Data] AS [Data]
      JOIN dbo.[Data] AS [Compared]
        ON [Data].[A] = [Compared].[A]
       AND [Data].[B] = [Compared].[B]
       AND [Data].[C] = [Compared].[C]
       AND ([Data].[D] = [Compared].[D] OR [Data].[E] = [Compared].[E])
       AND [Data].[F] <> [Compared].[F]
      WHERE 1=1
        AND [Data].[A] = @A
        AND @CS <= [Data].[C] AND [Data].[C] < @CE -- Between a range

    [Data] has about a quarter-million records so far, and 10% to 50% of the data satisfies the where clause depending on @A, @CS, and @CE. As is, the query takes 1 second to return about 300 rows when querying 10%, and 30 seconds to return 3000 rows when querying 50% of the data. Curiously, the estimated/actual execution plan indicates two parallel Clustered Index Scans, but the clustered index is only on the ID, which isn't part of the conditions of the query, only the output. ?? If I add this hand-crafted [IDX_A_B_C_D_E_F] index, which I fully expected to improve performance, the query slows down by a factor of 8 (8 seconds for 10% and 4 minutes for 50%). The estimated/actual execution plans show an Index Seek, which seems like the right thing to be doing, but why is it so slow??

      CREATE UNIQUE INDEX [IDX_A_B_C_D_E_F] ON [dbo].[Data]
          ([A], [B], [C], [D], [E], [F])
          INCLUDE ([ID], [X], [Y], [Z]);

    The Database Engine Tuning Advisor suggests a similar index with no noticeable difference in performance from this one. Moving AND [Data].[F] <> [Compared].[F] from the join condition to the where clause makes no difference in performance. I need these and other indexes for other queries. I'm sure I could hint that the query should use the clustered index, since that's currently winning, but we all know it is not as optimized as it could be, and without a proper index I can expect the performance to get much worse with additional data. What gives?
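
    A minimal sketch of an alternative index shape worth testing, on the assumption that the slowdown comes from keying the index on all six columns when only [A] and the [C] range are directly seekable here; keying on just those two and carrying the rest as included columns keeps the seek narrow while still covering the query. Column names follow the question, the index name is made up:

      CREATE INDEX [IDX_A_C_covering] ON [dbo].[Data] ([A], [C])
          INCLUDE ([B], [D], [E], [F], [ID]);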

  • Data in two databases, eager spool resulting in query

    - by Valkyrie
    I have two databases in SQL2k5: one that holds a large amount of static data (SQL Database 1, never updated but frequently inserted into) and one that holds relational data (SQL Database 2) related to the static data. They're separated mainly because of corporate guidelines and business requirements: assume for the following problem that combining them is not practical. There are places in SQLDB2 where PKs in SQLDB1 are referenced; triggers control the referential integrity, since cross-database relationships are troublesome in SQL Server. BUT, because of the large amount of data in SQLDB1, I'm getting eager spools on queries that join from the Id in SQLDB2 that references the data in SQLDB1. (With me so far? Maybe an example will help:)

      SELECT t.Id, t.Name, t2.Company
      FROM SQLDB1.table t
      INNER JOIN SQLDB2.table t2 ON t.Id = t2.FKId

    This query results in an eager spool that's 84% of the load of the query; the table in SQLDB1 has 35M rows, so it's completely choking this query. I can't create a view on the table in SQLDB1 and use that as my FK/index; it doesn't want me to create a constraint based on a view. Anyone have any idea how I can fix this huge bottleneck? (Short of putting the static data in the first db: believe me, I've argued that one until I'm blue in the face, to no avail.) Thanks! valkyrie Edit: I also can't create an indexed view, because you can't put schemabinding on a view that references a table outside the database where the view resides. Dang it.
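
    A minimal sketch of the kind of index that often removes a spool like this, assuming the referencing column on the SQLDB2 side is not yet indexed; the object and column names are placeholders taken from the example, not a confirmed fix:

      -- An index on the referencing FK column gives the optimizer a seekable,
      -- covering access path instead of spooling one side of the join.
      CREATE NONCLUSTERED INDEX IX_table_FKId
          ON SQLDB2.dbo.[table] (FKId)
          INCLUDE (Company);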

  • define javascript functions on iframe facebook app inside <fb:serverfbml> tag

    - by user233486
    Hi all, how can we define a JS function inside <fb:serverfbml>? I tried loading the JavaScript file just before the closing tag, but I still can't call the function from the JavaScript file. Here is the FBML tag:

      <fb:serverfbml>
        <script type="text/fbml">
          <fb:fbml>
            <a href="#" id="this" onclick="do_colors(this); return false">Hello World!</a>
            <script src="http://absolute.path.to/your/javascript/file.js"></script>
          </fb:fbml>
        </script>
      </fb:serverfbml>

    And here is the JavaScript function in file.js:

      function random_int(lo, hi) {
          return Math.floor((Math.random() * (hi - lo)) + lo)
      }

      function do_colors(obj) {
          var r = random_int(0, 255),
              b = random_int(0, 255),
              g = random_int(0, 255);
          obj.setStyle({background: 'rgb('+[r, g, b].join(',')+')',
                        color: 'rgb('+[r<128?r+128:r-128, g<128?g+128:g-128, b<128?b+128:b-128].join(',')+')'});
      }

    I use Rails and Facebooker to develop the application. Any ideas or suggestions for defining JavaScript functions? Thanks.

  • Usage of putty in command line from Hudson

    - by kij
    Hi, I'm trying to use PuTTY from the command line in a Hudson job. The command is the following one:

      putty -ssh -2 -P 22 USERNAME@SERVER_ADDR -pw PASS -m command.txt

    where 'command.txt' is a shell script to execute on the server through SSH. If I launch this command from the Windows command prompt, it works: the shell script is executed on the server machine. If I launch a build of the Hudson job configured with this batch command, it doesn't work. The build keeps running... and running... and running... without doing anything, and I have to stop it manually. So my question is: is it possible to launch an external program (i.e. putty) from a Hudson job? ps: I tried the SSH plugin but... it's not a really good plugin (pre/post build only, fail status of the launched commands not caught by Hudson, etc.). Thanks in advance for your help. Best regards. kij

    EDIT: These are the build logs:

      [workspace] $ cmd /c call C:\WINDOWS\TEMP\hudson7429256014041663539.bat
      C:\Hudson\jobs\Artifact deployer\workspace>putty -ssh -2 -P 22 USER@SERV_ADD -pw PASS -m com.txt
      Le build a été annulé
      Finished: ABORTED

    And the Hudson.err.log file at the same time (after a stop):

      3 juin 2010 18:27:28 hudson.model.Run run
      INFO: Artifact deployer #6 aborted
      java.lang.InterruptedException
          at java.lang.ProcessImpl.waitFor(Native Method)
          at hudson.Proc$LocalProc.join(Proc.java:179)
          at hudson.Launcher$ProcStarter.join(Launcher.java:278)
          at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:83)
          at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:58)
          at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:19)
          at hudson.model.AbstractBuild$AbstractRunner.perform(AbstractBuild.java:601)
          at hudson.model.Build$RunnerImpl.build(Build.java:174)
          at hudson.model.Build$RunnerImpl.doRun(Build.java:138)
          at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:416)
          at hudson.model.Run.run(Run.java:1241)
          at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
          at hudson.model.ResourceController.execute(ResourceController.java:88)
          at hudson.model.Executor.run(Executor.java:124)

    My shell script only writes "hello" to a "hello.txt" file on the server, and nothing is done.

  • How to write this typical MySQL query (how to use a subquery column in the main query)

    - by I Like PHP
    I have two tables, shown below.

      table_joining

      id  join_id(PK)  transfer_id(FK)  unit_id  transfer_date  joining_date
      1   j_1          t_1              u_1      2010-06-05     2010-03-05
      2   j_2          t_2              u_3      2010-05-10     2010-03-10
      3   j_3          t_3              u_6      2010-04-10     2010-01-01
      4   j_5          NULL             u_3      NULL           2010-06-05
      5   j_6          NULL             u_4      NULL           2010-05-05

      table_transfer

      id  transfer_id(PK)  pastUnitId  futureUnitId  effective_transfer_date
      1   t_1              u_3         u_1           2010-06-05
      2   t_2              u_6         u_1           2010-05-10
      3   t_3              u_5         u_3           2010-04-10

    Now I want the details of every employee (by join_id) who is currently working in unit u_3. That means I want only:

      j_1 (transferred, but the effective_transfer_date is a future date, so right now still in u_3)
      j_2 (transferred, and right now in u_3 because the effective_transfer_date has passed)
      j_6 (right now in u_3 and never transferred)

    As far as I know, these are the steps I need to take care of:

      1. First check in table_joining whether transfer_id is NULL or not.
      2. If transfer_id is NULL, then look for unit_id = 'u_3' where joining_date <= CURDATE() (meaning that person has already joined u_3).
      3. If transfer_id is NOT NULL, then go to table_transfer using transfer_id (the foreign key reference).
      4. Now check the effective_transfer_date for that transfer_id, i.e. whether effective_transfer_date <= CURDATE().
      5. If the transfer date has passed (meaning the transfer has been done), then return futureUnitId, otherwise return pastUnitId.

    I wrote two separate queries but don't know how to join them. For steps 1 and 2:

      SELECT unit_id
      FROM table_joining
      WHERE joining_date <= CURDATE()
        AND transfer_id IS NULL
        AND unit_id = 'u_3'

    For step 5:

      SELECT IF(effective_transfer_date <= CURDATE(), futureUnitId, pastUnitId) AS currentUnitID
      FROM table_transfer
      -- here, how do we select only those rows which have currentUnitID = 'u_3'?

    Please guide me through the process. I'm just confused by JOINs. I think using a LEFT JOIN can return the data I need, but I'm not getting how to implement it. Thanks for helping me, as always.
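
    A minimal sketch of how the two pieces are usually combined with a LEFT JOIN, following the steps listed in the question; it assumes a row's current unit is unit_id until a transfer's effective date has passed, and relies on MySQL allowing HAVING to reference a select alias (untested):

      SELECT j.join_id, j.unit_id,
             -- current unit: no transfer row -> unit_id; otherwise pick past/future
             -- depending on whether the effective date has passed
             IF(t.transfer_id IS NULL,
                j.unit_id,
                IF(t.effective_transfer_date <= CURDATE(), t.futureUnitId, t.pastUnitId)) AS currentUnitID
      FROM table_joining j
      LEFT JOIN table_transfer t ON t.transfer_id = j.transfer_id
      WHERE j.joining_date <= CURDATE()
      HAVING currentUnitID = 'u_3';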

  • No acceleration for OpenGL and ImportError for modules that exist

    - by Aku
    I'm writing a program using wxPython and OpenGL. The program works, but without any antialiasing, and I get these error messages (I'm using Arch Linux):

      INFO:OpenGL.acceleratesupport:No OpenGL_accelerate module loaded: No module named OpenGL_accelerate
      INFO:OpenGL.formathandler:Unable to load registered array format handler numpy:
      Traceback (most recent call last):
        File "/usr/lib/python2.6/site-packages/OpenGL/arrays/formathandler.py", line 44, in loadPlugin
          plugin_class = entrypoint.load()
        File "/usr/lib/python2.6/site-packages/OpenGL/plugins.py", line 14, in load
          return importByName( self.import_path )
        File "/usr/lib/python2.6/site-packages/OpenGL/plugins.py", line 28, in importByName
          module = __import__( ".".join(moduleName), {}, {}, moduleName)
        File "/usr/lib/python2.6/site-packages/OpenGL/arrays/numpymodule.py", line 11, in <module>
          raise ImportError( """No numpy module present: %s"""%(err))
      ImportError: No numpy module present: No module named numpy
      INFO:OpenGL.formathandler:Unable to load registered array format handler numeric:
      Traceback (most recent call last):
        File "/usr/lib/python2.6/site-packages/OpenGL/arrays/formathandler.py", line 44, in loadPlugin
          plugin_class = entrypoint.load()
        File "/usr/lib/python2.6/site-packages/OpenGL/plugins.py", line 14, in load
          return importByName( self.import_path )
        File "/usr/lib/python2.6/site-packages/OpenGL/plugins.py", line 28, in importByName
          module = __import__( ".".join(moduleName), {}, {}, moduleName)
        File "/usr/lib/python2.6/site-packages/OpenGL/arrays/numeric.py", line 15, in <module>
          raise ImportError( """No Numeric module present: %s"""%(err))
      ImportError: No Numeric module present: No module named Numeric

    However, when I look into my site-packages folder, I see those modules present there. I have a wxPython demo program that uses GLCanvas, and it works fine, without any errors. My program is quite similar to the GLCanvas demo, involving just translations, rotations, drawing quads and some basic lighting. What am I doing wrong here? (The code is over 200 lines; if necessary I'll edit this and put it here.)

  • Fulltext search on many tables

    - by Rob
    I have three tables, all of which have a column with a fulltext index. The user will enter search terms into a single text box, and then all three tables will be searched. This is better explained with an example:

      documents
          doc_id
          name            FULLTEXT

      table2
          id
          doc_id
          a_field         FULLTEXT

      table3
          id
          doc_id
          another_field   FULLTEXT

    (I realise this looks stupid, but that's because I've removed all the other fields and tables to simplify it.) So basically I want to do a fulltext search on name, a_field and another_field, and then show the results as a list of documents, preferably with what caused each document to be found, e.g. if another_field matched, I would display what another_field is. I began working on a system whereby three fulltext search queries are performed and the results inserted into a table with a structure like:

      search_results
          table_name
          row_id
          score

    (This could later be made to cache results for a few days with, e.g., a hash of the search terms.) This idea has two problems. The first is that the same document can appear in the search results up to three times with different scores. Instead of that, if the search term is matched in two tables, there should be one result with a higher score. The second is that parsing the results is difficult. I want to display a list of documents, but I don't immediately know the doc_id without a join of some kind; however, the table to join to depends on the table_name column, and I'm not sure how to accomplish that. Wanting to search multiple related tables like this must be a common thing, so I guess what I'm asking is: am I approaching this in the right way? Can someone tell me the best way of doing it, please?
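
    A minimal sketch of the usual single-query shape for this, assuming MySQL fulltext search (MATCH ... AGAINST) and the question's table and column names; summing the scores per document is just one reasonable way to collapse multi-table hits into a single row, and 'search terms' is a placeholder:

      SELECT d.doc_id, d.name, SUM(hits.score) AS total_score
      FROM (
          SELECT doc_id, MATCH(name) AGAINST ('search terms') AS score
          FROM documents
          WHERE MATCH(name) AGAINST ('search terms')
        UNION ALL
          SELECT doc_id, MATCH(a_field) AGAINST ('search terms') AS score
          FROM table2
          WHERE MATCH(a_field) AGAINST ('search terms')
        UNION ALL
          SELECT doc_id, MATCH(another_field) AGAINST ('search terms') AS score
          FROM table3
          WHERE MATCH(another_field) AGAINST ('search terms')
      ) AS hits
      JOIN documents d ON d.doc_id = hits.doc_id
      GROUP BY d.doc_id, d.name
      ORDER BY total_score DESC;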

  • Boost Thread Synchronization

    - by Dave18
    I don't see synchronized output when I comment out the line wait(1) in thread(). Can I make them run at the same time (one after another) without having to use wait(1)?

      #include <boost/thread.hpp>
      #include <iostream>

      void wait(int seconds)
      {
          boost::this_thread::sleep(boost::posix_time::seconds(seconds));
      }

      boost::mutex mutex;

      void thread()
      {
          for (int i = 0; i < 100; ++i)
          {
              wait(1);
              mutex.lock();
              std::cout << "Thread " << boost::this_thread::get_id() << ": " << i << std::endl;
              mutex.unlock();
          }
      }

      int main()
      {
          boost::thread t1(thread);
          boost::thread t2(thread);
          t1.join();
          t2.join();
      }
