Search Results

Search found 7706 results on 309 pages for 'inner join'.


  • Complex relationship between tables in NHibernate

    - by Ilya Kogan
    Hi all,

    I'm writing a Fluent NHibernate mapping for a legacy Oracle database. The challenge is that the tables have composite primary keys. If I had total freedom, I would redesign the relationships and auto-generate primary keys, but other applications must read from and write to the same database, so I cannot do that.

    These are the two tables I'll focus on.

    Example data

    Trips table (VehicleId, TripStartTime, TripEndTime, ...):

        1, 10:00, 11:00 ...
        1, 12:00, 15:00 ...
        1, 16:00, 19:00 ...
        2, 12:00, 13:00 ...
        3, 9:00, 18:00 ...

    Faults table (VehicleId, FaultTime, ...):

        1, 13:00 ...
        1, 23:00 ...
        2, 12:30 ...

    In this case, vehicle 1 made three trips and has two faults. The first fault happened during the second trip, and the second fault happened while the vehicle was resting. Vehicle 2 had one trip, during which a fault happened.

    Constraints

    Trips of the same vehicle never overlap. So the tables have an optional one-to-many relationship, because every fault either happens during a trip or it doesn't. If I wanted to join them in SQL, I would write:

        select ...
        from Faults
        left outer join Trips
          on Faults.VehicleId = Trips.VehicleId
          and Faults.FaultTime between Trips.TripStartTime and Trips.TripEndTime

    and then I'd get a dataset where every fault appears exactly once (one-to-many, as I said). Note that there is no Vehicles table, and I don't need one. But I did create a view that contains all VehicleIds from both tables, so I can use it as a junction table.

    What am I actually looking for?

    The tables are huge because they cover years of data, and every time I only need to fetch a range of a few hours. So I need a mapping and a criteria that will run something like the following SQL underneath:

        select ...
        from Faults
        left outer join Trips
          on Faults.VehicleId = Trips.VehicleId
          and Faults.FaultTime between Trips.TripStartTime and Trips.TripEndTime
        where Faults.FaultTime between :p0 and :p1

    Do you have any ideas how to achieve it?

    Note 1: Currently the application shouldn't write to the database, so persistence is not a must, although if the mapping supports persistence, it may help at some point in the future.

    Note 2: I know it's a tough one, so if you give me a great answer, you will be properly rewarded :)

    Thank you for reading this long question, and now I only hope for the best :)

    Read the article

  • NHibernate criteria query inserts an extra order by expression when using JoinType.LeftOuterJoin and Projections

    - by Aaron Palmer
    Why would this NHibernate criteria query produce the SQL query below?

        return Session.CreateCriteria(typeof(FundingCategory), "fc")
            .CreateCriteria("FundingPrograms", "fp")
            .CreateCriteria("Projects", "p", JoinType.LeftOuterJoin)
            .Add(Restrictions.Disjunction()
                .Add(Restrictions.Eq("fp.Recipient.Id", recipientId))
                .Add(Restrictions.Eq("p.Recipient.Id", recipientId))
            )
            .SetProjection(Projections.ProjectionList()
                .Add(Projections.GroupProperty("fc.Name"), "fcn")
                .Add(Projections.Sum("fp.ObligatedAmount"), "fpo")
                .Add(Projections.Sum("p.ObligatedAmount"), "po")
            )
            .AddOrder(Order.Desc("fpo"))
            .AddOrder(Order.Desc("po"))
            .AddOrder(Order.Asc("fcn"))
            .List<object[]>();

        SELECT this_.Name as y0_,
               sum(fp1_.ObligatedAmount) as y1_,
               sum(p2_.ObligatedAmount) as y2_
        FROM fundingCategories this_
        inner join fundingPrograms fp1_ on this_.fundingCategoryId = fp1_.fundingCategoryId
        left outer join projects p2_ on fp1_.fundingProgramId = p2_.fundingProgramId
        WHERE (fp1_.recipientId = 6 /* @p0 */ or p2_.recipientId = 6 /* @p1 */)
        GROUP BY this_.Name
        ORDER BY p2_.name asc, y1_ desc, y2_ desc, y0_ asc

    It is incorrectly putting the p2_.name asc into the ORDER BY clause, causing the query to crash. This only happens when I use JoinType.LeftOuterJoin on my Projects criteria. Is this a known NHibernate bug? I'm using NHibernate 2.0.1.4000. Thanks for any insight.

    Read the article

  • mysql result set joining existing table

    - by Yang
    Is there any way to avoid using a temp table? I am using a query with an aggregate function (SUM) to generate the total of each product. The result looks like this:

        product_name | sum(qty)
        product_1    | 100
        product_2    | 200
        product_5    | 300

    Now I want to join the above result to another table called products, so that I will have a summary like this:

        product_name | sum(qty)
        product_1    | 100
        product_2    | 200
        product_3    | 0
        product_4    | 0
        product_5    | 300

    I know one way of doing this is to dump the first query's result into a temp table and then join it with the products table. Is there a better way?
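    One way to skip the temp table, sketched below, is to make the products table the driving side of a LEFT JOIN against the detail rows and aggregate in a single pass. The detail table and column names (orders, product_name, qty) are assumptions for illustration; the question doesn't name them.

        SELECT p.product_name,
               COALESCE(SUM(o.qty), 0) AS total_qty   -- products with no order rows show 0
        FROM products p
        LEFT JOIN orders o ON o.product_name = p.product_name
        GROUP BY p.product_name;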

    Read the article

  • How to enable i18n from within setup_app in websetup.py ?

    - by daniel
    From within the setup_app function (websetup.py) of a Pylons i18n application which makes use of a db, I was trying to insert multilingual initial content into the db. To do so, the idea was something like:

        # necessary imports here

        def setup_app(command, conf, vars):
            ....
            for lang in langs:
                set_lang(lang)
                content = model.Content()
                content.content = _('content')
                Session.add(content)
                Session.commit()

    Unfortunately it seems that it doesn't work. The set_lang code line fires an exception as follows:

        File ".. i18n/translation.py", line 179, in set_lang
          translator = _get_translator(lang, **kwargs)
        File ".. i18n/translation.py", line 160, in _get_translator
          localedir = os.path.join(rootdir, 'i18n')
        File ".. /posixpath.py", line 67, in join
          elif path == '' or path.endswith('/'):
        AttributeError: 'NoneType' object has no attribute 'endswith'

    Actually, I'm not even sure it is possible to launch the i18n mechanisms from within this setup_app function without an active request object. Has anyone tried a similar trick?

    Read the article

  • PostgreSQL - fetch the row which has the Max value for a column

    - by Joshua Berry
    I'm dealing with a Postgres table (called "lives") that contains records with columns for time_stamp, usr_id, transaction_id, and lives_remaining. I need a query that will give me the most recent lives_remaining total for each usr_id.

    There are multiple users (distinct usr_id's).
    time_stamp is not a unique identifier: sometimes user events (one per row in the table) will occur with the same time_stamp.
    trans_id is unique only for very small time ranges: over time it repeats.
    remaining_lives (for a given user) can both increase and decrease over time.

    Example:

        time_stamp|lives_remaining|usr_id|trans_id
        -----------------------------------------
        07:00     | 1             | 1    | 1
        09:00     | 4             | 2    | 2
        10:00     | 2             | 3    | 3
        10:00     | 1             | 2    | 4
        11:00     | 4             | 1    | 5
        11:00     | 3             | 1    | 6
        13:00     | 3             | 3    | 1

    As I will need to access other columns of the row with the latest data for each given usr_id, I need a query that gives a result like this:

        time_stamp|lives_remaining|usr_id|trans_id
        -----------------------------------------
        11:00     | 3             | 1    | 6
        10:00     | 1             | 2    | 4
        13:00     | 3             | 3    | 1

    As mentioned, each usr_id can gain or lose lives, and sometimes these timestamped events occur so close together that they have the same timestamp! Therefore this query won't work:

        SELECT b.time_stamp, b.lives_remaining, b.usr_id, b.trans_id
        FROM (SELECT usr_id, max(time_stamp) AS max_timestamp
              FROM lives
              GROUP BY usr_id
              ORDER BY usr_id) a
        JOIN lives b ON a.max_timestamp = b.time_stamp

    Instead, I need to use both time_stamp (first) and trans_id (second) to identify the correct row. I also then need to pass that information from the subquery to the main query that will provide the data for the other columns of the appropriate rows. This is the hacked-up query that I've gotten to work:

        SELECT b.time_stamp, b.lives_remaining, b.usr_id, b.trans_id
        FROM (SELECT usr_id, max(time_stamp || '*' || trans_id) AS max_timestamp_transid
              FROM lives
              GROUP BY usr_id
              ORDER BY usr_id) a
        JOIN lives b ON a.max_timestamp_transid = b.time_stamp || '*' || b.trans_id
        ORDER BY b.usr_id

    Okay, so this works, but I don't like it. It requires a query within a query and a self join, and it seems to me that it could be much simpler by grabbing the row that MAX found to have the largest timestamp and trans_id. The table "lives" has tens of millions of rows to parse, so I'd like this query to be as fast and efficient as possible. I'm new to RDBMSs and Postgres in particular, so I know that I need to make effective use of the proper indexes. I'm a bit lost on how to optimize.

    I found a similar discussion here. Can I perform some type of Postgres equivalent to an Oracle analytic function?

    Any advice on accessing related column information used by an aggregate function (like MAX), creating indexes, and creating better queries would be much appreciated!

    P.S. You can use the following to create my example case:

        create TABLE lives (time_stamp timestamp, lives_remaining integer, usr_id integer, trans_id integer);
        insert into lives values ('2000-01-01 07:00', 1, 1, 1);
        insert into lives values ('2000-01-01 09:00', 4, 2, 2);
        insert into lives values ('2000-01-01 10:00', 2, 3, 3);
        insert into lives values ('2000-01-01 10:00', 1, 2, 4);
        insert into lives values ('2000-01-01 11:00', 4, 1, 5);
        insert into lives values ('2000-01-01 11:00', 3, 1, 6);
        insert into lives values ('2000-01-01 13:00', 3, 3, 1);
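    For reference, Postgres has an idiomatic way to express "latest row per usr_id" that avoids the string-concatenation trick: DISTINCT ON keeps the first row of each usr_id group under the given sort order. A minimal sketch against the example table above; an index on (usr_id, time_stamp DESC, trans_id DESC) would typically be paired with it.

        SELECT DISTINCT ON (usr_id)
               time_stamp, lives_remaining, usr_id, trans_id
        FROM lives
        ORDER BY usr_id, time_stamp DESC, trans_id DESC;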

    Read the article

  • Why am I getting a segmentation fault when I use binmode with threads in Perl?

    - by jAndy
    Hi Folks,

    this call

        my $th = threads->create(\&print, "Hello thread World!\n");
        $th->join();

    works fine. But as soon as I add

        binmode(STDOUT, ":encoding(ISO-8859-1)");

    to my script file, I get an error like "segmentation fault" or "access denied". What is wrong with defining an encoding type when trying to call a Perl thread? Example:

        use strict;
        use warnings;
        use threads;

        binmode(STDOUT, ":encoding(ISO-8859-1)");

        my $th = threads->create(\&print, "Hello thread World!\n");
        $th->join();

        sub print {
            print @_;
        }

    This code does not work for me.

    Kind Regards
    --Andy

    Read the article

  • How to call a function after the client finishes a download from the Tornado web server?

    - by Shabbyrobe
    I would like to be able to run some cleanup functions if and only if the client successfully completes the download of a file I'm serving using Tornado. I installed the Firefox throttle tool, had it slow the connection down to dialup speed, and installed this handler to generate a bunch of rubbish random text:

        class CrapHandler(BaseHandler):
            def get(self, token):
                crap = ''.join(random.choice(string.ascii_uppercase + string.digits) for x in range(100000))
                self.write(crap)
                print "done"

    I get the following output from Tornado immediately after making the request:

        done
        I 100524 19:45:45 web:772] 200 GET /123 (192.168.45.108) 195.10ms

    The client then plods along downloading for about 20 seconds. I expected that it would print "done" after the client was done. Also, if I do the following I get pretty much the same result:

        class CrapHandler(BaseHandler):
            @tornado.web.asynchronous
            def get(self, token):
                crap = ''.join(random.choice(string.ascii_uppercase + string.digits) for x in range(100000))
                self.write(crap)
                self.finish()
                print "done"

    Am I missing something fundamental here? Can Tornado even support what I'm trying to do? If not, is there an alternative that does?

    Read the article

  • SSIS - Update flag of selected rows from more than one table

    - by Rob Bowman
    Hi

    I have an SSIS package that copies data from table A to table B and sets a flag in table A so that the same data is not copied subsequently. This works great by using the following as the SQL command text on the ADO.NET Source object:

        update transfer
        set ProcessDateTimeStamp = GetDate(), LastUpdatedBy = 'legacy processed'
        output inserted.*
        where LastUpdatedBy = 'legacy'
        and ProcessDateTimeStamp is not null

    The problem I have is that I need to run a similar data copy but from two source tables, joined on a primary / foreign key - select from table A join table B, then update the flag in table A. I don't think I can use the technique above because I don't know where I'd put the join!

    Is there another way around this problem?

    Thanks
    Rob.
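    One possible shape for the two-table case, sketched in SQL Server's UPDATE ... FROM syntax: the join lives in the FROM clause, so only the table A rows that match table B get flagged, while the OUTPUT clause still feeds the data flow. TableA, TableB and the key columns below are placeholders, not the actual schema from the question.

        update a
        set    a.ProcessDateTimeStamp = GetDate(),
               a.LastUpdatedBy = 'legacy processed'
        output inserted.*
        from   TableA a
        inner join TableB b on b.TableAId = a.Id
        where  a.LastUpdatedBy = 'legacy'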

    Read the article

  • Logical python question - handling directories and files in them

    - by Konstantin
    Hello!

    I'm using this function to extract files from a .zip archive and store them on the server:

        def unzip_file_into_dir(file, dir):
            import sys, zipfile, os, os.path
            os.makedirs(dir, 0777)
            zfobj = zipfile.ZipFile(file)
            for name in zfobj.namelist():
                if name.endswith('/'):
                    os.mkdir(os.path.join(dir, name))
                else:
                    outfile = open(os.path.join(dir, name), 'wb')
                    outfile.write(zfobj.read(name))
                    outfile.close()

    And the usage:

        unzip_file_into_dir('/var/zips/somearchive.zip', '/var/www/extracted_zip')

    somearchive.zip has this structure:

        somearchive.zip
            1.jpeg
            2.jpeg
            another.jpeg

    or, sometimes, this one:

        somearchive.zip
            somedir/
                1.jpeg
                2.jpeg
                another.jpeg

    Question is: how do I modify my function so that my extracted_zip catalog always contains just images, not images in another subdirectory, even if the images are stored in somedir inside the archive?

    Read the article

  • How to combine two sql queries?

    - by plasmuska
    Hi Guys,

    I have a stock table and I would like to create a report that will show how often items were ordered.

    "stock" table:

        item_id | pcs  | operation
        apples  | 100  | order
        oranges | 50   | order
        apples  | -100 | delivery
        pears   | 100  | order
        oranges | -40  | delivery
        apples  | 50   | order
        apples  | 50   | delivery

    Basically I need to join these two queries together. A query which prints stock balances:

        SELECT stock.item_id, Sum(stock.pcs) AS stock_balance
        FROM stock
        GROUP BY stock.item_id;

    A query which prints sales statistics:

        SELECT stock.item_id, Sum(stock.pcs) AS pcs_ordered, Count(stock.item_id) AS number_of_orders
        FROM stock
        GROUP BY stock.item_id, stock.operation
        HAVING stock.operation="order";

    I think that some sort of JOIN would do the job, but I have no idea how to glue the queries together. Desired output:

        item_id | stock_balance | pcs_ordered | number_of_orders
        apples  | 0             | 150         | 2
        oranges | 10            | 50          | 1
        pears   | 100           | 100         | 1

    This is just an example. Maybe I will need to add more conditions because there are more columns. Is there a universal technique for combining multiple queries together?
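    One way to produce the desired output in a single pass over the table, sketched against the stock table above, is conditional aggregation instead of joining two grouped queries:

        SELECT item_id,
               SUM(pcs) AS stock_balance,
               SUM(CASE WHEN operation = 'order' THEN pcs ELSE 0 END) AS pcs_ordered,
               SUM(CASE WHEN operation = 'order' THEN 1 ELSE 0 END)   AS number_of_orders
        FROM stock
        GROUP BY item_id;

    If the two result sets really do need to stay separate queries, the same idea works as a join of two derived tables on item_id.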

    Read the article

  • Deadlock in SQL Server 2005! Two real-time bulk upserts are fighting. WHY?

    - by skimania
    Here's the scenario:

    I've got a table called MarketDataCurrent (MDC) that has live updating stock prices.

    I've got one process called 'LiveFeed' which reads prices streaming from the wire, queues up inserts, and uses a 'bulk upload to temp table then insert/update to MDC table' approach (BulkUpsert).

    I've got another process which then reads this data, computes other data, and then saves the results back into the same table, using a similar BulkUpsert stored proc.

    Thirdly, there are a multitude of users running a C# GUI polling the MDC table and reading updates from it.

    Now, during the day when the data is changing rapidly, things run pretty smoothly, but then, after market hours, we've recently started seeing an increasing number of deadlock exceptions coming out of the database; nowadays we see 10-20 a day. The important thing to note here is that these happen when the values are NOT changing.

    Here's all the relevant info:

    Table def:

        CREATE TABLE [dbo].[MarketDataCurrent](
            [MDID] [int] NOT NULL,
            [LastUpdate] [datetime] NOT NULL,
            [Value] [float] NOT NULL,
            [Source] [varchar](20) NULL,
            CONSTRAINT [PK_MarketDataCurrent] PRIMARY KEY CLUSTERED
            (
                [MDID] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

    I've got a SQL Profiler trace running, catching the deadlocks, and here's what all the graphs look like. Process 258 is calling the following 'BulkUpsert' stored proc, repeatedly, while 73 is calling the next one:

        ALTER proc [dbo].[MarketDataCurrent_BulkUpload]
            @updateTime datetime,
            @source varchar(10)
        as
        begin transaction

        update c with (rowlock)
        set    LastUpdate = getdate(), Value = t.Value, Source = @source
        from   MarketDataCurrent c
        INNER JOIN #MDTUP t ON c.MDID = t.mdid
        where  c.lastUpdate < @updateTime
        and    c.mdid not in (select mdid from MarketData
                              where LiveFeedTicker is not null and PriceSource like 'LiveFeed.%')
        and    c.value <> t.value

        insert into MarketDataCurrent with (rowlock)
        select MDID, getdate(), Value, @source
        from   #MDTUP
        where  mdid not in (select mdid from MarketDataCurrent with (nolock))
        and    mdid not in (select mdid from MarketData
                            where LiveFeedTicker is not null and PriceSource like 'LiveFeed.%')

        commit

    And the other one:

        ALTER PROCEDURE [dbo].[MarketDataCurrent_LiveFeedUpload]
        AS
        begin transaction

        -- Update existing mdid
        UPDATE c WITH (ROWLOCK)
        SET    LastUpdate = t.LastUpdate, Value = t.Value, Source = t.Source
        FROM   MarketDataCurrent c
        INNER JOIN #TEMPTABLE2 t ON c.MDID = t.mdid;

        -- Insert new MDID
        INSERT INTO MarketDataCurrent with (ROWLOCK)
        SELECT *
        FROM   #TEMPTABLE2
        WHERE  MDID NOT IN (SELECT MDID FROM MarketDataCurrent with (NOLOCK))

        -- Clean up the temp table
        DELETE #TEMPTABLE2

        commit

    To clarify, those temp tables are being created by the C# code on the same connection and are populated using the C# SqlBulkCopy class. To me it looks like it's deadlocking on the PK of the table, so I tried removing that PK and switching to a unique constraint instead, but that increased the number of deadlocks 10-fold. I'm totally lost as to what to do about this situation and am open to just about any suggestion. HELP!!
In response to the request for the XDL, here it is: <deadlock-list> <deadlock victim="processc19978"> <process-list> <process id="processaf0b68" taskpriority="0" logused="0" waitresource="KEY: 6:72057594090487808 (d900ed5a6cc6)" waittime="718" ownerId="1102128174" transactionname="user_transaction" lasttranstarted="2010-06-11T16:30:44.750" XDES="0xffffffff817f9a40" lockMode="U" schedulerid="3" kpid="8228" status="suspended" spid="73" sbid="0" ecid="0" priority="0" transcount="2" lastbatchstarted="2010-06-11T16:30:44.750" lastbatchcompleted="2010-06-11T16:30:44.750" clientapp=".Net SqlClient Data Provider" hostname="RISKAPPS_VM" hostpid="3836" loginname="RiskOpt" isolationlevel="read committed (2)" xactid="1102128174" currentdb="6" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128056"> <executionStack> <frame procname="MKP_RISKDB.dbo.MarketDataCurrent_BulkUpload" line="28" stmtstart="1062" stmtend="1720" sqlhandle="0x03000600a28e5e4ef4fd8e00849d00000100000000000000"> UPDATE c WITH (ROWLOCK) SET LastUpdate = getdate(), Value = t.Value, Source = @source FROM MarketDataCurrent c INNER JOIN #MDTUP t ON c.MDID = t.mdid WHERE c.lastUpdate &lt; @updateTime and c.mdid not in (select mdid from MarketData where BloombergTicker is not null and PriceSource like &apos;Blbg.%&apos;) and c.value &lt;&gt; t.value </frame> <frame procname="adhoc" line="1" stmtstart="88" sqlhandle="0x01000600c1653d0598706ca7000000000000000000000000"> exec MarketDataCurrent_BulkUpload @clearBefore, @source </frame> <frame procname="unknown" line="1" sqlhandle="0x000000000000000000000000000000000000000000000000"> unknown </frame> </executionStack> <inputbuf> (@clearBefore datetime,@source nvarchar(10))exec MarketDataCurrent_BulkUpload @clearBefore, @source </inputbuf> </process> <process id="processc19978" taskpriority="0" logused="0" waitresource="KEY: 6:72057594090487808 (74008e31572b)" waittime="718" ownerId="1102128228" transactionname="user_transaction" lasttranstarted="2010-06-11T16:30:44.780" XDES="0x380be9d8" lockMode="U" schedulerid="5" kpid="8464" status="suspended" spid="248" sbid="0" ecid="0" priority="0" transcount="2" lastbatchstarted="2010-06-11T16:30:44.780" lastbatchcompleted="2010-06-11T16:30:44.780" clientapp=".Net SqlClient Data Provider" hostname="RISKBBG_VM" hostpid="4480" loginname="RiskOpt" isolationlevel="read committed (2)" xactid="1102128228" currentdb="6" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128056"> <executionStack> <frame procname="MKP_RISKDB.dbo.MarketDataCurrentBlbgRtUpload" line="14" stmtstart="840" stmtend="1220" sqlhandle="0x03000600005f9d24c8878f00849d00000100000000000000"> UPDATE c WITH (ROWLOCK) SET LastUpdate = t.LastUpdate, Value = t.Value, Source = t.Source FROM MarketDataCurrent c INNER JOIN #TEMPTABLE2 t ON c.MDID = t.mdid; -- Insert new MDID </frame> <frame procname="adhoc" line="1" sqlhandle="0x010006004a58132228bf8d73000000000000000000000000"> MarketDataCurrentBlbgRtUpload </frame> </executionStack> <inputbuf> MarketDataCurrentBlbgRtUpload </inputbuf> </process> </process-list> <resource-list> <keylock hobtid="72057594090487808" dbid="6" objectname="MKP_RISKDB.dbo.MarketDataCurrent" indexname="PK_MarketDataCurrent" id="lock5ba77b00" mode="U" associatedObjectId="72057594090487808"> <owner-list> <owner id="processc19978" mode="U"/> </owner-list> <waiter-list> <waiter id="processaf0b68" mode="U" requestType="wait"/> </waiter-list> </keylock> <keylock hobtid="72057594090487808" dbid="6" objectname="MKP_RISKDB.dbo.MarketDataCurrent" 
indexname="PK_MarketDataCurrent" id="lock65dca340" mode="U" associatedObjectId="72057594090487808"> <owner-list> <owner id="processaf0b68" mode="U"/> </owner-list> <waiter-list> <waiter id="processc19978" mode="U" requestType="wait"/> </waiter-list> </keylock> </resource-list> </deadlock> </deadlock-list>
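    Not an answer from the question itself, but a sketch of one pattern commonly discussed for concurrent set-based upserts that hit the same keys: take UPDLOCK/HOLDLOCK on the target in both the update and the existence check, so two sessions cannot interleave their read and write phases and then deadlock on the clustered PK. Whether it applies here would need testing against this workload; the column list matches the MarketDataCurrent definition above.

        begin transaction

        update c with (updlock, holdlock)
        set    LastUpdate = t.LastUpdate, Value = t.Value, Source = t.Source
        from   MarketDataCurrent c
        inner join #TEMPTABLE2 t on c.MDID = t.mdid

        insert into MarketDataCurrent (MDID, LastUpdate, Value, Source)
        select t.mdid, t.LastUpdate, t.Value, t.Source
        from   #TEMPTABLE2 t
        where  not exists (select 1
                           from MarketDataCurrent c with (updlock, holdlock)
                           where c.MDID = t.mdid)

        commit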

    Read the article

  • How to eager fetch a child collection while joining child collection entities to an association

    - by ShaneC
    Assuming the following fictional layout: a Dealership has many Cars, and a Car has a Manufacturer. I want to write a query that says: get me a Dealership with a Name of X and also get the Cars collection, but use a join against the Manufacturer when you do so. I think this would require usage of ICriteria. I'm thinking something like this:

        var dealershipQuery = Session.CreateCriteria<Dealership>("d")
            .Add(Restrictions.InsensitiveLike("d.Name", "Foo"))
            .CreateAlias("d.Cars", "c")
            .SetFetchMode("d.Cars", FetchMode.Select)
            .SetFetchMode("c.Manufacturer", FetchMode.Join)
            .UniqueResult<Dealership>();

    But the resulting query looks nothing like I would have expected. I'm starting to think a DetachedCriteria may be required somewhere, but I'm not sure. Thoughts?

    Read the article

  • Get python tarfile to skip files without read permission

    - by chris
    I'm trying to write a function that backs up a directory containing files with different permissions to an archive on Windows XP. I'm using the tarfile module to tar the directory. Currently, as soon as the program encounters a file that does not have read permission, it stops, giving the error:

        IOError: [Errno 13] Permission denied: 'path to file'

    I would like it instead to just skip over the files it cannot read rather than end the tar operation. This is the code I am using now:

        def compressTar():
            """Build and gzip the tar archive."""
            folder = 'C:\\Documents and Settings'
            tar = tarfile.open("C:\\WINDOWS\\Program\\archive.tar.gz", "w:gz")
            try:
                print "Attempting to build a backup archive"
                tar.add(folder)
            except:
                print "Permission denied attempting to create a backup archive"
                print "Building a limited archive containing files with read permissions."
                for root, dirs, files in os.walk(folder):
                    for f in files:
                        tar.add(os.path.join(root, f))
                    for d in dirs:
                        tar.add(os.path.join(root, d))

    Read the article

  • [Hibernate Mapping] Relationship set up between a table and a mapping table to use joins

    - by Matthew De'Loughry
    Hi guys,

    I have two tables, a "Module" table and a "StaffModule" table. I want to display a list of modules for which staff are present on the StaffModule mapping table. I've tried

        from Module join Staffmodule sm with ID = sm.MID

    with no luck; I get the following error:

        Path Expected for join!

    However, I thought I had the correct join to allow this, but obviously not. Can anyone help?

    StaffModule HBM:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
                  "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
        <!-- Generated Apr 26, 2010 9:50:23 AM by Hibernate Tools 3.2.1.GA -->
        <hibernate-mapping>
          <class name="Hibernate.Staffmodule" schema="WALK" table="STAFFMODULE">
            <composite-id class="Hibernate.StaffmoduleId" name="id">
              <key-many-to-one name="mid" class="Hibernate.Module">
                <column name="MID"/>
              </key-many-to-one>
              <key-property name="staffid" type="int">
                <column name="STAFFID"/>
              </key-property>
            </composite-id>
          </class>
        </hibernate-mapping>

    and Module.HBM:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
                  "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
        <!-- Generated Apr 26, 2010 9:50:23 AM by Hibernate Tools 3.2.1.GA -->
        <hibernate-mapping>
          <class name="Hibernate.Module" schema="WALK" table="MODULE">
            <id name="id" type="int">
              <column name="ID"/>
              <generator class="assigned"/>
            </id>
            <property name="modulename" type="string">
              <column length="50" name="MODULENAME"/>
            </property>
            <property name="teacherid" type="int">
              <column name="TEACHERID" not-null="true"/>
            </property>
          </class>
        </hibernate-mapping>

    Hope that's enough information, and thanks in advance.

    Read the article

  • How to display all the dates between two given dates in SQL

    - by Gopal
    Using SQL Server 2000.

    If the start date is 06/23/2008 and the end date is 06/30/2008, then I need the output of the query as:

        06/23/2008
        06/24/2008
        06/25/2008
        .
        .
        .
        06/30/2008

    I created a table named Integers which has one column; the column values are 0, 1, 2, 3, 4, 5, 6, 7, 8, 9. Then I used the below mentioned query.

    Tried query:

        SELECT DATEADD(d, H.i * 100 + T.i * 10 + U.i, '" & dtpfrom.Value & "') AS Dates
        FROM integers H
        CROSS JOIN integers T
        CROSS JOIN integers U
        ORDER BY dates

    The above query is displaying 999 dates only. 999 dates means (365 + 365 + 269) dates only. Suppose I want to select more than 3 years (01/01/2003 to 01/01/2008); the above query is not suitable. How do I modify my query? Or is there any other query available for the above condition? Please kindly provide me the query.
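    One way to stretch the same integers-table approach past 1,000 days, sketched below, is to cross join a fourth copy for the thousands digit and filter on the end date. @StartDate and @EndDate stand in for the concatenated date values used in the question.

        SELECT DATEADD(d, Th.i * 1000 + H.i * 100 + T.i * 10 + U.i, @StartDate) AS Dates
        FROM integers Th
        CROSS JOIN integers H
        CROSS JOIN integers T
        CROSS JOIN integers U
        WHERE DATEADD(d, Th.i * 1000 + H.i * 100 + T.i * 10 + U.i, @StartDate) <= @EndDate
        ORDER BY Dates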

    Read the article

  • C# IQueryable<T> does my code make sense?

    - by Pandiya Chendur
    I use this to get a list of materials from my database:

        public IQueryable<MaterialsObj> FindAllMaterials()
        {
            var materials = from m in db.Materials
                            join Mt in db.MeasurementTypes on m.MeasurementTypeId equals Mt.Id
                            select new MaterialsObj()
                            {
                                Id = Convert.ToInt64(m.Mat_id),
                                Mat_Name = m.Mat_Name,
                                Mes_Name = Mt.Name,
                            };
            return materials;
        }

    But I have seen an example that has this:

        public IQueryable<MaterialsObj> FindAllMaterials()
        {
            return from m in db.Materials
                   join Mt in db.MeasurementTypes on m.MeasurementTypeId equals Mt.Id
                   select new MaterialsObj()
                   {
                       Id = Convert.ToInt64(m.Mat_id),
                       Mat_Name = m.Mat_Name,
                       Mes_Name = Mt.Name,
                   };
        }

    Is there a really big difference between the two methods? Assigning my LINQ query to a variable and returning it - is it a good/bad practice? Any suggestion on which I should use?

    Read the article

  • Python - Things I shouldn't be doing?

    - by cornjuliox
    I've got a few questions about best practices in Python. Not too long ago I would do something like this with my code:

        ...
        junk_block = "".join(open("foo.txt", "rb").read().split())
        ...

    I don't do this anymore because I can see that it makes code harder to read, but would the code run slower if I split the statements up like so:

        f_obj = open("foo.txt", "rb")
        f_data = f_obj.read()
        f_data_list = f_data.split()
        junk_block = "".join(f_data_list)

    I also noticed that there's nothing keeping you from doing an 'import' within a function block. Is there any reason why I should do that?

    Read the article

  • nested form & habtm

    - by brewster
    So I am trying to save to a join table in a habtm relationship, but I am having problems. From my view, I pass in a group id with:

        = link_to "Create New User", new_user_url(:group => 1)

    User model (user.rb):

        class User < ActiveRecord::Base
          has_and_belongs_to_many :user_groups
          accepts_nested_attributes_for :user_groups
        end

    UserGroups model (user_groups.rb):

        class UserGroup < ActiveRecord::Base
          has_and_belongs_to_many :users
        end

    users_controller.rb:

        def new
          @user = User.new(:user_group_ids => params[:group])
        end

    In the new user view, I have access to the User.user_groups object, however when I submit the form, not only does it not save into my join table (user_groups_users), but the object is no longer there. All the other objects & attributes of my User object are persistent except for the user group. I just started learning Rails, so maybe I am missing something conceptually here, but I have been really struggling with this.

    Read the article

  • Is do-notation specific to "base:GHC.Base.Monad"?

    - by yairchu
    The idea that the standard Monad class is flawed and that it should actually extend Functor or Pointed is floating around. I'm not necessarily claiming that it is the right thing to do, but suppose that one was trying to do it:

        import Prelude hiding (Monad(..))

        class Functor m => Monad m where
          return :: a -> m a

          join :: m (m a) -> m a
          join = (>>= id)

          (>>=) :: m a -> (a -> m b) -> m b
          a >>= t = join (fmap t a)

          (>>) :: m a -> m b -> m b
          a >> b = a >>= const b

    So far so good, but then when trying to use do-notation:

        whileM :: Monad m => m Bool -> m ()
        whileM iteration = do
          done <- iteration
          if done
            then return ()
            else whileM iteration

    the compiler complains:

        Could not deduce (base:GHC.Base.Monad m) from the context (Monad m)

    Question: Does do-notation work only for base:GHC.Base.Monad? Is there a way to make it work with an alternative Monad class?

    Extra context: What I really want to do is replace base:Control.Arrow.Arrow with a "generalized" Arrow class:

        {-# LANGUAGE TypeFamilies #-}

        class Category a => Arrow a where
          type Pair a :: * -> * -> *
          arr    :: (b -> c) -> a b c
          first  :: a b c -> a (Pair a b d) (Pair a c d)
          second :: a b c -> a (Pair a d b) (Pair a d c)
          (***)  :: a b c -> a b' c' -> a (Pair a b b') (Pair a c c')
          (&&&)  :: a b c -> a b c' -> a b (Pair a c c')

    And then use the Arrow's proc-notation with my Arrow class, but that fails like in the example above of do-notation and Monad. I'll mostly use Either as my pair type constructor, not the (,) type constructor as with the current Arrow class. This might allow me to make the code of my toy RTS game (cabal install DefendTheKind) much prettier.

    Read the article

  • sql query question / count

    - by scheibenkleister
    Hi,

    I have houses that belong to streets. A user can buy several houses. How do I find out if the user owns an entire street?

        street table with columns (id, name)
        house table with columns (id, street_id) [street_id is a foreign key]
        owner table with columns (id, house_id, user_id) [join table with foreign keys]

    So far, I'm using count, which returns this result:

        select count(*), street_id
        from owner
        left join house on owner.house_id = house.id
        where user_id = 1
        group by street_id

        count(*) | street_id
        3        | 1
        2        | 2

    A more general count:

        select count(*), street_id
        from house
        group by street_id

    returns:

        count(*) | street_id
        3        | 1
        3        | 2

    How can I find out that user 1 owns the entire street 1 but not street 2? Thanks.
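    One way to ask "which streets does user 1 own completely", sketched against the three tables above: count every house on the street and compare it with the count of houses that have an owner row for that user.

        select h.street_id
        from house h
        left join owner o on o.house_id = h.id and o.user_id = 1
        group by h.street_id
        having count(*) = count(o.id);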

    Read the article

  • MySQL convert rows to columns performance problem

    - by Tarski
    I am doing a query that converts rows to columns, similar to this post, but have encountered a performance problem. Here is the query:

        SELECT Info.Customer,
               Answers.Answer,
               Answers.AnswerDescription,
               Details.Code1,
               Details.Code2,
               Details.Code3
        FROM Info
        LEFT OUTER JOIN Answers ON Info.AnswerID = Answers.AnswerID
        LEFT OUTER JOIN (SELECT ReferenceNo,
                                MAX(CASE DetailsIndicator WHEN 'cde1' THEN DetailsCode ELSE NULL END) Code1,
                                MAX(CASE DetailsIndicator WHEN 'cde2' THEN DetailsCode ELSE NULL END) Code2,
                                MAX(CASE DetailsIndicator WHEN 'cde3' THEN DetailsCode ELSE NULL END) Code3
                         FROM DetailsData
                         GROUP BY ReferenceNo) Details
            ON Info.ReferenceNo = Details.ReferenceNo

    There are fewer than 300 rows returned, but the Details table is about 180 thousand rows. The query takes 45 seconds to run and needs to take only a few seconds. When I type show processlist; into MySQL it is hanging on "Sending Data". Any thoughts as to what the performance problem might be?
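    One thing worth experimenting with, sketched below: the derived table pivots all of DetailsData before the outer join can discard anything, so restricting it to the ReferenceNos that actually appear in Info (here via an extra join; an index on DetailsData(ReferenceNo, DetailsIndicator, DetailsCode) would be the usual companion) may cut the grouped row count dramatically.

        SELECT Info.Customer,
               Answers.Answer,
               Answers.AnswerDescription,
               Details.Code1,
               Details.Code2,
               Details.Code3
        FROM Info
        LEFT OUTER JOIN Answers ON Info.AnswerID = Answers.AnswerID
        LEFT OUTER JOIN (SELECT d.ReferenceNo,
                                MAX(CASE d.DetailsIndicator WHEN 'cde1' THEN d.DetailsCode END) Code1,
                                MAX(CASE d.DetailsIndicator WHEN 'cde2' THEN d.DetailsCode END) Code2,
                                MAX(CASE d.DetailsIndicator WHEN 'cde3' THEN d.DetailsCode END) Code3
                         FROM DetailsData d
                         INNER JOIN Info i ON i.ReferenceNo = d.ReferenceNo
                         GROUP BY d.ReferenceNo) Details
            ON Info.ReferenceNo = Details.ReferenceNo;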

    Read the article

  • how to pass url in mailto's body

    - by Simer
    I need to send a URL of my site in the body so that the user can click on it to join my site, but it is coming through like this in the mail client:

        Link goes here http://www.example.com/foo.php?this=a

    The part of the URL after the & is not coming through, so the whole joining process fails. How can I pass URLs like this in a mailto body?

        http://www.example.com/foo.php?this=a&join=abc&user454

        <a href="mailto:[email protected]?body=Link goes here http://www.example.com/foo.php?this=a&amp;really=long&amp;url=with&amp;lots=and&amp;lots=and&amp;lots=of&prameters=on_it">Link text goes here</a>

    I have searched a lot but didn't get the right answer. Thanks

    Read the article

  • Two Tables Serving as one Model in Rails

    - by matsko
    Is it possible in Rails to set up one model which is dependent on a join from two tables? This would mean that for the model record to be found/updated/destroyed there would need to be records in both database tables, linked together in a join. The model would just be all the columns of both tables wrapped together, which may then be used for forms and so on. This way, when the model gets created/updated, it is just one form variable hash that gets applied to the model. Is this possible in Rails 2 or 3?

    Read the article
