Search Results

Search found 5233 results on 210 pages for 'a records'.

  • PeopleSoft queries - performance

    - by DBa
    Hi, I'm facing a problem with PeopleSoft queries (using an Oracle backend database): when a user sets off a rather complex query involving multiple records, PS does an enforced join of the security records, producing SQL like this:

        select ....
        from ps_job a, PS_EMPL_SRCQRY a1,
             ps_table2 b, ps_sec_rcd2 b1,
             ps_table3 c, ps_sec_rcd3 c1
        where (...security joins a-a1, b-b1, c-c1...)
          and (...joins of a, b and c...)
          and a.setid_dept = 'XYZ';

    (Let's assume the last condition has a high selectivity and there is an index on the column.) Obviously, due to the arrangement of the conditions, a huge join is created first and written to the temp segment, and when the last condition is finally applied, only a small subset is selected. A query formulated this way is very likely to hit the preset timeout of the APPSRV, and even of the QRYSRV. When writing the query manually, I would rather move the most selective condition to the start, limiting the amount of data being handled to a considerable degree. Any ideas on how to make PS behave like this? Actually, already rewriting the "Oracle-styled" SQL as ANSI SQL seems to accelerate the queries - however, PS writes Oracle-style queries... Thanks in advance, DBa
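
    Along the lines of the rewrite the asker mentions, a minimal sketch of the ANSI-join form with the selective predicate pushed into an inline view (table names taken from the question; the join conditions stay elided as in the original):

        select ....
        from (select * from ps_job where setid_dept = 'XYZ') a  -- selective filter applied first
        join PS_EMPL_SRCQRY a1 on (/* security join a-a1 */)
        join ps_table2 b       on (/* join of a and b */)
        join ps_sec_rcd2 b1    on (/* security join b-b1 */)
        join ps_table3 c       on (/* join of b and c */)
        join ps_sec_rcd3 c1    on (/* security join c-c1 */);

    Oracle may merge the inline view back into the main query; a /*+ NO_MERGE */ hint on it would pin the early filtering. Whether PeopleSoft's query generator can be made to emit this shape is exactly the open question.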

  • NHibernate lazy properties behavior?

    - by GeReV
    I've been trying to get NHibernate into development for a project I'm working on at my workplace. Since I have to put a strong emphasis on performance, I've been running a proof-of-concept stress test on an existing project's table with thousands of records, all of which contain a large text column. However, when selecting a collection of these records, the select statement takes a relatively long time to execute, apparently due to the aforementioned column. The first solution that comes to mind is setting this property as lazy:

        <property name="Content" lazy="true"/>

    But there seems to be no difference in the SQL generated by NHibernate. My question is, how do lazy properties behave in NHibernate? Are there some type limitations I could be missing? Should I take a different approach altogether? Using HQL's new Class(column1, column2) approach works, but lazy properties sound like a simpler solution. It's perhaps worth mentioning I'm using NHibernate 2.1.2GA with the Castle DynamicProxy. Thanks!

  • Peculiar JRE behaviour running RMI server under load, should I worry?

    - by darri
    I've been developing a minimalistic Java rich client CRUD application framework for the past few years, mostly as a hobby but also actively using it to write applications for my current employer. The framework provides database access to clients either via a local JDBC based connection or a lightweight RMI server.

    Last night I started a load testing application, which ran 100 headless clients bombarding the server with requests, each client waiting only 1 - 2 seconds between running simple use cases, consisting of selecting records along with associated detail records from a simple e-store database (Chinook). This morning when I looked at the telemetry results from the server profiling session I noticed something which to me seemed strange (and made me keep the setup running for the remainder of the day); I don't really know what conclusions to draw from it. The results were four telemetry graphs: memory, GC activity, threads and CPU load.

    Interesting, right? So the question is, is this normal or erratic? Is this simply the JRE (1.6.0_03 on Windows XP) doing its thing (perhaps related to the JRE configuration) or is my framework design somehow causing this? Running the server against MySQL as opposed to an embedded H2 database does not affect the pattern. I am leaving out the details of my server design, but I'll be happy to elaborate if this behaviour is deemed erratic.

  • After a few identical HQL queries, the application freezes

    - by Oktay
    I am calling the function below with the same batchNumber, and it works without problem 15 times, taking the records from the database each time - but on the 16th call the application freezes when the query.list() row is called. It just loses debug focus and doesn't give any exception. This problem is probably not about the HQL, because I've seen it before: back then I used a Criteria query instead of HQL and got past it. But here, when I use "group by" in the Criteria (setProjection...), it doesn't return the results as Hibernate model objects, just as a plain list - and I need the results as models.

    Note: the "about 15 times" is just from testing. This is a web application, and the user may click the button that calls this function many times to see the records fetched from the database.

        public List<SiteAddressModel> getSitesByBatch(String batchNumber) {
            try {
                List<SiteAddressModel> siteList;
                MigrationPlanDao migrationPlanDao = ServiceFactory.getO2SiteService().getMigrationPlanDao();
                Query query = this.getSession().createQuery("from " + persistentClass.getName()
                        + " where siteType =:type and siteName in "
                        + "(select distinct exchange from " + migrationPlanDao.getPersistentClass().getName()
                        + " where migrationBatchNumber =:batchNumber)");
                query.setString("batchNumber", batchNumber);
                query.setString("type", "LLU/ASN");
                System.out.println("before query");
                siteList = query.list();
                System.out.println("after query");
                return siteList;
            } catch (Exception e) {
                e.printStackTrace();
                return null; // fall-through added so the snippet compiles
            }
        }

    Hibernate version 3.2.0.ga

  • SQLite3 - select date range not working

    - by iFloh
    Yet another one that gives me grief. In a SQLite3 DB select I query for a date range specified in (NSDate *)fromDate to (NSDate *)toDate:

        const char *sql = "SELECT * FROM A, B WHERE A.key = B.key AND A.date between ? and ?";

    After opening the DB I run the query in Objective-C as follows:

        NSDateFormatter *tmpDatFmt = [[[NSDateFormatter alloc] init] autorelease];
        [tmpDatFmt setDateFormat:@"dd-MM-yyyy"];
        sqlite3_stmt *stmt;
        if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) == SQLITE_OK) {
            NSLog(@"From %s to %s;", [[tmpDatFmt stringFromDate:fromDate] UTF8String],
                                     [[tmpDatFmt stringFromDate:toDate] UTF8String]);
            sqlite3_bind_text(stmt, 1, [[tmpDatFmt stringFromDate:fromDate] UTF8String], -1, SQLITE_STATIC); // first '?'
            sqlite3_bind_text(stmt, 2, [[tmpDatFmt stringFromDate:toDate] UTF8String], -1, SQLITE_STATIC);   // second '?'
            while (sqlite3_step(stmt) == SQLITE_ROW) {
                NSLog(@"Success");
            }
        }

    In the database I have several records that match the date range:

        12-04-2010 = in seconds 1271059200
        13-04-2010 = in seconds 1271145600
        13-04-2010 = in seconds 1271152800
        14-04-2010 = in seconds 1271267100

    When I run it, the first NSLog shows "From 2010-04-01 to 2010-04-30". My problem is the records are not selected (no "Success" shows in the log) and I don't understand why. Earlier I had miscalculated the dates 2 days later as

        14-04-2010 = in seconds 1271232000
        15-04-2010 = in seconds 1271318400
        15-04-2010 = in seconds 1271325600
        16-04-2010 = in seconds 1271439936

    and those dates worked fine (4 x "Success" in the log). I am puzzled ...
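
    Since the records are described in epoch seconds while the bound parameters are 'dd-MM-yyyy' strings, one thing worth checking is the type mismatch: in SQLite's comparison rules, an INTEGER value never falls between two TEXT values. A minimal SQL-side sketch, assuming A.date really stores Unix epoch seconds, that keeps the comparison numeric:

        -- assumes A.date holds Unix epoch seconds
        SELECT *
        FROM A, B
        WHERE A.key = B.key
          AND A.date BETWEEN CAST(strftime('%s', '2010-04-01') AS INTEGER)
                         AND CAST(strftime('%s', '2010-04-30 23:59:59') AS INTEGER);

    The equivalent on the Objective-C side would be binding the bounds with sqlite3_bind_int64 from the NSDate timeIntervalSince1970 values rather than as formatted strings.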

  • Perl help dereferencing a reference to an array of hash references, containing record set data

    - by user1724150
    I'm using an Amazon Perl module that returns a reference to an array of hash references as $record_sets, containing record set data, and I'm having a hard time dereferencing it. I can print the data using Data::Dumper, but I need to be able to manipulate the data. Below is the documentation provided for the module. Thanks in advance:

        # list_resource_record_sets
        # Lists resource record sets for a hosted zone.

        # Called in scalar context:
        $record_sets = $r53->list_resource_record_sets(zone_id => '123ZONEID');

        # Returns: A reference to an array of hash references, containing record set data. Example:
        $record_sets = [
            {
                name    => 'example.com.',
                type    => 'MX',
                ttl     => 86400,
                records => [ '10 mail.example.com' ]
            },
            {
                name    => 'example.com.',
                type    => 'NS',
                ttl     => 172800,
                records => [
                    'ns-001.awsdns-01.net.',
                    'ns-002.awsdns-02.net.',
                    'ns-003.awsdns-03.net.',
                    'ns-004.awsdns-04.net.'
                ]
            }
        ];

  • Oracle Query Optimization: Why is My Second Query Faster?

    - by Patrick Cuff
    I was having some performance issues with an Oracle query, so I downloaded a trial of the Quest SQL Optimizer for Oracle, which made some changes that dramatically improved the query's performance. I'm not exactly sure why the recommended query had such an improvement; can anyone provide an explanation?

    Before:

        SELECT t1.version_id, t1.id, t2.field1, t3.person_id, t2.id
        FROM table1 t1, table2 t2, table3 t3
        WHERE t1.id = t2.id
          AND t1.version_id = t2.version_id
          AND t2.id = 123
          AND t1.version_id = t3.version_id
          AND t1.VERSION_NAME <> 'AA'
        order by t1.id

        Plan Cost: 831
        Elapsed Time: 00:00:21.40
        Number of Records: 40,717

    After:

        SELECT /*+ USE_NL_WITH_INDEX(t1) */ t1.version_id, t1.id, t2.field1, t3.person_id, t2.id
        FROM table2 t2, table3 t3, table1 t1
        WHERE t1.id = t2.id + 0
          AND t1.version_id = t2.version_id + 0
          AND t2.id = 123
          AND t1.version_id = t3.version_id + 0
          AND t1.VERSION_NAME || '' <> 'AA'
          AND t3.version_id = t2.version_id + 0
        order by t1.id

        Plan Cost: 686
        Elapsed Time: 00:00:00.95
        Number of Records: 40,717

    Questions:

    - Why does re-arranging the order of the tables in the FROM clause help?
    - Why does adding + 0 to the WHERE clause comparisons help?
    - Why does || '' <> 'AA' in the WHERE clause VERSION_NAME comparison help? Is this a more efficient way of handling possible nulls on this column?

  • UPDATE Table SET Field

    - by davlyo
    This is my very first post! Bear with me. I have an UPDATE statement that I am trying to understand how SQL Server handles:

        UPDATE a
        SET a.vField3 = b.vField3
        FROM tableName a
        INNER JOIN tableName b
            ON a.vField1 = b.vField1
           AND b.nField2 = a.nField2 - 1

    This is my query in its simplest form. vField1 is a varchar, nField2 is an int (autonumber), vField3 is a varchar. I have left the WHERE clause out, so understand there is logic that otherwise makes this a necessity.

    Say vField1 is a customer number, and that customer has 3 records; the value in nField2 is 1, 2 and 3 consecutively. vField3 is a status.

    - When the update comes to a.nField2 = 1 there is no a.nField2 - 1, so it continues.
    - When the update comes to a.nField2 = 2, b.nField2 = 1.
    - When the update comes to a.nField2 = 3, b.nField2 = 2.

    So when the update is on a.nField2 = 2, alias b reflects what is on the line prior (b.nField2 = 1), and it sets the varchar value of a.vField3 = b.vField3. When the update is on a.nField2 = 3, alias b reflects what is on the line prior (b.nField2 = 2), and it (should) set the varchar value of a.vField3 = b.vField3.

    When the process is complete, the second of three records looks as expected - the value in vField3 of the second record reflects the value in vField3 from the first record. However, vField3 of the third record does not reflect the value in vField3 from the second record. I think this demonstrates that SQL Server may be producing a transaction of some sort and then an update. Question: how can I get the DB to update after each transaction, so I can reference the values generated by each transaction?
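
    For what the asker is after, a minimal sketch of an iterative alternative: a single UPDATE runs as one statement against the pre-update state of the table, so chaining values forward needs one pass per step. This assumes nField2 is a dense sequence within each vField1:

        DECLARE @n int
        SET @n = 2
        WHILE @n <= (SELECT MAX(nField2) FROM tableName)
        BEGIN
            -- each pass sees the values written by the previous pass
            UPDATE a
            SET a.vField3 = b.vField3
            FROM tableName a
            INNER JOIN tableName b
                ON a.vField1 = b.vField1
               AND b.nField2 = a.nField2 - 1
            WHERE a.nField2 = @n

            SET @n = @n + 1
        END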

  • FreeText COUNT query on multiple tables is super slow

    - by Eric P
    I have two tables:

        Product: ID, Name, SKU
        Brand:   ID, Name

    The Product table has about 120K records; the Brand table has 30K records. I need to find the count of all the products with name and brand matching a specific keyword. I use full-text CONTAINS like this:

        SELECT count(*)
        FROM Product
        inner join Brand on Product.BrandID = Brand.ID
        WHERE (contains(Product.Name, 'pants') or contains(Brand.Name, 'pants'))

    This query takes about 17 secs. I rebuilt the full-text index before running it. If I only check Product.Name, the query takes less than 1 sec; same if I only check Brand.Name. The issue occurs when I use the OR condition. If I switch the query to use LIKE:

        SELECT count(*)
        FROM Product
        inner join Brand on Product.BrandID = Brand.ID
        WHERE Product.Name LIKE '%pants%' or Brand.Name LIKE '%pants%'

    it takes 1 sec. I read on MSDN (http://msdn.microsoft.com/en-us/library/ms187787.aspx) that to search on multiple tables, you should use a joined table in your FROM clause to search on a result set that is the product of two or more tables. So I added an inner-joined derived table to FROM:

        SELECT count(*)
        FROM (select Product.Name ProductName, Product.SKU ProductSKU, Brand.Name as BrandName
              FROM Product
              inner join Brand on Product.BrandID = Brand.ID) as TempTable
        WHERE contains(TempTable.ProductName, 'pants') or contains(TempTable.BrandName, 'pants')

    This results in an error:

        Cannot use a CONTAINS or FREETEXT predicate on column 'ProductName' because it is not full-text indexed.

    So the question is - why could the OR condition be causing such a slow query?
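
    One rewrite that sidesteps the cross-table OR, as a minimal sketch (it assumes each product joins to exactly one brand, so counting distinct product IDs matches the original count):

        SELECT count(*)
        FROM (
            SELECT Product.ID
            FROM Product
            INNER JOIN Brand ON Product.BrandID = Brand.ID
            WHERE contains(Product.Name, 'pants')
            UNION   -- removes products matched by both branches
            SELECT Product.ID
            FROM Product
            INNER JOIN Brand ON Product.BrandID = Brand.ID
            WHERE contains(Brand.Name, 'pants')
        ) AS matches

    Each branch can use its own single-table full-text index, which is typically much faster than making the engine merge two full-text searches behind an OR.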

  • Optimize SQL query (Facebook-like application)

    - by fabriciols
    My application is similar to Facebook, and I'm trying to optimize the query that gets a user's mural records - the entries where the user appears either as src or as dst. The src is in usermuralentry directly; the dst list is in usermuralentry_user. So an entry can have one src and many dst. I have these tables:

        mysql> desc usermuralentry;
        +-------------+------------+------+-----+---------+----------------+
        | Field       | Type       | Null | Key | Default | Extra          |
        +-------------+------------+------+-----+---------+----------------+
        | id          | int(11)    | NO   | PRI | NULL    | auto_increment |
        | user_src_id | int(11)    | NO   | MUL | NULL    |                |
        | private     | tinyint(1) | NO   |     | NULL    |                |
        | content     | longtext   | NO   |     | NULL    |                |
        | date        | datetime   | NO   |     | NULL    |                |
        | last_update | datetime   | NO   |     | NULL    |                |
        +-------------+------------+------+-----+---------+----------------+

        mysql> desc usermuralentry_user;
        +-------------------+---------+------+-----+---------+----------------+
        | Field             | Type    | Null | Key | Default | Extra          |
        +-------------------+---------+------+-----+---------+----------------+
        | id                | int(11) | NO   | PRI | NULL    | auto_increment |
        | usermuralentry_id | int(11) | NO   | MUL | NULL    |                |
        | userinfo_id       | int(11) | NO   | MUL | NULL    |                |
        +-------------------+---------+------+-----+---------+----------------+

    And the following query to retrieve information for two users:

        mysql> explain SELECT *
            -> FROM usermuralentry AS a, usermuralentry_user AS b
            -> WHERE a.user_src_id IN (1, 2)
            ->    OR (a.id = b.usermuralentry_id AND b.userinfo_id IN (1, 2));

    The EXPLAIN shows a full scan on b (type ALL, no key chosen despite indexes on usermuralentry_id and userinfo_id, 147,188 rows) joined to a full scan on a (type ALL, possible key PRIMARY, 1,371,289 rows, Extra: "Range checked for each record (index map: 0x1)").

    But it is taking A LOT of time... Any tips to optimize? Could the table schema in my application be better?
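
    A common rewrite for this OR-across-tables pattern, sketched using the keys shown in the DESC output: split the two sides of the OR into separate indexable queries and UNION them.

        SELECT a.*
        FROM usermuralentry a
        WHERE a.user_src_id IN (1, 2)
        UNION   -- deduplicates entries where the user is both src and dst
        SELECT a.*
        FROM usermuralentry a
        INNER JOIN usermuralentry_user b ON a.id = b.usermuralentry_id
        WHERE b.userinfo_id IN (1, 2);

    The first branch can use the index on user_src_id and the second the indexes on usermuralentry_id and userinfo_id, instead of the full cross-join scan the EXPLAIN shows.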

  • How do you send email from an IMAP account with PHP?

    - by arthurakay
    I'm having an issue sending email via PHP/IMAP - and I don't know if it's because I don't correctly understand IMAP, or there's an issue with my server.

    My application opens an IMAP connection to an email account to read messages in the inbox. It does this successfully. The problem I have is that I want to send messages from this account and have them display in the outbox/sent folder. As far as I can tell, the PHP imap_mail() function doesn't in any way hook into the IMAP stream I currently have open. My code executes without throwing an error. However, the email never arrives to the recipient and never displays in my sent folder.

        private function createHeaders() {
            return "MIME-Version: 1.0" . "\r\n" .
                   "Content-type: text/html; charset=iso-8859-1" . "\r\n" .
                   "From: " . $this->accountEmail . "\r\n";
        }

        private function notifyAdminForCompleteSet($urlToCompleteSet) {
            $message = "
                <p>
                    In order to process the latest records, you must visit
                    <a href='$urlToCompleteSet'>the website</a> and manually export the set.
                </p>
            ";
            try {
                imap_mail(
                    $this->adminEmail,
                    "Alert: Manual Export of Records Required",
                    wordwrap($message, 70),
                    $this->createHeaders()
                );
                echo(" ---> Admin notified via email!\n");
            } catch (Exception $e) {
                throw new Exception("Error in notifyAdminForCompleteSet()");
            }
        }

    I'm guessing I need to copy the message into the IMAP account manually... or is there a different solution to this problem? Also, does it matter if the domain in the "from" address is different than that of the server on which this script is running? I can't explain why the message is never sent.

  • Integration testing - can it be done right?

    - by Max
    I used TDD as a development style on some projects in the past two years, but I always get stuck on the same point: how can I test the integration of the various parts of my program?

    What I am currently doing is writing a testcase per class (this is my rule of thumb: a "unit" is a class, and each class has one or more testcases). I try to resolve dependencies by using mocks and stubs, and this works really well as each class can be tested independently. After some coding, all important classes are tested. I then "wire" them together using an IoC container. And here I am stuck: how do I test that the wiring was successful and the objects interact the way I want?

    An example: think of a web application. There is a controller class which takes an array of ids, uses a repository to fetch the records based on these ids, and then iterates over the records and writes them as a string to an outfile. To make it simple, there would be three classes: Controller, Repository, OutfileWriter. Each of them is tested in isolation. What I would do in order to test the "real" application: make the http request (either manually or automated) with some ids from the database and then look in the filesystem to see if the file was written. Of course this process could be automated, but still: doesn't that duplicate the test logic? Is this what is called an "integration test"? In a book I recently read about unit testing, it seemed to me that integration testing was more of an anti-pattern?

  • How to check dates don't overlap in a table using TSQL.

    - by Jon
    I have a table with start and finish datetimes, and I need to determine if any overlap; I'm not quite sure of the best way to go. Initially I was thinking of using a nested cursor as shown below, which does work; however, I'm checking the same records against each other twice and I'm sure it is not very efficient. E.g. this table would result in an overlap:

        id     start                      end
        -------------------------------------------------------
        1      2009-10-22 10:19:00.000    2009-10-22 11:40:00.000
        2      2009-10-22 10:31:00.000    2009-10-22 13:34:00.000
        3      2009-10-22 16:31:00.000    2009-10-22 17:34:00.000

        Declare @Start datetime, @End datetime,
                @OtherStart datetime, @OtherEnd datetime,
                @id int, @endCheck bit
        Set @endCheck = 0

        DECLARE Cur1 CURSOR FOR
            select id, [start], [end] from table1
        OPEN Cur1
        FETCH NEXT FROM Cur1 INTO @id, @Start, @End
        WHILE @@FETCH_STATUS = 0 AND @endCheck = 0
        BEGIN
            -- Get a cursor on all the other records
            DECLARE Cur2 CURSOR FOR
                select [start], [end] from table1 where id != @id
            OPEN Cur2
            FETCH NEXT FROM Cur2 INTO @OtherStart, @OtherEnd
            WHILE @@FETCH_STATUS = 0 AND @endCheck = 0
            BEGIN
                if ( @Start > @OtherStart AND @Start < @OtherEnd
                     OR @End > @OtherStart AND @End < @OtherEnd )
                   or
                   ( @OtherStart > @Start AND @OtherStart < @End
                     OR @OtherEnd > @Start AND @OtherEnd < @End )
                BEGIN
                    SET @endCheck = 1
                END
                FETCH NEXT FROM Cur2 INTO @OtherStart, @OtherEnd
            END
            CLOSE Cur2
            DEALLOCATE Cur2
            FETCH NEXT FROM Cur1 INTO @id, @Start, @End
        END
        CLOSE Cur1
        DEALLOCATE Cur1

    (The inner cursor's "and id != @id" in the original was corrected to "where id != @id", and start/end are bracketed since END is a reserved word.)
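
    A set-based alternative, as a minimal sketch: two intervals overlap exactly when each starts before the other ends, so a self-join can find every overlapping pair in one pass and compare each pair only once.

        SELECT a.id, b.id
        FROM table1 a
        INNER JOIN table1 b
            ON a.id < b.id              -- each pair compared once
           AND a.[start] < b.[end]
           AND b.[start] < a.[end]

    If any rows come back, the table has an overlap; wrapping this in IF EXISTS(...) gives the same bit-flag answer as the cursors. The strict inequalities treat intervals that merely touch at an endpoint as non-overlapping, matching the cursor version.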

  • Write out to text file using T-SQL

    - by sasfrog
    I am creating a basic data transfer task using T-SQL, where I retrieve certain records from one database that are more recent than a given datetime value and load them into another database. This will happen periodically throughout the day. It's such a small task that SSIS seems like overkill - I want to just use a scheduled task which runs a .sql file.

    Where I need guidance: I need to persist the datetime from the last run of this task, then use this to filter the records the next time the task runs. My initial thought is to just store the datetime in a text file and update (overwrite) it as part of the task each time it runs. I can read the file in without problems using T-SQL, but writing back out has got me stuck. I've seen plenty of examples which make use of a dynamically-built bcp command, which is then executed using xp_cmdshell. Trouble is, security on the server I'm deploying to precludes the use of xp_cmdshell. So, my question is: are there other ways to simply write a datetime value to a file using T-SQL, or should I be thinking about a different approach?

    EDIT: happy to be corrected about SSIS being "overkill"...
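
    One "different approach" that avoids the filesystem entirely, sketched with a hypothetical control table: keep the last-run timestamp in the database itself, where T-SQL can read and write it natively.

        -- one-time setup (hypothetical names)
        CREATE TABLE dbo.TransferControl (LastRun datetime NOT NULL)
        INSERT INTO dbo.TransferControl (LastRun) VALUES ('2000-01-01')

        -- inside the scheduled .sql file
        DECLARE @LastRun datetime, @ThisRun datetime
        SELECT @LastRun = LastRun FROM dbo.TransferControl
        SET @ThisRun = GETDATE()

        -- ... transfer rows WHERE ModifiedDate > @LastRun ...

        UPDATE dbo.TransferControl SET LastRun = @ThisRun

    Capturing @ThisRun before the transfer avoids missing rows that arrive while the transfer itself is running.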

  • Mysql query, need suggestion or solution

    - by Xi Kam
    Can anyone help me? I have two tables and I need records from both of them.

    Query 1:

        SELECT SUM(rec_issued) AS issed, regen_id,
               YEAR(issue_date) AS iYear, MONTH(issue_date) AS iMonth
        FROM `view_rec_issued`
        WHERE `regen_id` = 2
        GROUP BY YEAR(issue_date) DESC, MONTH(issue_date) DESC
        ORDER BY issue_date ASC

        issed  regen_id  iYear  iMonth
        424    2         2011   3
        4340   2         2011   4
        4235   2         2011   5
        10570  2         2012   2
        4761   2         2012   3
        5000   2         2012   4
        3700   2         2012   5
        3414   2         2012   6
        3700   2         2012   7
        2992   2         2012   8
        995    2         2012   10

    Query 2:

        SELECT SUM(total_redem) AS redemed, regen_id,
               YEAR(redemption_date) AS rYear, MONTH(redemption_date) AS rMonth
        FROM `recredem_month_wise`
        WHERE `regen_id` = 2
        GROUP BY YEAR(redemption_date) DESC, MONTH(redemption_date) DESC
        ORDER BY redemption_date ASC

        redemed  regen_id  rYear  rMonth
        424      2         2011   3
        260      2         2011   4
        6523     2         2011   5
        1070     2         2011   6
        200      2         2011   10
        500      2         2011   11
        9750     2         2012   2
        5000     2         2012   3
        5500     2         2012   4
        3803     2         2012   5
        3700     2         2012   7
        3000     2         2012   8

    But I want it as:

        issed  regen_id  iYear  iMonth  redemed  regen_id  rYear  rMonth
        424    2         2011   3       424      2         2011   3
        4340   2         2011   4       260      2         2011   4
        4235   2         2011   5       6523     2         2011   5
        NULL   NULL      NULL   NULL    1070     2         2011   6
        NULL   NULL      NULL   NULL    200      2         2011   10
        NULL   NULL      NULL   NULL    500      2         2011   11
        10570  2         2012   2       9750     2         2012   2
        4761   2         2012   3       5000     2         2012   3
        5000   2         2012   4       5500     2         2012   4
        3700   2         2012   5       3803     2         2012   5
        3414   2         2012   6       NULL     NULL      NULL   NULL
        3700   2         2012   7       3700     2         2012   7
        2992   2         2012   8       3000     2         2012   8
        995    2         2012   10      NULL     NULL      NULL   NULL

    In these tables regen_id is unique, and I need the data by year and month; if either table has no records for a particular month and year, it should retrieve zero or NULL. But in every record the year and month should be equal, i.e. iYear = rYear and iMonth = rMonth, so both pairs of fields can be merged - there's no need to show year and month twice: iYear/rYear = year, iMonth/rMonth = month. Thank you. Please look at this problem.
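
    MySQL has no FULL OUTER JOIN, but it can be emulated. A minimal sketch using the tables from the question: build the set of all (year, month) keys from both tables with a UNION, then LEFT JOIN each aggregate onto it.

        SELECT m.y AS year, m.m AS month, i.issed, r.redemed
        FROM (
            SELECT YEAR(issue_date) AS y, MONTH(issue_date) AS m
            FROM view_rec_issued WHERE regen_id = 2
            UNION   -- all months seen in either table, deduplicated
            SELECT YEAR(redemption_date), MONTH(redemption_date)
            FROM recredem_month_wise WHERE regen_id = 2
        ) m
        LEFT JOIN (
            SELECT YEAR(issue_date) AS y, MONTH(issue_date) AS m, SUM(rec_issued) AS issed
            FROM view_rec_issued WHERE regen_id = 2
            GROUP BY y, m
        ) i ON i.y = m.y AND i.m = m.m
        LEFT JOIN (
            SELECT YEAR(redemption_date) AS y, MONTH(redemption_date) AS m, SUM(total_redem) AS redemed
            FROM recredem_month_wise WHERE regen_id = 2
            GROUP BY y, m
        ) r ON r.y = m.y AND r.m = m.m
        ORDER BY m.y, m.m;

    Months missing from one table come back as NULL on that side (COALESCE(..., 0) would turn them into zeros), and year/month appear only once, as requested.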

  • EF 4 Query - Issue with Multiple Parameters

    - by Brian
    Hello. A trick for avoiding filtering by nullable parameters in SQL was something like the following:

        select * from customers
        where (@CustomerName is null or CustomerName = @CustomerName)

    This worked well for me in LINQ to SQL:

        string customerName = "XYZ";
        var results = (from c in ctx.Customers
                       where (customerName == null ||
                              (customerName != null && c.CustomerName == customerName))
                       select c);

    But the above query, when in ADO.NET EF, doesn't work for me; it should filter by customer name because it exists, but it doesn't. Instead, it queries all the customer records. Now, this is a simplified example, because I have many fields that I'm utilizing this kind of logic with. But it never actually filters, queries all the records, and causes a timeout exception. The weird thing is another query does something similar with no issues. Any ideas why? Seems like a bug to me, or is there a workaround for this? I've since switched to extension methods, which works. Thanks.

  • How (and if) to write a single-consumer queue using the task parallel library?

    - by Eric
    I've heard a bunch of podcasts recently about the TPL in .NET 4.0. Most of them describe background activities like downloading images or doing a computation, using tasks so that the work doesn't interfere with a GUI thread.

    Most of the code I work on has more of a multiple-producer / single-consumer flavor, where work items from multiple sources must be queued and then processed in order. One example would be logging, where log lines from multiple threads are sequentialized into a single queue for eventual writing to a file or database. All the records from any single source must remain in order, and records from the same moment in time should be "close" to each other in the eventual output.

    So multiple threads or tasks or whatever are all invoking a queuer:

        lock( _queue ) // or use a lock-free queue!
        {
            _queue.enqueue( some_work );
            _queueSemaphore.Release();
        }

    And a dedicated worker thread processes the queue:

        while( _queueSemaphore.WaitOne() )
        {
            lock( _queue )
            {
                some_work = _queue.dequeue();
            }
            deal_with( some_work );
        }

    It's always seemed reasonable to dedicate a worker thread for the consumer side of these tasks. Should I write future programs using some construct from the TPL instead? Which one? Why?

  • XSLT Type Checking

    - by mo
    Hi folks. Is it possible to check an element's complex type? I have this (simplified):

        complexType Record
        complexType Customer extension of Record
        complexType Person   extension of Record

        <xsl:template match="/">
            <records>
                <xsl:apply-templates />
            </records>
        </xsl:template>

        <xsl:template match="!!! TYPECHECK FOR RECORD !!!" name="Record">
            <record><xsl:value-of select="." /></record>
        </xsl:template>

    Is it possible to check an element's type, including inheritance? I don't know the element names, only that they are a subtype of Record.

    Schema 1:

        complexType name="Customer" extension base="Record"
        element name="customers"
        element name="customer" type="Customer"

    Schema 2:

        complexType name="Person" extension base="Record"
        element name="persons"
        element name="person" type="Person"

    Schema ?:

        complexType name="UnknownType" extension base="Record"
        element name="unknowns"
        element name="unknown" type="UnknownType"

    XML 1:

        <customers>
            <customer />
            <customer />
        </customers>

    XML 2:

        <persons>
            <person />
            <person />
        </persons>

    XML ?:

        <?s>
            <? />
            <? />
        </?s>

    The XML input is custom, so I have to match by the type (I think).

  • Need help INSERTing record(s) into a MySQL DB

    - by JM4
    I have an online form which collects member(s) information and stores it in a very long MySQL database row. We allow up to 16 members to enroll at a single time, and originally structured the DB to allow such. For example: if 1 member enrolls, his personal information (first name, last name, address, phone, email) is stored on a single row. If 15 members enroll (all at once), their personal information is stored in the same single row. The row has columns for all 'possible' inputs.

    I am trying to consolidate this and have every nth member that enrolls put onto a new record in the database. I have seen suggestions before for inserting multiple records, such as:

        INSERT INTO tablename VALUES
            ('$f1name', '$f1address', '$f1phone'),
            ('$f2name', '$f2address', '$f2phone')...

    The issue with this is twofold:

    - I do not know how many records are being enrolled from person to person, so the only way to build the statement above is to use a loop.
    - The information collected from the forms is NOT a single array, so I can't loop through one array and have it parse out. My information is collected as individual input fields like: Member1FirstName, Member1LastName, Member1Phone, Member2FirstName, Member2LastName, Member2Phone... and so on.

    Is it possible to store information in separate rows WITHOUT using a loop (and therefore having to go back and completely restructure my form field names and such, which can't happen due to the way the validation rules are built)?
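
    For reference, a minimal sketch of the row-per-member layout such inserts would target (hypothetical table and column names; the multi-row VALUES list itself still has to be assembled per enrollment, whether by a loop or by 16 hand-written conditional branches for the fixed maximum):

        CREATE TABLE enrollment_members (          -- hypothetical names
            member_id     INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            enrollment_id INT NOT NULL,             -- groups members enrolled together
            first_name    VARCHAR(50),
            last_name     VARCHAR(50),
            phone         VARCHAR(20),
            email         VARCHAR(100),
            address       VARCHAR(200)
        );

        INSERT INTO enrollment_members (enrollment_id, first_name, last_name, phone)
        VALUES (42, 'Member1FirstName', 'Member1LastName', 'Member1Phone'),
               (42, 'Member2FirstName', 'Member2LastName', 'Member2Phone');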

  • Is it possible in SQLAlchemy to filter by a database function or stored procedure?

    - by Rico Suave
    We're using SQLAlchemy in a project with a legacy database. The database has functions/stored procedures. In the past we used raw SQL, and we could use these functions as filters in our queries. I would like to do the same for SQLAlchemy queries if possible. I have read about @hybrid_property, but some of these functions need one or more parameters. For example: I have a User model that has a JOIN to a bunch of historical records. These historical records for a user have a date and a debit and credit field, so we can look up the balance of a user at a specific point in time by doing a SUM(credit) - SUM(debit) up until the given date. We have a database function for that called dbo.Balance(user_id, date_time). I can use this to check the balance of a user at a given point in time. I would like to use this as a criterion in a query, to select only users that have a negative balance at a specific date/time:

        selection = users.filter(coalesce(Users.status, 0) == 1,
                                 coalesce(Users.no_reminders, 0) == 0,
                                 dbo.pplBalance(Users.user_id,
                                                datetime.datetime.now()) < -0.01).all()

    This is of course a non-working example, just for you to get the gist of what I'd like to do. The solution looks to be to use hybrid properties, but as I mentioned above, those only work without parameters (as they are properties, not methods). Any suggestions on how to implement something like this (if it's even possible) are welcome. Thanks.

  • PHP: Trying to come up with a "prev" and "next" link

    - by fwaokda
    I'm displaying 10 records per page. The variables I'm currently working with are:

        $total = total number of records
        $page  = the current page I'm displaying

    I placed this at the top of my page:

        if ( $_GET['page'] == '' ) { $page = 1; }   // if no page is specified set it to 1
        else { $page = ($_GET['page']); }           // if page is specified set it

    Here are my two links:

        if ( $page != 1 ) {
            echo '<div style="float:left" ><a href="index.php?page='. ( $page - 1 ) .'" rev="prev" >Prev</a></div>';
        }
        if ( !( ( $total / ( 10 * $page ) ) < $page ) ) {
            echo '<div style="float:right" ><a href="index.php?page='. ( $page + 1 ) .'" rev="next" >Next</a></div>';
        }

    Now I guess (unless I'm not thinking of something) I can display the "Prev" link every time except when the page is '1'. How can I make it so the "Next" link doesn't show on the last page?

  • '<=' operator is not working in SQL Server 2000

    - by Lalit
    Hello. The scenario: the database is in the maintenance phase. It was not developed by our developers; it is an existing real-time database developed by an 'xyz' company in SQL Server 2000, and it is where I am working now. I wanted to write a stored procedure which retrieves the records from date1 to date2. So the query is:

        Select * from MyTableName
        Where colDate >= '3-May-2010' and colDate <= '5-Oct-2010'
        and colName = 'xyzName'

    As I understand it, I should get data including the upper bound date as well as the lower bound date, but somehow I am getting records from '3-May-2010' (which is fine) to '10-Oct-2010'.

    As I observed in the table design, the developer used varchar to store the colDate values. I know this was the wrong remedy on their part, so in my stored procedure I also used varchar parameters @FromDate1 and @ToDate as inputs, which gives me the result I explained. I tried making the parameter type datetime, but it shows an error while saving/altering the stored procedure: "@FromDate1 has invalid datatype", and the same for @ToDate.

    The situation is that I cannot change the table design at all. What do I have to do here? I know we can use user-defined table types in SQL Server 2008, but this is SQL Server 2000, which does not support them. Please guide me through this scenario.

    Edited: I am trying to write the SP like this:

        CREATE PROCEDURE USP_Data (@Location varchar(100), @FromDate DATETIME, @ToDate DATETIME)
        AS
        SELECT * FROM dbo.TableName
        Where CAST(Dt AS DATETIME) >= @FromDate
          and CAST(Dt AS DATETIME) <= @ToDate
          and Location = @Location
        GO

    but I'm getting an error in SQL Server 2000: "Arithmetic overflow error converting expression to data type datetime." What should that be? Am I wrong somewhere?

    Also, the "(202 row(s) affected)" message changes every time in a circular manner: first it says (122 row(s) affected), run again and it says (80 row(s) affected), then again (202 row(s) affected), then again (122 row(s) affected). I cannot understand what is going on.
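
    A minimal sketch of a guarded version of that SP: varchar date columns in legacy tables often contain a few values that don't parse as dates, which would explain the arithmetic overflow error; a CASE around the cast converts only the rows ISDATE accepts and lets the rest drop out of the comparison as NULL. (ISDATE is available in SQL Server 2000.)

        CREATE PROCEDURE USP_Data (@Location varchar(100), @FromDate DATETIME, @ToDate DATETIME)
        AS
        SELECT *
        FROM dbo.TableName
        WHERE Location = @Location
          AND CASE WHEN ISDATE(Dt) = 1 THEN CAST(Dt AS DATETIME) END >= @FromDate
          AND CASE WHEN ISDATE(Dt) = 1 THEN CAST(Dt AS DATETIME) END <= @ToDate
        GO

    This also bears on the original range bug: comparing '3-May-2010' and '5-Oct-2010' as varchar is a lexicographic string comparison, not a chronological one, so the bounds simply don't mean what they appear to mean.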

  • MySql: Is it reasonable to use a 'view', or would it be better to denormalize my DB?

    - by Budda
    There is a 'team_sector' table with the following fields: id, team_id, sect_id, size, level. It contains a few records for each 'team' entity (referenced by the 'team_id' field). Each record represents a sector of the team's stadium (8 sectors in total).

    Now it is necessary to implement a few searches: by overall stadium size (SUM(size)) and by the best quality (SUM(level)/COUNT(*)). I could create a query something like this:

        SELECT TS.team_id,
               SUM(TS.size) as OverallSize,
               SUM(TS.Level)/COUNT(TS.Id) AS QualityLevel
        FROM team_sector TS
        GROUP BY team_id
        ORDER BY OverallSize DESC / ORDER BY QualityLevel DESC

    But my concern here is that the calculation for each team will be done each time the query is performed. It is not too big an overhead (at least for now), but I would like to avoid performance issues later.

    I see 2 options here. The first is to create 2 additional fields in the 'team' table (for example) and store the OverallSize and QualityLevel values there; if information in the 'sector' table is changed, update that table too (probably it would be good to do that with triggers, as the sector table doesn't change too often). The second option is to create a view that provides the required data. The second option seems much easier to me, but I don't have a lot of experience/knowledge of working with views.

    - Q1: What is the best option from your perspective here, and why? Perhaps you could suggest other options?
    - Q2: Can I create a view in such a way that it will do calculations rarely (at least once per day)? If yes - how?
    - Q3: Is it reasonable to use triggers for such a purpose (option 1)?

    P.S. MySQL 5.1 is used. The overall number of teams is around 1-2 thousand; the overall number of records in the sector table is 6-8 thousand. I understand those numbers are pretty small, but I would like to implement the best practice here.
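
    On Q2, a minimal sketch of the view approach (hypothetical view name): a plain MySQL view stores no data - it re-runs the underlying SELECT on every query, and MySQL 5.1 has no materialized views, so a "refreshed once per day" version would mean writing the aggregates into a summary table on a schedule instead.

        CREATE VIEW team_stadium_stats AS
        SELECT team_id,
               SUM(size)             AS OverallSize,
               SUM(level) / COUNT(*) AS QualityLevel
        FROM team_sector
        GROUP BY team_id;

        -- then, for example:
        SELECT * FROM team_stadium_stats ORDER BY OverallSize DESC;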

  • Query to bring count from comma separated value

    - by Mugil
    I have two tables, one for storing products and the other for storing the orders list.

        CREATE TABLE ProductsList(ProductId INT NOT NULL PRIMARY KEY, ProductName VARCHAR(50));

        INSERT INTO ProductsList(ProductId, ProductName)
        VALUES (1,'Product A'), (2,'Product B'), (3,'Product C'), (4,'Product D'), (5,'Product E'),
               (6,'Product F'), (7,'Product G'), (8,'Product H'), (9,'Product I'), (10,'Product J');

        CREATE TABLE OrderList(OrderId INT NOT NULL PRIMARY KEY AUTO_INCREMENT,
                               EmailId VARCHAR(50),
                               CSVProductIds VARCHAR(50));

        INSERT INTO OrderList(EmailId, CSVProductIds)
        VALUES ('[email protected]', '2,4,1,5,7'),
               ('[email protected]', '5,7,4'),
               ('[email protected]', '2'),
               ('[email protected]', '8,9'),
               ('[email protected]', '4,5,9'),
               ('[email protected]', '1,2,3'),
               ('[email protected]', '9,10'),
               ('[email protected]', '1,5');

    The OrderList table stores the item ids as a comma separated value for every customer who places an order. I have more than 40k records like this in my DB table. Now I have been assigned the task of creating a report which shows each item and the number of people who ordered it, as below:

        ItemName    NoOfOrders
        Product A   4
        Product B   3
        Product C   1
        Product D   3
        Product E   4
        Product F   0
        Product G   2
        Product H   1
        Product I   2
        Product J   1

    I used a query like the one below in my PHP to bring the orders one by one and store them in an array:

        SELECT COUNT(PL.EmailId)
        FROM OrderList PL
        WHERE CSVProductIds LIKE '2'
           OR CSVProductIds LIKE '%,2,%'
           OR CSVProductIds LIKE '%,2'
           OR CSVProductIds LIKE '2,%';

    1. Is it possible to get the same output using a single query?
    2. Does using LIKE in a MySQL query slow down the DB when the table has more records, i.e. 40k rows?
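
    For question 1, a minimal sketch using MySQL's FIND_IN_SET, which matches a value inside a comma separated list and so replaces all four LIKE patterns at once; the LEFT JOIN keeps products with zero orders (like Product F):

        SELECT p.ProductName     AS ItemName,
               COUNT(o.OrderId)  AS NoOfOrders
        FROM ProductsList p
        LEFT JOIN OrderList o
            ON FIND_IN_SET(p.ProductId, o.CSVProductIds) > 0
        GROUP BY p.ProductId, p.ProductName
        ORDER BY p.ProductId;

    On question 2: neither FIND_IN_SET nor a leading-wildcard LIKE can use an index, so both scan all 40k rows per lookup; the long-term fix is a normalized order-items table with one row per (OrderId, ProductId).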

  • Categorize data without consolidating?

    - by sqlnoob
    I have a table with about 1000 records and 2000 columns. What I want to do is categorize each row such that all records with equal column values for all columns except 'ID' are given a category ID. My final answer would look like:

        ID   A   B   C   .....   Category ID
        1    1   0   3           1
        2    2   1   3           2
        3    1   0   3           1
        4    2   1   3           2
        5    4   5   6           3
        6    4   5   6           3

    where all columns (besides ID) are equal for IDs 1 and 3, so they get the same category ID, and so on. My thought was to just write a SQL query that does a GROUP BY on every single column besides 'ID', assign a number to each group, and then join back to my original table.

    My current input is a text file, and I have SAS, MS Access, and Excel to work with (I could use proc sql from within SAS). Before I go this route and construct the whole query, I was just wondering if there is a better way to do this? It will take some work just to write the query, and I'm not even sure it is practical to join on 2000 columns (never tried), so I thought I'd ask for ideas before I got too far down the wrong path.

    EDIT: I just realized my title doesn't really make sense. What I was originally thinking was: "Is there a way I can group by and categorize at the same time, without actually consolidating into groups?"
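
    A minimal sketch of the group-and-join-back idea, with three columns standing in for the 2000 and MIN(ID) serving as the category label (the table name is hypothetical; the labels won't be consecutive 1, 2, 3..., but equal rows get equal labels, which is what matters):

        SELECT t.ID, g.CategoryID
        FROM mytable t
        INNER JOIN (SELECT A, B, C, MIN(ID) AS CategoryID
                    FROM mytable
                    GROUP BY A, B, C) g
            ON t.A = g.A AND t.B = g.B AND t.C = g.C;

    The same shape works in proc sql, and the join-on-2000-columns concern can be sidestepped by concatenating the columns into a single key (e.g. with CATX in SAS) and grouping and joining on that instead.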
