Search Results

Search found 6017 results on 241 pages for 'universal records managem'.


  • NHibernate lazy properties behavior?

    - by GeReV
    I've been trying to get NHibernate into development for a project I'm working on at my workplace. Since I have to put a strong emphasis on performance, I've been running a proof-of-concept stress test against an existing project's table with thousands of records, all of which contain a large text column. However, when selecting a collection of these records, the select statement takes a relatively long time to execute, apparently due to the aforementioned column. The first solution that comes to mind is marking this property as lazy:

        <property name="Content" lazy="true"/>

    But there seems to be no difference in the SQL generated by NHibernate. My question is: how do lazy properties behave in NHibernate? Is there some kind of type limitation I could be missing? Should I take a different approach altogether? Using HQL's new Class(column1, column2) approach works, but lazy properties sound like a simpler solution. It's perhaps worth mentioning I'm using NHibernate 2.1.2GA with Castle DynamicProxy. Thanks!

  • Application freezes after running the same HQL query several times

    - by Oktay
    I am calling the function below with the same batchNumber, and it works without a problem 15 times, fetching the records from the database each time. But on the 16th call the application freezes when the query.list() line executes. It just loses debug focus and doesn't throw any exception. This problem is probably not about the HQL itself, because I've seen it before and worked around it by using Criteria instead of HQL. In this case, though, when I use "group by" in Criteria (setProjection...), it doesn't return the results as Hibernate model objects, just as a plain list, and I need the results as models. Note: the "about 15 times" is just from testing. This is a web application, and a user may click the button that calls this function many times to see the records retrieved from the database.

        public List<SiteAddressModel> getSitesByBatch(String batchNumber) {
            try {
                List<SiteAddressModel> siteList;
                MigrationPlanDao migrationPlanDao = ServiceFactory.getO2SiteService().getMigrationPlanDao();
                Query query = this.getSession().createQuery(
                        "from " + persistentClass.getName()
                        + " where siteType =:type and siteName in "
                        + "(select distinct exchange from " + migrationPlanDao.getPersistentClass().getName()
                        + " where migrationBatchNumber =:batchNumber)");
                query.setString("batchNumber", batchNumber);
                query.setString("type", "LLU/ASN");
                System.out.println("before query");
                siteList = query.list();
                System.out.println("after query");
                return siteList;
            } catch (Exception e) {
                e.printStackTrace();
            }
            return null;
        }

    Hibernate version: 3.2.0.ga

  • Oracle Query Optimization: Why is My Second Query Faster?

    - by Patrick Cuff
    I was having some performance issues with an Oracle query, so I downloaded a trial of the Quest SQL Optimizer for Oracle, which made some changes that dramatically improved the query's performance. I'm not exactly sure why the recommended query had such an improvement; can anyone provide an explanation?

    Before (Plan Cost: 831, Elapsed Time: 00:00:21.40, Number of Records: 40,717):

        SELECT t1.version_id, t1.id, t2.field1, t3.person_id, t2.id
        FROM table1 t1, table2 t2, table3 t3
        WHERE t1.id = t2.id
          AND t1.version_id = t2.version_id
          AND t2.id = 123
          AND t1.version_id = t3.version_id
          AND t1.VERSION_NAME <> 'AA'
        ORDER BY t1.id

    After (Plan Cost: 686, Elapsed Time: 00:00:00.95, Number of Records: 40,717):

        SELECT /*+ USE_NL_WITH_INDEX(t1) */ t1.version_id, t1.id, t2.field1, t3.person_id, t2.id
        FROM table2 t2, table3 t3, table1 t1
        WHERE t1.id = t2.id + 0
          AND t1.version_id = t2.version_id + 0
          AND t2.id = 123
          AND t1.version_id = t3.version_id + 0
          AND t1.VERSION_NAME || '' <> 'AA'
          AND t3.version_id = t2.version_id + 0
        ORDER BY t1.id

    Questions: Why does re-arranging the order of the tables in the FROM clause help? Why does adding + 0 to the WHERE clause comparisons help? Why does || '' <> 'AA' in the VERSION_NAME comparison help -- is this a more efficient way of handling possible nulls on this column?
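
    For what it's worth, both + 0 and || '' are classic Oracle tricks for making a predicate non-indexable: applying any expression to a column prevents the optimizer from using a plain index on it, which steers it toward a different access path or join order. A minimal sketch with hypothetical table and column names (not the poster's schema):

        -- Assume indexes exist on emp(dept_id) and emp(name).
        SELECT id FROM emp WHERE dept_id + 0 = 10;      -- index on dept_id cannot be used
        SELECT id FROM emp WHERE name || '' <> 'AA';    -- index on name cannot be used

    Combined with the USE_NL_WITH_INDEX hint and the reordered FROM clause, the rewritten query is effectively hand-steering the optimizer into a nested-loop plan it would not otherwise pick.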

  • SQLite3 - select date range not working

    - by iFloh
    Yet another one that gives me grief. In a SQLite3 DB select I query for a date range specified by (NSDate *)fromDate and (NSDate *)toDate:

        const char *sql = "SELECT * FROM A, B WHERE A.key = B.key AND A.date BETWEEN ? AND ?";

    After opening the DB I run the query in Objective-C as follows:

        NSDateFormatter *tmpDatFmt = [[[NSDateFormatter alloc] init] autorelease];
        [tmpDatFmt setDateFormat:@"dd-MM-yyyy"];
        sqlite3_stmt *stmt;
        if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) == SQLITE_OK) {
            NSLog(@"From %s to %s;", [[tmpDatFmt stringFromDate:fromDate] UTF8String],
                                     [[tmpDatFmt stringFromDate:toDate] UTF8String]);
            sqlite3_bind_text(stmt, 1, [[tmpDatFmt stringFromDate:fromDate] UTF8String], -1, SQLITE_STATIC); // first '?'
            sqlite3_bind_text(stmt, 2, [[tmpDatFmt stringFromDate:toDate] UTF8String], -1, SQLITE_STATIC);   // second '?'
            while (sqlite3_step(stmt) == SQLITE_ROW) {
                NSLog(@"Success");
            }
        }

    In the database I have several records that match the date range:

        12-04-2010 = in seconds 1271059200
        13-04-2010 = in seconds 1271145600
        13-04-2010 = in seconds 1271152800
        14-04-2010 = in seconds 1271267100

    When I run it, the first NSLog shows "From 2010-04-01 to 2010-04-30". My problem is that the records are not selected (no "Success" shows in the log) and I don't understand why. Earlier I had miscalculated the dates two days later as:

        14-04-2010 = in seconds 1271232000
        15-04-2010 = in seconds 1271318400
        15-04-2010 = in seconds 1271325600
        16-04-2010 = in seconds 1271439936

    Those dates worked fine (4 x "Success" in the log). I am puzzled ...
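
    A likely culprit (an observation, not a confirmed diagnosis of this schema): the bound parameters are 'dd-MM-yyyy' text, while the stored values are epoch seconds, and SQLite compares text against numbers by its type rules, not chronologically. Comparing like with like avoids the problem; a sketch in plain SQL:

        -- Text in 'dd-MM-yyyy' order compares alphabetically, so ranges break:
        SELECT '13-04-2010' BETWEEN '01-04-2010' AND '30-04-2010';   -- happens to be 1 here,
        SELECT '13-05-2010' BETWEEN '01-04-2010' AND '30-04-2010';   -- but so is this: wrong month!

        -- Binding epoch seconds on both sides keeps the comparison numeric:
        SELECT * FROM A
        WHERE A.date BETWEEN CAST(strftime('%s', '2010-04-01') AS INTEGER)
                         AND CAST(strftime('%s', '2010-04-30') AS INTEGER);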

  • Perl: help dereferencing a reference to an array of hash references containing record set data

    - by user1724150
    I'm using an Amazon Perl module that returns, as $record_sets, a reference to an array of hash references containing record set data, and I'm having a hard time dereferencing it. I can print the data using Data::Dumper, but I need to be able to manipulate the data. Below is the documentation provided for the module. Thanks in advance:

        # list_resource_record_sets
        # Lists resource record sets for a hosted zone.

        # Called in scalar context:
        $record_sets = $r53->list_resource_record_sets(zone_id => '123ZONEID');

        # Returns: a reference to an array of hash references,
        # containing record set data. Example:
        $record_sets = [
            {
                name    => 'example.com.',
                type    => 'MX',
                ttl     => 86400,
                records => [ '10 mail.example.com' ],
            },
            {
                name    => 'example.com.',
                type    => 'NS',
                ttl     => 172800,
                records => [
                    'ns-001.awsdns-01.net.',
                    'ns-002.awsdns-02.net.',
                    'ns-003.awsdns-03.net.',
                    'ns-004.awsdns-04.net.',
                ],
            },
        ];

  • Optimize SQL query (Facebook-like application)

    - by fabriciols
    My application is similar to Facebook, and I'm trying to optimize the query that gets a user's records -- the records where he appears as either src or dst. The src is stored directly in usermuralentry; the dst list is in usermuralentry_user. So an entry can have one src and many dst. I have these tables:

        mysql> desc usermuralentry;
        +-----------------+------------------+------+-----+---------+----------------+
        | Field           | Type             | Null | Key | Default | Extra          |
        +-----------------+------------------+------+-----+---------+----------------+
        | id              | int(11)          | NO   | PRI | NULL    | auto_increment |
        | user_src_id     | int(11)          | NO   | MUL | NULL    |                |
        | private         | tinyint(1)       | NO   |     | NULL    |                |
        | content         | longtext         | NO   |     | NULL    |                |
        | date            | datetime         | NO   |     | NULL    |                |
        | last_update     | datetime         | NO   |     | NULL    |                |
        +-----------------+------------------+------+-----+---------+----------------+
        10 rows in set (0.10 sec)

        mysql> desc usermuralentry_user;
        +-------------------+---------+------+-----+---------+----------------+
        | Field             | Type    | Null | Key | Default | Extra          |
        +-------------------+---------+------+-----+---------+----------------+
        | id                | int(11) | NO   | PRI | NULL    | auto_increment |
        | usermuralentry_id | int(11) | NO   | MUL | NULL    |                |
        | userinfo_id       | int(11) | NO   | MUL | NULL    |                |
        +-------------------+---------+------+-----+---------+----------------+
        3 rows in set (0.00 sec)

    And the following query to retrieve information for two users:

        mysql> explain SELECT * FROM usermuralentry AS a, usermuralentry_user AS b
               WHERE a.user_src_id IN ( 1, 2 )
                  OR ( a.id = b.usermuralentry_id AND b.userinfo_id IN ( 1, 2 ) );

        id | select_type | table | type | possible_keys                                                                | key  | key_len | ref  | rows    | Extra
        1  | SIMPLE      | b     | ALL  | usermuralentry_id,usermuralentry_user_bcd7114e,usermuralentry_user_6b192ca7 | NULL | NULL    | NULL | 147188  |
        1  | SIMPLE      | a     | ALL  | PRIMARY                                                                      | NULL | NULL    | NULL | 1371289 | Range checked for each record (index map: 0x1)
        2 rows in set (0.00 sec)

    But it is taking A LOT of time... Any tips to optimize it? Could the table schema be better for my application?
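
    With an OR that spans two different tables, MySQL cannot use an index for either side, hence the two full scans in the plan. A common rewrite (a sketch against the tables above, untested on this data) is to split the OR into two index-friendly queries and UNION them:

        SELECT a.*
        FROM usermuralentry a
        WHERE a.user_src_id IN (1, 2)
        UNION
        SELECT a.*
        FROM usermuralentry a
        INNER JOIN usermuralentry_user b ON b.usermuralentry_id = a.id
        WHERE b.userinfo_id IN (1, 2);

    Each branch can then use the index on user_src_id or on the usermuralentry_user columns, and UNION removes duplicate entries that match both conditions.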

  • UPDATE Table SET Field

    - by davlyo
    This is my very first post! Bear with me. I have an UPDATE statement that I am trying to understand how SQL Server handles:

        UPDATE a
        SET a.vField3 = b.vField3
        FROM tableName a
        INNER JOIN tableName b
            ON a.vField1 = b.vField1
           AND b.nField2 = a.nField2 - 1

    This is my query in its simplest form. vField1 is a varchar, nField2 is an int (autonumber), and vField3 is a varchar. I have left the WHERE clause out, so understand there is logic that otherwise makes it a necessity.

    Say vField1 is a customer number, and that customer has 3 records; the values in nField2 are 1, 2, and 3 consecutively, and vField3 is a status. When the update comes to a.nField2 = 1 there is no a.nField2 - 1, so it continues. When it comes to a.nField2 = 2, b.nField2 = 1; when it comes to a.nField2 = 3, b.nField2 = 2. So when the update is on a.nField2 = 2, alias b reflects what is on the line prior (b.nField2 = 1), and it sets the varchar value of a.vField3 = b.vField3. When the update is on a.nField2 = 3, alias b reflects the line prior (b.nField2 = 2), and it (should) set a.vField3 = b.vField3.

    When the process is complete, the second of three records looks as expected: vField3 of the second record reflects vField3 from the first record. However, vField3 of the third record does not reflect vField3 from the second record. I think this demonstrates that SQL Server may be producing a transaction of some sort and then an update. Question: how can I get the DB to update after each step, so I can reference the values generated along the way?
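
    This is expected behavior rather than a transaction quirk: a single UPDATE statement reads every source value as of the start of the statement, so row 3 joins to row 2's old value, never the freshly written one. To cascade values row by row you have to iterate in order. A sketch, assuming the simplified schema described above:

        -- Hypothetical row-by-row variant: each pass sees the previous pass's writes.
        DECLARE @n INT, @max INT;
        SELECT @n = 2, @max = MAX(nField2) FROM tableName;
        WHILE @n <= @max
        BEGIN
            UPDATE a
            SET a.vField3 = b.vField3
            FROM tableName a
            INNER JOIN tableName b
                ON a.vField1 = b.vField1
               AND b.nField2 = a.nField2 - 1
            WHERE a.nField2 = @n;
            SET @n = @n + 1;
        END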

  • Integration testing - can it be done right?

    - by Max
    I used TDD as a development style on some projects in the past two years, but I always get stuck on the same point: how can I test the integration of the various parts of my program?

    What I am currently doing is writing a test case per class (this is my rule of thumb: a "unit" is a class, and each class has one or more test cases). I try to resolve dependencies by using mocks and stubs, and this works really well, as each class can be tested independently. After some coding, all important classes are tested. I then "wire" them together using an IoC container. And here I am stuck: how do I test whether the wiring was successful and the objects interact the way I want?

    An example: think of a web application. There is a controller class which takes an array of ids, uses a repository to fetch the records based on these ids, and then iterates over the records and writes them as a string to an outfile. To keep it simple, there would be three classes: Controller, Repository, OutfileWriter. Each of them is tested in isolation. What I would do in order to test the "real" application: make the http request (either manually or automated) with some ids from the database and then look in the filesystem to see if the file was written. Of course this process could be automated, but still: doesn't that duplicate the test logic? Is this what is called an "integration test"? In a book I recently read about unit testing, it seemed to me that integration testing was treated more as an anti-pattern?

  • How do you send email from IMAP account with PHP?

    - by arthurakay
    I'm having an issue sending email via PHP/IMAP, and I don't know if it's because (a) I don't correctly understand IMAP, or (b) there's an issue with my server.

    My application opens an IMAP connection to an email account to read messages in the inbox. It does this successfully. The problem is that I want to send messages from this account and have them display in the outbox/sent folder. As far as I can tell, the PHP imap_mail() function doesn't in any way hook into the IMAP stream I currently have open. My code executes without throwing an error, but the email never arrives at the recipient and never displays in my sent folder.

        private function createHeaders() {
            return "MIME-Version: 1.0" . "\r\n" .
                   "Content-type: text/html; charset=iso-8859-1" . "\r\n" .
                   "From: " . $this->accountEmail . "\r\n";
        }

        private function notifyAdminForCompleteSet($urlToCompleteSet) {
            $message = "
                <p>
                In order to process the latest records, you must visit
                <a href='$urlToCompleteSet'>the website</a> and manually export the set.
                </p>
            ";
            try {
                imap_mail(
                    $this->adminEmail,
                    "Alert: Manual Export of Records Required",
                    wordwrap($message, 70),
                    $this->createHeaders()
                );
                echo(" ---> Admin notified via email!\n");
            } catch (Exception $e) {
                throw new Exception("Error in notifyAdminForCompleteSet()");
            }
        }

    I'm guessing I need to copy the message into the IMAP account manually... or is there a different solution to this problem? Also, does it matter if the domain in the "from" address is different from that of the server on which this script is running? I can't explain why the message is never sent.

  • FreeText COUNT query on multiple tables is super slow

    - by Eric P
    I have two tables:

        Product: ID, Name, SKU
        Brand:   ID, Name

    The Product table has about 120K records; the Brand table has 30K records. I need to find the count of all the products whose name or brand name matches a specific keyword. I use the full-text CONTAINS predicate like this:

        SELECT count(*)
        FROM Product
        INNER JOIN Brand ON Product.BrandID = Brand.ID
        WHERE contains(Product.Name, 'pants') OR contains(Brand.Name, 'pants')

    This query takes about 17 secs. I rebuilt the full-text index before running it. If I only check Product.Name, the query takes less than 1 sec; same if I only check Brand.Name. The issue occurs when I use the OR condition. If I switch the query to use LIKE:

        SELECT count(*)
        FROM Product
        INNER JOIN Brand ON Product.BrandID = Brand.ID
        WHERE Product.Name LIKE '%pants%' OR Brand.Name LIKE '%pants%'

    it takes 1 sec. I read on MSDN (http://msdn.microsoft.com/en-us/library/ms187787.aspx) that to search on multiple tables, you should use a joined table in your FROM clause to search on a result set that is the product of two or more tables. So I added an inner-joined derived table to FROM:

        SELECT count(*)
        FROM (
            SELECT Product.Name ProductName, Product.SKU ProductSKU, Brand.Name as BrandName
            FROM Product
            INNER JOIN Brand ON Product.BrandID = Brand.ID
        ) as TempTable
        WHERE contains(TempTable.ProductName, 'pants') OR contains(TempTable.BrandName, 'pants')

    This results in the error:

        Cannot use a CONTAINS or FREETEXT predicate on column 'ProductName' because it is not full-text indexed.

    So the question is: why could the OR condition be causing such a slow query?
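
    When CONTAINS predicates on two different tables are ORed together, the engine cannot satisfy the filter from either full-text index alone, so it falls back to probing rows. A common workaround (a sketch using the tables above, untested) is to let each CONTAINS run against its own index and combine the key sets afterwards:

        SELECT COUNT(*)
        FROM (
            SELECT Product.ID
            FROM Product
            INNER JOIN Brand ON Product.BrandID = Brand.ID
            WHERE CONTAINS(Product.Name, 'pants')
            UNION
            SELECT Product.ID
            FROM Product
            INNER JOIN Brand ON Product.BrandID = Brand.ID
            WHERE CONTAINS(Brand.Name, 'pants')
        ) AS matches;

    Each branch is the sub-second single-table case, and the UNION de-duplicates products that match on both name and brand.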

  • How to check that dates don't overlap in a table using T-SQL

    - by Jon
    I have a table with start and finish datetimes, and I need to determine if any overlap; I'm not quite sure of the best way to go. Initially I was thinking of using a nested cursor, as shown below, which does work; however, I'm checking the same records against each other twice, and I'm sure it is not very efficient. E.g., this table would result in an overlap:

        id    start                    end
        -------------------------------------------------------
        1     2009-10-22 10:19:00.000  2009-10-22 11:40:00.000
        2     2009-10-22 10:31:00.000  2009-10-22 13:34:00.000
        3     2009-10-22 16:31:00.000  2009-10-22 17:34:00.000

        DECLARE @Start datetime, @End datetime,
                @OtherStart datetime, @OtherEnd datetime,
                @id int, @endCheck bit
        SET @endCheck = 0

        DECLARE Cur1 CURSOR FOR
            SELECT id, start, [end] FROM table1
        OPEN Cur1
        FETCH NEXT FROM Cur1 INTO @id, @Start, @End
        WHILE @@FETCH_STATUS = 0 AND @endCheck = 0
        BEGIN
            -- Get a cursor on all the other records
            DECLARE Cur2 CURSOR FOR
                SELECT start, [end] FROM table1 WHERE id != @id
            OPEN Cur2
            FETCH NEXT FROM Cur2 INTO @OtherStart, @OtherEnd
            WHILE @@FETCH_STATUS = 0 AND @endCheck = 0
            BEGIN
                IF ( @Start > @OtherStart AND @Start < @OtherEnd
                     OR @End > @OtherStart AND @End < @OtherEnd )
                   OR ( @OtherStart > @Start AND @OtherStart < @End
                     OR @OtherEnd > @Start AND @OtherEnd < @End )
                BEGIN
                    SET @endCheck = 1
                END
                FETCH NEXT FROM Cur2 INTO @OtherStart, @OtherEnd
            END
            CLOSE Cur2
            DEALLOCATE Cur2
            FETCH NEXT FROM Cur1 INTO @id, @Start, @End
        END
        CLOSE Cur1
        DEALLOCATE Cur1
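
    The cursors can be replaced with a single self-join: two ranges overlap exactly when each starts before the other ends. A sketch against the table above:

        -- Returns each overlapping pair once; a.id < b.id avoids the
        -- self-match and the mirrored duplicate.
        SELECT a.id, b.id
        FROM table1 a
        INNER JOIN table1 b ON a.id < b.id
        WHERE a.start < b.[end]
          AND b.start < a.[end];

    If only a yes/no answer is needed, wrap it in IF EXISTS (...). Like the strict inequalities in the cursor version, this treats ranges that merely touch (one ends exactly when the other starts) as non-overlapping.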

  • Write out to text file using T-SQL

    - by sasfrog
    I am creating a basic data transfer task using T-SQL, where I retrieve certain records from one database that are more recent than a given datetime value and load them into another database. This will happen periodically throughout the day. It's such a small task that SSIS seems like overkill; I want to just use a scheduled task which runs a .sql file.

    Where I need guidance is that I need to persist the datetime from the last run of this task, then use it to filter the records the next time the task runs. My initial thought is to just store the datetime in a text file and update (overwrite) it as part of the task each time it runs. I can read the file in without problems using T-SQL, but writing back out has got me stuck. I've seen plenty of examples which make use of a dynamically-built bcp command, which is then executed using xp_cmdshell. Trouble is, security on the server I'm deploying to precludes the use of xp_cmdshell. So, my question is: are there other ways to simply write a datetime value to a file using T-SQL, or should I be thinking about a different approach?

    EDIT: happy to be corrected about SSIS being "overkill"...
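
    One approach that avoids files (and xp_cmdshell) altogether is to keep the watermark in the database itself. A sketch, assuming a new single-row control table:

        -- Hypothetical control table holding the last successful run time
        CREATE TABLE dbo.TransferControl (LastRun DATETIME NOT NULL);
        INSERT INTO dbo.TransferControl (LastRun) VALUES ('2000-01-01');

        -- Inside the scheduled .sql file:
        DECLARE @since DATETIME, @now DATETIME;
        SELECT @since = LastRun FROM dbo.TransferControl;
        SET @now = GETDATE();

        -- ... copy rows modified after @since (and up to @now) to the target ...

        UPDATE dbo.TransferControl SET LastRun = @now;

    Capturing GETDATE() before the copy, and using that value for both the filter's upper bound and the update, avoids missing rows written while the transfer runs.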

  • MySQL query: need a suggestion or solution

    - by Xi Kam
    Can anyone help me? I have two tables and I need records from both.

    Query 1:

        SELECT SUM(rec_issued) AS issed, regen_id,
               YEAR(issue_date) AS iYear, MONTH(issue_date) AS iMonth
        FROM `view_rec_issued`
        WHERE `regen_id` = 2
        GROUP BY YEAR(issue_date) DESC, MONTH(issue_date) DESC
        ORDER BY issue_date ASC

        issed  regen_id  iYear  iMonth
        424    2         2011   3
        4340   2         2011   4
        4235   2         2011   5
        10570  2         2012   2
        4761   2         2012   3
        5000   2         2012   4
        3700   2         2012   5
        3414   2         2012   6
        3700   2         2012   7
        2992   2         2012   8
        995    2         2012   10

    Query 2:

        SELECT SUM(total_redem) AS redemed, regen_id,
               YEAR(redemption_date) AS rYear, MONTH(redemption_date) AS rMonth
        FROM `recredem_month_wise`
        WHERE `regen_id` = 2
        GROUP BY YEAR(redemption_date) DESC, MONTH(redemption_date) DESC
        ORDER BY redemption_date ASC

        redemed  regen_id  rYear  rMonth
        424      2         2011   3
        260      2         2011   4
        6523     2         2011   5
        1070     2         2011   6
        200      2         2011   10
        500      2         2011   11
        9750     2         2012   2
        5000     2         2012   3
        5500     2         2012   4
        3803     2         2012   5
        3700     2         2012   7
        3000     2         2012   8

    But I want the two result sets side by side, matched on year and month:

        issed  regen_id  iYear  iMonth  redemed  regen_id  rYear  rMonth
        424    2         2011   3       424      2         2011   3
        4340   2         2011   4       260      2         2011   4
        4235   2         2011   5       6523     2         2011   5
        NULL   NULL      NULL   NULL    1070     2         2011   6
        NULL   NULL      NULL   NULL    200      2         2011   10
        NULL   NULL      NULL   NULL    500      2         2011   11
        10570  2         2012   2       9750     2         2012   2
        4761   2         2012   3       5000     2         2012   3
        5000   2         2012   4       5500     2         2012   4
        3700   2         2012   5       3803     2         2012   5
        3414   2         2012   6       NULL     NULL      NULL   NULL
        3700   2         2012   7       3700     2         2012   7
        2992   2         2012   8       3000     2         2012   8
        995    2         2012   10      NULL     NULL      NULL   NULL

    In these tables regen_id is unique, and I need the data by year and month; if either table has no records for a particular month and year, it should return zero or NULL. In every record the year and month should be equal (iYear = rYear and iMonth = rMonth), so both pairs of fields can be merged -- there is no need to show year and month twice:

        iYear and rYear   -> year
        iMonth and rMonth -> month

    Thank you. Please look at this problem.
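
    MySQL has no FULL OUTER JOIN, so the usual recipe (a sketch built from the two queries above, untested) is to emulate one by UNIONing a LEFT JOIN with a RIGHT JOIN on the shared year/month key:

        SELECT i.issed,
               COALESCE(i.regen_id, r.regen_id) AS regen_id,
               r.redemed,
               COALESCE(i.iYear,  r.rYear)  AS year,
               COALESCE(i.iMonth, r.rMonth) AS month
        FROM ( SELECT SUM(rec_issued) AS issed, regen_id,
                      YEAR(issue_date) AS iYear, MONTH(issue_date) AS iMonth
               FROM view_rec_issued
               WHERE regen_id = 2
               GROUP BY YEAR(issue_date), MONTH(issue_date) ) AS i
        LEFT JOIN
             ( SELECT SUM(total_redem) AS redemed, regen_id,
                      YEAR(redemption_date) AS rYear, MONTH(redemption_date) AS rMonth
               FROM recredem_month_wise
               WHERE regen_id = 2
               GROUP BY YEAR(redemption_date), MONTH(redemption_date) ) AS r
          ON r.rYear = i.iYear AND r.rMonth = i.iMonth

        UNION

        SELECT i.issed,
               COALESCE(i.regen_id, r.regen_id),
               r.redemed,
               COALESCE(i.iYear,  r.rYear),
               COALESCE(i.iMonth, r.rMonth)
        FROM ( SELECT SUM(rec_issued) AS issed, regen_id,
                      YEAR(issue_date) AS iYear, MONTH(issue_date) AS iMonth
               FROM view_rec_issued
               WHERE regen_id = 2
               GROUP BY YEAR(issue_date), MONTH(issue_date) ) AS i
        RIGHT JOIN
             ( SELECT SUM(total_redem) AS redemed, regen_id,
                      YEAR(redemption_date) AS rYear, MONTH(redemption_date) AS rMonth
               FROM recredem_month_wise
               WHERE regen_id = 2
               GROUP BY YEAR(redemption_date), MONTH(redemption_date) ) AS r
          ON r.rYear = i.iYear AND r.rMonth = i.iMonth

        ORDER BY year, month;

    The UNION removes the duplicated matched rows, and COALESCE collapses the two year/month pairs into the single pair requested.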

  • EF 4 Query - Issue with Multiple Parameters

    - by Brian
    Hello,

    A trick for filtering by nullable parameters in SQL is something like the following:

        SELECT *
        FROM customers
        WHERE (@CustomerName IS NULL OR CustomerName = @CustomerName)

    This worked well for me in LINQ to SQL:

        string customerName = "XYZ";
        var results = (from c in ctx.Customers
                       where (customerName == null ||
                             (customerName != null && c.CustomerName == customerName))
                       select c);

    But that query, when run through ADO.NET EF, doesn't work for me; it should filter by customer name because the value exists, but it doesn't. Instead, it queries all the customer records. Now, this is a simplified example, because I have many fields using this kind of logic. But it never actually filters: it queries all the records and causes a timeout exception. The weird thing is that another query does something similar with no issues. Any ideas why? It seems like a bug to me -- or is there a workaround? I've since switched to extension methods, which works. Thanks.

  • How (and if) to write a single-consumer queue using the task parallel library?

    - by Eric
    I've heard a bunch of podcasts recently about the TPL in .NET 4.0. Most of them describe background activities like downloading images or doing a computation, using tasks so that the work doesn't interfere with a GUI thread.

    Most of the code I work on has more of a multiple-producer / single-consumer flavor, where work items from multiple sources must be queued and then processed in order. One example would be logging, where log lines from multiple threads are sequentialized into a single queue for eventual writing to a file or database. All the records from any single source must remain in order, and records from the same moment in time should be "close" to each other in the eventual output.

    So multiple threads or tasks or whatever are all invoking a queuer:

        lock (_queue) // or use a lock-free queue!
        {
            _queue.enqueue(some_work);
            _queueSemaphore.Release();
        }

    And a dedicated worker thread processes the queue:

        while (_queueSemaphore.WaitOne())
        {
            lock (_queue)
            {
                some_work = _queue.dequeue();
            }
            deal_with(some_work);
        }

    It's always seemed reasonable to dedicate a worker thread to the consumer side of these tasks. Should I write future programs using some construct from the TPL instead? Which one? Why?

  • Is it possible in SQLAlchemy to filter by a database function or stored procedure?

    - by Rico Suave
    We're using SQLAlchemy in a project with a legacy database. The database has functions/stored procedures. In the past we used raw SQL, and we could use these functions as filters in our queries. I would like to do the same for SQLAlchemy queries, if possible. I have read about @hybrid_property, but some of these functions need one or more parameters. For example, I have a User model that has a JOIN to a bunch of historical records. These historical records have a date and a debit and a credit field, so we can look up the balance of a user at a specific point in time by doing SUM(credit) - SUM(debit) up until the given date. We have a database function for that, called dbo.Balance(user_id, date_time). I can use it to check the balance of a user at a given point in time. I would like to use this as a criterion in a query, to select only users that have a negative balance at a specific date/time:

        selection = users.filter(coalesce(Users.status, 0) == 1,
                                 coalesce(Users.no_reminders, 0) == 0,
                                 dbo.pplBalance(Users.user_id,
                                                datetime.datetime.now()) < -0.01).all()

    This is of course a non-working example, just for you to get the gist of what I'd like to do. The solution looks to be hybrid properties, but as I mentioned above, those only work without parameters (as they are properties, not methods). Any suggestions on how to implement something like this (if it's even possible) are welcome. Thanks.

  • XSLT Type Checking

    - by mo
    Hi folks,

    Is it possible to check an element's complexType? I have this (simplified):

        complexType Record
        complexType Customer extension of Record
        complexType Person   extension of Record

        <xsl:template match="/">
            <records>
                <xsl:apply-templates />
            </records>
        </xsl:template>

        <xsl:template match="!!! TYPECHECK FOR RECORD !!!" name="Record">
            <record><xsl:value-of select="." /></record>
        </xsl:template>

    Is it possible to check an element's type, including inheritance? I don't know the elements' names, only that they are a subtype of Record.

    Schema 1:

        complexType name="Customer" extension base="Record"
        element name="customers"
        element name="customer" type="Customer"

    Schema 2:

        complexType name="Person" extension base="Record"
        element name="persons"
        element name="person" type="Person"

    Schema ?:

        complexType name="UnknownType" extension base="Record"
        element name="unknowns"
        element name="unknown" type="UnknownType"

    XML 1:

        <customers>
            <customer />
            <customer />
        </customers>

    XML 2:

        <persons>
            <person />
            <person />
        </persons>

    XML ?:

        <?s>
            <? />
            <? />
        </?s>

    The XML input is custom, so I have to match by the type (I think).

  • Need help INSERTing record(s) into a MySQL DB

    - by JM4
    I have an online form which collects member(s) information and stores it in a very long MySQL database row. We allow up to 16 members to enroll at a single time, and originally structured the DB to allow exactly that. For example: if 1 member enrolls, his personal information (first name, last name, address, phone, email) is stored in a single row. If 15 members enroll (all at once), their personal information is stored in the same single row. The row has columns for all 'possible' inputs.

    I am trying to consolidate this code and have every nth member that enrolls put into a new record in the database. I have seen suggestions for inserting multiple records, such as:

        INSERT INTO tablename
        VALUES ('$f1name', '$f1address', '$f1phone'),
               ('$f2name', '$f2address', '$f2phone'), ...

    The issue with this is twofold: (1) I do not know how many records are being enrolled from person to person, so the only way to build the statement above is to use a loop; (2) the information collected from the forms is NOT a single array, so I can't loop through one array and have it parse out. My information is collected as individual input fields, like: Member1FirstName, Member1LastName, Member1Phone, Member2FirstName, Member2LastName, Member2Phone... and so on.

    Is it possible to store the information in separate rows WITHOUT using a loop (and therefore without having to go back and completely restructure my form field names and such, which can't happen due to the way the validation rules are built)?
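
    For reference, a sketch of what the normalized target usually looks like (hypothetical table and column names): one row per member, with an enrollment id tying the group together, so the number of members per enrollment stops mattering:

        CREATE TABLE enrollment_member (
            id            INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            enrollment_id INT NOT NULL,       -- groups members enrolled together
            first_name    VARCHAR(50),
            last_name     VARCHAR(50),
            phone         VARCHAR(20)
        );

        -- One multi-row INSERT per submission, however many members it contains:
        INSERT INTO enrollment_member (enrollment_id, first_name, last_name, phone)
        VALUES (42, 'John', 'Smith', '555-0100'),
               (42, 'Jane', 'Smith', '555-0101'),
               (42, 'Sam',  'Smith', '555-0102');

    The VALUES list itself still has to be assembled from the numbered form fields somehow -- whether by a loop or by 16 hand-written field references -- since SQL alone cannot discover how many members were posted.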

  • '<=' operator is not working in SQL Server 2000

    - by Lalit
    Hello,

    The scenario: the database is in the maintenance phase. It was not developed by our developers; it is an existing database developed by the 'xyz' company in SQL Server 2000, and it is a real-time database where I am working now. I wanted to write a stored procedure which retrieves the records from date1 to date2. So the query is:

        SELECT * FROM MyTableName
        WHERE colDate >= '3-May-2010' AND colDate <= '5-Oct-2010'
          AND colName = 'xyzName'

    To my understanding, I should get the data including both the upper and lower bound dates, but somehow I am getting records from '3-May-2010' (which is fine) through '10-Oct-2010'.

    As I observed in the table design, the developer used varchar to store colDate. I know this was the wrong choice on their part. So in my stored procedure I also used varchar parameters @FromDate and @ToDate as inputs, which gives the result I explained. I tried making the parameter type datetime, but it shows an error while saving/altering the stored procedure: "@FromDate has invalid datatype", and the same for @ToDate.

    The situation is that I cannot change the table design at all. What do I have to do here? I know we could use user-defined table types in SQL Server 2008, but this is SQL Server 2000, which does not support them. Please guide me through this scenario.

    EDIT: I am trying to write the SP like this:

        CREATE PROCEDURE USP_Data
            (@Location varchar(100),
             @FromDate DATETIME,
             @ToDate   DATETIME)
        AS
        SELECT *
        FROM dbo.TableName
        WHERE CAST(Dt AS DATETIME) >= @FromDate
          AND CAST(Dt AS DATETIME) <= @ToDate
          AND Location = @Location
        GO

    but I'm getting the error "Arithmetic overflow error converting expression to data type datetime" in SQL Server 2000. What could that be? Am I wrong somewhere? Also, the "(202 row(s) affected)" count changes every time in a circular manner: first it says (122 row(s) affected), run again and it says (80 row(s) affected), then (202 row(s) affected), then (122 row(s) affected) again. I cannot understand what is going on.
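
    Two observations worth testing (hedged, since only the poster can see the data). First, comparing varchar dates with >= / <= is an alphabetical comparison, which is exactly why '10-Oct-2010' sorts inside a range ending at '5-Oct-2010'. Second, the arithmetic overflow on CAST usually means at least one row of Dt holds a value that is not a valid date at all. A sketch that guards the conversion with ISDATE (available in SQL Server 2000):

        SELECT *
        FROM dbo.TableName
        WHERE Location = @Location
          AND CASE WHEN ISDATE(Dt) = 1
                   THEN CONVERT(DATETIME, Dt)
              END BETWEEN @FromDate AND @ToDate

    Wrapping the conversion in CASE matters: SQL Server does not guarantee the order in which WHERE predicates are evaluated, so a bare "ISDATE(Dt) = 1 AND CAST(Dt AS DATETIME) ..." can still fail on bad rows. The rows that fail ISDATE are worth inspecting; they are likely also behind the shifting row counts.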

  • Query to bring a count from a comma-separated value

    - by Mugil
    I have two tables, one for storing products and the other for storing orders:

        CREATE TABLE ProductsList(
            ProductId INT NOT NULL PRIMARY KEY,
            ProductName VARCHAR(50)
        );

        INSERT INTO ProductsList(ProductId, ProductName)
        VALUES (1,'Product A'), (2,'Product B'), (3,'Product C'), (4,'Product D'),
               (5,'Product E'), (6,'Product F'), (7,'Product G'), (8,'Product H'),
               (9,'Product I'), (10,'Product J');

        CREATE TABLE OrderList(
            OrderId INT NOT NULL PRIMARY KEY AUTO_INCREMENT,
            EmailId VARCHAR(50),
            CSVProductIds VARCHAR(50)
        );

        INSERT INTO OrderList(EmailId, CSVProductIds)
        VALUES ('[email protected]', '2,4,1,5,7'),
               ('[email protected]', '5,7,4'),
               ('[email protected]', '2'),
               ('[email protected]', '8,9'),
               ('[email protected]', '4,5,9'),
               ('[email protected]', '1,2,3'),
               ('[email protected]', '9,10'),
               ('[email protected]', '1,5');

    The OrderList table stores the product ids as a comma-separated value for every customer who places an order; I have more than 40K records like this in my DB table. Now I am assigned the task of creating a report which displays each item and the number of people who ordered it, as shown below:

        ItemName   NoOfOrders
        Product A  4
        Product B  3
        Product C  1
        Product D  3
        Product E  4
        Product F  0
        Product G  2
        Product H  1
        Product I  2
        Product J  1

    I used a query like the following in my PHP to bring back the orders one by one and store them in an array:

        SELECT COUNT(PL.EmailId)
        FROM OrderList PL
        WHERE CSVProductIds LIKE '2'
           OR CSVProductIds LIKE '%,2,%'
           OR CSVProductIds LIKE '%,2'
           OR CSVProductIds LIKE '2,%';

    1. Is it possible to get the same output using a single query?
    2. Does using LIKE in a MySQL query slow down the DB when the table has many records, i.e. 40K rows?
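
    MySQL's FIND_IN_SET does the "is this id in that CSV list" test in one call, which collapses the four LIKE patterns and lets a single grouped query produce the whole report (a sketch against the tables above):

        SELECT p.ProductName AS ItemName,
               COUNT(o.OrderId) AS NoOfOrders
        FROM ProductsList p
        LEFT JOIN OrderList o
               ON FIND_IN_SET(p.ProductId, o.CSVProductIds) > 0
        GROUP BY p.ProductId, p.ProductName
        ORDER BY p.ProductId;

    The LEFT JOIN keeps zero-order products like Product F in the output. As for speed: like the LIKE version, FIND_IN_SET cannot use an index on CSVProductIds, so every order row is scanned per product. On 40K rows that is workable, but the durable fix is a junction table (OrderId, ProductId) with an index.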

  • PHP: Trying to come up with a "prev" and "next" link

    - by fwaokda
    I'm displaying 10 records per page. The variables I currently have are:

        $total = total number of records
        $page  = the current page being displayed

    I placed this at the top of my page:

        if ($_GET['page'] == '') { $page = 1; } // if no page is specified, set it to 1
        else { $page = $_GET['page']; }         // if a page is specified, use it

    Here are my two links:

        if ($page != 1) {
            echo '<div style="float:left" ><a href="index.php?page=' . ($page - 1) . '" rev="prev" >Prev</a></div>';
        }
        if (!(($total / (10 * $page)) < $page)) {
            echo '<div style="float:right" ><a href="index.php?page=' . ($page + 1) . '" rev="next" >Next</a></div>';
        }

    Now I guess (unless I'm not thinking of something) that I can display the "Prev" link every time except when the page is 1. But how can I make it so the "Next" link doesn't show on the last page?

  • MySql: is it reasonable to use a 'view', or would I be better off denormalizing my DB?

    - by Budda
    There is a 'team_sector' table with the following fields: id, team_id, sect_id, size, level. It contains a few records for each 'team' entity (referenced by the 'team_id' field). Each record represents a sector of the team's stadium (8 sectors in total).

    Now it is necessary to implement a few searches: by overall stadium size (SUM(size)), and by best quality (SUM(level)/COUNT(*)). I could create a query something like this:

        SELECT TS.team_id,
               SUM(TS.size) AS OverallSize,
               SUM(TS.Level)/COUNT(TS.Id) AS QualityLevel
        FROM team_sector TS
        GROUP BY team_id
        ORDER BY OverallSize DESC / ORDER BY QualityLevel DESC

    But my concern is that the calculation for each team will be done every time the query is performed. It is not too big an overhead (at least for now), but I would like to avoid performance issues later.

    I see two options here. The first is to create 2 additional fields in the 'team' table (for example) and store the OverallSize and QualityLevel values there. If information in the 'sector' table changes, update that table too (probably best done with triggers, as the sector table doesn't change too often). The second option is to create a view that provides the required data. The second option seems much easier to me, but I don't have a lot of experience/knowledge of working with views.

    Q1: What is the best option from your perspective here, and why? Perhaps you could suggest other options?
    Q2: Can I create a view in such a way that it does the calculations rarely (at most once per day)? If yes -- how?
    Q3: Is it reasonable to use triggers for such a purpose (option 1)?

    P.S. MySql 5.1 is used; the overall number of teams is around 1-2 thousand, and the overall number of records in the sector table is 6-8 thousand. I understand those numbers are pretty small, but I would like to implement the best practice here.
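
    For scale, the view option is a one-liner (a sketch based on the query above), but note that MySQL 5.1 views are not materialized: the aggregation reruns on every query against the view, so it buys convenience rather than speed, and it cannot be told to refresh once per day:

        CREATE VIEW team_stadium_stats AS
        SELECT team_id,
               SUM(size)             AS OverallSize,
               SUM(level) / COUNT(*) AS QualityLevel
        FROM team_sector
        GROUP BY team_id;

    Caching the two aggregates in the team table (kept current by triggers or a nightly job) is what actually avoids recomputation; at 6-8 thousand sector rows, though, the plain GROUP BY is likely to stay cheap for a long time.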

  • Categorize data without consolidating?

    - by sqlnoob
    I have a table with about 1000 records and 2000 columns. What I want to do is categorize each row, such that all records with equal column values for every column except 'ID' are given the same category ID. My final answer would look like:

        ID  A  B  C  .....  CategoryID
        1   1  0  3         1
        2   2  1  3         2
        3   1  0  3         1
        4   2  1  3         2
        5   4  5  6         3
        6   4  5  6         3

    All columns (besides ID) are equal for IDs 1 and 3, so they get the same category ID, and so on.

    My thought was to write a SQL query that groups by every single column besides 'ID', assigns a number to each group, and then joins back to my original table. My current input is a text file, and I have SAS, MS Access, and Excel to work with (I could use proc sql from within SAS). Before I go this route and construct the whole query, I was wondering if there is a better way to do this. It will take some work just to write the query, and I'm not even sure it is practical to join on 2000 columns (I've never tried), so I thought I'd ask for ideas before I got too far down the wrong path.

    EDIT: I just realized my title doesn't really make sense. What I was originally thinking was: "Is there a way I can group by and categorize at the same time, without actually consolidating into groups?"
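
    The group-then-join idea in miniature, with three columns standing in for the 2000 (a sketch; it assumes an engine with ROW_NUMBER(), which SAS proc sql lacks -- in SAS the same effect comes from sorting by all columns and incrementing a counter on FIRST.group):

        SELECT t.ID, g.CategoryID
        FROM mytable t
        JOIN ( SELECT A, B, C,
                      ROW_NUMBER() OVER (ORDER BY A, B, C) AS CategoryID
               FROM ( SELECT DISTINCT A, B, C FROM mytable ) d
             ) g
          ON g.A = t.A AND g.B = t.B AND g.C = t.C;

    With 2000 columns, a practical shortcut is to concatenate (or hash) all non-ID columns into a single key column first, then group and join on that one key instead of 2000 equality predicates.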

  • Pagination in Java

    - by user569125
    I wrote some paging logic. My requirement: display 100 elements per page; if I click Next it should display the next 100 records, and Previous the previous 100.

    Initial variable values:

        showFrom: 1
        showTo:   100
        max elements: depends on the size of the data
        pageSize: 100

    Code:

        if (paging.getAction().equalsIgnoreCase("Next")) {
            paging.setTotalRec(availableList.size());
            showFrom = showTo + 1;
            showTo = showFrom + 100 - 1;
            if (showTo >= paging.getTotalRec())
                showTo = paging.getTotalRec();
            paging.setShowFrom(showFrom);
            paging.setShowTo(showTo);
        } else if (paging.getAction().equalsIgnoreCase("Previous")) {
            showTo = showFrom - 1;
            showFrom = showFrom - 100;
            paging.setShowTo(showTo);
            paging.setShowFrom(showFrom);
            paging.setTotalRec(availableList.size());
        }

    I can also remove and add elements in the existing data. The code above works fine if I add or remove a few elements, but if I remove or add 100 elements at a time, the counts are not displayed properly.

  • ASP.NET and HTML5 Local Storage

    - by Stephen Walther
    My favorite feature of HTML5, hands-down, is HTML5 local storage (aka DOM storage). By taking advantage of HTML5 local storage, you can dramatically improve the performance of your data-driven ASP.NET applications by caching data in the browser persistently.

    Think of HTML5 local storage like browser cookies, but much better. Like cookies, local storage is persistent. When you add something to browser local storage, it remains there when the user returns to the website (possibly days or months later). Importantly, unlike the cookie storage limitation of 4KB, you can store up to 10 megabytes in HTML5 local storage.

    Because HTML5 local storage works with the latest versions of all modern browsers (IE, Firefox, Chrome, Safari), you can start taking advantage of this HTML5 feature in your applications right now.

    Why use HTML5 Local Storage?

    I use HTML5 local storage in the JavaScript Reference application: http://Superexpert.com/JavaScriptReference. The JavaScript Reference application is an HTML5 app that provides an interactive reference for all of the syntax elements of JavaScript (you can read more about the application and download the source code for the application here).

    When you open the application for the first time, all of the entries are transferred from the server to the browser (all 300+ entries) and stored in local storage. When you open the application in the future, only changes are transferred from the server to the browser.

    The benefit of this approach is that the application performs extremely fast. When you click the details link to view details on a particular entry, the entry details appear instantly because all of the entries are stored on the client machine. When you perform key-up searches by typing in the filter textbox, matching entries are displayed very quickly because the entries are being filtered on the local machine.

    This approach can have a dramatic effect on the performance of any interactive data-driven web application. Interacting with data on the client is almost always faster than interacting with the same data on the server.

    Retrieving Data from the Server

    In the JavaScript Reference application, I use Microsoft WCF Data Services to expose data to the browser. WCF Data Services generates a REST interface for your data automatically. Here are the steps:

    1. Create your database tables in Microsoft SQL Server. For example, I created a database named ReferenceDB and a database table named Entities.
    2. Use the Entity Framework to generate your data model. For example, I used the Entity Framework to generate a class named ReferenceDBEntities and a class named Entities.
    3. Expose your data through WCF Data Services. I added a WCF Data Service to my project and modified the data service class to look like this:

        using System.Data.Services;
        using System.Data.Services.Common;
        using System.Web;
        using JavaScriptReference.Models;

        namespace JavaScriptReference.Services {

            [System.ServiceModel.ServiceBehavior(IncludeExceptionDetailInFaults = true)]
            public class EntryService : DataService<ReferenceDBEntities> {

                // This method is called only once to initialize service-wide policies.
                public static void InitializeService(DataServiceConfiguration config) {
                    config.UseVerboseErrors = true;
                    config.SetEntitySetAccessRule("*", EntitySetRights.All);
                    config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
                }

                // Define a change interceptor for the Entries entity set.
                [ChangeInterceptor("Entries")]
                public void OnChangeEntries(Entry entry, UpdateOperations operations) {
                    if (!HttpContext.Current.Request.IsAuthenticated) {
                        throw new DataServiceException("Cannot update reference unless authenticated.");
                    }
                }
            }
        }

    The WCF data service is named EntryService. Notice that it derives from DataService<ReferenceDBEntities>; because of that, the data service exposes the contents of the ReferenceDB database. In the code above, I defined a ChangeInterceptor to prevent un-authenticated users from making changes to the database. Anyone can retrieve data through the service, but only authenticated users are allowed to make changes.

    After you expose data through a WCF Data Service, you can use jQuery to retrieve the data by performing an Ajax call. For example, I am using an Ajax call that looks something like this to retrieve the JavaScript entries from the EntryService.svc data service:

        $.ajax({
            dataType: "json",
            url: "/Services/EntryService.svc/Entries",
            success: function (result) {
                var data = callback(result["d"]);
            }
        });

    Notice that you must unwrap the data using result["d"]. After you unwrap the data, you have a JavaScript array of the entries.

    I'm transferring all 300+ entries from the server to the client when the application is opened for the first time. In other words, I transfer the entire database from the server to the client, once and only once, when the application is opened for the first time. The data is transferred using JSON. Here is a fragment:

        { "d" : [
            {
                "__metadata": {
                    "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(1)",
                    "type": "ReferenceDBModel.Entry"
                },
                "Id": 1,
                "Name": "Global",
                "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5",
                "Syntax": "object",
                "ShortDescription": "Contains global variables and functions",
                "FullDescription": "<p>\nThe Global object is determined by the host environment. In web browsers, the Global object is the same as the windows object.\n</p>\n<p>\nYou can use the keyword <code>this</code> to refer to the Global object when in the global context (outside of any function).\n</p>\n<p>\nThe Global object holds all global variables and functions. For example, the following code demonstrates that the global <code>movieTitle</code> variable refers to the same thing as <code>window.movieTitle</code> and <code>this.movieTitle</code>.\n</p>\n<pre>\nvar movieTitle = \"Star Wars\";\nconsole.log(movieTitle === this.movieTitle); // true\nconsole.log(movieTitle === window.movieTitle); // true\n</pre>\n",
                "LastUpdated": "634298578273756641",
                "IsDeleted": false,
                "OwnerId": null
            },
            {
                "__metadata": {
                    "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(2)",
                    "type": "ReferenceDBModel.Entry"
                },
                "Id": 2,
                "Name": "eval(string)",
                "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5",
                "Syntax": "function",
                "ShortDescription": "Evaluates and executes JavaScript code dynamically",
                "FullDescription": "<p>\nThe following code evaluates and executes the string \"3+5\" at runtime.\n</p>\n<pre>\nvar result = eval(\"3+5\");\nconsole.log(result); // returns 8\n</pre>\n<p>\nYou can rewrite the code above like this:\n</p>\n<pre>\nvar result;\neval(\"result = 3+5\");\nconsole.log(result);\n</pre>",
                "LastUpdated": "634298580913817644",
                "IsDeleted": false,
                "OwnerId": 1
            }
            …
        ]}

    I worried about the amount of time it would take to transfer the records. According to Google Chrome, it takes about 5 seconds to retrieve all 300+ records on a broadband connection over the Internet. 5 seconds is a small price to pay to avoid performing any server fetches of the data in the future. As for the estimated times using different types of connections (measured with Fiddler): using a modem, it takes 33 seconds to download the database. 33 seconds is a significant chunk of time, so I would not use the approach of transferring the entire database up front if you expect a significant portion of your website audience to connect to your website with a modem.

    Adding Data to HTML5 Local Storage

    After the JavaScript entries are retrieved from the server, they are stored in HTML5 local storage. Here's the reference documentation for HTML5 storage for Internet Explorer: http://msdn.microsoft.com/en-us/library/cc197062(VS.85).aspx

    You access local storage through the window.localStorage object in JavaScript. This object contains key/value pairs. For example, you can use the following JavaScript code to add a new item to local storage:

        <script type="text/javascript">
            window.localStorage.setItem("message", "Hello World!");
        </script>

    You can use the Google Chrome Storage tab in the Developer Tools (hit CTRL-SHIFT I in Chrome) to view items added to local storage. After you add an item to local storage, you can read it at any time in the future by using the window.localStorage.getItem() method:

        <script type="text/javascript">
            var message = window.localStorage.getItem("message");
        </script>

    You can only add strings to local storage, not JavaScript objects such as arrays. Therefore, before adding a JavaScript object to local storage, you need to convert it into a JSON string. In the JavaScript Reference application, I use a wrapper around local storage that looks something like this:

        function Storage() {
            this.get = function (name) {
                return JSON.parse(window.localStorage.getItem(name));
            };
            this.set = function (name, value) {
                window.localStorage.setItem(name, JSON.stringify(value));
            };
            this.clear = function () {
                window.localStorage.clear();
            };
        }

    If you use the wrapper above, then you can add arbitrary JavaScript objects to local storage like this:

        var store = new Storage();

        // Add array to storage
        var products = [
            { name: "Fish", price: 2.33 },
            { name: "Bacon", price: 1.33 }
        ];
        store.set("products", products);

        // Retrieve items from storage
        var products = store.get("products");

    Modern browsers support the JSON object natively. If you need the script above to work with older browsers, then you should download the JSON2.js library from https://github.com/douglascrockford/JSON-js. The JSON2 library will use the native JSON object if a browser already supports JSON.

    Merging Server Changes with Browser Local Storage

    When you first open the JavaScript Reference application, the entire database of JavaScript entries is transferred from the server to the browser. Two items are added to local storage: entries and entriesLastUpdated. The first item contains the entire entries database (a big JSON string of entries). The second item, a timestamp, represents the version of the entries.

    Whenever you open the JavaScript Reference in the future, the entriesLastUpdated timestamp is passed to the server. Only records that have been deleted, updated, or added since entriesLastUpdated are transferred to the browser. The OData query to get the latest updates looks like this:

        http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated%20gt%20634301199890494792L)

    If you remove the URL encoding, the query looks like this:

        http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated gt 634301199890494792L)

    This query returns only those entries where the value of LastUpdated > 634301199890494792 (the version timestamp).

    The changes -- new JavaScript entries, deleted entries, and updated entries -- are merged with the existing entries in local storage. The JavaScript code for performing the merge is contained in the EntriesHelper.js file. The merge() method looks like this:

        merge: function (oldEntries, newEntries) {
            // concat (this performs the add)
            oldEntries = oldEntries || [];
            var mergedEntries = oldEntries.concat(newEntries);

            // sort
            this.sortByIdThenLastUpdated(mergedEntries);

            // prune duplicates (this performs the update)
            mergedEntries = this.pruneDuplicates(mergedEntries);

            // delete
            mergedEntries = this.removeIsDeleted(mergedEntries);

            // sort
            this.sortByName(mergedEntries);

            return mergedEntries;
        },

    The contents of local storage are then updated with the merged entries. I spent several hours writing the merge() method (much longer than I expected), and I found two resources to be extremely useful. First, I wrote extensive unit tests for the merge() method, using server-side JavaScript; I describe this approach to writing unit tests in this blog entry, and the unit tests are included in the JavaScript Reference source code. Second, I found the following blog entry to be super useful (thanks Nick!): http://nicksnettravels.builttoroam.com/post/2010/08/03/OData-Synchronization-with-WCF-Data-Services.aspx

    One big challenge that I encountered involved timestamps. I originally tried to store an actual UTC time as the value of the entriesLastUpdated item, but I quickly discovered that trying to work with dates in JSON turned out to be a big can of worms that I did not want to open. Next, I tried to use a SQL timestamp column; however, I learned that OData cannot handle the timestamp data type when doing a filter query. Therefore, I ended up using a bigint column in SQL and manually creating the value when a record is updated. I overrode the SaveChanges() method to look something like this:

        public override int SaveChanges(SaveOptions options) {
            var changes = this.ObjectStateManager.GetObjectStateEntries(
                EntityState.Modified | EntityState.Added | EntityState.Deleted);
            foreach (var change in changes) {
                var entity = change.Entity as IEntityTracking;
                if (entity != null) {
                    entity.LastUpdated = DateTime.Now.Ticks;
                }
            }
            return base.SaveChanges(options);
        }

    Notice that I assign DateTime.Now.Ticks to the entity.LastUpdated property whenever an entry is modified, added, or deleted.

    Summary

    After building the JavaScript Reference application, I am convinced that HTML5 local storage can have a dramatic impact on the performance of any data-driven web application. If you are building a web application that involves extensive interaction with data, then I recommend that you take advantage of this new feature included in the HTML5 standard.
