Search Results

Search found 30436 results on 1218 pages for 'sql prompt 6 1'.


  • Speeding up inner-joins and subqueries while restricting row size and table membership

    - by hiffy
    I'm developing an RSS feed reader that uses a Bayesian filter to filter out boring blog posts. The Stream table is meant to act as a FIFO buffer from which the webapp will consume 'entries'. I use it to store the temporary relationship between entries, users and Bayesian filter classifications. After a user marks an entry as read, it is added to the metadata table (so that a user isn't presented with material they have already read) and deleted from the stream table. Every three minutes, a background process repopulates the Stream table with new entries (i.e. whenever the daemon adds new entries after checking the RSS feeds for updates). Problem: the query I came up with is hella slow. More importantly, the Stream table only needs to hold one hundred unread entries at a time; that would reduce duplication, make processing faster and give me some flexibility in how I display the entries. The query (takes about 9 seconds on 3600 items with no indexes):

        insert into stream(entry_id, user_id)
        select entries.id, subscriptions_users.user_id
        from entries
        inner join subscriptions_users
                on subscriptions_users.subscription_id = entries.subscription_id
        where subscriptions_users.user_id = 1
          and entries.id not in (select entry_id from metadata where metadata.user_id = 1)
          and entries.id not in (select entry_id from stream where user_id = 1);

    The query explained: insert into stream all of the entries from a user's subscription list (subscriptions_users) that the user has not read (i.e. that do not exist in metadata) and that do not already exist in the stream. Attempted solution: adding LIMIT 100 to the end speeds up the query considerably, but repeated executions keep adding a different set of 100 entries that do not already exist in the table (with each successive query taking longer and longer). This is close, but not quite what I wanted to do. Does anyone have any advice (NoSQL?) or know a more efficient way of composing the query?
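
    One direction worth sketching: both NOT IN subqueries can be rewritten as LEFT JOIN anti-joins, and the LIMIT then bounds the whole statement. This is a minimal sketch assuming MySQL and the table/column names from the question, not a drop-in fix:

        insert into stream (entry_id, user_id)
        select e.id, 1                         -- user_id hard-coded as in the question
        from entries e
        inner join subscriptions_users su
                on su.subscription_id = e.subscription_id
               and su.user_id = 1
        left join metadata m                   -- anti-join: already-read entries
               on m.entry_id = e.id and m.user_id = 1
        left join stream s                     -- anti-join: entries already streamed
               on s.entry_id = e.id and s.user_id = 1
        where m.entry_id is null
          and s.entry_id is null
        limit 100;

    With indexes on metadata(user_id, entry_id) and stream(user_id, entry_id), each anti-join becomes an index lookup instead of a repeated subquery scan.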

    Read the article

  • JPA native query join returns object but dereference throws class cast exception

    - by masato-san
    I'm using a JPA native query to join tables, and the query result is stored in List<Object[]>.

        public String getJoinJpqlNativeQuery() {
            final String SQL_JOIN = "SELECT v1.bitbit, v1.numnum, v1.someTime, t1.username, "
                    + "t1.anotherNum FROM MasatosanTest t1 JOIN MasatoView v1 ON v1.username = t1.username;";
            System.out.println("get join jpql native query is being called ============================");
            EntityManager em = null;
            List<Object[]> out = null;
            try {
                em = EmProvider.getDefaultManager();
                Query query = em.createNativeQuery(SQL_JOIN);
                out = query.getResultList();
                System.out.println("return object ==========>" + out);
                System.out.println(out.get(0));
                String one = out.get(0).toString(); // LINE 77, where the ClassCastException is thrown
                System.out.println(one);
            } catch (Exception e) {
            } finally {
                if (em != null) {
                    em.close();
                }
            }
            return null;
        }

    The problem is that System.out.println("return object ==========>" + out); outputs:

        return object ==========> [[true, 0, 2010-12-21 15:32:53.0, masatosan, 0.020], [false, 0, 2010-12-21 15:32:53.0, koga, 0.213]]

    and System.out.println(out.get(0)) outputs:

        [true, 0, 2010-12-21 15:32:53.0, masatosan, 0.020]

    So I assumed that I can assign the return value of out.get(0).toString() to a String:

        String one = out.get(0).toString();

    But I get a weird ClassCastException:

        java.lang.ClassCastException: java.util.Vector cannot be cast to [Ljava.lang.Object;
            at local.test.jaxrs.MasatosanTestResource.getJoinJpqlNativeQuery(MasatosanTestResource.java:77)

    So what's really going on? Even Object[] foo = out.get(0); throws a ClassCastException :(

    Read the article

  • How do you compare dates in a LINQ query?

    - by Gina
    I am trying to compare a date from an ASP calendar control to a date in the table. Here's what I have; it doesn't like the ==?

        var query = from details in db.TD_TravelCalendar_Details
                    where details.StartDate == calStartDate.SelectedDate
                       && details.EndDate == calEndDate.SelectedDate
                    select details;

    Read the article

  • Updating multiple tables with LinqToSql in one unit of work

    - by zsharp
    Table 1: int ID-a (PK)
    Table 2: int ID-a (PK), int ID-b (PK)
    Table 3: int ID-b (PK), string C

    I have the data to insert into Table 1, but I do not have ID-a, which is autogenerated. I have many strings C to insert into Table 3. I am trying to insert a row into Table 1, get its ID-a to insert into Table 2 along with the ID-b that is auto-generated in Table 3 when I submit each string C, all in one submission to the db. Right now I am calling dc.SubmitChanges twice in the same call. Is it efficient to have to submit changes twice on the same DataContext, or can this be combined further?

    Read the article

  • How to monitor traffic between IIS and MSSQL

    - by kockiren
    Hello all, I'm trying to check how much traffic flows between an MSSQL server and an IIS server in different locations. There is one IPCop in every location. I downloaded the tcpdump file from one firewall and searched for DST=ipmssql and SRC=ipIIS, but I did not find the IP of the database server at all, even though there is traffic between the two. Any suggestions why I did not find the IP address of the MSSQL server? Is this a configuration failure in IPCop, or is the traffic between IIS and MSSQL really that strange :-) Regards, Rene

    Read the article

  • Subset sum problem

    - by MadBoy
    I'm having a problem with counting which is a continuation of this question. I am not really a math person, so it's really hard for me to figure out the subset sum problem which was suggested as the resolution. I have four ArrayLists in which I hold data: alId, alTransaction, alNumber, alPrice.

        Type | Transaction  | Number       | Price
        -----+--------------+--------------+-------------
        8    | Buy          | 95.00000000  | 305.00000000
        8    | Buy          | 126.00000000 | 305.00000000
        8    | Buy          | 93.00000000  | 306.00000000
        8    | Transfer out | 221.00000000 | 305.00000000
        8    | Transfer in  | 221.00000000 | 305.00000000
        8    | Sell         | 93.00000000  | 360.00000000
        8    | Sell         | 95.00000000  | 360.00000000
        8    | Sell         | 126.00000000 | 360.00000000
        8    | Buy          | 276.00000000 | 380.00000000

    In the end I'm trying to get what's left for the customer, and what's left I put into three ArrayLists: alNew (corresponds to alNumber), alNewPoIle (corresponds to alPrice), and alNewCo (corresponds to alId).

        ArrayList alNew = new ArrayList();
        ArrayList alNewPoIle = new ArrayList();
        ArrayList alNewCo = new ArrayList();
        for (int i = 0; i < alTransaction.Count; i++)
        {
            string tempAkcjeCzynnosc = (string) alTransaction[i];
            string tempAkcjeInId = (string) alID[i];
            decimal varAkcjeCena = (decimal) alPrice[i];
            decimal varAkcjeIlosc = (decimal) alNumber[i];
            int index;
            switch (tempAkcjeCzynnosc)
            {
                case "Transfer out":
                case "Sell":
                    index = alNew.IndexOf(varAkcjeIlosc);
                    if (index != -1)
                    {
                        alNew.RemoveAt(index);
                        alNewPoIle.RemoveAt(index);
                        alNewCo.RemoveAt(index);
                    }
                    else
                    {
                        // no single matching entry: try to consume the group of entries
                        // with the same id and price whose amounts sum to the required amount
                        ArrayList alTemp = new ArrayList();
                        decimal varAkcjeSuma = 0;
                        for (int j = 0; j < alNew.Count; j++)
                        {
                            string akcjeInId = (string) alNewCo[j];
                            decimal akcjeCena = (decimal) alNewPoIle[j];
                            decimal akcjeIlosc = (decimal) alNew[j];
                            if (tempAkcjeInId == akcjeInId && akcjeCena == varAkcjeCena)
                            {
                                alTemp.Add(j);
                                varAkcjeSuma = varAkcjeSuma + akcjeIlosc;
                            }
                        }
                        if (varAkcjeSuma == varAkcjeIlosc)
                        {
                            for (int j = alTemp.Count - 1; j >= 0; j--)
                            {
                                int tempIndex = (int) alTemp[j];
                                alNew.RemoveAt(tempIndex);
                                alNewPoIle.RemoveAt(tempIndex);
                                alNewCo.RemoveAt(tempIndex);
                            }
                        }
                    }
                    break;
                case "Transfer in":
                case "Buy":
                    alNew.Add(varAkcjeIlosc);
                    alNewPoIle.Add(varAkcjeCena);
                    alNewCo.Add(tempAkcjeInId);
                    break;
            }
        }

    Basically I'm adding and removing things from the ArrayLists depending on transaction type, transaction id and numbers. I'm adding numbers to the ArrayLists like 156, 340 (when it is Transfer in or Buy) and then I remove them the same way, 156, 340 (when it's Transfer out or Sell). My solution works for that without a problem. The problem I have is that for some old data, employees entered sums like 1500 instead of 500+400+100+500. How would I change the code so that when there's a Sell/Transfer out or Buy/Transfer in with no exact match in the ArrayList, it tries to combine multiple items from that ArrayList into the required aggregate? The else branch above (reached when index == -1) is my attempt to resolve that by simply summing every entry with the same id and price, but it only works if that particular condition is met and fails for the rest.

    Read the article

  • NULL-keys for key/value table

    - by user72185
    (Using Oracle.) I have a table with key/value pairs, like this:

        create table MESSAGE_INDEX (
            KEY        VARCHAR2(256)  not null,
            VALUE      VARCHAR2(4000) not null,
            MESSAGE_ID NUMBER         not null
        )

    I now want to find all the messages where key = 'someKey' and the value is 'val1', 'val2' or 'val3', OR the value is null, in which case there is no entry in the table at all. This is to save space; there would be a large number of keys with null values if I stored them all. I think this works:

        SELECT message_id
        FROM message_index idx
        WHERE ((key = 'someKey' AND value IN ('val1', 'val2', 'val3'))
           OR NOT EXISTS (SELECT 1 FROM message_index
                          WHERE key = 'someKey' AND idx.message_id = message_id))

    But it is extremely slow: it takes 8 seconds with 700K records in message_index, and there will be many more records and more search criteria outside of my test environment. The primary key is (key, value, message_id):

        add constraint PK_KEY_VALUE primary key (KEY, VALUE, MESSAGE_ID)

    And I added another index on message_id, to speed up searching for missing keys:

        create index IDX_MESSAGE_ID on MESSAGE_INDEX (MESSAGE_ID)

    I will be doing several of these key/value lookups in every search, not just one as shown above. So far I am doing them nested, where the output ids of one level are the input to the next, e.g.:

        SELECT message_id from message_index
        WHERE (key/value compare)
        AND message_id IN ( SELECT ... and so on )

    What can I do to speed this up?
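
    One shape that avoids the per-row NOT EXISTS probe: aggregate per message over just the rows for the searched key. A rough sketch, assuming a hypothetical MESSAGES table holding one row per message_id (anything with that property works as the driving table):

        SELECT m.message_id
        FROM messages m
        LEFT JOIN message_index idx
               ON idx.message_id = m.message_id
              AND idx.key = 'someKey'
        GROUP BY m.message_id
        HAVING COUNT(CASE WHEN idx.value IN ('val1', 'val2', 'val3') THEN 1 END) > 0
            OR COUNT(idx.key) = 0   -- no entry for the key at all, i.e. the NULL case

    Chaining several key/value criteria then becomes one such join per criterion instead of nested IN subqueries.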

    Read the article

  • username and password check linq query in c#

    - by b0x0rz
    This LINQ query:

        var users = from u in context.Users
                    where u.UserEMailAdresses.Any(e1 => e1.EMailAddress == userEMail)
                       && u.UserPasswords.Any(e2 => e2.PasswordSaltedHash == passwordSaltedHash)
                    select u;
        return users.Count();

    returns 1 even when there is nothing in the password table. How come? What I am trying to do is get the values of email and passwordHash from two separate tables (UserEMailAddresses and UserPasswords) linked via foreign keys to a third table (Users). It should be simple: checking whether the email and password from a form match the database. But it is not working for me; I get 1 (for the count) even when there are NO entries in the UserPasswords table. Is the LINQ query above completely wrong, or...?

    Read the article

  • What is the most efficient/elegant way to parse a flat table into a tree?

    - by Tomalak
    Assume you have a flat table that stores an ordered tree hierarchy:

        Id  Name          ParentId  Order
        --  ------------  --------  -----
        1   'Node 1'         0        10
        2   'Node 1.1'       1        10
        3   'Node 2'         0        20
        4   'Node 1.1.1'     2        10
        5   'Node 2.1'       3        10
        6   'Node 1.2'       1        20

    What minimalistic approach would you use to output that to HTML (or text, for that matter) as a correctly ordered, correctly indented tree? Assume further you only have basic data structures (arrays and hashmaps), no fancy objects with parent/children references, no ORM, no framework, just your two hands. The table is represented as a result set, which can be accessed randomly. Pseudo code or plain English is okay, this is purely a conceptual question. Bonus question: Is there a fundamentally better way to store a tree structure like this in an RDBMS? EDITS AND ADDITIONS To answer one commenter's (Mark Bessey's) question: A root node is not necessary, because it is never going to be displayed anyway; ParentId = 0 is the convention to express "these are top level". The Order column defines how nodes with the same parent are going to be sorted. The "result set" I spoke of can be pictured as an array of hashmaps (to stay in that terminology); for my example it was meant to be already there. Some answers go the extra mile and construct it first, but that's okay. The tree can be arbitrarily deep. Each node can have N children. I did not exactly have a "millions of entries" tree in mind, though. Don't mistake my choice of node naming ('Node 1.1.1') for something to rely on. The nodes could equally well be called 'Frank' or 'Bob', no naming structure is implied; this was merely to make it readable. I have posted my own solution so you guys can pull it to pieces.
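
    On engines with recursive CTEs (MySQL 8+, PostgreSQL, SQL Server, Oracle 11gR2+), the whole ordering-and-indenting pass can even be pushed into the database. A sketch, assuming MySQL 8+ syntax and a table named nodes with the columns above:

        WITH RECURSIVE tree (Id, Name, ParentId, Depth, SortPath) AS (
            SELECT Id, Name, ParentId, 0,
                   CAST(LPAD(`Order`, 10, '0') AS CHAR(1000))
            FROM   nodes
            WHERE  ParentId = 0
            UNION ALL
            SELECT n.Id, n.Name, n.ParentId, t.Depth + 1,
                   CONCAT(t.SortPath, '/', LPAD(n.`Order`, 10, '0'))
            FROM   nodes n
            JOIN   tree t ON n.ParentId = t.Id
        )
        SELECT CONCAT(REPEAT('  ', Depth), Name) AS rendered  -- indent by depth
        FROM   tree
        ORDER BY SortPath;                                    -- sibling order via zero-padded path

    The question predates wide CTE support, which is why the single-pass array/hashmap approach it asks for remains the portable answer.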

    Read the article

  • Swap unique indexed column values in database.

    - by Ramesh Soni
    I have a database table where one of the fields (not the primary key) has a unique index on it. Now I want to swap the values in this column for two rows. How can this be done? Two hacks I know of are: Delete both rows and re-insert them. Update the rows with some other value, swap, and then update to the actual values. But I don't want to go with these, as they do not seem to be appropriate solutions to the problem. Could anyone help me out?
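
    A hedged sketch for Oracle (the question doesn't name an engine, so this is an assumption): non-deferrable constraints are checked at the end of the statement, so a single UPDATE that swaps both values never exposes an intermediate duplicate. Table and column names here are hypothetical:

        UPDATE my_table
        SET    uniq_col = CASE uniq_col WHEN 'A' THEN 'B'
                                        WHEN 'B' THEN 'A'
                          END
        WHERE  uniq_col IN ('A', 'B');

    On engines that check uniqueness row by row, the same idea needs a deferrable constraint (SET CONSTRAINTS ... DEFERRED) or falls back to the temporary-value hack the question mentions.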

    Read the article

  • Using OUTPUT/INTO within instead of insert trigger invalidates 'inserted' table

    - by Dan
    I have a problem using a table with an INSTEAD OF INSERT trigger. The table I created contains an identity column. I need to use an INSTEAD OF INSERT trigger on this table, and I also need to see the value of the newly inserted identity from within my trigger, which requires the use of OUTPUT/INTO within the trigger. The problem is that clients that perform INSERTs then cannot see the inserted values. For example, I create a simple table:

        CREATE TABLE [MyTable](
            [MyID] [int] IDENTITY(1,1) NOT NULL,
            [MyBit] [bit] NOT NULL,
            CONSTRAINT [PK_MyTable_MyID] PRIMARY KEY NONCLUSTERED ([MyID] ASC)
        )

    Next I create a simple INSTEAD OF trigger:

        create trigger [trMyTableInsert] on [MyTable]
        instead of insert
        as
        BEGIN
            DECLARE @InsertedRows table(MyID int, MyBit bit);

            INSERT INTO [MyTable] ([MyBit])
            OUTPUT inserted.MyID, inserted.MyBit INTO @InsertedRows
            SELECT inserted.MyBit FROM inserted;

            -- LOGIC NOT SHOWN HERE THAT USES @InsertedRows
        END;

    Lastly, I attempt to perform an insert and retrieve the inserted values:

        DECLARE @tbl TABLE (myID INT)

        insert into MyTable (MyBit)
        OUTPUT inserted.MyID INTO @tbl
        VALUES (1)

        SELECT * from @tbl

    The issue is that all I ever get back is zero. I can see the row was correctly inserted into the table. I also know that if I remove the OUTPUT/INTO from within the trigger, this problem goes away. Any thoughts as to what I'm doing wrong? Or is what I want to do not feasible? Thanks.
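
    A possible workaround, offered as a sketch rather than a definitive fix: the trigger body runs in its own scope, so SCOPE_IDENTITY() in the caller stays NULL, but @@IDENTITY is session-wide and does see the INSERT performed inside the trigger:

        INSERT INTO MyTable (MyBit) VALUES (1);

        SELECT @@IDENTITY AS MyID;  -- last identity generated on this session,
                                    -- here set by the trigger's inner INSERT

    This is fragile if other triggers or inserts can fire on the same session between the two statements; an alternative is to have the trigger itself SELECT from @InsertedRows, so the client reads the new rows back as a result set.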

    Read the article

  • SQL query to get latest record for all distinct items in a table

    - by David Buckley
    I have a table of all sales defined like:

        mysql> describe saledata;
        +----------+---------------------+------+-----+---------+-------+
        | Field    | Type                | Null | Key | Default | Extra |
        +----------+---------------------+------+-----+---------+-------+
        | SaleDate | datetime            | NO   |     | NULL    |       |
        | StoreID  | bigint(20) unsigned | NO   |     | NULL    |       |
        | Quantity | int(10) unsigned    | NO   |     | NULL    |       |
        | Price    | decimal(19,4)       | NO   |     | NULL    |       |
        | ItemID   | bigint(20) unsigned | NO   |     | NULL    |       |
        +----------+---------------------+------+-----+---------+-------+

    I need to get the last sale price for all items (as the price may change). I know I can run a query like:

        SELECT price FROM saledata
        WHERE itemID = 1234 AND storeID = 111
        ORDER BY saledate DESC LIMIT 1

    However, I want to be able to get the last sale price for all items (the ItemIDs are stored in a separate item table) and insert them into a separate table. How can I get this data? I've tried queries like this:

        SELECT storeID, itemID, price FROM saledata
        WHERE itemID IN (SELECT itemID from itemmap)
        ORDER BY saledate DESC LIMIT 1

    and then wrapped that in an insert, but it's not getting the proper data. Is there one query I can run to get the last price for each item and insert that into a table defined like:

        mysql> describe lastsale;
        +---------+---------------------+------+-----+---------+-------+
        | Field   | Type                | Null | Key | Default | Extra |
        +---------+---------------------+------+-----+---------+-------+
        | StoreID | bigint(20) unsigned | NO   |     | NULL    |       |
        | Price   | decimal(19,4)       | NO   |     | NULL    |       |
        | ItemID  | bigint(20) unsigned | NO   |     | NULL    |       |
        +---------+---------------------+------+-----+---------+-------+
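
    A sketch of the classic greatest-n-per-group shape for this, assuming MySQL 5.x (no window functions): find the latest sale date per (store, item) in a derived table, then join back to pick up the price at that date:

        INSERT INTO lastsale (StoreID, Price, ItemID)
        SELECT s.StoreID, s.Price, s.ItemID
        FROM   saledata s
        JOIN  (SELECT StoreID, ItemID, MAX(SaleDate) AS LastDate
               FROM   saledata
               GROUP BY StoreID, ItemID) latest
           ON latest.StoreID  = s.StoreID
          AND latest.ItemID   = s.ItemID
          AND latest.LastDate = s.SaleDate;

    One caveat: if two sales of the same item at the same store share the exact same SaleDate, both rows come back, so lastsale may need deduplication or a tiebreaker.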

    Read the article

  • LINQ for Java tool

    - by Milhous
    Would a LINQ for Java be a useful tool? I have been working on a tool that will allow a Java object to map to a row in a database. Would this be useful for Java programmers? What features would be useful?

    Read the article

  • Best way to generate unique and consecutive numbers in Oracle

    - by RRUZ
    I need to generate unique and consecutive numbers (for use on an invoice), in a fast and reliable way. I currently use an Oracle sequence, but in some cases the generated numbers are not consecutive because of exceptions that may occur. I have thought of a couple of solutions to manage this problem, but neither of them convinces me. What solution do you recommend?

    Use a SELECT MAX():

        SELECT MAX(NVL(doc_num, 0)) + 1 FROM invoices

    Use a table to store the last number generated for an invoice:

        UPDATE docs_numbers SET last_invoice = last_invoice + 1

    Another solution?
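
    A sketch of how the counter-table option is usually made safe, assuming Oracle and a single-row docs_numbers table as in the question: SELECT ... FOR UPDATE serializes concurrent invoice creation so numbers cannot be skipped or duplicated, at the cost of throughput, since every transaction queues on the lock until the previous one commits or rolls back:

        DECLARE
            v_next docs_numbers.last_invoice%TYPE;
        BEGIN
            SELECT last_invoice + 1
            INTO   v_next
            FROM   docs_numbers
            FOR UPDATE;    -- lock the counter row for this transaction

            UPDATE docs_numbers SET last_invoice = v_next;

            INSERT INTO invoices (doc_num /* , other columns */)
            VALUES (v_next /* , other values */);

            COMMIT;        -- releases the lock; a rollback also releases it
                           -- and leaves the counter untouched, so no gap
        END;
        /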

    Read the article

  • Freebase Query with "JOINS"

    - by codemonkey
    ... Yeah, yeah, I know traditional joins don't exist. I actually like the Freebase query methodology in theory; I'm just having a little trouble getting it to actually work for me : ) Does anyone have a dumb-simple example of getting Freebase data via MQL that pulls from two different "tables"? In particular, I'm trying to get automotive data, for example pulling fields from both /automotive/model_year and /automotive/trim_year. I've read the documentation (for hours, actually). There's a distinct possibility that I'm looking right at such an example somewhere and just not seeing it, because my OLTP brain just doesn't comprehend what it's seeing. Note that the two "types" I'm working with above are siblings, not parent/child. Does Freebase even allow joining data between sibling nodes? I see examples of queries pulling from parent/child, but not from siblings, I don't think (or I've overlooked them).

    Read the article

  • Date format error in VB.NET?

    - by ahmed
    I get this error when I run the application: "Incorrect syntax near 12". On debugging I found that this error is caused by the # along with the date.

        Dim backdate As DateTime
        backdate = DateTime.Now.AddDays(-1)

    On binding the data to the grid, to filter out records older than backdate, the "Incorrect syntax near 12" error is raised by this:

        myqry = " select SRNO,SUBJECT,ID where datesend =" backdate

    Now I am trying to extract only the date. Or shall I divide the date into day, month and year with DATEPART and take them into a variable, or convert the date, or what should I do? Please help ???

    Read the article

  • date comparisons in Rails

    - by aressidi
    Hi there, I'm having trouble with a date comparison in a named scope. I'm trying to determine if an event is current based on its start and end date. Here's the named scope I'm using, which kind of works, though not for events that have the same start and end date:

        named_scope :date_current, :conditions => ["Date(start_date) <= ? AND Date(end_date) >= ?", Time.now, Time.now]

    This returns the following record, though it should return two records, not one...

        >> Event.date_current
        => [#<Event id: 2161, start_date: "2010-02-15 00:00:00", end_date: "2010-02-21 00:00:00", ...]

    What it's not returning is this as well:

        >> Event.find(:last)
        => #<Event id: 2671, start_date: "2010-02-16 00:00:00", end_date: "2010-02-16 00:00:00", ...>

    The server time seems to be in UTC and I presume that the entries are being stored in the DB in UTC. Any ideas as to what I'm doing wrong or what to try? Thanks!

    Read the article

  • Return order of MySQL SHOW COLUMNS

    - by rich
    Hey guys. This is a simple one, but one I can't seem to find any information on, so here goes. I need to find the columns in a specific table, which is no problem:

        SHOW COLUMNS FROM tablename LIKE '%ColumnPrefix%';

    But I need to know what order they will be returned in, preferably by choosing to order the results ascending alphabetically. I have had no luck with using ORDER BY Field. Any ideas? Cheers!
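
    A sketch of the usual way around this, assuming MySQL 5+: SHOW COLUMNS takes no ORDER BY, but the same data lives in information_schema.columns, which can be filtered and sorted like any table:

        SELECT column_name
        FROM   information_schema.columns
        WHERE  table_schema = DATABASE()        -- current database
          AND  table_name   = 'tablename'
          AND  column_name LIKE '%ColumnPrefix%'
        ORDER BY column_name ASC;

    Without an explicit ORDER BY, SHOW COLUMNS returns columns in their table-definition order, which information_schema exposes as ordinal_position.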

    Read the article

  • Compare and find differences in two tables in Oracle

    - by Ruslan
    Hi! I have two tables:

        account: ID, ACC, AE_CCY, DRCR_IND, AMOUNT, MODULE
        flex:    ID, ACC, AE_CCY, DRCR_IND, AMOUNT, MODULE

    I want to show the differences, comparing only by AE_CCY, DRCR_IND, AMOUNT, MODULE, and by the first 4 characters of ACC. Example:

        account:
        ID ACC       AE_CCY DRCR_IND AMOUNT MODULE
        -- --------- ------ -------- ------ ------
        1  734647674 USD    D        100    OP

        flex:
        ID ACC       AE_CCY DRCR_IND AMOUNT MODULE
        -- --------- ------ -------- ------ ------
        1  734647654 USD    D        100    OP
        2  734665474 USD    D        100    OP
        9  734611111 USD    D        100    OP

    IDs 2 and 9 should be shown as differences. If I use a FULL JOIN I get no differences, because substr(account.ACC,1,4) = substr(flex.ACC,1,4) and all the other compared columns are equal; and MINUS doesn't work because the IDs differ. Thanks.
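
    What the example really asks for is a multiset difference: account has one row with this combination, flex has three, so two should surface. A sketch of the count-comparison idiom, assuming Oracle:

        SELECT SUBSTR(acc, 1, 4) AS acc4, ae_ccy, drcr_ind, amount, module,
               SUM(src) AS surplus   -- > 0: extra rows in flex, < 0: extra rows in account
        FROM (
            SELECT acc, ae_ccy, drcr_ind, amount, module,  1 AS src FROM flex
            UNION ALL
            SELECT acc, ae_ccy, drcr_ind, amount, module, -1 AS src FROM account
        )
        GROUP BY SUBSTR(acc, 1, 4), ae_ccy, drcr_ind, amount, module
        HAVING SUM(src) <> 0;

    For the sample data this returns one combination with a surplus of 2, matching the two flex rows (IDs 2 and 9) that have no account counterpart.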

    Read the article

  • GROUP BY as a way to pick the first row from a group of similar rows: is this correct, and is there any better way?

    - by FipS
    I have a table which stores test results like this:

        user | score | time
        -----+-------+------
        aaa  | 90%   | 10:30
        bbb  | 50%   |  9:15 ***
        aaa  | 85%   | 10:15
        aaa  | 90%   | 11:00 ***
        ...

    What I need is to get the top 10 users:

        user | score | time
        -----+-------+------
        aaa  | 90%   | 11:00
        bbb  | 50%   |  9:15
        ...

    I've come up with the following SELECT:

        SELECT *
        FROM (SELECT user, score, time
              FROM tests_score
              ORDER BY user, score DESC, time DESC) t1
        GROUP BY user
        ORDER BY score DESC, time
        LIMIT 10

    It works fine, but I'm not quite sure if my use of ORDER BY is the right way to pick the first row of each group of sorted records. Is there any better practice to achieve the same result? (I use MySQL 5.)
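
    For what it's worth, selecting non-grouped columns like this relies on undocumented MySQL behavior: the server is free to pick any row per group, sorted derived table or not. A sketch of a deterministic alternative for MySQL 5, using an anti-join that keeps a row only when no other row for the same user beats it on (score, time):

        SELECT t.`user`, t.score, t.`time`
        FROM   tests_score t
        LEFT JOIN tests_score better
               ON better.`user` = t.`user`
              AND (better.score > t.score
                   OR (better.score = t.score AND better.`time` > t.`time`))
        WHERE  better.`user` IS NULL        -- no better attempt exists
        ORDER BY t.score DESC, t.`time`
        LIMIT 10;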

    Read the article

  • query optimization

    - by Gaurav
    I have a query of the form:

        SELECT uid1, uid2 FROM friend
        WHERE uid1 IN (SELECT uid2 FROM friend WHERE uid1='.$user_id.')
          AND uid2 IN (SELECT uid2 FROM friend WHERE uid1='.$user_id.')

    The problem is that the nested query SELECT uid2 FROM friend WHERE uid1='.$user_id.' returns a very large number of ids (approx. 5000). The structure of the friend table is uid1 (int), uid2 (int); it is used to determine whether two users are linked together as friends. Any workaround? Can I write the query in a different way, or is there some other way to solve this issue? I'm sure I am not the first person to face such a problem. Any help would be greatly appreciated.
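
    A sketch of one rewrite, assuming MySQL and the two-column friend table above: join back to the user's own friend list twice instead of running the two IN subqueries, so each friendship row is checked by index lookup rather than against a 5000-id list (a composite index on (uid1, uid2) makes all three accesses index-only):

        SELECT f.uid1, f.uid2
        FROM   friend f
        JOIN   friend a ON a.uid1 = ? AND a.uid2 = f.uid1   -- f.uid1 is a friend of the user
        JOIN   friend b ON b.uid1 = ? AND b.uid2 = f.uid2;  -- f.uid2 is a friend of the user

    Here ? stands for the $user_id that the PHP snippet interpolates into the string.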

    Read the article

  • How to reuse results with a schema for end of day stock-data

    - by Vishalrix
    I am creating a database schema to be used for technical analysis, like top volume gainers, top price gainers, etc. I have checked answers to questions here, like the design question. Having taken the hint from boe100's answer there, I have a schema modeled pretty much on it, thusly:

        Symbol - char 6          // primary
        Date   - date            // primary
        Open   - decimal 18, 4
        High   - decimal 18, 4
        Low    - decimal 18, 4
        Close  - decimal 18, 4
        Volume - int

    Right now this table containing end-of-day (EOD) data will have about 3 million rows for 3 years. Later, when I get/need more data, it could be 20 million rows. The front end will be making requests like "give me the top price gainers on date X over Y days". That request is one of the simpler ones, and as such is not too costly, time-wise, I assume. But a request like "give me the top volume gainers for the last 10 days, with the previous 100 days acting as baseline" could prove 10-100 times costlier. The result of such a request would be a float which signifies how many times the volume has grown, etc. One option I have is adding a column for each such result, and if the user asks for volume gain over 10 days versus 20 days, that would require another table. The total number of such tables could easily cross 100, especially if I start storing other results as tables, like MACD-10 and MACD-100, each of which would require its own column. Is this a feasible solution? Another option is keeping the results in cached HTML files and presenting them to the user. I don't have much experience in web development, so to me it looks messy, but I could be wrong (of course!). Is that an option too? Let me add that I am/will be using mod_perl to present the response to the user, with much of the work on the MySQL database being done using Perl. I would like to have a response time of 1-2 seconds.
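
    For scale, here is roughly what the 10-day-versus-100-day volume request looks like when computed on the fly; a sketch assuming MySQL 5, a hypothetical eod table matching the schema above, and a chosen as-of date in @d:

        SELECT recent.symbol,
               recent.avg_vol / baseline.avg_vol AS volume_gain
        FROM  (SELECT symbol, AVG(volume) AS avg_vol
               FROM   eod
               WHERE  `date` BETWEEN DATE_SUB(@d, INTERVAL 10 DAY) AND @d
               GROUP BY symbol) AS recent
        JOIN  (SELECT symbol, AVG(volume) AS avg_vol
               FROM   eod
               WHERE  `date` BETWEEN DATE_SUB(@d, INTERVAL 110 DAY)
                                 AND DATE_SUB(@d, INTERVAL 11 DAY)
               GROUP BY symbol) AS baseline
           ON baseline.symbol = recent.symbol
        ORDER BY volume_gain DESC
        LIMIT 10;

    With a composite index on (symbol, `date`) this is two range scans plus a join, which is the cost that any caching scheme (extra columns, summary tables, or cached HTML) would be amortizing.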

    Read the article

  • Why am I getting "Enter Parameter Value" when running my MS Access query?

    - by DanM
    In my query, I use the IIF function to assign either "Before" or "After" to a field named BeforeOrAfter using AS. When I run this query, however, the "Enter Parameter Value" dialog appears, requesting a value for BeforeOrAfter. If I remove BeforeOrAfter DESC from the ORDER BY clause, I don't get the dialog. Here is the offending query:

        SELECT d.Scenario, e.Event,
               IIF(d.LogTime < e.Time, 'Before', 'After') AS BeforeOrAfter,
               d.HeartRate
        FROM Data d INNER JOIN Events e ON d.Scenario = e.Scenario
        WHERE e.Include = Yes
        ORDER BY d.Scenario, e.Id, BeforeOrAfter DESC

    Question: Why is my AS BeforeOrAfter not being recognized by the ORDER BY clause? Why does it ask me to enter a parameter value for "BeforeOrAfter" when I run this query? Note: I tried using brackets, single quotes, double quotes, etc., but none of that made any difference.
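
    A hedged sketch of the usual workaround: when Access cannot resolve an identifier in ORDER BY, it treats it as a parameter and prompts for it; repeating the expression itself instead of the alias sidesteps the lookup entirely:

        SELECT d.Scenario, e.Event,
               IIF(d.LogTime < e.Time, 'Before', 'After') AS BeforeOrAfter,
               d.HeartRate
        FROM Data AS d INNER JOIN Events AS e ON d.Scenario = e.Scenario
        WHERE e.Include = Yes
        ORDER BY d.Scenario, e.Id, IIF(d.LogTime < e.Time, 'Before', 'After') DESC;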

    Read the article
