Search Results

Search found 32970 results on 1319 pages for 'zend db select'.

Page 455/1319 | < Previous Page | 451 452 453 454 455 456 457 458 459 460 461 462  | Next Page >

  • Capistrano deploy:migrate Could not find rake-0.9.2.2 in any of the sources

    - by Kyle
    My Capistrano deploy:migrate task is set to run a simple rake db:migrate command, as follows: env PATH=/home/user/.gems/bin sh -c 'cd /home/user/app/releases/20121003140503 && rake RAILS_ENV=production db:migrate' When I run this task manually during an ssh session it completes successfully. However, when I run it from my local development box, I receive the following error: ** [out :: app] Could not find rake-0.9.2.2 in any of the sources I am able to locate my rake gem by typing which rake via ssh (/home/user/.gems/bin/rake), and rake --version gives me "rake, version 0.9.2.2", so I don't understand why this command fails via Capistrano.

    Read the article

  • Not Inserted into database in PHP script

    - by Aruna
    Hi, I have a form for uploading an Excel file, like this: <form enctype="multipart/form-data" action="http://localhost/joomla/Joomla_1.5.7/index.php?option=com_jumi&fileid=7" method="POST"> Please choose a file: <input name="file" type="file" id="file" /><br /> <input type="submit" value="Upload" /> </form> In the file http://localhost/joomla/Joomla_1.5.7/index.php?option=com_jumi&fileid=7 I retrieve the uploaded file contents with <?php echo "Name". $_FILES['file']['name'] . "<br />"; echo "Type: " . $_FILES["file"]["type"] . "<br />"; echo "Size: " . ($_FILES["file"]["size"] / 1024) . " Kb<br />"; echo "Stored in: " . $_FILES["file"]["tmp_name"] . "<br />"; $string = file_get_contents( $_FILES["file"]["tmp_name"] ); foreach ( explode( "\n", $string ) as $userString ) { $userString = trim( $userString ); echo $userString; $array = explode( ';', $userString ); $userdata = array ( 'cf_id'=>trim( $array[0] ), 'text_0'=>trim( $array[1] ), 'text_1'=>trim( $array[2] ), 'text_2'=>trim( $array[3] ), 'text_3'=>trim($array[4]), 'text_6'=>trim($array[5]), 'text_7'=>trim($array[6]), 'text_9'=>trim($array[7]), 'text_12'=>trim($array[8])); global $db; $db =& JFactory::getDBO(); $query = 'INSERT INTO #__chronoforms_UploadAuthor VALUES ("'.$userdata['cf_id'].'","'.$userdata['text_0'].'","'.$userdata['text_1'].'","'.$userdata['text_2'].'","'.$userdata['text_3'].'","'.$userdata['text_6'].'","'.$userdata['text_7'].'","'.$userdata['text_9'].'")'; $db->Execute($query); echo "<br>"; } I am trying to insert the contents into the table #__chronoforms_UploadAuthor in the Joomla database. It doesn't show any error, but nothing is inserted into the database. Please help me: why are the contents not inserted, and how can I get them inserted?
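    One detail worth noticing in the snippet above is that $userdata builds nine fields (cf_id through text_12) while the INSERT lists only eight values, and none of the values are escaped, so a stray quote or a short/blank line in the file can silently break the statement. Below is a minimal sketch of a safer import loop, written in Python with sqlite3 as a stand-in for the Joomla/MySQL table; the file name, table name and column list are hypothetical and simply mirror the question.

```python
import csv
import sqlite3  # stand-in for the Joomla/MySQL table; names below mirror the question

# Hypothetical local table with the same nine columns the $userdata array builds.
conn = sqlite3.connect("upload_author.db")
conn.execute("""CREATE TABLE IF NOT EXISTS chronoforms_UploadAuthor (
    cf_id TEXT, text_0 TEXT, text_1 TEXT, text_2 TEXT, text_3 TEXT,
    text_6 TEXT, text_7 TEXT, text_9 TEXT, text_12 TEXT)""")

with open("upload.csv", newline="") as f:
    for row in csv.reader(f, delimiter=";"):
        values = [col.strip() for col in row[:9]]
        if len(values) < 9:          # skip short/blank lines instead of inserting broken rows
            continue
        # Placeholders keep quoting/escaping correct and make a column-count
        # mismatch fail loudly instead of silently inserting the wrong columns.
        conn.execute(
            "INSERT INTO chronoforms_UploadAuthor VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)",
            values,
        )
conn.commit()
```

    If the Joomla insert still does nothing once the column count matches, echoing $db->getErrorMsg() after the query is a quick way to see what the database is actually complaining about.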

    Read the article

  • Displaying data from linked tables in netbeans JTable

    - by Darc
    I have been writing in Java for a few months now and have just started using NetBeans. I have spent all day today trying to work out how to connect to an SQL database and display data from two tables (i.e. display the data from a select statement with an inner join) in a JTable. I have tried using JPQL with the following statement: SELECT j, cust.name FROM Job j JOIN j.jobnumber cust where the job table has a field called customer that references id in the customer table. This throws the exception: Caused by: Exception [TOPLINK-8029] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.EJBQLException Exception Description: Error compiling the query [SELECT j, cust.name FROM Job j JOIN j.jobnumber cust], line 1, column 11: invalid navigation expression [cust.name], cannot navigate expression [cust] of type [java.lang.Integer] inside a query. at oracle.toplink.essentials.exceptions.EJBQLException.invalidNavigation(EJBQLException.java:430) What am I doing wrong? Can anyone point me to some examples of how to build a linked-table Java application? I am still in the very early stages of development, so a complete change is not out of the question if using a MySQL database isn't the best way to go about things. Thanks.

    Read the article

  • How to revert a non-corrupted MSSQL database with STOPAT

    - by Yisman
    I have a good, working, valid, non-corrupted database in MSSQL that I want to revert to a point in time. How is that done? The standard RESTORE command requires a full backup as a starting point, and then log backups thereafter. I can't understand why this must be done from a backup: if my db is good and the logs are OK, why can't I just revert with a STOPAT from the live logs in the db? One DBA suggested that whenever I want to restore I should THEN make a log backup and RESTORE with STOPAT. I believe that would work, but it sounds a little backwards. Any better ideas? Thank you very much.

    Read the article

  • high tweet status IDs causing failed to open stream errors?

    - by escarp
    Erg. Starting in the past few days high tweet IDs (at least, it appears it's ID related, but I suppose it could be some recent change in the api returns) are breaking my code. At first I tried passing the ID as a string instead of an integer to this function, and I thought this worked, but in reality it was just the process of uploading the file from my end. In short, a php script generates these function calls, and when it does so, they fail. If I download the php file the call is generated into, delete the server copy and re-upload the exact same file without changing it, it works fine. Does anyone know what could be causing this behavior? Below is what I suspect to be the most important part of the individual files that are pulling the errors. Each of the files is named for a status ID (e.g. the below file is named 12058543656.php) <?php require "singlePost.php"; SinglePost(12058543656) ?> Here's the code that writes the above files: $postFileName = $single_post_id.".php"; if(!file_exists($postFileName)){ $created_at_full = date("l, F jS, Y", strtotime($postRow[postdate])-(18000)); $postFileHandle = fopen($postFileName, 'w+'); fwrite($postFileHandle, '<html> <head> <title><?php $thisTITLE = "escarp | A brief poem or short story by '.$authorname.' on '.$created_at_full.'"; echo $thisTITLE;?></title><META NAME="Description" CONTENT="This brief poem or short story, by '.$authorname.', was published on '.$created_at_full.'"> <?php include("head.php");?> To receive other poems or short stories like this one from <a href=http://twitter.com/escarp>escarp</a> on your cellphone, <a href=http://twitter.com/signup>create</a> and/or <a href=http://twitter.com/devices>associate</a> a Twitter account with your cellphone</a>, follow <a href=http://twitter.com/escarp>us</a>, and turn device updates on. <pre><?php require "singlePost.php"; SinglePost("'.$single_post_id.'") ?> </div></div></pre><?php include("foot.php");?> </body> </html>'); fclose($postFileHandle);} $postcounter++; } I can post more if you don't see anything here, but there are several files involved and I'm trying to avoid dumping tons of irrelevant code. Error: Warning: include(head.php) [function.include]: failed to open stream: No such file or directory in /f2/escarp/public/12177797583.php on line 4 Warning: include(head.php) [function.include]: failed to open stream: No such file or directory in /f2/escarp/public/12177797583.php on line 4 Warning: include() [function.include]: Failed opening 'head.php' for inclusion (include_path='.:/nfsn/apps/php5/lib/php/:/nfsn/apps/php/lib/php/') in /f2/escarp/public/12177797583.php on line 4 To receive other poems or short stories like this one from escarp on your cellphone, create and/or associate a Twitter account with your cellphone, follow us, and turn device updates on. 
Warning: require(singlePost.php) [function.require]: failed to open stream: No such file or directory in /f2/escarp/public/12177797583.php on line 7 Warning: require(singlePost.php) [function.require]: failed to open stream: No such file or directory in /f2/escarp/public/12177797583.php on line 7 Fatal error: require() [function.require]: Failed opening required 'singlePost.php' (include_path='.:/nfsn/apps/php5/lib/php/:/nfsn/apps/php/lib/php/') in /f2/escarp/public/12177797583.php on line 7 <?php function SinglePost($statusID) { require "nicetime.php"; $db = sqlite_open("db.escarp"); $updates = sqlite_query($db, "SELECT * FROM posts WHERE postID = '$statusID'"); $row = sqlite_fetch_array($updates, SQLITE_ASSOC); $id = $row[authorID]; $result = sqlite_query($db, "SELECT * FROM authors WHERE authorID = '$id'"); $row5 = sqlite_fetch_array($result, SQLITE_ASSOC); $created_at_full = date("l, F jS, Y", strtotime($row[postdate])-(18000)); $created_at = nicetime($row[postdate]); if($row5[url]==""){ $authorurl = ''; } else{ /*I'm omitting a few pages of output code and associated regex*/ return; } ?>

    Read the article

  • PHP syntax for postgresql Mixed-case table names

    - by yam
    I have the code below: <?php require "institution.php" /* in this portion, query for database connection is executed, and */ $institution= $_POST['institutionname']; $sCampID = 'SELECT ins_id FROM institution where ins_name= '$institution' '; $qcampID = pg_query($sCampID) or die("Error in query: $query." . pg_last_error($connection)); /* this portion outputs the ins_id */ ?> My database previously had no mixed-case table names, which is why this query showed no error at all when I ran it. But because I've changed my database for some reasons and it now contains mixed-case table names, I had to change the code above into this one: $sCampID = 'SELECT ins_id FROM "Institution" where ins_name= '$institution' '; where Institution has to be double-quoted. The query returned a parse error. When I removed this portion: where ins_name= '$institution', no error occurred. My question is: how do I solve this problem, where a table name containing mixed-case letters and a value stored in a variable ($institution in this case) have to be combined in a single select statement? Your answers and suggestions will be very much appreciated.
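    The parse error most likely comes from nesting single quotes inside a single-quoted PHP string rather than from the table name itself: the identifier needs double quotes, and the value is best passed as a parameter (in PHP, pg_query_params() does that). A minimal sketch of the same query in Python with psycopg2, where the connection details and institution value are placeholders:

```python
import psycopg2  # assumes the psycopg2 driver; connection details are placeholders

conn = psycopg2.connect(dbname="mydb", user="me", password="secret", host="localhost")
cur = conn.cursor()

institution = "Some University"   # would come from the POSTed form field

# Double quotes preserve the mixed-case table name "Institution";
# %s lets the driver quote and escape the value instead of string-pasting it.
cur.execute('SELECT ins_id FROM "Institution" WHERE ins_name = %s', (institution,))
row = cur.fetchone()
print(row[0] if row else None)
```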

    Read the article

  • SQL aggregate query question

    - by Phil
    Hi, can anyone help me with a SQL query in Apache Derby to get a "simple" count? Given a table ABC that looks like this:

        id  a  b  c
         1  1  1  1
         2  1  1  2
         3  2  1  3
         4  2  1  1
     **  5  2  1  2 **
     **  6  2  2  1 **
         7  3  1  2
         8  3  1  3
         9  3  1  1

    How can I write a query to count how many distinct values of 'a' have both (b=1 and c=2) AND (b=2 and c=1), to get the correct result of 1? (The two rows marked match the criteria and both have a value of a=2; there is only 1 distinct value of a in this table that matches the criteria.) The tricky bit is that (b=1 and c=2) AND (b=2 and c=1) are obviously mutually exclusive when applied to a single row, so how do I apply that expression across multiple rows of distinct values of a? These queries are wrong, but they illustrate what I'm trying to do: "SELECT DISTINCT COUNT(a) WHERE b=1 AND c=2 AND b=2 AND c=1 ..." .. (0) no go, as mutually exclusive; "SELECT DISTINCT COUNT(a) WHERE b=1 AND c=2 OR b=2 AND c=1 ..." .. (3) gets me the wrong result; SELECT COUNT(a) (CASE WHEN b=1 AND c=10 THEN 1 END) FROM ABC WHERE b=2 AND c=1 .. (0) no go, as mutually exclusive. Cheers, Phil.
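    One common shape for this kind of "has both" question is to group by a and use conditional aggregation in the HAVING clause. The sketch below rebuilds the sample table in SQLite and runs that query; the CASE/SUM construction is standard SQL, so the same statement should carry over to Derby, though the sketch only verifies it against SQLite.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE abc (id INTEGER, a INTEGER, b INTEGER, c INTEGER)")
conn.executemany(
    "INSERT INTO abc VALUES (?, ?, ?, ?)",
    [(1, 1, 1, 1), (2, 1, 1, 2), (3, 2, 1, 3),
     (4, 2, 1, 1), (5, 2, 1, 2), (6, 2, 2, 1),
     (7, 3, 1, 2), (8, 3, 1, 3), (9, 3, 1, 1)],
)

# Group by a, then require at least one (b=1,c=2) row AND one (b=2,c=1) row per group.
query = """
SELECT COUNT(*) FROM (
    SELECT a FROM abc
    GROUP BY a
    HAVING SUM(CASE WHEN b = 1 AND c = 2 THEN 1 ELSE 0 END) > 0
       AND SUM(CASE WHEN b = 2 AND c = 1 THEN 1 ELSE 0 END) > 0
) matching_a
"""
print(conn.execute(query).fetchone()[0])   # prints 1 -- only a = 2 qualifies
```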

    Read the article

  • What is serialization and how does it work?

    - by Rozer
    I know about the serialization process but haven't implemented it. In my application I have seen various classes that implement the Serializable interface. Consider the following class: public class DBAccessRequest implements Serializable { private ActiveRequest request = null; private Connection connection = null; private static Log log = LogFactory.getLog(DBAccessRequest.class); public DBAccessRequest(ActiveRequest request,Connection connection) { this.request = request; this.connection = connection; } /** * @return Returns the DB Connection object. */ public Connection getConnection() { return connection; } /** * @return Returns the active request object for the db connection. */ public ActiveRequest getRequest() { return request; } } It just sets request and connection in the constructor and has getters for them. So what is the use of implementing Serializable here?
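    The point of Serializable is that an object's state can be written out as bytes (to disk, a cache, or another JVM over the network) and rebuilt later; fields that represent live resources, like the Connection above, are normally excluded from that (marked transient in Java) because a socket or DB handle cannot be meaningfully reconstructed elsewhere. A rough Python analogue of the same idea using pickle, with all names made up for illustration rather than taken from the Java code:

```python
import pickle

class DBAccessRequest:
    """Analogue of the Java class: request state serializes, a live connection does not."""
    def __init__(self, request, connection):
        self.request = request          # plain data: survives serialization
        self._connection = connection   # live resource: dropped below

    def __getstate__(self):
        state = self.__dict__.copy()
        state["_connection"] = None     # roughly what a transient field does in Java
        return state

req = DBAccessRequest({"query": "SELECT 1"}, connection=object())
blob = pickle.dumps(req)                # bytes that can be stored or sent elsewhere
restored = pickle.loads(blob)
print(restored.request, restored._connection)   # {'query': 'SELECT 1'} None
```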

    Read the article

  • sql server - bulk insert error

    - by user554134
    I am using bulk insert and getting the error below. Note: the data in the load file is not beyond the configured column length. Running Command: bulk insert load_data from 'C:\temp\dataload\load_file.txt' with (firstrow = 1, fieldterminator = '0x09', rowterminator = '\n',MAXERRORS = 0, ERRORFILE = 'C:\temp\dataload\load_file') Contents of load file: user_name file_path asset_owner city import_date admin C:\ admin toronto 04/12/2012 Error: Msg 4863, Level 16, State 1, Line 1 Bulk load data conversion error (truncation) for row 1, column 6 (validated). Msg 7399, Level 16, State 1, Line 1 The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error. Msg 7330, Level 16, State 2, Line 1 Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".

    Read the article

  • Rails 3 MySQL 2 reports an error in what looks to be valid SQL syntax

    - by John Judd
    I am trying to use the following bit of code to help in seeding my database. I need to add data continually over development and do not want to have to completely reseed data every time I add something new to the seeds.rb file. So I added the following function to insert the data if it doesn't already exist. def AddSetting(group, name, value, desc) Admin::Setting.create({group: group, name: name, value: value, description: desc}) unless Admin::Setting.find_by_sql("SELECT * FROM admin_settings WHERE group = '#{group}' AND name = '#{name}';").exists? end AddSetting('google', 'analytics_id', '', 'The ID of your Google Analytics account.') AddSetting('general', 'page_title', '', '') AddSetting('general', 'tag_line', '', '') This function is included in the db/seeds.rb file. Is this the right way to do this? However I am getting the following error when I try to run it through rake. rake aborted! Mysql2::Error: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'group = 'google' AND name = 'analytics_id'' at line 1: SELECT * FROM admin_settings WHERE group = 'google' AND name = 'analytics_id'; Tasks: TOP => db:seed (See full trace by running task with --trace) Process finished with exit code 1 What is confusing me is that I am generating correct SQL as far as I can tell. In fact my code generates the SQL and I pass that to the find_by_sql function for the model, Rails itself can't be changing the SQL, or is it? SELECT * FROM admin_settings WHERE group = 'google' AND name = 'analytics_id'; I've written a lot of SQL over the years and I've looked through similar questions here. Maybe I've missed something, but I cannot see it.
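    The generated SQL itself is the problem: group is a reserved word in MySQL, so it has to be backtick-quoted when used as a column name (or the column renamed, or the condition written as a hash such as Admin::Setting.where(:group => group, :name => name).exists?, which lets Rails quote it for you). A minimal sketch of the quoted, parameterized query in Python with MySQLdb, with the connection details as placeholders:

```python
import MySQLdb  # assumes the MySQLdb/mysqlclient driver; connection details are placeholders

conn = MySQLdb.connect(host="localhost", user="rails", passwd="secret", db="app_development")
cur = conn.cursor()

group, name = "google", "analytics_id"

# `group` is reserved in MySQL, so it must be backtick-quoted as a column name;
# %s placeholders let the driver handle quoting of the values.
cur.execute(
    "SELECT * FROM admin_settings WHERE `group` = %s AND name = %s",
    (group, name),
)
print(cur.fetchall())
```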

    Read the article

  • How to integrate Python scripting in my Android App (like SL4A)

    - by Seraphim's host
    I need to add a scripting layer to my Android app, so I can remotely prepare a script that my app downloads from a web service and executes on the user's device. I found an interesting project called Scripting Layer for Android (SL4A) here: http://code.google.com/p/android-scripting/ I'm not sure I can execute a Python script without installing PythonForAndroid_r4.apk first, and I can't force my customer to install that application! So my question is: can the SL4A layer be integrated into my app without the need to install another apk? I need to execute actions like updating data in the DB and creating/reading/deleting a file on the SD card... Not so complex, but I see SL4A can do a lot of things like these. Any other scripting libraries? EDIT: I also found MVEL: http://mvel.codehaus.org/ but I think it needs to be integrated to execute complex operations like accessing a DB...

    Read the article

  • Using ExecuteQuery() with entity framework, entity class.

    - by sfreelander
    I am trying to jump from ASP Classic to ASP.NET. I have followed tutorials to get Entity Framework and LINQ to connect to my test database, but I am having difficulty figuring out ExecuteQuery(). I believe the problem is that I need an "entity class" for my database, but I can't figure out how to create one. Here is my simple code: Dim db as New TestModel.TestEntity Dim results AS IEnumerable(OF ???) = db.ExecuteQuery(Of ???)("Select * from Table1") On the Microsoft example site they use an entity class called Customers, but I don't understand what that means.

    Read the article

  • hide inputs value when input is hidden

    - by toingbou
    Hello, I'm using this function to hide some inputs in a form when a value is selected in a select tag. This is the function: function showEntry(obj,optionValue){ //hide all entry selections onchange document.getElementById("group1").style.display="none"; document.getElementById("group2").style.display="none"; if(obj.value=="group1") { document.getElementById('group1').style.display="inline"; } else if(obj.value=="group2") { document.getElementById('group2').style.display="inline"; } } and this is the form: <select class="textinput" name="function_title" onchange="showEntry(this,this.value);"> <option value=""> </option> <option value="group1" >group1</option> <option value="group2" >group2</option> </select> <span id="group1" style="display:none;"> <input class="textinput" type="text" id="input1" name="input1" value="100"/> <input class="textinput" type="text" id="input2" name="input2" value="50"/> </span> <span id="group2" style="display:none;"> <input class="textinput" type="text" id="input3" name="input3" value="60"/> <input class="textinput" type="text" id="input4" name="input4" value="45"/> </span> All I want is for the hidden inputs' values to be ignored while their group of inputs is hidden, something like this: <input class="textinput" type="text" id="input3" name="input3" value="if(obj.value!="group2") { print(60); }"/> Is that right?

    Read the article

  • Django: Change models without clearing all data?

    - by Rosarch
    I have some models I'm working with in a new Django installation. Is it possible to change the fields without losing app data? I tried changing the field and running python manage.py syncdb. There was no output from this command. Renavigating to admin pages for editing the changed models caused TemplateSyntaxErrors as Django sought to display fields that didn't exist in the db. I am using SQLite. I am able to delete the db file, then re-run python manage.py syncdb, but that is kind of a pain. Is there a better way to do it?
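    syncdb only creates tables that don't exist yet; it never alters columns on existing tables, which is why the command printed nothing. The usual answer for this era of Django is a migration tool such as South, but for a one-off change you can also apply the ALTER yourself and keep the data. A minimal sketch run from ./manage.py shell, where the app, table and column names are hypothetical:

```python
# Assumes a model in app "myapp" whose table is myapp_mymodel and which gained
# a new CharField named "nickname"; adjust the names to your actual model.
from django.db import connection

cursor = connection.cursor()
cursor.execute(
    "ALTER TABLE myapp_mymodel ADD COLUMN nickname varchar(100) NOT NULL DEFAULT ''"
)
# Depending on your transaction settings you may also need to commit here.
```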

    Read the article

  • Which reporting tool would you recommend?

    - by grady
    Hi, I have some reporting functionality in my app and I want to improve it a bit. It's only SQL in an XML file which is read by a parser. There can be some params for that SQL, and when it is parsed and the params are injected, it is executed against my DB (SQL Server). I want to improve this so that the results look better and are more flexible. These are the most important points I need: subtotals; a layout that can change dynamically according to settings in the DB (like logo, slogan); the possibility to use the same report template for several customers (same fields, but different logos, colors, slogans etc.); it should run from an ASP.NET application; and it should be as dynamic as possible. I know of Crystal Reports and the Microsoft Reporting tool. Are there any others that might be of interest, and are my above points possible at all? Thanks for some ideas and hints :-)...

    Read the article

  • hadoop - large database query

    - by Mastergeek
    Situation: I have a Postgres DB that contains a table with several million rows and I'm trying to query all of those rows for a MapReduce job. From the research I've done on DBInputFormat, Hadoop might try and use the same query again for a new mapper and since these queries take a considerable amount of time I'd like to prevent this in one of two ways that I've thought up: 1) Limit the job to only run 1 mapper that queries the whole table and call it good. or 2) Somehow incorporate an offset in the query so that if Hadoop does try to use a new mapper it won't grab the same stuff. I feel like option (1) seems more promising, but I don't know if such a configuration is possible. Option(2) sounds nice in theory but I have no idea how I would keep track of the mappers being made and if it is at all possible to detect that and reconfigure. Help is appreciated and I'm namely looking for a way to pull all of the DB table data and not have several of the same query running because that would be a waste of time.
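    If a single reader is acceptable (option 1), the core of the job is just streaming the table once rather than re-running a multi-minute query per mapper; inside Hadoop that roughly means configuring the job for a single map task so DBInputFormat produces one split. Outside Hadoop the same streaming pattern looks like the sketch below, which uses a psycopg2 server-side (named) cursor so rows arrive in batches instead of as one giant result set; the connection string and table/column names are placeholders.

```python
import psycopg2  # assumes the psycopg2 driver; the connection string is a placeholder

conn = psycopg2.connect("dbname=mydb user=me host=localhost")

# A named cursor is server-side: rows stream over in chunks of itersize
# instead of the whole multi-million-row result being materialized at once.
cur = conn.cursor(name="bulk_read")
cur.itersize = 10000
cur.execute("SELECT id, payload FROM big_table")

count = 0
for row_id, payload in cur:
    count += 1          # stand-in for the per-row map work
print(count)

cur.close()
conn.close()
```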

    Read the article

  • MySQL: host name universal change.

    - by ctrlShiftBryan
    I am making some updates to a PHP site which I did not design. I have a local copy of the site. At the top of each page there are settings for the host name for the db connection. Is there some way I can set up a pointer to the remote address? The address is 'mysqlhost', for example, and I want that to point to 'mysql.myhost.com'. I tried creating a HOST record for mysqlhost pointing to the IP address it resolves to, but that doesn't work. If I put 'mysql.myhost.com' in the connection it works; if I put that IP address it doesn't, so that is probably why the HOST record idea doesn't work. Other than creating a local copy of the DB, is there a quick way so that I don't have to modify each file in my dev environment and then again when I redeploy?

    Read the article

  • Python sqlite3 and concurrency

    - by RexE
    I have a Python program that uses the "threading" module. Once every second, my program starts a new thread that fetches some data from the web, and stores this data to my hard drive. I would like to use sqlite3 to store these results, but I can't get it to work. The issue seems to be about the following line: conn = sqlite3.connect("mydatabase.db") If I put this line of code inside each thread, I get an OperationalError telling me that the database file is locked. I guess this means that another thread has mydatabase.db open through a sqlite3 connection and has locked it. If I put this line of code in the main program and pass the connection object (conn) to each thread, I get a ProgrammingError, saying that SQLite objects created in a thread can only be used in that same thread. Previously I was storing all my results in CSV files, and did not have any of these file-locking issues. Hopefully this will be possible with sqlite. Any ideas?
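    The simplest arrangement that keeps sqlite3 happy is to let exactly one thread own the connection and have the fetch threads hand their results to it through a Queue; the fetchers then never touch the database at all. A minimal sketch of that pattern (Python 3 naming; on Python 2 the module is Queue):

```python
import sqlite3
import threading
import queue   # named Queue on Python 2

write_q = queue.Queue()

def writer():
    # The only thread that ever opens or uses the connection.
    conn = sqlite3.connect("mydatabase.db")
    conn.execute("CREATE TABLE IF NOT EXISTS results (fetched_at TEXT, payload TEXT)")
    while True:
        item = write_q.get()
        if item is None:                       # sentinel: shut down cleanly
            break
        conn.execute("INSERT INTO results VALUES (?, ?)", item)
        conn.commit()
    conn.close()

def fetcher(i):
    # Stand-in for the once-a-second web fetch; it only puts rows on the queue.
    write_q.put(("2010-04-21T12:00:%02d" % i, "payload %d" % i))

writer_thread = threading.Thread(target=writer)
writer_thread.start()

fetchers = [threading.Thread(target=fetcher, args=(i,)) for i in range(5)]
for t in fetchers:
    t.start()
for t in fetchers:
    t.join()

write_q.put(None)          # let the writer drain the queue and exit
writer_thread.join()
```

    An alternative is one sqlite3.connect() per thread (with a timeout= value so writers wait for the lock instead of failing immediately), but the single-writer queue sidesteps the lock contention entirely.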

    Read the article

  • Can Django flush its database(s) between every unit test

    - by mikem
    Django (1.2 beta) will reset the database(s) between every test that runs, meaning each test runs on an empty DB. However, the database(s) are not flushed. One of the effects of flushing the database is the auto_increment counters are reset. Consider a test which pulls data out of the database by primary key: class ChangeLogTest(django.test.TestCase): def test_one(self): do_something_which_creates_two_log_entries() log = LogEntry.objects.get(id=1) assert_log_entry_correct(log) log = LogEntry.objects.get(id=2) assert_log_entry_correct(log) This will pass because only two log entries were ever created. However, if another test is added to ChangeLogTest and it happens to run before test_one, the primary keys of the log entries are no longer 1 and 2, they might be 2 and 3. Now test_one fails. This is actually a two part question: Is it possible to force ./manage.py test to flush the database between each test case? Since Django doesn't flush the DB between each test by default, maybe there is a good reason. Does anyone know?
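    django.test.TestCase isolates tests by rolling back a transaction (or flushing tables) rather than recreating the database, so the auto-increment counters keep climbing across tests; newer Django releases expose a reset_sequences flag on TransactionTestCase for exactly this, but the more robust fix is to stop hard-coding primary keys in the test. A hedged rewrite along those lines, with the import paths hypothetical and the model/helper names taken from the question:

```python
import django.test

# Hypothetical import paths; LogEntry and the helpers are the ones in the question.
from myapp.models import LogEntry
from myapp.tests.helpers import (
    do_something_which_creates_two_log_entries,
    assert_log_entry_correct,
)

class ChangeLogTest(django.test.TestCase):
    def test_one(self):
        do_something_which_creates_two_log_entries()
        # Fetch by position instead of by literal primary key, so the test no
        # longer cares what the sequence counter happens to be; the unpacking
        # also asserts that exactly two entries were created.
        first, second = LogEntry.objects.order_by("pk")
        assert_log_entry_correct(first)
        assert_log_entry_correct(second)
```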

    Read the article

  • casting odd smallint time to datetime format

    - by c6400sc
    Hello everyone, I'm working with a db (SQL Server 2008) and have an interesting issue with times stored in it. The DBA who originally set it up was crafty and stored scheduled times as smallints in 12-hour form: 6:00 AM would be represented as 600. I've figured out how to split them into hours and minutes like this: select floor(time/100) as hr, right(time, 2) as min from table; What I want to do is compare these scheduled times to actual times, which are stored in the proper datetime format. Ideally, I would do this with two datetime fields and use datediff() between them, but that would require converting the smallint time into a datetime, which I can't figure out. Does anyone have suggestions on how to do this? Thanks in advance.
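    The split above is really all that's needed: turn the smallint into hours and minutes, graft them onto the date part of the actual datetime, and then an ordinary difference works (in T-SQL that grafting would be a DATEADD of (time/100)*60 + time%100 minutes onto the actual row's date). A small Python sketch of the same arithmetic, with the example values made up; note that the 12-hour storage leaves AM/PM ambiguous, which the sketch ignores.

```python
from datetime import datetime

def smallint_to_hm(value):
    """600 -> (6, 0), 1245 -> (12, 45); same idea as floor(time/100) and right(time, 2)."""
    return divmod(value, 100)

scheduled = 600                          # smallint from the table
actual = datetime(2010, 5, 3, 6, 17)     # hypothetical datetime column value

hours, minutes = smallint_to_hm(scheduled)
# NOTE: 12-hour storage means 600 could be 6:00 AM or PM; this assumes AM.
scheduled_dt = actual.replace(hour=hours, minute=minutes, second=0, microsecond=0)

delta_minutes = (actual - scheduled_dt).total_seconds() / 60
print(delta_minutes)                     # 17.0 -- the DATEDIFF(minute, ...) equivalent
```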

    Read the article

  • c# combobox autocomplete like method

    - by Willem T
    I have been looking for a LIKE-based autocomplete mode; can anyone help me with this? When I enter text in the combobox, the database should be queried for the data; all that goes well. But then I want my combobox to behave like the Suggest mode, and it doesn't work. I tried this: cursorPosition = txtNaam.SelectionStart; string query = "SELECT bedr_naam FROM tblbedrijf WHERE bedr_naam LIKE '%" + txtNaam.Text + "%'"; DataTable table = Global.db.Select(query); txtNaam.Items.Clear(); for (int i = 0; i < table.Rows.Count; i++) { txtNaam.Items.Add(table.Rows[i][0].ToString()); } Cursor.Current = Cursors.Default; txtNaam.Select(cursorPosition, 0); But the behavior this function creates is off: it doesn't work like the Suggest mode and is a bit buggy. Can anyone help me get it working properly? Thanks.

    Read the article

  • Class design question (Disposable and singleton behavior)

    - by user137348
    The Repository class has singleton behavior and _db implements the disposable pattern. As expected, the _db object gets disposed after the first call, and because of the singleton behavior any subsequent call on _db will crash. [ServiceBehavior(InstanceContextMode=InstanceContextMode.Single)] public class Repository : IRepository { private readonly DataBase _db; public Repository(DataBase db) { _db = db; } public int GetCount() { using(_db) { return _db.Menus.Count(); } } public Item GetItem(int id) { using(_db) { return _db.Menus.FirstOrDefault(x=>x.Id == id); } } } My question is: is there any way to design this class to work properly without removing the singleton behavior? The Repository class will be serving a large number of requests.

    Read the article

  • Iterating over a database column in Django

    - by curious
    I would like to iterate a calculation over a column of values in a MySQL database. I wondered if Django had any built-in functionality for doing this. Previously, I have just used the following to store each column as a list of tuples with the name table_column: import MySQLdb import sys try: conn = MySQLdb.connect (host = "localhost", user = "user", passwd="passwd", db="db") except MySQLdb.Error, e: print "Error %d: %s" % (e.args[0], e.args[1]) sys.exit (1) cursor = conn.cursor() for table in ['foo', 'bar']: for column in ['foobar1', 'foobar2']: cursor.execute('select %s from %s' % (column, table)) exec "%s_%s = cursor.fetchall()" % (table, column) cursor.close() conn.commit() conn.close() Is there any functionality built into Django to more conveniently iterate through the values of a column in a database table? I'm dealing with millions of rows so speed of execution is important.
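    Django can do this without raw cursors: values_list(..., flat=True) pulls just the one column, and iterator() streams rows instead of caching the whole queryset, which matters at millions of rows. A minimal sketch, assuming hypothetical models Foo and Bar that map to the foo and bar tables with the fields used above:

```python
# Hypothetical model imports; Foo/Bar are assumed to map to the foo/bar tables.
from myapp.models import Foo, Bar

count = 0
for value in Foo.objects.values_list("foobar1", flat=True).iterator():
    count += 1        # replace with the real calculation on each value

# The same pattern works for any (model, column) pair, e.g.:
for value in Bar.objects.values_list("foobar2", flat=True).iterator():
    pass
```

    One caveat: with the default MySQLdb cursor the driver may still buffer the full result client-side, so for truly huge tables chunking on primary-key ranges can help, but values_list with iterator() is the idiomatic starting point.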

    Read the article

  • SqlCeCommand ExecuteNonQuery performance issue

    - by Michael
    I've been asked to resolve an issue with a .NET/SQL Server CE application. Specifically, after repeated inserts against the db, performance becomes increasingly degraded: in one instance at ~200 rows, in another at ~1000 rows. In the latter case the code being used looks like this: Dim cm1 As System.Data.SqlServerCe.SqlCeCommand = cn1.CreateCommand cm1.CommandText = "INSERT INTO Table1 Values(?,?,?,?,?,?,?,?,?,?,?,?,?)" For j = 0 To ds.Tables(0).Rows.Count - 1 'this is 3110 For i = 0 To 12 cm1.Parameters(tbl(i, 0)).Value = Vals(j,i) 'values taken from a different db Next cm1.ExecuteNonQuery() Next The specifics (like what 'tbl' is) aren't super important; the question is whether this code should be expected to handle this number of inserts, or whether the crawl I'm witnessing is to be expected.

    Read the article
