Search Results

Search found 20859 results on 835 pages for 'little bobby tables'.


  • USB-Sticks and multiple Partitions

    - by Bobby
    Hello. I've got a USB stick with multiple partitions on it (FAT32 (active), FAT32, Ext2 <-- that's another story), and it seems that Windows XP can only mount the first partition of the stick. If I try to mount the second one using the volume manager, it tells me that I need to make it active and reboot... is it really that limited, or am I just missing something here? Partitions: FAT32, System Rescue CD, bootable and active; FAT32, some tools; ext2, some data (I know that I need extra drivers etc., but that's not what's being asked here). Edit (Solution): Thanks to the answer about the RMB (removable media bit), I was able to dig up a solution described at this site (section: "On flash drive only the first partition works"). Basically, there's a Hitachi driver available which filters the RMB at the driver level and only needs a small modification to work with basically every USB stick. All you need to do is add the "Device Instance ID" to the driver and then use this driver.

    Read the article

  • How do I grok NHibernate's QueryOver API?

    - by Brant Bobby
    I've run into the limits of what NHibernate 3.0's LINQ provider is capable of and decided it's time to learn about one of the more powerful (or at least more feature-complete) options: the QueryOver API. The problem is, I have zero experience with ICriteria, and all of the tutorials I've been able to find online either assume I'm an ICriteria expert and simply show me how to convert ICriteria code to the new fluent interface, or are trivial "here's how you do an inner join" examples that don't really help me understand more complex concepts like projections, subqueries, restrictions, or whatever other magic the API is capable of. What should I read to really learn about QueryOver, and how can I make full use of it?

    Read the article

  • CUDA on a GeForce GT 555M (Ubuntu 12.04 in VirtualBox)

    - by bobby
    I am running Ubuntu 12.04 in VirtualBox as a guest OS. Today I tried to enable all the functions of my graphics card under Ubuntu, but all my efforts were off the mark, because my experience with Linux is relatively limited. I tried the procedure described at http://sn0v.wordpress.com/2012/05/11/installing-cuda-on-ubuntu-12-04/. Running the CUDA file pack always led to an error like "installing driver canceled". Then I tried to install the Bumblebee project, but I always get the error that the Bumblebee daemon is not started. After some internet research I found a test command, lspci -nn | grep '\[030[02]\]', which results in: 00:02.0 VGA compatible controller [0300]: InnoTek ... [80ee:beef]. I've been trying to activate the card for hours without success. Could you please give me some hints on how to activate CUDA? Kind regards

    Read the article

  • Programming for the iPhone

    - by Bobby Alexander
    What's the best way to get started with iPhone development if you are an experienced C++ or C# programmer? Most books assume you know either nothing or quite a lot already. What are the steps to achieve this? For example: first learn Objective-C, then learn Cocoa... I am interested in books/resources. I read Getting Started with iPhone Development from O'Reilly (the Missing Manuals book), but that just provided an overview of the programming and concentrated more on getting your app into the App Store. I need resources that will help me start coding. Other questions: How much Objective-C do you need to know? How do you go about learning the Cocoa framework? Can I start directly with Cocoa Touch, or do I need to know the Mac Cocoa framework first? Input from someone who was in the same situation (knows C++/C# but has no clue about Mac programming/Objective-C/Cocoa) would help greatly.

    Read the article

  • Exploring In-memory OLTP Engine (Hekaton) in SQL Server 2014 CTP1

    The continuing drop in the price of memory has made fast in-memory OLTP increasingly viable. SQL Server 2014 allows you to migrate the most-used tables in an existing database to memory-optimised 'Hekaton' technology, but how you balance between disk tables and in-memory tables for optimum performance requires judgement and experiment. What is this technology, and how can you exploit it? Rob Garrison explains.
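
    For illustration (this sketch is not from the article itself), declaring a memory-optimised table in SQL Server 2014 looks roughly like the following; the table name is hypothetical, and the database must already contain a MEMORY_OPTIMIZED_DATA filegroup with a container file:

        -- hypothetical hot table; assumes the database already has a
        -- MEMORY_OPTIMIZED_DATA filegroup
        CREATE TABLE dbo.HotOrders
        (
            OrderId  INT NOT NULL
                PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
            Quantity INT NOT NULL
        )
        WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

    Deciding which tables justify this treatment is exactly the judgement the article discusses: only the most heavily used tables need to move.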

    Read the article

  • How do "variables introduce state"?

    - by kunj2aan
    I was reading "C++ Coding Standards" and came across this line: "Variables introduce state, and you should have to deal with as little state as possible, with lifetimes as short as possible." Doesn't anything that mutates eventually manipulate state? What does "you should have to deal with as little state as possible" mean? In an impure language such as C++, isn't state management really what you are doing? And what are other ways to "deal with as little state as possible" besides limiting variable lifetimes?

    Read the article

  • MySQL: Auto-increment value: 0 is smaller than max used value: xx

    - by Rhodri
    Increasingly I'm getting tables that have to be repaired, with the message returned of: Auto-increment value: 0 is smaller than max used value: xx. This has happened on tables with 200 rows and tables with ~3 million rows, but so far the same few tables have had the problem. I'm running MySQL 5.0.22. The repairs are run by a script which checks every minute whether any MySQL tables need repair. I also have an automated backup of the 6-gigabyte database running every two hours, and the repairs always get triggered around the time of the backup. Any ideas?
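
    Not part of the original question, but for context, checking and resetting the counter by hand looks roughly like this (database name, table name, and value are hypothetical):

        -- inspect the table and its current counter
        CHECK TABLE mydb.mytable;
        SHOW TABLE STATUS FROM mydb LIKE 'mytable';  -- see the Auto_increment column

        -- repair, then move the counter past the highest used value
        REPAIR TABLE mydb.mytable;
        ALTER TABLE mydb.mytable AUTO_INCREMENT = 3000001;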

    Read the article

  • Stale statistics on a newly created temporary table in a stored procedure can lead to poor performance

    - by sqlworkshops
    When you create a temporary table, you expect a new table with no past history (statistics based on past existence), but this is not true if you have fewer than 6 updates to the temporary table. This can lead to poor performance of queries which are sensitive to the contents of temporary tables.

    I was optimizing SQL Server performance at one of my customers, who provide search functionality on their website. They use a stored procedure with a temporary table for the search. The performance of the search depended on who had searched for what in the past; option (recompile) by itself had no effect. Sometimes a simple search led to a timeout because of non-optimal plan usage due to this behavior. This is not a plan caching issue, but rather a temporary table statistics caching issue, part of the temporary object caching feature that was introduced in SQL Server 2005 and is also present in SQL Server 2008 and SQL Server 2012. In this customer's case we implemented a workaround to avoid the issue (see below for example workarounds).

    When temporary tables are cached, the statistics are not newly created, but rather cached from the past and updated based on the automatic update statistics threshold. Caching temporary tables/objects is good for performance, but caching stale statistics from the past is not optimal. We can work around this by disabling temporary table caching, which is done by explicitly executing a DDL statement on the temporary table. One possibility is to execute an alter table statement, but this can lead to a duplicate constraint name error on concurrent stored procedure execution. The other way is to create an index.

    I think there might be many customers in this situation without knowing that stale statistics are being cached along with the temporary table, leading to poor performance. The ideal solution would be a more aggressive statistics update when the temporary table has a small number of rows and temporary table caching is in use; I will open a Connect item to report this issue. Meanwhile, you can mitigate the issue by creating an index on the temporary table. You can monitor active temporary tables using the Windows Server Performance Monitor counter SQL Server: General Statistics -> Active Temp Tables.
    The script to understand the issue and the workaround is listed below:

        set nocount on
        set statistics time off
        set statistics io off
        drop table tab7
        go
        create table tab7 (c1 int primary key clustered, c2 int, c3 char(200))
        go
        create index test on tab7(c2, c1, c3)
        go
        begin tran
        declare @i int
        set @i = 1
        while @i <= 50000
        begin
        insert into tab7 values (@i, 1, 'a')
        set @i = @i + 1
        end
        commit tran
        go
        insert into tab7 values (50001, 1, 'a')
        go
        checkpoint
        go
        drop proc test_slow
        go
        create proc test_slow @i int
        as
        begin
        declare @j int
        create table #temp1 (c1 int primary key)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        end
        go
        dbcc dropcleanbuffers
        set statistics time on
        set statistics io on
        go
        --high reads as expected for parameter '1'
        exec test_slow 1
        go
        dbcc dropcleanbuffers
        go
        --high reads that are not expected for parameter '2'
        exec test_slow 2
        go
        drop proc test_with_recompile
        go
        create proc test_with_recompile @i int
        as
        begin
        declare @j int
        create table #temp1 (c1 int primary key)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        option (recompile)
        end
        go
        dbcc dropcleanbuffers
        set statistics time on
        set statistics io on
        go
        --high reads as expected for parameter '1'
        exec test_with_recompile 1
        go
        dbcc dropcleanbuffers
        go
        --high reads that are not expected for parameter '2'
        --low reads on 3rd execution as expected for parameter '2'
        exec test_with_recompile 2
        go
        drop proc test_with_alter_table_recompile
        go
        create proc test_with_alter_table_recompile @i int
        as
        begin
        declare @j int
        create table #temp1 (c1 int primary key)
        --to avoid caching of temporary tables one can create a constraint
        --but this might lead to duplicate constraint name error on concurrent usage
        alter table #temp1 add constraint test123 unique(c1)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        option (recompile)
        end
        go
        dbcc dropcleanbuffers
        set statistics time on
        set statistics io on
        go
        --high reads as expected for parameter '1'
        exec test_with_alter_table_recompile 1
        go
        dbcc dropcleanbuffers
        go
        --low reads as expected for parameter '2'
        exec test_with_alter_table_recompile 2
        go
        drop proc test_with_index_recompile
        go
        create proc test_with_index_recompile @i int
        as
        begin
        declare @j int
        create table #temp1 (c1 int primary key)
        --to avoid caching of temporary tables one can create an index
        create index test on #temp1(c1)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        option (recompile)
        end
        go
        set statistics time on
        set statistics io on
        dbcc dropcleanbuffers
        go
        --high reads as expected for parameter '1'
        exec test_with_index_recompile 1
        go
        dbcc dropcleanbuffers
        go
        --low reads as expected for parameter '2'
        exec test_with_index_recompile 2
        go

    Read the article

  • Developing Schema Compare for Oracle (Part 6): 9i Query Performance

    - by Simon Cooper
    All throughout the EAP and beta versions of Schema Compare for Oracle, our main request was support for Oracle 9i. After releasing version 1.0 with support for 10g and 11g, our next step was to get version 1.1 of SCfO out with support for 9i. However, there were some significant problems that we had to overcome first. This post will concentrate on query execution time.

    When we first tested SCfO on a 9i server, after accounting for various changes to the data dictionary, we found that database registration was taking a long time. And I mean a looooooong time. The same database that on 10g or 11g would take a couple of minutes to register would be taking upwards of 30 mins on 9i. Obviously, this is not ideal, so a poke around the query execution plans was required.

    As an example, let's take the table population query - the one that reads ALL_TABLES and joins it with a few other dictionary views to get us back our list of tables. On 10g, this query takes 5.6 seconds. On 9i, it takes 89.47 seconds. The difference in execution plan is even more dramatic - here's the (edited) execution plan on 10g:

        ------------------------------------------------------------------------
        | Id  | Operation             | Name                   | Bytes | Cost |
        ------------------------------------------------------------------------
        |   0 | SELECT STATEMENT      |                        |  108K |  939 |
        |   1 | SORT ORDER BY         |                        |  108K |  939 |
        |   2 | NESTED LOOPS OUTER    |                        |  108K |  938 |
        |*  3 | HASH JOIN RIGHT OUTER |                        |  103K |  762 |
        |   4 | VIEW                  | ALL_EXTERNAL_LOCATIONS |  2058 |    3 |
        |* 20 | HASH JOIN RIGHT OUTER |                        | 73472 |  759 |
        |  21 | VIEW                  | ALL_EXTERNAL_TABLES    |  2097 |    3 |
        |* 34 | HASH JOIN RIGHT OUTER |                        | 39920 |  755 |
        |  35 | VIEW                  | ALL_MVIEWS             |    51 |    7 |
        |  58 | NESTED LOOPS OUTER    |                        | 39104 |  748 |
        |  59 | VIEW                  | ALL_TABLES             |  6704 |  668 |
        |  89 | VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS       |  2025 |    5 |
        | 106 | VIEW                  | ALL_PART_TABLES        |   277 |   11 |
        ------------------------------------------------------------------------

    And the same query on 9i:

        ------------------------------------------------------------------------
        | Id  | Operation             | Name                   | Bytes | Cost |
        ------------------------------------------------------------------------
        |   0 | SELECT STATEMENT      |                        |   16P |  55G |
        |   1 | SORT ORDER BY         |                        |   16P |  55G |
        |   2 | NESTED LOOPS OUTER    |                        |   16P | 862M |
        |   3 | NESTED LOOPS OUTER    |                        | 5251G | 992K |
        |   4 | NESTED LOOPS OUTER    |                        | 4243M | 2578 |
        |   5 | NESTED LOOPS OUTER    |                        | 2669K | 1440 |
        |*  6 | HASH JOIN OUTER       |                        |  398K |  302 |
        |   7 | VIEW                  | ALL_TABLES             |  342K |  276 |
        |  29 | VIEW                  | ALL_MVIEWS             |    51 |   20 |
        |* 50 | VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS       |  2043 |      |
        |* 66 | VIEW PUSHED PREDICATE | ALL_EXTERNAL_TABLES    | 1777K |      |
        |* 80 | VIEW PUSHED PREDICATE | ALL_EXTERNAL_LOCATIONS | 1744K |      |
        |* 96 | VIEW                  | ALL_PART_TABLES        |  852K |      |
        ------------------------------------------------------------------------

    Have a look at the cost column. 10g's overall query cost is 939, and 9i's is 55,000,000,000 (or more precisely, 55,496,472,769). It's also having to process far more data. What on earth could be causing this huge difference in query cost? After trawling through the '10g New Features' documentation, we found item 1.9.2.21. Before 10g, Oracle advised that you do not collect statistics on data dictionary objects. From 10g, it advised that you do collect statistics on the data dictionary; for our queries, Oracle therefore knows what sort of data is in the dictionary tables, and so can generate an efficient execution plan.
    On 9i, no statistics are present on the system tables, so Oracle has to use the rule-based optimizer, which turns most LEFT JOINs into nested loops. If we force 9i to use hash joins, like 10g, we get a much better plan:

        ------------------------------------------------------------------------
        | Id  | Operation             | Name                   | Bytes | Cost |
        ------------------------------------------------------------------------
        |   0 | SELECT STATEMENT      |                        | 7587K | 3704 |
        |   1 | SORT ORDER BY         |                        | 7587K | 3704 |
        |*  2 | HASH JOIN OUTER       |                        | 7587K |  822 |
        |*  3 | HASH JOIN OUTER       |                        | 5262K |  616 |
        |*  4 | HASH JOIN OUTER       |                        | 2980K |  465 |
        |*  5 | HASH JOIN OUTER       |                        |  710K |  432 |
        |*  6 | HASH JOIN OUTER       |                        |  398K |  302 |
        |   7 | VIEW                  | ALL_TABLES             |  342K |  276 |
        |  29 | VIEW                  | ALL_MVIEWS             |    51 |   20 |
        |  50 | VIEW                  | ALL_PART_TABLES        |  852K |  104 |
        |  78 | VIEW                  | ALL_TAB_COMMENTS       |  2043 |   14 |
        |  93 | VIEW                  | ALL_EXTERNAL_LOCATIONS | 1744K |   31 |
        | 106 | VIEW                  | ALL_EXTERNAL_TABLES    | 1777K |   28 |
        ------------------------------------------------------------------------

    That's much more like it. This drops the execution time down to 24 seconds. Not as good as 10g, but still an improvement. There are still several problems with this, however. 10g introduced a new join method - a right outer hash join (used in the first execution plan). The 9i query optimizer doesn't have this option available, so forcing a hash join means it has to hash the ALL_TABLES table, and furthermore re-hash it for every hash join in the execution plan; this could be thousands and thousands of rows. And although forcing hash joins somewhat alleviates this problem on our test systems, there's no guarantee that it will improve the execution time on customers' systems; it may even increase the time it takes (say, if all their tables are partitioned, or they've got a lot of materialized views). Ideally, we would want a solution that provides a speedup whatever the input.

    To try and get some ideas, we asked some Oracle performance specialists to see if they had any ideas or tips. Their recommendation was to add a hidden hook into the product that allowed users to specify their own query hints, or even rewrite the queries entirely. However, we would prefer not to take that approach; as well as a lot of new infrastructure and a rewrite of the population code, it would have meant that any users of 9i would have to spend some time optimizing it to get it working on their system before they could use the product. Another approach was needed.

    All our population queries have a very specific pattern - a base table provides most of the information we need (ALL_TABLES for tables, or ALL_TAB_COLS for columns) and we do a left join to extra subsidiary tables that fill in gaps (for instance, ALL_PART_TABLES for partition information). All the left joins use the same set of columns to join on (typically the object owner and name), so we could re-use the hash information for each join, rather than re-hashing the same columns for every join. To allow us to do this, along with various other performance improvements specific to the query pattern we were using, we read all the tables individually and do a hash join on the client. Fortunately, this 'pure' algorithmic problem is the kind that can be very well optimized for expected real-world situations; as well as storing on disk the row data we're not using in the hash key, we use very specific memory-efficient data structures to store all the information we need.
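
    For reference, this kind of forcing is done with optimizer hints; a hypothetical simplification (the real SCfO query is far larger) might look like:

        -- force a hash join against the subsidiary view, which the 9i
        -- rule-based optimizer would otherwise turn into nested loops
        SELECT /*+ USE_HASH(p) */
               t.owner, t.table_name, p.partitioning_type
        FROM   all_tables t, all_part_tables p
        WHERE  p.owner (+) = t.owner
        AND    p.table_name (+) = t.table_name;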
    This allows us to achieve a database population time that is as fast as on 10g, and even (in some situations) slightly faster, with a memory overhead of roughly 150 bytes per row of data in the result set (for a schema with 10,000 tables, that means an extra 1.4MB of memory being used during population). Next: fun with the 9i dictionary views.

    Read the article

  • Stairway to T-SQL DML Level 5: The Mathematics of SQL: Part 2

    Joining tables is a crucial concept for understanding data relationships in a relational database. When you are working with your SQL Server data, you will often need to join tables to produce the results your application requires. Having a good understanding of set theory, the mathematical operators available, and how they are used to join tables will make it easier for you to retrieve the data you need from SQL Server.
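
    As a minimal illustration (hypothetical tables, not from the Stairway itself), an inner join returns only the rows whose join keys appear in both inputs, much like an intersection on the join columns:

        -- orders are matched to their customers; customers with no orders
        -- (and orders with no matching customer) are filtered out
        SELECT c.CustomerName, o.OrderDate
        FROM dbo.Customers AS c
        INNER JOIN dbo.Orders AS o
            ON o.CustomerId = c.CustomerId;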

    Read the article

  • Lingering database-connections from Feng Office

    - by Bobby
    I've installed Feng Office on our main server, and it is working perfectly so far. Unfortunately, there seems to be a problem with the connections to the MySQL database. While the connection itself works fine, it's the reuse/pooling of connections which seems to be bugged. There are lingering/sleeping connections to the server from Feng Office which won't close and don't get reused after some time (120 seconds). Of course, those lingering processes/connections pile up pretty fast. I've found a thread in the forums about this behavior, but the suggested fix is already applied (by default). I'm sure this is just a configuration issue, but I'm a little clueless, because among a MediaWiki, a DokuWiki, and several homebrewed PHP applications, Feng is the only one with this issue. The setup is a Microsoft Windows 2003 Server with MySQL 5.0.26 and Apache 2.2. Where can I start looking for clues as to why this is happening, and how do I get rid of the lingering MySQL connections?
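
    Not from the original question, but as a starting point for that investigation, MySQL itself can show who is holding the sleeping connections and how long it waits before killing them (the timeout value below is only an example):

        -- lingering connections show Command = 'Sleep' with a growing Time column
        SHOW FULL PROCESSLIST;

        -- how long the server keeps idle connections open (seconds)
        SHOW VARIABLES LIKE 'wait_timeout';

        -- example: drop idle connections after 5 minutes instead of the default 8 hours
        SET GLOBAL wait_timeout = 300;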

    Read the article

  • Steps to rebuild gspca_kinect driver module?

    - by Bobby Ray
    I recently purchased a Kinect for Windows and quickly discovered that the camera drivers included in Linux kernel 3.0+ aren't compatible with the Kinect for Windows hardware revision. After looking at the source code, it seems a tiny modification is all that is required for compatibility, so I've been trying to recompile the driver - to no avail. I've been referring to this article and this one as well, though they are a bit outdated. When I try to compile the module, I get an error because the header file "gspca.h" can't be found in the include path. I located the missing header in my filesystem, but the file itself is empty. I've also tried downloading the kernel source (3.2.0-24-generic), which allowed me to compile the module, but when I load the module I get an error: "-1 Unknown symbol in module". Is there a standard way to go about this without first building the kernel? Will building the kernel ensure that I can build the module? Thanks

    Read the article

  • Test Driven Development Code Order

    - by Bobby Kostadinov
    I am developing my first project using test-driven development. I am using Zend Framework and PHPUnit. Currently my project is at 100% code coverage, but I am not sure I understand in what order I am supposed to write my code. Am I supposed to write my tests first, describing what my objects are expected to do, or write my objects and then test them? I've been working on completing a controller/model and then writing a test for it, but I am not sure this is what TDD is about. Any advice? For example, I wrote my Auth plugin and my Auth controller and tested that they work properly in my browser, and then I sat down to write the tests for them, which proved that there were some logical errors in the code that did work in the browser.

    Read the article

  • Desperately Need Help: After a mishap, a folder shows 0 files in it

    - by bobby
    I'm hoping some of you guys may be able to shed some light on this scenario: I had an odt document open, one of many files in one of many folders on an internal hard drive. Some kind of glitch occurred and the document crashed (this could have been some kind of power surge while another hard drive was being unmounted). As I looked into the folders surrounding the folder in which my odt document was stored, they started to show 0 files in them. I immediately switched off the PC and then restarted. Upon the restart, the folders would show the thousands of files I've stored in them, and then within 5 minutes, as I started to back them up, they would freeze, cutting off the transfer. When I tried to open anything on the internal hard drive, be it an avi film, an mp3, a cbr or a word doc, it showed blank or wouldn't work. Some folders showed vastly fewer files. Eventually, things calmed down. I shut down the PC, checked that the connections were in firmly, gave it a vacuum and restarted. All the files eventually showed up and I started to back them up (I'd bought a hard drive for this anyway but had been distracted and not done it). All folders show except the one which contained the document I was working on at the time of the trouble. Strangely, it was one that had shown itself full on several occasions during the restarts. It shows zero files now; Properties shows zero files and zero space taken by it. Yet when I drop a file into this folder by pasting it in, it disappears too. Opening the folder, there is nothing there. But if I paste that document again, the PC asks whether I'd like to replace the existing file with the same name (which I can't see); when I click yes, the file appears. When I exit and return, the folder shows 0 files and the file has disappeared again. I'm hoping that someone can give me tips to recover the files in the folder; it would be greatly, greatly appreciated. All other films, music, comics and documents show and are fine!

    Read the article

  • How to store and update data tables on the client side (iOS MMO)

    - by farseer2012
    Currently I'm developing an iOS MMO game with cocos2d-x. The game depends on many data tables (Excel files) given by the designers. These tables contain data like how much gold/crystal an upgrade to a building (barracks, laboratory, etc.) will cost. We have about 10 tables, each with about 50 rows of data. My question is: how should we store those tables on the client side, and how should we update them once they have been modified on the server side? My opinion: use SQLite to store the data on the client side; the server parses the Excel files and sends the data to the client in JSON format, then the client parses the JSON string and saves it to the SQLite file. Is there a better method? I've found that some games store csv files on the client side; how do they update those files? Could the server send a whole file directly to the client?
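
    As a rough sketch of the SQLite approach described above (all names are hypothetical), the client can keep a version number per table, so that after the server reports its current versions, only the stale tables are re-downloaded:

        -- hypothetical client-side schema: one row per building level
        CREATE TABLE IF NOT EXISTS building_cost (
            building_id  INTEGER NOT NULL,
            level        INTEGER NOT NULL,
            gold_cost    INTEGER NOT NULL,
            crystal_cost INTEGER NOT NULL,
            PRIMARY KEY (building_id, level)
        );

        -- version of each synced table; compared against the versions the
        -- server reports at login, so only changed tables are refreshed
        CREATE TABLE IF NOT EXISTS table_version (
            table_name TEXT PRIMARY KEY,
            version    INTEGER NOT NULL
        );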

    Read the article

  • Using SQL Server's Output Clause

    When you are inserting, updating, or deleting records from a table, SQL Server keeps track of the records that are changed in two pseudo tables: INSERTED and DELETED. These tables are normally used in DML triggers. If you use the OUTPUT clause on an INSERT, UPDATE, DELETE or MERGE statement, you can expose the records that go to these pseudo tables to your application and/or T-SQL code.
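
    For instance, here is a minimal sketch (hypothetical table) that captures the rows an INSERT writes, including their new identity values, without a second query:

        -- hypothetical table
        CREATE TABLE dbo.Widget (
            WidgetId INT IDENTITY(1,1) PRIMARY KEY,
            Name     NVARCHAR(50) NOT NULL
        );

        DECLARE @inserted TABLE (WidgetId INT, Name NVARCHAR(50));

        -- OUTPUT exposes the INSERTED pseudo table to our own code
        INSERT INTO dbo.Widget (Name)
        OUTPUT inserted.WidgetId, inserted.Name INTO @inserted
        VALUES (N'gizmo'), (N'sprocket');

        SELECT WidgetId, Name FROM @inserted;  -- the newly assigned identity values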

    Read the article

  • Password Authentication Problems

    - by Bobby Hathorn
    I am new to Ubuntu and am extremely delighted with the performance and speed compared to Windows 7. However, I messed up, I think: when I booted from my USB disc, I set a password, as directed, and when Ubuntu booted up I tried to reset my password via User Accounts to "None". Now the Password Authentication window prevents me from downloading software (Audacity and my Ubuntu updates). Also, I've tried to boot into GRUB and the recovery console, as directed; however, the PC bypasses GRUB and boots into Ubuntu instead. And when attempting to use the terminal, as directed, to change the password, I'm given a password prompt there also. If the problem is on my end, could you email/reset my password? My PC is an eMachines EL1358G. I am otherwise happy with Ubuntu!

    Read the article

  • How to use filegroups for DB split?

    - by Robin Jain
    In my project I have one DB used for everything. I want to break it into two databases: static lookup tables would be stored in one DB, and tables with dynamic data in the other. My problem is how to use foreign key constraints between those two DBs. Can someone help me out and suggest a way to proceed, ideally with an example? I thought of using synonyms for the tables and then constraints on the synonyms, but later I learned that synonyms can't be used in constraints. I need to maintain the relationships among the tables from both DBs. The motivation is updates: with a new release I just want to update the lookup tables, and that's why I want to split the DB. I'd also like to know how filegroups could be used for this.
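
    SQL Server does not allow a foreign key to reference a table in another database, so one common substitute (sketched here with hypothetical names) is a trigger that validates the key against the lookup database by hand:

        -- hypothetical: LookupDB holds the static table, MainDB the dynamic one
        USE MainDB;
        GO
        CREATE TRIGGER dbo.trg_Orders_CheckStatus
        ON dbo.Orders
        AFTER INSERT, UPDATE
        AS
        BEGIN
            IF EXISTS (SELECT 1 FROM inserted AS i
                       WHERE NOT EXISTS (SELECT 1
                                         FROM LookupDB.dbo.OrderStatus AS s
                                         WHERE s.StatusId = i.StatusId))
            BEGIN
                RAISERROR('StatusId not found in LookupDB.dbo.OrderStatus', 16, 1);
                ROLLBACK TRANSACTION;
            END
        END;

    Note that, unlike a real foreign key, this only guards one direction; deleting a lookup row would need a matching trigger on the lookup side.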

    Read the article

  • How to make a GRANT persist for a table that's being dropped and re-created?

    - by Eli Courtwright
    I'm on a fairly new project where we're still modifying the design of our Oracle 11g database tables. As such, we drop and re-create our tables fairly often to make sure that our table creation scripts work as expected whenever we make a change. Our database consists of 2 schemas. One schema has some tables with INSERT triggers which cause the data to sometimes be copied into tables in our second schema. This requires us to log into the database with an admin account, such as one with sysdba privileges, and GRANT the first schema access to the necessary tables in the second schema, e.g. GRANT ALL ON schema_two.SomeTable TO schema_one; Our problem is that every time we make a change to our database design and want to drop and re-create our tables, the access we GRANTed to schema_one goes away when the table is dropped. Thus, this creates another annoying step wherein we must log in with an admin account to re-GRANT the access every time one of these tables is dropped and re-created. This isn't a huge deal, but I'd love to eliminate as many steps as possible from our development and testing procedures. Is there any way to GRANT access to a table in such a way that the GRANTed permissions survive the table being dropped and then re-created? And if this isn't possible, is there a better way to go about this?
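
    A grant is tied to the object, so it disappears with the drop; one low-tech workaround (a sketch using the names from the question) is to put the GRANT into the re-creation script itself and run that script connected as the owning schema, since an owner can always grant privileges on its own tables without a sysdba login:

        -- run connected as schema_two (the owner), not as an admin account
        DROP TABLE SomeTable;

        CREATE TABLE SomeTable (
            id   NUMBER PRIMARY KEY,
            data VARCHAR2(100)
        );

        -- re-applied automatically on every rebuild
        GRANT ALL ON SomeTable TO schema_one;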

    Read the article
