Search Results

Search found 59951 results on 2399 pages for 'laptop memory question'.


  • Problems Using memset and memcpy

    - by user306557
    So I am trying to create a memory management system. I have a fixed amount of space (allocated by malloc) and a function myMalloc which returns a pointer into that space. Since we will later need to free it, we want to write a header at the start of the allocated space holding its size, using memset:

        memset(memPtr, sizeBytes, sizeof(int));

    We then need to be able to read this back so we can see the size. We are attempting to do that with memcpy, copying the first sizeof(int) bytes into a variable. For testing purposes we just do the memset and then immediately read the size back. I've included the entire method below so that you can see all declarations. Any help would be greatly appreciated! Thanks!

        void* FirstFit::memMalloc(int sizeBytes) {
            node* listPtr = freelist;
            void* memPtr;
            // Cycle through each node in freelist
            while (listPtr != NULL) {
                if (listPtr->size >= sizeBytes) {
                    // We found our space
                    // This is where the new memory allocation begins
                    memPtr = listPtr->head;
                    memset(memPtr, sizeBytes, sizeof(int));
                    void *size;
                    memcpy(size, memPtr, sizeof(memPtr));
                    // Now let's shrink freelist
                    listPtr->size = listPtr->size - sizeBytes;
                    int *temp = (int*)listPtr->head + (sizeBytes * sizeof(int));
                    listPtr->head = (int*) temp;
                    return memPtr;
                }
                listPtr = listPtr->next;
            }
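
    The core problem: memset fills each of the first sizeof(int) bytes with the value sizeBytes, it does not store the integer itself, and the memcpy then writes through an uninitialized pointer. A minimal C++ sketch of the usual header approach follows; the names and layout are illustrative, not the poster's actual FirstFit class, and it assumes the int-sized header simply sits in front of the block handed back to the caller.

        #include <cstdio>
        #include <cstdlib>
        #include <cstring>

        // Write an int-sized header at the start of a block and hand the caller
        // the address just past it.
        void* giveOutBlock(void* freeSpace, int sizeBytes) {
            std::memcpy(freeSpace, &sizeBytes, sizeof(int));       // copy the bytes of the int
            return static_cast<char*>(freeSpace) + sizeof(int);    // caller's region starts here
        }

        // Read the header back from just before the caller's pointer.
        int readHeader(void* userPtr) {
            int size = 0;                                           // a real int, not a dangling void*
            std::memcpy(&size, static_cast<char*>(userPtr) - sizeof(int), sizeof(int));
            return size;
        }

        int main() {
            void* pool = std::malloc(1024);
            void* p = giveOutBlock(pool, 64);
            std::printf("stored size: %d\n", readHeader(p));        // prints 64
            std::free(pool);
        }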

    Read the article

  • .net real time stream processing - needed huge and fast RAM buffer

    - by mack369
    The application I'm developing communicates with a digital audio device that can send 24 voice streams at the same time. The device is connected via USB, using an FTDI device (serial port emulator) and the D2XX drivers (the basic COM driver is too slow to handle a 4.5 Mbit transfer). Basically the application consists of 3 threads:

      - Main thread - GUI, control, etc.
      - Bus reader - continuously reads data from the device and saves it to a file buffer (there is no logic in this thread).
      - Data interpreter - reads the data from the file buffer, converts it to samples, does simple sample processing and saves the samples to separate wav files.

    The reason I used a file buffer is that I wanted to be sure I wouldn't lose any samples. The application doesn't record all the time, so I chose this solution because it was safe. The application works fine, except that the buffered wave file generator is pretty slow. For 24 parallel records of 1 minute, it takes about 4 minutes to complete the recording. I'm pretty sure that taking the hard drive out of this process will speed it up considerably. The second problem is that the file buffer gets really heavy for long records and I can't clean it up until the end of data processing (that would slow the process down even more). For a RAM buffer I need at least 1 GB to make it work properly. What is the best way to allocate such a big amount of memory in .NET? I'm going to use this memory from 2 threads, so a fast synchronization mechanism is needed. I'm thinking about a circular buffer: one big array, the Bus Reader saves the data, the Data Interpreter reads it. What do you think about it? [edit] For buffering I'm currently using the BinaryReader and BinaryWriter classes backed by a file.
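
    The question is about .NET, but the producer/consumer circular-buffer idea is language-neutral. Below is a minimal single-producer/single-consumer sketch in C++ (illustrative only; the class and member names are made up and nothing here reflects the poster's FTDI setup): one fixed array, the reader thread pushes, the interpreter thread pops, and the head/tail indices are the only shared state.

        #include <atomic>
        #include <cstddef>
        #include <vector>

        // Single-producer/single-consumer ring buffer: the bus-reader thread pushes,
        // the data-interpreter thread pops. Capacity must be at least 2.
        class RingBuffer {
        public:
            explicit RingBuffer(std::size_t capacity) : buf_(capacity), head_(0), tail_(0) {}

            bool push(unsigned char byte) {                  // called by the bus-reader thread
                std::size_t head = head_.load(std::memory_order_relaxed);
                std::size_t next = (head + 1) % buf_.size();
                if (next == tail_.load(std::memory_order_acquire)) return false;  // full
                buf_[head] = byte;
                head_.store(next, std::memory_order_release);
                return true;
            }

            bool pop(unsigned char& byte) {                  // called by the data-interpreter thread
                std::size_t tail = tail_.load(std::memory_order_relaxed);
                if (tail == head_.load(std::memory_order_acquire)) return false;  // empty
                byte = buf_[tail];
                tail_.store((tail + 1) % buf_.size(), std::memory_order_release);
                return true;
            }

        private:
            std::vector<unsigned char> buf_;
            std::atomic<std::size_t> head_, tail_;
        };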

    Read the article

  • -(void)dealloc - How ? Objective - C

    - by sagar
    Please note that this is not similar to this question. OK. To understand my question, first of all please look at both of these dealloc implementations.

        - (void)dealloc {
            [Marketdetails release]; Marketdetails = nil;
            [parsedarray release];   parsedarray = nil;
            [Marketid release];      Marketid = nil;
            [marketname release];    marketname = nil;
            [super dealloc];
        }

        - (void)dealloc {
            [super dealloc];
            [Marketdetails release]; Marketdetails = nil;
            [parsedarray release];   parsedarray = nil;
            [Marketid release];      Marketid = nil;
            [marketname release];    marketname = nil;
        }

    See, both versions have different code. In the first dealloc, the current class's objects are released first and then [super dealloc] is called. In the second, [super dealloc] is called first. My question is: where should we write [super dealloc] - first or last? Or does it not matter?

    Read the article

  • Perl's Devel::LeakTrace::Fast Pointing to blank files and evals

    - by kt
    I am using Devel::LeakTrace::Fast to debug a memory leak in a Perl script designed as a daemon, which runs an infinite loop with sleeps until interrupted. I am having some trouble both reading the output and finding documentation to help me understand it. The perldoc doesn't contain much information on the output. Most of it makes sense, such as the entries pointing to globals in DBI. Intermingled with the output, however, are several lines like

        leaked SV(<LOCATION>) from (eval #) line #

    where the numbers are numbers and <LOCATION> is a location in memory. The script itself does not use eval at any point - I have not investigated each used module to see if evals are present. Mostly what I want to know is how to find these evals (if possible). I also find the following entry repeated over and over again

        leaked SV(<LOCATION>) from line #

    where the line # is always the same. Not very helpful in tracking down which file that line is in.

    Read the article

  • 'Bank Switching' Sprites on old NES applications

    - by Jeffrey Kern
    I'm currently writing in C# what could basically be called my own interpretation of the NES hardware, for an old-school looking game that I'm developing. I've fired up FCE and have been observing how the NES displayed and rendered graphics. In a nutshell, the NES could hold two bitmaps' worth of graphical information, each with dimensions of 128x128. These are called the PPU tables. One was for BG tiles and the other was for sprites. The data had to be in this memory for it to be drawn on screen. Now, if a game had more graphical data than these two banks, it could write portions of the new information into these banks - overwriting what was there - at the end of each frame, and use it from the next frame onward. So, in old games, how did the programmers 'bank switch'? I mean, within the level design, how did they know which graphic set to load? I've noticed that Mega Man 2 bank-switches when the screen programmatically scrolls from one portion of the stage to the next. But how did they store this information in the level - which sprites to copy over into the PPU tables, and where to write them? Another example would be hitting pause in MM2. BG tiles get overwritten during pause, and then get restored when the player unpauses. How did they remember which tiles they replaced and how to restore them? If I was lazy, I could just make one huge static bitmap and grab values that way. But I'm forcing myself to stick to these limits to create a more authentic experience. I've read the amazing guide on how M.C. Kids was made, and I'm trying to be barebones about how I program this game. It still just boggles my mind how these programmers accomplished what they did with what they had. EDIT: The only solution I can think of would be to hold separate tables that state what tiles should be in the PPU at what time, but I think that would be a huge memory resource that the NES wouldn't be able to handle.
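
    For what it's worth, the table the EDIT describes need only be a few bytes per screen, which is why such metadata was affordable even on the NES. A purely illustrative sketch follows (in C++ here, though the idea is identical in the poster's C#; no claim that Mega Man 2 or any real cartridge stored it this way): each screen of a level records which tile sheet to copy and where in the pattern table it goes.

        #include <array>
        #include <cstdint>

        // One plausible layout: each screen names the 128x128 tile sheet the PPU
        // should hold and the destination offset inside the pattern table.
        struct ScreenBankInfo {
            std::uint8_t  chrBank;    // index of the tile sheet to load
            std::uint16_t ppuOffset;  // where in the emulated pattern table to copy it
        };

        // A level is just a list of screens; scrolling into screen N triggers a copy
        // of sheet chrBank into PPU memory at ppuOffset during the next vblank.
        constexpr std::array<ScreenBankInfo, 4> kLevelOneScreens = {{
            {0, 0x0000}, {0, 0x0000}, {1, 0x0000}, {2, 0x0800},
        }};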

    Read the article

  • How can I release this NSXMLParser without crashing my app?

    - by prendio2
    Below is the @interface for an MREntitiesConverter object I use to strip all html tags from a string using an NSXMLParser.

        @interface MREntitiesConverter : NSObject {
            NSMutableString* resultString;
            NSString* xmlStr;
            NSData *data;
            NSXMLParser* xmlParser;
        }
        @property (nonatomic, retain) NSMutableString* resultString;
        - (NSString*)convertEntitiesInString:(NSString*)s;
        @end

    And this is the implementation:

        @implementation MREntitiesConverter
        @synthesize resultString;

        - (id)init {
            if ([super init]) {
                self.resultString = [NSMutableString string];
            }
            return self;
        }

        - (NSString*)convertEntitiesInString:(NSString*)s {
            xmlStr = [NSString stringWithFormat:@"<data>%@</data>", s];
            data = [xmlStr dataUsingEncoding:NSUTF8StringEncoding allowLossyConversion:YES];
            xmlParser = [[NSXMLParser alloc] initWithData:data];
            [xmlParser setDelegate:self];
            [xmlParser parse];
            return [resultString autorelease];
        }

        - (void)dealloc {
            [data release];
            //I want to release xmlParser here but it crashes the app
            [super dealloc];
        }

        - (void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)s {
            [self.resultString appendString:s];
        }
        @end

    If I release xmlParser in the dealloc method I am crashing my app but without releasing I am quite obviously leaking memory. I am new to Instruments and trying to get the hang of optimising this app. Any help you can offer on this particular issue will likely help me solve other memory issues in my app. Yours in frustrated anticipation :) Oisin

    Read the article

  • C++ operator new, object versions, and the allocation sizes

    - by mizubasho
    Hi. I have a question about different versions of an object, their sizes, and allocation. The platform is Solaris 8 (and higher). Let's say we have programs A, B, and C that all link to a shared library D. Some class is defined in library D, let's call it classD, and assume its size is 100 bytes. Now we want to add a few members to classD for the next version of program A, without affecting the existing binaries B or C. The new size will be, say, 120 bytes. We want program A to use the new definition of classD (120 bytes), while programs B and C continue to use the old definition of classD (100 bytes). A, B, and C all use operator new to create instances of classD. The question is, when does operator new know the amount of memory to allocate? Compile time or run time? One thing I am afraid of is that programs B and C expect classD to be, and allocate, 100 bytes, whereas the new shared library D requires 120 bytes for classD, and this inconsistency may cause memory corruption in programs B and C if I link them with the new library D. In other words, the area for the extra 20 bytes that the new classD requires may be allocated to some other variables by programs B and C. Is this assumption correct? Thanks for your help.
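
    A small C++ sketch of the compile-time side of this (illustrative; the logging replacement of operator new is only there to make the requested size visible): the new-expression passes sizeof(ClassD) as the calling translation unit saw it when it was compiled, so an old binary keeps requesting the old size no matter which version of the shared library it loads at run time.

        #include <cstdio>
        #include <cstdlib>
        #include <new>

        struct ClassD { int a, b, c; };            // stand-in for the shared library's class

        // Replacement allocation functions that log the requested size. The size the
        // runtime passes in is sizeof(ClassD) as the calling translation unit saw it
        // when it was compiled; it is not looked up from the library at run time.
        void* operator new(std::size_t n) {
            std::printf("operator new asked for %zu bytes\n", n);
            if (void* p = std::malloc(n)) return p;
            throw std::bad_alloc();
        }
        void operator delete(void* p) noexcept { std::free(p); }

        int main() {
            ClassD* p = new ClassD;                // size baked in at compile time of this file
            delete p;
        }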

    Read the article

  • Designing small comparable objects

    - by Thomas Ahle
    Intro: Consider you have a list of key/value pairs:

        (0,a) (1,b) (2,c)

    You have a function that inserts a new value between two current pairs, and you need to give it a key that keeps the order:

        (0,a) (0.5,z) (1,b) (2,c)

    Here the new key was chosen as the average of the keys of the bounding pairs. The problem is that your list may see millions of inserts. If these inserts are all put close to each other, you may end up with keys such as 2^(-1000000), which are not easily storable in any standard or special number class.

    The problem: How can you design a system for generating keys that (1) gives the correct result (larger/smaller than) when compared to all the rest of the keys, and (2) takes up only O(log n) memory (where n is the number of items in the list)?

    My tries: First I tried different number classes, like fractions and even polynomials, but I could always find examples where the key size would grow linearly with the number of inserts. Then I thought about saving pointers to a number of other keys and storing the lower/greater-than relationship, but that would always require at least O(sqrt(n)) memory and time for comparison.

    Extra info: Ideally the algorithm shouldn't break when pairs are deleted from the list.
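
    One family of answers is the order-maintenance / list-labelling literature (Dietz & Sleator; Bender et al.), where keys are fixed-width integers and occasional relabelling keeps gaps available. The sketch below is a deliberately simplified version of that idea (a global relabel instead of the local-window relabel the real structures use), so comparisons are a plain integer < and each key costs a constant number of machine words; it is illustrative, not a tuned implementation.

        #include <cstdint>
        #include <iterator>
        #include <list>

        struct Item { std::uint64_t key; char value; };

        class OrderedList {
        public:
            // Insert `value` immediately after `pos` (pass items.end() to insert at the front).
            std::list<Item>::iterator insertAfter(std::list<Item>::iterator pos, char value) {
                std::uint64_t lo = (pos == items.end()) ? 0 : pos->key;
                auto next = (pos == items.end()) ? items.begin() : std::next(pos);
                std::uint64_t hi = (next == items.end()) ? UINT64_MAX : next->key;
                if (hi - lo < 2) { relabel(); return insertAfter(pos, value); }  // no gap left
                std::uint64_t mid = lo + (hi - lo) / 2;                          // midpoint key
                return items.insert(next, Item{mid, value});
            }
        private:
            void relabel() {                            // spread the keys out evenly again
                std::uint64_t step = UINT64_MAX / (items.size() + 1), k = step;
                for (auto& it : items) { it.key = k; k += step; }
            }
            std::list<Item> items;
        };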

    Read the article

  • How safe and reliable are C++ String Literals?

    - by DoctorT
    So, I'm wanting to get a better grasp on how string literals in C++ work. I'm mostly concerned with situations where you're assigning the address of a string literal to a pointer and passing it around. For example:

        char* advice = "Don't stick your hands in the toaster.";

    Now let's say I just pass this string around by copying pointers for the duration of the program. Sure, it's probably not a good idea, but I'm curious what would actually be going on behind the scenes. For another example, let's say we make a function that returns a string literal:

        char* foo() {
            // function does stuff
            return "Yikes!"; // somebody's feeble attempt at an error message
        }

    Now let's say this function is called very often, and the string literal is only used about half the time it's called:

        // situation #1: it's just randomly called without heed to the return value
        foo();
        // situation #2: the returned string is kept and used for who knows how long
        char* retVal = foo();

    In the first situation, what's actually happening? Is the string just created but not used, and never deallocated? In the second situation, is the string going to be maintained as long as the user finds need for it? What happens when it isn't needed anymore... will that memory be freed up then (assuming nothing points to that space anymore)? Don't get me wrong, I'm not planning on using string literals like this. I'm planning on using a container to keep my strings in check (probably std::string). I'm mostly just wanting to know if these situations could cause problems for memory management or data corruption.
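
    A short sketch of the key fact (the const-correct signature is my addition, not the poster's): string literals have static storage duration, so the array behind "Yikes!" exists for the whole run of the program; nothing is allocated per call and nothing ever needs to be freed.

        #include <cstdio>

        // String literals have static storage duration: the array lives for the whole
        // run of the program, so handing its address around never dangles. Prefer
        // const char*; writing through the pointer is undefined behaviour.
        const char* foo() {
            return "Yikes!";            // not allocated per call, nothing to free
        }

        int main() {
            const char* kept = foo();   // still valid long after foo() returns
            foo();                      // ignoring the return value leaks nothing
            std::puts(kept);
        }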

    Read the article

  • Have you dealt with space hardening?

    - by Tim Post
    I am very eager to study best practices when it comes to space hardening. For instance, I've read (though I can't find the article any longer) that some core parts of the Mars rovers did not use dynamic memory allocation; in fact it was forbidden. I've also read that old-fashioned core memory may be preferable in space. I was looking at some of the projects associated with the Google Lunar Challenge and wondering what it would feel like to get code on the moon, or even just into space. I know that space-hardened boards offer some sanity in such a harsh environment, however I'm wondering (as a C programmer) how I would need to adjust my thinking and code if I were writing something that would run in space? I think the next few years might show more growth in private space companies, and I'd really like to at least be somewhat knowledgeable regarding best practices. Can anyone recommend some books, offer links to papers on the topic or (gasp) even a simulator that shows you what happens to a program if radiation, cold or heat bombards a board that has sustained damage to its insulation? I think the goal is keeping humans inside the spacecraft (as far as fixing or swapping stuff goes) and avoiding missions to fix things. Furthermore, if the board maintains some critical system, early warnings seem paramount.
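
    On the "no dynamic memory allocation" point, one pattern that shows up in this style of code is a statically sized object pool, so the worst-case footprint is known before launch and there is no fragmentation or allocation failure at run time. A deliberately tiny sketch follows (in C++ here, the same idea works in plain C; the types and sizes are made up for illustration):

        #include <cstddef>

        // Every object comes from a fixed-size pool declared at compile time;
        // malloc/new never appear, and "pool exhausted" is an explicit, testable path.
        template <typename T, std::size_t N>
        class StaticPool {
        public:
            T* acquire() {
                for (std::size_t i = 0; i < N; ++i)
                    if (!used[i]) { used[i] = true; return &slots[i]; }
                return nullptr;                      // exhausted: caller must handle it
            }
            void release(T* p) { used[p - slots] = false; }
        private:
            T slots[N]{};
            bool used[N]{};
        };

        struct TelemetryPacket { int id; float reading; };
        static StaticPool<TelemetryPacket, 32> g_packets;   // worst case sized up front

        int main() {
            TelemetryPacket* p = g_packets.acquire();
            if (!p) return 1;                        // no hidden allocation failure paths
            p->id = 1; p->reading = 3.14f;
            g_packets.release(p);
        }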

    Read the article

  • What happens to class members when malloc is used instead of new?

    - by Felix
    I'm studying for a final exam and I stumbled upon a curious question that was part of the exam our teacher gave last year to some poor souls. The question goes something like this: Is the following program correct, or not? If it is, write down what the program outputs. If it's not, write down why. The program:

        #include <iostream.h>

        class cls {
            int x;
        public:
            cls() { x = 23; }
            int get_x() { return x; }
        };

        int main() {
            cls *p1, *p2;
            p1 = new cls;
            p2 = (cls*)malloc(sizeof(cls));
            int x = p1->get_x() + p2->get_x();
            cout << x;
            return 0;
        }

    My first instinct was to answer with "the program is not correct, as new should be used instead of malloc". However, after compiling the program and seeing it output 23 I realize that that answer might not be correct. The problem is that I was expecting p2->get_x() to return some arbitrary number (whatever happened to be in that spot of the memory when malloc was called). However, it returned 0. I'm not sure whether this is a coincidence or if class members are initialized with 0 when it is malloc-ed. Is this behavior (p2->x being 0 after malloc) the default? Should I have expected this? What would your answer to my teacher's question be? (besides forgetting to #include <stdlib.h> for malloc :P)
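
    A hedged sketch of what is going on (modern headers used instead of iostream.h; this is a rewrite for illustration, not the exam program): malloc returns raw, uninitialized bytes and never runs the constructor, so reading x through p2 is undefined behaviour, and the 0 the poster saw is just whatever happened to be in that memory (freshly mapped pages are often zeroed). Placement new is the standard way to run a constructor on malloc'd storage.

        #include <cstdio>
        #include <cstdlib>
        #include <new>

        class cls {
            int x;
        public:
            cls() { x = 23; }
            int get_x() { return x; }
        };

        int main() {
            cls* p1 = new cls;                                        // constructor runs, x == 23
            cls* p2 = static_cast<cls*>(std::malloc(sizeof(cls)));    // raw bytes, no constructor
            // Calling p2->get_x() here would be undefined behaviour: x was never initialised.
            new (p2) cls;                                             // placement new runs the constructor
            std::printf("%d %d\n", p1->get_x(), p2->get_x());         // now well-defined: 23 23
            p2->~cls();
            std::free(p2);
            delete p1;
        }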

    Read the article

  • Is this 2D array initialization a bad idea?

    - by Brendan Long
    I have something I need a 2D array for, but for better cache performance, I'd rather have it actually be a normal array. Here's the idea I had, but I don't know if it's a terrible idea:

        const int XWIDTH = 10, YWIDTH = 10;

        int main() {
            int * tempInts = new int[XWIDTH * YWIDTH];
            int ** ints = new int*[XWIDTH];
            for (int i = 0; i < XWIDTH; i++) {
                ints[i] = &tempInts[i * YWIDTH];
            }
            // do things with ints
            delete[] ints[0];
            delete[] ints;
            return 0;
        }

    So the idea is that instead of newing a bunch of arrays (and having them placed in different places in memory), I just point into an array I made all at once. The reason for the delete[] (int*) ints; is because I'm actually doing this in a class and it would save [trivial amounts of] memory to not save the original pointer. Just wondering if there are any reasons this is a horrible idea. Or if there's an easier/better way. The goal is to be able to access the array as ints[x][y] rather than ints[x*YWIDTH+y].
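
    An alternative sketch that keeps the single contiguous block but drops the extra pointer array entirely (illustrative; the class and method names are mine, not the poster's): wrap the block in a small accessor so call sites still read like 2D indexing.

        #include <cstddef>
        #include <vector>

        // One contiguous block plus an index helper: grid(x, y) is cells[x * height + y],
        // the same layout as ints[x*YWIDTH+y] but without the second allocation.
        class Grid {
        public:
            Grid(std::size_t w, std::size_t h) : width(w), height(h), cells(w * h) {}
            int& operator()(std::size_t x, std::size_t y) { return cells[x * height + y]; }
        private:
            std::size_t width, height;
            std::vector<int> cells;
        };

        int main() {
            Grid g(10, 10);
            g(3, 7) = 42;           // reads almost like g[3][7], stays cache-friendly
            return g(3, 7) == 42 ? 0 : 1;
        }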

    Read the article

  • Android - BitmapFactory.decodeByteArray - OutOfMemoryError (OOM)

    - by Bob Keathley
    I have read hundreds of articles about the OOM problem. Most are in regard to large bitmaps. I am doing a mapping application where we download 256x256 weather overlay tiles. Most are totally transparent and very small. I just got a crash on a bitmap stream that was 442 bytes long while calling BitmapFactory.decodeByteArray(...). The exception states:

        java.lang.OutOfMemoryError: bitmap size exceeds VM budget
        (Heap Size=9415KB, Allocated=5192KB, Bitmap Size=23671KB)

    The code is:

        protected Bitmap retrieveImageData() throws IOException {
            URL url = new URL(imageUrl);
            InputStream in = null;
            OutputStream out = null;
            HttpURLConnection connection = (HttpURLConnection) url.openConnection();
            // determine the image size and allocate a buffer
            int fileSize = connection.getContentLength();
            if (fileSize < 0) {
                return null;
            }
            byte[] imageData = new byte[fileSize];
            // download the file
            //Log.d(LOG_TAG, "fetching image " + imageUrl + " (" + fileSize + ")");
            BufferedInputStream istream = new BufferedInputStream(connection.getInputStream());
            int bytesRead = 0;
            int offset = 0;
            while (bytesRead != -1 && offset < fileSize) {
                bytesRead = istream.read(imageData, offset, fileSize - offset);
                offset += bytesRead;
            }
            // clean up
            istream.close();
            connection.disconnect();
            Bitmap bitmap = null;
            try {
                bitmap = BitmapFactory.decodeByteArray(imageData, 0, bytesRead);
            } catch (OutOfMemoryError e) {
                Log.e("Map", "Tile Loader (241) Out Of Memory Error " + e.getLocalizedMessage());
                System.gc();
            }
            return bitmap;
        }

    Here is what I see in the debugger: bytesRead = 442. So the bitmap data is 442 bytes. Why would it be trying to create a 23671KB bitmap and running out of memory?

    Read the article

  • Handling large datasets with PHP/Drupal

    - by jo
    Hi all, I have a report page that deals with ~700k records from a database table. I can display this on a webpage using paging to break up the results. However, my export to PDF/CSV functions rely on processing the entire data set at once and I'm hitting my 256MB memory limit at around 250k rows. I don't feel comfortable increasing the memory limit and I haven't got the ability to use MySQL's SELECT ... INTO OUTFILE to just serve a pre-generated CSV. However, I can't really see a way of serving up large data sets with Drupal using something like:

        $form = array();
        $table_headers = array();
        $table_rows = array();
        $data = db_query("a query to get the whole dataset");
        while ($row = db_fetch_object($data)) {
            $table_rows[] = $row->some_attribute;
        }
        $form['report'] = array('#value' => theme('table', $table_headers, $table_rows));
        return $form;

    Is there a way of getting around what is essentially appending to a giant array of arrays? At the moment I don't see how I can offer any meaningful report pages with Drupal due to this. Thanks

    Read the article

  • malloc:mmap(size=XX) failed (error code=12)

    - by Michel
    I have a memory problem in an iPhone app that is giving me a hard time. Here is the error message I get:

        malloc: * mmap(size=9281536) failed (error code=12)
        * error: can't allocate region

    I am using ARC for this app, in case that is useful information. The code (below) just uses a file in the bundle to load a Core Data entity. The strange thing is that the crash happens only after more than 90 loops; it seems to me that since the size of "contents" keeps getting smaller, the memory requests should also get smaller and smaller. Here is the code, if anyone can see a flaw please let me know.

        NSString *path, *contents, *lineBuffer;
        path = [[NSBundle mainBundle] pathForResource:@"myFileName" ofType:@"txt"];
        contents = [NSString stringWithContentsOfFile:path encoding:NSUTF8StringEncoding error:nil];
        int counter = 0;
        while (counter < 10000) {
            lineBuffer = [contents substringToIndex:[contents rangeOfCharacterFromSet:[NSCharacterSet newlineCharacterSet]].location];
            contents = [contents substringFromIndex:[lineBuffer length] + 1];
            newItem = [NSEntityDescription insertNewObjectForEntityForName:@"myEntityName" inManagedObjectContext:context];
            [newItem setValue:lineBuffer forKey:@"name"];
            request = [[NSFetchRequest alloc] init];
            [request setEntity:[NSEntityDescription entityForName:@"myEntityName" inManagedObjectContext:context]];
            error = nil;
            [context save:&error];
            counter++;
        }

    Read the article

  • when i set property(retain), one "self" = one "retain"?

    - by Walter
    Although I'm about to finish my first app, I'm still confused about very basic memory management. I've read through many posts here and the Apple documentation, but I'm still puzzled. For example, I'm currently doing things like this to add a label programmatically:

        @property (retain, nonatomic) UILabel *showTime;

        @synthesize showTime;

        showTime = [[UILabel alloc] initWithFrame:CGRectMake(45, 4, 200, 36)];
        [self.showTime setText:[NSString stringWithFormat:@"%d", time]];
        [self.showTime setFont:[UIFont fontWithName:@"HelveticaRoundedLT-Bold" size:23]];
        [self.showTime setTextColor:numColor];
        self.showTime.backgroundColor = [UIColor clearColor];
        [self addSubview:self.showTime];
        [showTime release];

    This is my current practice: for UILabel, UIButton, UIImageView, etc., I alloc/init without "self.", because I know going through the property would retain twice. But after the allocation, I put "self." back to set the attributes. My gut feeling tells me I am doing something wrong, but it works superficially and I found no memory leak in Analyze or Instruments. Can anyone give me advice? When I use "self." to set the text and background color, does it retain automatically? Thanks so much!

    Read the article

  • How to configure Hyper-V failover cluster to live migrate when dynamic memory runs out?

    - by Matt Johnson
    Apologies in advance that this is not a direct programming question, but I have a feeling that the solution involves custom PowerShell scripts (maybe), so this is as good a place to ask as any. I maintain a website that has a large Hyper-V cluster for SQL Servers. We are using Windows 2008 R2 SP1 and the new "dynamic memory" feature. I've already reviewed the Best Practices Guide and implemented its suggested configuration. Everything works well, except that when SQL demand increases memory pressure to expand to more memory than is available on the physical machine, the memory status goes into the "Warning" state and stays there. I assume the hypervisor is using a swapfile on the host to fulfill the memory requirement, thus slowing the virtual machine down. When this happens, there are plenty of other nodes in the cluster that have available resources. I can live-migrate the virtual server over there and everything works, and the warnings go away. Now how can I automate this? I see no menu options in either Hyper-V or the Failover Cluster Manager for performing a migration or shutdown when dynamic memory goes into the warning state. Any ideas about how to script this, or monitor it and invoke the action directly, would be helpful. If the solution involves coding, PowerShell would be ideal, but I could envision this as a .NET service that monitors for this state and kicks off the migration request. I just don't know what objects are involved in doing the monitoring or kicking off the live migration. Thanks in advance.

    Read the article

  • UML assignment question

    - by waitinforatrain
    Hi guys, sorry, I know this is a very lame question to ask and not of any use to anyone else. I have an assignment in UML due tomorrow and I don't even know the basics (all-nighter ahead!). I'm not looking for a walkthrough, I simply want your opinion on something. The assignment is as follows (you only need to skim over it!):

    =============
    Gourmet Surprise (GS) is a small catering firm with five employees. During a typical weekend, GS caters fifteen events with twenty to fifty people each. The business has grown rapidly over the past year and the owner wants to install a new computer system for managing the ordering and buying process. GS has a set of ten standard menus. When potential customers call, the receptionist describes the menus to them. If the customer decides to book an event (dinner, lunch, picnic, finger food etc.), the receptionist records the customer information (e.g., name, address, phone number, etc.) and the information about the event (e.g., place, date, time, which one of the standard menus, total price) on a contract. The customer is then faxed a copy of the contract and must sign and return it along with a deposit (often a credit card or by check) before the event is officially booked. The remaining money is collected when the catering is delivered. Sometimes, the customer wants something special (e.g., birthday cake). In this case, the receptionist takes the information and gives it to the owner who determines the cost; the receptionist then calls the customer back with the price information. Sometimes the customer accepts the price, other times, the customer requests some changes that have to go back to the owner for a new cost estimate. Each week, the owner looks through the events scheduled for that weekend and orders the supplies (e.g., plates) and food (e.g., bread, chicken) needed to make them. The owner would like to use the system for marketing as well. It should be able to track how customers learned about GS, and identify repeat customers, so that GS can mail special offers to them. The owner also wants to track the events on which GS sent a contract, but the customer never signed the contract and actually booked a GS.

    Exercise: Create an activity diagram and a use case model (complete with a set of detail use case descriptions) for the above system. Produce an initial domain model (class diagram) based on these descriptions. Elaborate the use cases into sequence diagrams, and include any state diagrams necessary. Finally use the information from these dynamic models to expand the domain model into a full application model.
    =============

    In your opinion, do you think this question is asking me to come up with a package for an online ordering system to replace the system described above, or to create UML diagrams that facilitate the existing telephone-based system?

    Read the article

  • write c++ in latex, noob latex question

    - by voodoomsr
    Maybe this is a noob question but I can't find the solution on the web: I need to write "C++" in LaTeX. I write C$++$ but the result looks like crap - the signs are too big and there is too much space between the C and the first plus sign. Previously I needed to write the sharp symbol for C#... C$\sharp$ also looks like crap, but with an escape character it looks nice; for the plus signs I can't do the same.
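
    One commonly used fix is a small macro that shrinks and raises the plus signs; the exact kerning values below are illustrative and worth tweaking to your font, and in typewriter contexts plain \texttt{C++} is usually fine as-is. A minimal compilable sketch:

        \documentclass{article}
        % Illustrative macros: kern the plus signs (and the sharp) up and closer to the C.
        \newcommand{\Cpp}{C\kern-.05em\raisebox{.35ex}{\footnotesize\textbf{+\kern-.1em+}}}
        \newcommand{\CSharp}{C\kern-.05em\raisebox{.35ex}{\footnotesize\textbf{\#}}}
        \begin{document}
        The parser is written in \Cpp{} and the tooling in \CSharp.
        In typewriter text, plain \texttt{C++} usually looks fine as well.
        \end{document}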

    Read the article

  • jquery animate question

    - by user313271
    jQuery code:

        $(function(){
            $('#switcher1').click(function(){
                $(this).stop().animate({left:'35px'}, 800);
            });
        });

    Hey everybody, this code slides #switcher1 to the right, to left:35px. My question: how can I make #switcher1 slide back to its original position on the next click?

    Read the article

  • Very Simple Regex Question

    - by Sunil
    Hello sir, I have a very simple regex question. Suppose I have 2 URLs:

        1. http://www.abc.com/cde/def
        2. https://www.abc.com/sadfl/dsaf

    How can I extract the base URL using regex? Sample output:

        1. http://www.abc.com
        2. https://www.abc.com

    Thanks
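
    A sketch of one way to do it, shown with C++'s std::regex here; the pattern itself (capture the scheme plus everything up to the first slash after "://") is the same in most regex dialects.

        #include <iostream>
        #include <regex>
        #include <string>

        int main() {
            // scheme plus host: everything up to the first '/' after "://"
            const std::regex baseUrl(R"(^(https?://[^/]+))");
            for (const std::string url : {"http://www.abc.com/cde/def",
                                          "https://www.abc.com/sadfl/dsaf"}) {
                std::smatch m;
                if (std::regex_search(url, m, baseUrl))
                    std::cout << m[1] << '\n';   // http://www.abc.com, https://www.abc.com
            }
        }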

    Read the article

  • Relational database question with php.

    - by Oliver Bayes-Shelton
    Hi, not really a coding question, more a request for a little help with my idea of a relational database. Say I have 3 tables in a SQL database. In my PHP script I query the companies that are in industry "a" and then insert a row into a separate table with their details, such as companyId, companyName etc. Is that a type of relational database? I have explained it in a simple way so we don't get confused about what I am trying to say.

    Read the article
