Search Results

Search found 22569 results on 903 pages for 'win32 process'.

Page 227/903 | < Previous Page | 223 224 225 226 227 228 229 230 231 232 233 234  | Next Page >

  • Resizing a GLContext in Windows

    - by user146780
    I have a Win32 window with a GL context. I'm trying to figure out how to resize the context, because even if I use glViewport, everything stays the same size as when I created the context. Am I misunderstanding glViewport? I'm giving it the bottom-left corner and the size I want, but it's still the original size the context was created at. Thanks
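
    glViewport only sets the viewport transform; it has to be re-issued every time the window size changes (typically from the WM_SIZE handler), and the projection matrix usually needs rebuilding to match the new aspect ratio, otherwise the drawing keeps the proportions it had when the context was created. A minimal sketch, assuming a fixed-function pipeline and a GLU-style perspective projection:

        case WM_SIZE:
        {
            int width  = LOWORD(lParam);          /* new client-area size */
            int height = HIWORD(lParam);
            if (height == 0) height = 1;          /* avoid divide-by-zero */

            glViewport(0, 0, width, height);      /* viewport = whole client area */

            glMatrixMode(GL_PROJECTION);          /* rebuild projection for the new aspect */
            glLoadIdentity();
            gluPerspective(45.0, (double)width / (double)height, 0.1, 100.0);
            glMatrixMode(GL_MODELVIEW);
            return 0;
        }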

    Read the article

  • Tool to profile PHP code

    - by Michael
    I'm looking for a [freeware/open-source] tool that makes it easy to profile a big PHP project on the Win32 platform. I need to find out which part of the code is the most time-consuming; it's impractical to manually add timing calls around every function and loop...
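
    Xdebug is the usual free answer here: its profiler writes cachegrind files that WinCacheGrind or KCachegrind can break down per function and per call, so nothing has to be instrumented by hand. A php.ini sketch (these are the Xdebug 2.x setting names; newer releases moved to a single xdebug.mode setting, so check the docs for the version you install, and the paths are just examples):

        ; load the extension (path is an example, adjust to your install)
        zend_extension = "C:\php\ext\php_xdebug.dll"

        ; Xdebug 2.x profiler settings
        xdebug.profiler_enable = 1
        xdebug.profiler_output_dir = "C:\php\profiles"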

    Read the article

  • Reporting Services "cannot connect to the report server database"

    - by Dano
    We have Reporting Services running, and twice in the past 6 months it has been down for 1-3 days before suddenly starting to work again. The errors range from not being able to view the tree root in a browser, down to being able to insert parameters on a report but crashing before the report can generate. Looking at the logs, there is one error and one warning which seem to correspond somewhat.
    ERROR: Event Type: Error Event Source: Report Server (SQL2K5) Event Category: Management Event ID: 107 Date: 2/13/2009 Time: 11:17:19 AM User: N/A Computer: ******** Description: Report Server (SQL2K5) cannot connect to the report server database. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
    WARNING (always comes before the previous error): Event code: 3005 Event message: An unhandled exception has occurred. Event time: 2/13/2009 11:06:48 AM Event time (UTC): 2/13/2009 5:06:48 PM Event ID: 2efdff9e05b14f4fb8dda5ebf16d6772 Event sequence: 550 Event occurrence: 5 Event detail code: 0 Process information: Process ID: 5368 Process name: w3wp.exe Account name: NT AUTHORITY\NETWORK SERVICE Exception information: Exception type: ReportServerException Exception message: For more information about this error navigate to the report server on the local server machine, or enable remote errors.
    During the downtime we tried restarting everything, from the server RS runs on to the database it calls to fill reports, with no success. When I came in Monday morning it was working again. Anyone out there have any ideas on what could be causing these issues? Edit: Tried both suggestions below several months ago, to no avail. The issue hasn't arisen since; maybe something out of my control has changed...

    Read the article

  • Supported MySQL version for Python 2.6.1

    - by SKSK
    Can anyone tell me which MySQL version is suitable for Python 2.6.1 to connect with MySQLdb? I am using MySQL 4.0.23 and the MySQLdb from MySQL-python-1.2.3c1.win32-py2.6. After a successful installation it still shows "No module named MySQLdb", even though I set the environment variables properly for Python 2.6. I am using Windows XP. I tested "import MySQLdb" through the command prompt and it doesn't show any errors, so what could be the problem? I have been struggling with this for the last 2 days; please tell me what could be wrong.
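
    If "import MySQLdb" works from the command prompt but the failing program still reports "No module named MySQLdb", the two are almost certainly running different interpreters or seeing different sys.path values. A small diagnostic sketch (Python 2 syntax) to run in both places and compare:

        import sys

        print "interpreter:", sys.executable
        print "search path:"
        for p in sys.path:
            print "  ", p

        try:
            import MySQLdb
            print "MySQLdb found at:", MySQLdb.__file__
        except ImportError, e:
            print "import failed:", e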

    Read the article

  • How do I link the GTK library more easily with CMake on Windows?

    - by Runner
    I'm now doing it in a very ugly way by manually listing all the required paths (the GTK bundle is at D:/Tools/gtk+-bundle_2.20.0-20100406_win32):
        include_directories(D:/Tools/gtk+-bundle_2.20.0-20100406_win32/include/gtk-2.0 D:/Tools/gtk+-bundle_2.20.0-20100406_win32/include/glib-2.0 D:/Tools/gtk+-bundle_2.20.0-20100406_win32/lib/glib-2.0/include D:/Tools/gtk+-bundle_2.20.0-20100406_win32/include/cairo D:/Tools/gtk+-bundle_2.20.0-20100406_win32/include/pango-1.0 D:/Tools/gtk+-bundle_2.20.0-20100406_win32/lib/gtk-2.0/include D:/Tools/gtk+-bundle_2.20.0-20100406_win32/include/atk-1.0)
        link_directories(D:/Tools/gtk+-bundle_2.20.0-20100406_win32/lib)
        target_link_libraries(MyProgram gtk-win32-2.0.lib)
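
    One way to avoid hard-coding every subdirectory is to let pkg-config supply them: the GTK+ bundle ships .pc files under lib/pkgconfig, so with pkg-config.exe on the PATH (or PKG_CONFIG_PATH pointing at that directory), a sketch along these lines should work:

        find_package(PkgConfig REQUIRED)
        pkg_check_modules(GTK2 REQUIRED gtk+-2.0)

        include_directories(${GTK2_INCLUDE_DIRS})
        link_directories(${GTK2_LIBRARY_DIRS})

        add_executable(MyProgram main.c)
        target_link_libraries(MyProgram ${GTK2_LIBRARIES})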

    Read the article

  • heroku logs --ps run showing nothing

    - by Zarne Dravitzki
    I have two running apps on Heroku, staging and production. They are near-identical environments (staging has extra configs, e.g. RailsFootnotes and the Bullet gem). When I run heroku logs --ps run --app jl-staging it returns logs like 2012-08-30T01:30:42+00:00 heroku[run.1]: Starting process with command `bundle exec rake jewellover:warn_users` This log is from a task set to run with the free Heroku Scheduler. Everything works perfectly, but when I do the same with heroku logs --ps run --app jl-production there are no results - no heroku[run.1] process logs. Both environments have the same scheduled tasks, albeit at different times, but nonetheless both run scheduled tasks at the specified times. Is there something I'm missing about heroku[run.1] processes in the production env? Does Heroku only keep the --ps logs for a certain amount of time? It seems to show less activity than the normal logs - maybe it only shows 24 hours' worth of logs rather than the last 100 lines... I need to log and debug the [run.1] process from the production env, specifically the jewellover:warn_users task. Any ideas?

    Read the article

  • Which is a good C++ GUI framework?

    - by Suriyan Suresh
    I know there are plenty of C++ GUI libraries out there. Here is an incomplete but significant list: MFC, Qt, wxWidgets, Ultimate++, WTL, Win32, Win32Gui, WinForms. Here is what I want: well documented; a modern (and well-designed) interface; easy to use; widely used; a GUI editor (RAD tool); free.

    Read the article

  • How to make a program not show up in Alt-Tab or on the taskbar.

    - by Scott Chamberlain
    I have a program that needs to sit in the background; when a user connects to an RDP session it will do some stuff, then launch a program. When that program is closed it will do some housekeeping and log off the session. The current way I am doing it is like this: I have the terminal server launch this application. I have it set as a Windows Forms application and my code is this: public static void Main() { //Do some setup work Process proc = new Process(); //setup the process proc.Start(); proc.WaitForExit(); //Do some housecleaning NativeMethods.ExitWindowsEx(0, 0); } I really like this because there is no item in the taskbar and there is nothing showing up in Alt-Tab. However, to do this I gave up access to functions like void WndProc(ref Message m), so now I can't listen to Windows messages (like WTS_REMOTE_DISCONNECT or WTS_SESSION_LOGOFF) and do not have a handle to use for bool WTSRegisterSessionNotification(IntPtr hWnd, int dwFlags). I would like my code to be more robust so it will do the housecleaning if the user logs off or disconnects from the session before he closes the program. Any recommendations on how I can have my cake and eat it too?
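
    One way to keep the invisibility and still receive session messages is a message-sink Form: a form that is never shown has no taskbar button and no Alt-Tab entry, yet it still owns a window handle for WTSRegisterSessionNotification and a WndProc for WM_WTSSESSION_CHANGE. A rough sketch (constant values taken from the Platform SDK headers and worth double-checking; the child-process launch and housekeeping from the code above would move into this class):

        using System;
        using System.Runtime.InteropServices;
        using System.Windows.Forms;

        // Invisible message-sink form: never shown, so it appears neither in
        // the taskbar nor in Alt-Tab, but it still has a window handle.
        class SessionWatcher : Form
        {
            const int WM_WTSSESSION_CHANGE    = 0x02B1;
            const int WTS_REMOTE_DISCONNECT   = 0x4;   // wParam values
            const int WTS_SESSION_LOGOFF      = 0x6;
            const int NOTIFY_FOR_THIS_SESSION = 0;

            [DllImport("wtsapi32.dll")]
            static extern bool WTSRegisterSessionNotification(IntPtr hWnd, int dwFlags);
            [DllImport("wtsapi32.dll")]
            static extern bool WTSUnRegisterSessionNotification(IntPtr hWnd);

            public SessionWatcher()
            {
                ShowInTaskbar = false;
                // Accessing Handle forces the window handle to be created even
                // though the form is never shown.
                WTSRegisterSessionNotification(Handle, NOTIFY_FOR_THIS_SESSION);
            }

            protected override void SetVisibleCore(bool value)
            {
                base.SetVisibleCore(false);          // never become visible
            }

            protected override void WndProc(ref Message m)
            {
                if (m.Msg == WM_WTSSESSION_CHANGE &&
                    ((int)m.WParam == WTS_REMOTE_DISCONNECT ||
                     (int)m.WParam == WTS_SESSION_LOGOFF))
                {
                    // housekeeping here, then ExitWindowsEx / Application.Exit
                }
                base.WndProc(ref m);
            }

            protected override void OnHandleDestroyed(EventArgs e)
            {
                WTSUnRegisterSessionNotification(Handle);
                base.OnHandleDestroyed(e);
            }
        }

        // usage: run a message loop around it instead of blocking on WaitForExit()
        //   Application.Run(new SessionWatcher());

    The launched process would then be watched via Process.EnableRaisingEvents and the Exited event rather than WaitForExit, so the message loop keeps running.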

    Read the article

  • Can somebody explain this remark in the MSDN CreateMutex() documentation about the bInitialOwner flag?

    - by Tom Williams
    The MSDN CreateMutex() documentation (http://msdn.microsoft.com/en-us/library/ms682411%28VS.85%29.aspx) contains the following remark near the end: Two or more processes can call CreateMutex to create the same named mutex. The first process actually creates the mutex, and subsequent processes with sufficient access rights simply open a handle to the existing mutex. This enables multiple processes to get handles of the same mutex, while relieving the user of the responsibility of ensuring that the creating process is started first. When using this technique, you should set the bInitialOwner flag to FALSE; otherwise, it can be difficult to be certain which process has initial ownership. Can somebody explain the problem with using bInitialOwner = TRUE? Earlier in the same documentation it suggests a call to GetLastError() will allow you to determine whether a call to CreateMutex() created the mutex or just returned a new handle to an existing mutex: Return Value If the function succeeds, the return value is a handle to the newly created mutex object. If the function fails, the return value is NULL. To get extended error information, call GetLastError. If the mutex is a named mutex and the object existed before this function call, the return value is a handle to the existing object, GetLastError returns ERROR_ALREADY_EXISTS, bInitialOwner is ignored, and the calling thread is not granted ownership. However, if the caller has limited access rights, the function will fail with ERROR_ACCESS_DENIED and the caller should use the OpenMutex function.
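
    The practical problem is in that last quoted sentence: when the named mutex already exists, CreateMutex succeeds but the caller is not granted ownership, regardless of bInitialOwner. Code that passes TRUE and simply assumes it now owns the mutex is therefore wrong in exactly the "second process" case, and the only way to tell the two outcomes apart is the extra GetLastError check. Passing FALSE and taking ownership explicitly avoids the ambiguity altogether; a sketch (the mutex name is just an example):

        HANDLE hMutex = CreateMutex(NULL, FALSE, TEXT("Global\\MyAppMutex"));
        if (hMutex == NULL)
        {
            /* creation/open failed entirely - handle the error */
        }
        else
        {
            /* useful only if the first creator must do one-time setup */
            BOOL alreadyExisted = (GetLastError() == ERROR_ALREADY_EXISTS);

            /* Ownership is always acquired explicitly, so it no longer matters
               whether this process created the mutex or merely opened it. */
            WaitForSingleObject(hMutex, INFINITE);
            /* ... protected work ... */
            ReleaseMutex(hMutex);
        }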

    Read the article

  • Drupal: How to make a fieldset dependent using CTools

    - by far
    Hello, I am using the CTools Dependency feature to make a fieldset hideable. This is part of my code: $form['profile-status'] = array( '#type' => 'radios', '#title' => '', '#options' => array( 'new' => t('Create a new profile.'), 'select' => t('Use an existing profile.'), ), ); $form['select'] = array( '#type' => 'select', '#title' => t('Select a profile'), '#options' => $options, '#process' => array('ctools_dependent_process'), '#dependency' => array('radio:profile-status' => array('select')), ); $form['profile-properties'] = array( '#type' => 'fieldset', '#title' => t('View the profile'), '#process' => array('ctools_dependent_process'), '#dependency' => array('radio:profile-status' => array('select')), '#input' => true, ); In the snippet above there are two dependent elements, a select and a fieldset. Both have #process and #dependency parameters, and both point to the same field for their dependent value. The problem is that elements like select or textfield can be hidden easily, but it does not work for the fieldset. In this support request page, the CTools creator mentioned that '#input' => TRUE is a workaround. As you can see I added it to the code, but it does not work either. Do you have any suggestions?

    Read the article

  • Scheduling a Delayed Job on Heroku with a Worker Dyno

    - by user1524775
    I'm currently using Heroku's scheduler to run a script. However, the time that the script takes to run is going to increase from a few milliseconds to a few minutes. I'm looking at using the delayed_job gem to push this process off to a worker dyno. I want to continue to run this script once a day, just offload it to the worker. My current rake task is: desc "This task updates some stuff for you." task :update_some_stuff => :environment do puts "Updating some stuff ..." SomeClass.new.process puts "... done." end Once the gem is installed, the migration run, and a worker dyno started, will the script just need to change to: desc "This task updates some stuff for you." task :update_some_stuff => :environment do puts "Updating some stuff ..." SomeClass.new.delay.process puts "... done." end With this task still being a rake task scheduled by Heroku's Scheduler, is the only thing that needs to happen here the introduction of the delay method to put this in the worker's queue? Thanks in advance for any help.
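
    For what it's worth, that is essentially all there is to it: .delay.process writes a job row to the delayed_jobs table and returns immediately, the Scheduler dyno only enqueues, and a worker dyno has to be running to pick the job up. With delayed_job's standard rake task the Procfile would look roughly like this (the web line is just a placeholder for whatever you already run):

        web:    bundle exec rails server -p $PORT
        worker: bundle exec rake jobs:work

    After deploying, the worker still has to be scaled up (heroku ps:scale worker=1) or the enqueued jobs will simply sit in the queue.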

    Read the article

  • Converting Source ASCII Files to JPEGs

    - by CommonsWare
    I publish technical books, in print, PDF, and Kindle/MOBI, with EPUB on the way. The Kindle does not support monospace fonts, which are kinda useful for source code listings. The only way to do monospace fonts is to convert the text (Java source, HTML, XML, etc.) into JPEG images. More specifically, due to pagination issues, a given input ASCII file needs to be split into slices of ~6 lines each, with each slice turned into a JPEG, so listings can span a screen. This is a royal pain. My current mechanism to do that involves: Running expand to set a consistent 2-space tab size, which pipes to... a2ps, which pipes to... A small Perl snippet to add a "%%LanguageLevel: 3\n" line, which pipes to... ImageMagick's convert, to take the (E)PS and make a JPEG out of it, with an appropriate background, cropped to 575x148+5+28, etc. That used to work 100% of the time. It now works 95% of the time. The rest of the time, I get convert: geometry does not contain image errors, which I cannot seem to get rid of, in part because I don't understand what the problem is. Before this process, I used to use a pretty-print engine (source-highlight) to get HTML out of the source code...but then the only thing I could find to convert the HTML into JPEGs was to automate screen-grabs from an embedded Gecko engine. Reliability stank, which is why I switched to my current mechanism. So, if you were me, and you needed to turn source listings into JPEG images, in an automated fashion, how would you do it? Bonus points if it offers some sort of pretty-print process (e.g., bolded keywords)! Or, if you know what typically causes convert: geometry does not contain image, that might help. My current process is ugly, but if I could get it back to 100% reliability, that'd be just fine for now. Thanks in advance!
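
    For what it's worth, ImageMagick can rasterise text directly, which removes the a2ps/PostScript stage (and with it that class of geometry error, which typically means the crop rectangle falls entirely outside the generated page). A rough shell sketch, assuming a monospaced font that ImageMagick can find by name (convert -list font shows what is available); font, point size and colours are placeholders:

        #!/bin/sh
        # Render ~6-line slices of a source file straight to JPEG with ImageMagick.
        src="$1"
        expand -t 2 "$src" | split -l 6 - slice_
        for f in slice_*; do
            convert -background white -fill black \
                    -font Courier -pointsize 11 \
                    label:@"$f" "$f.jpg"
        done
        rm -f slice_*

    It offers no keyword bolding on its own, but since each slice is plain text it could be fed through a pretty-printer first if that matters more than simplicity.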

    Read the article

  • Should I base my Embedded Linux product on Qt?

    - by Udi
    My company is developing a medical product. One of the components is a PDA-like platform that will run embedded Linux. We were considering Qt as the UI framework but found out that Qt is a lot more than that (we are not familiar with Qt). In general, the device needs to do the following: 1. Receive measurements over USB HID from another device (USB HID is used for convenience). 2. Process the measurements. 3. Store them in a database. 4. Interact with the user using the device's touch-screen LCD. 5. Communicate (Wi-Fi, TCP/IP) with a central management station that collects the data and configures the device. 6. Include a web server to allow accessing the device via a browser. We intend to program in C++. My questions are: 1. Is that a good choice for such a device? 2. Assuming we choose Qt, how do we build our product? - Do we use Qt just as a GUI framework and write the application code in a separate process (passing messages between Qt and the application process)? - Do we write the entire application inside Qt, using all of the services the tool has to offer? - Another approach?

    Read the article

  • Preallocating memory with C++ in realtime environment

    - by Elazar Leibovich
    I have a function which gets an input buffer of n bytes and needs an auxiliary buffer of n bytes in order to process the given input buffer. (I know vector allocates memory at runtime; let's say that I'm using a vector which uses static preallocated memory. Imagine this is NOT an STL vector.) The usual approach is void processData(vector<T> &vec) { vector<T> aux(vec.size()); //dynamically allocate memory // process data } //usage: processData(v) Since I'm working in a real-time environment, I wish to preallocate all the memory I'll ever need in advance; the buffer is allocated only once at startup. I want that whenever I allocate a vector, an auxiliary buffer for my processData function is automatically allocated as well. I can do something similar with a template function: static void _processData(vector<T> &vec, vector<T> &aux) { // process data } template<size_t sz> void processData(vector<T> &vec) { static T aux_buffer[sz]; vector<T> aux(vec.size(), aux_buffer); // use aux_buffer for the vector _processData(vec, aux); } // usage: processData<V_MAX_SIZE>(v); However, working a lot with templates is not much fun (now let's recompile everything since I changed a comment!), and it forces me to do some bookkeeping whenever I use this function. Are there any nicer designs around this problem?
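
    If the goal is just "all allocation happens once, at startup", one non-template shape is to hang the auxiliary buffer off an object that is constructed once with the worst-case size and let processData borrow from it. A sketch (Sample and V_MAX_SIZE are placeholders; in the real system the member would be the fixed-storage vector described above rather than std::vector):

        #include <cstddef>
        #include <vector>

        typedef float Sample;   // placeholder element type -- substitute your T

        // The auxiliary buffer lives in an object built once at startup with the
        // worst-case size, so processData() itself never allocates.
        class DataProcessor
        {
        public:
            explicit DataProcessor(std::size_t maxSize) : aux_(maxSize) {}

            void processData(std::vector<Sample>& vec)
            {
                Sample* aux = &aux_[0];           // borrow the first vec.size() slots
                for (std::size_t i = 0; i < vec.size(); ++i)
                {
                    aux[i] = vec[i];              // ...actual processing goes here...
                }
            }

        private:
            std::vector<Sample> aux_;             // never resized after construction
        };

        // usage (worst-case size chosen once, at startup):
        //   DataProcessor proc(V_MAX_SIZE);
        //   proc.processData(v);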

    Read the article

  • As an Agile Java developer, what should I be looking for when hiring a C++ developer?

    - by agoudzwaard
    I come from an effective team of Agile Java developers. We've had a lot of success in hiring more people like ourselves - people passionate about technology with experience primarily in the Agile Java/J2EE space. We're looking to hire our first C++ developer to serve as an on-shore resource for maintaining and adding to the C++ portion of our code base. Up until now the entirety of our C++ development has been done out of an off-shore location. We consider our interview process to be fairly thorough: A phone screen centered on Object-Oriented Programming and Java A non-trivial at-home code project using Java An in-person interview covering technical and behavioral competency We look for a demonstration of Agile best practices (expressive code, test-driven development, continuous integration) throughout the entire process, however there is a common conception that Agility is primarily practiced by Java developers. If we retrofit our interview process for C++, should we still expect Agile qualities when interviewing for a C++ role? I'm asking on behalf of a team that has worked with Java too long to know what a good C++ developer looks like. Specifically we're looking to answer the following questions: Can we expect a demonstrated understanding of OO design and Separation of Concerns? In the code project we want the candidate to write unit tests. Would a good C++ developer be surprised by this expectation? Are there any "extra" competencies we can look for? For example with Java developers we always look for a familiarity with Dependency Injection.

    Read the article

  • Best way to track the stages of a form across different controllers - $_GET or routing

    - by chrisj
    Hi, I am in a bit of a dilemma about how best to handle the following situation. I have a long registration process on a site, where there are around 10 form sections to fill in. Some of these forms relate specifically to the user and their own personal data, while most of them relate to the user's pets - my current setup handles user-specific forms in a User_Controller (e.g. via methods like user/profile, user/household etc.), and similarly the pet-related forms are handled in a Pet_Controller (e.g. pet/health). Whether or not all of these methods should be combined into a single Registration_Controller, I'm not sure - I'm open to any advice on that. Anyway, my main issue is that I want to generate a progress bar which shows how far along in the registration process each user is. As the URLs in each form section can potentially map to different controllers, I'm trying to find a clean way to extract which stage a person is at in the overall process. I could just use the query string to pass a stage parameter with each request, e.g. user/profile?stage=1. Another way to do it potentially is to use routing - e.g. the URLs for each section of the form could be set up to be registration/stage/1, registration/stage/2 - then I could just map these URLs to the appropriate controller/method behind the scenes. If this makes any sense at all, does anyone have any advice for me?
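
    One way to avoid threading a stage parameter (or a routing scheme) through every controller is to keep the ordered list of steps in a single place and derive the percentage from whichever route is currently being served; the session or the saved form data then only needs to record which steps are complete. A rough, framework-agnostic PHP sketch (route names are placeholders):

        <?php
        // One ordered list of registration steps, shared by all controllers.
        $steps = array(
            'user/profile', 'user/household',
            'pet/health',   'pet/diet',       // ...the rest of the 10 sections
        );

        // Percentage complete for the step currently being rendered.
        function registration_progress($current_route, array $steps)
        {
            $index = array_search($current_route, $steps);
            if ($index === false) {
                return 0;
            }
            return (int) round((($index + 1) / count($steps)) * 100);
        }

        // e.g. in the controller rendering pet/health:
        $percent = registration_progress('pet/health', $steps);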

    Read the article

  • Writing out a sheet to a text file using the POI event model

    - by Eduardo Dennis
    I am using the XLSX2CSV example to parse large sheets from a workbook. Since I only need to output the data for specific sheets, I added an if statement in the process method to test for those sheets; when the condition is met I continue with the process. public void process() throws IOException, OpenXML4JException, ParserConfigurationException, SAXException { ReadOnlySharedStringsTable strings = new ReadOnlySharedStringsTable(this.xlsxPackage); XSSFReader xssfReader = new XSSFReader(this.xlsxPackage); StylesTable styles = xssfReader.getStylesTable(); XSSFReader.SheetIterator iter = (XSSFReader.SheetIterator) xssfReader.getSheetsData(); while (iter.hasNext()) { InputStream stream = iter.next(); String sheetName = iter.getSheetName(); if (sheetName.equals("SHEET1")||sheetName.equals("SHEET2")||sheetName.equals("SHEET3")||sheetName.equals("SHEET4")||sheetName.equals("SHEET5")){ processSheet(styles, strings, stream); try { System.setOut(new PrintStream( new FileOutputStream("C:\\Users\\edennis.AD\\Desktop\\test\\"+sheetName+".txt"))); } catch (Exception e) { e.printStackTrace(); } stream.close(); } } } But I need to output a text file and I'm not sure how to do it. I tried to use the System.setOut() method to redirect everything from System.out to a text file, but that's not working; I just get blank files.
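
    The blank files are most likely an ordering problem: in the code above, System.setOut() only runs after processSheet() has already finished, so everything the sheet handler printed went to the old stdout. A sketch of that if block's body with the redirect moved before the call (and restored afterwards); a cleaner long-term fix is to pass a PrintStream or Writer into processSheet instead of touching System.out at all:

        String outPath = "C:\\Users\\edennis.AD\\Desktop\\test\\" + sheetName + ".txt";
        PrintStream fileOut = new PrintStream(new FileOutputStream(outPath));
        PrintStream originalOut = System.out;
        System.setOut(fileOut);                  // redirect BEFORE the sheet is processed
        try {
            processSheet(styles, strings, stream);
        } finally {
            System.setOut(originalOut);          // always put the real stdout back
            fileOut.close();
            stream.close();
        }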

    Read the article

  • What's the best Linux backup solution?

    - by Jon Bright
    We have four Linux boxes (all running Debian or Ubuntu) on our office network. None of these boxes is especially critical and they're all using RAID. To date, I've therefore been doing backups of the boxes by having a cron job upload tarballs containing the contents of /etc, MySQL dumps and other such changing, non-packaged data to a box at our geographically separate hosting centre. I've realised, however, that while the tarballs are sufficient to rebuild from, it's certainly not a painless process to do so (I recently tried this out as part of a hardware upgrade of one of the boxes), and long-term the process isn't sustainable: each of the boxes is currently producing a tarball of a couple of hundred MB each day, 99% of which is the same as the previous day; partly due to the size issue, the backup process requires more manual intervention than I want (to find whatever 5GB file is inflating the size of the tarball and kill it); and again due to the size issue, I'm leaving stuff out which it would be nice to include - the contents of users' home directories, for example. There's almost nothing of value there that isn't in source control (and these aren't our main dev boxes), but it would be nice to keep them anyway. There must be a better way. So, my question is, how should I be doing this properly? The requirements are: it needs to be an offsite backup (one of the main things I'm doing here is protecting against fire/whatever); it should require as little manual intervention as possible (I'm lazy, and box-herding isn't my main job); it should continue to scale with a couple more boxes, slightly more data, etc.; preferably free/open source (cost isn't the issue, but especially for backups, openness seems like a good thing); and an option to produce some kind of DVD/Blu-Ray/whatever backup from time to time wouldn't be bad. My first thought was that this kind of incremental backup was what tar was created for - create a tar file once each month, add incrementally to it, rsync the results to the remote box. But others probably have better suggestions.
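
    For this shape of requirement, rsync with --link-dest is hard to beat: unchanged files are hard-linked against the previous snapshot so they cost almost no extra space or transfer, yet every dated directory is a complete, browsable backup (this is essentially what tools like rsnapshot, rdiff-backup and dirvish automate). A rough sketch, with the host, paths and include list as placeholders:

        #!/bin/sh
        # Hard-link snapshots with rsync: each dated directory is a full backup,
        # but unchanged files share storage with the previous snapshot.
        DEST=backup@offsite.example.com:/backups/$(hostname)
        TODAY=$(date +%F)

        rsync -a --delete \
              --link-dest=../latest \
              /etc /var/backups/mysql /home \
              "$DEST/$TODAY/"

        # then, on the remote side, repoint "latest" at the new snapshot:
        #   ln -snf "$TODAY" /backups/$(hostname)/latest

    Burning any one snapshot directory to DVD/Blu-Ray from time to time covers the occasional offline copy as well.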

    Read the article

  • Help dealing with data dependency between two registration forms

    - by franko75
    I have a tricky issue here with the registration of both a user and his/her pet. Both the user and the pet are treated as separate entities and both require separate registration forms. However, the user's pet has to be linked to the user via a foreign key in the database. The process is basically that when a new user joins the site, firstly they register their pet, then they register themselves. The reason for this order is to check the pet's eligibility for the site (there are some criteria to be met) first, instead of getting the user to sign up only to then find out their pet is ineligible. It is this ordering of the form submissions which is causing me a bit of a headache, as follows... The site is being developed with an MVC framework, and the user registration process is managed via a method in a User_form controller, while the pet registration process is managed via a method in the Pet_form controller. The pet registration form happens first, and the pet data can be saved without the owner_id at this stage, with the user id possibly being added (e.g. by retrieving the pet's id from the session) following user registration. However, doing it this way could potentially result in redundant data: pet records would be created in the database, but if the user doesn't actually register themselves too, then the pets will be ownerless records in the DB. The other option is to serialize the new pet's data at the pet registration stage and not save it to the DB until the user fills out their registration form. Once the user is created, I can pass the serialised data AND the owner_id to a method in the Pet model which can update the DB. However, I also need to set the newly created $pet to $this->pet, which I then access for a sequence of other related forms. Should I just set the session variable in the model method? Then in the Pet controller constructor, do a check for a pet stored in the session and, if found, assign it to $this->pet... If this makes any sense to anybody and you have some advice, I'd be grateful to hear it!

    Read the article

  • Compiling Maya (3D application) code with Qt

    - by knishua
    Including the Maya (3D application) classes in a Qt program gives a lot of errors. I have added all the required include paths and libs, but the same problem persists. This is the .pro file for my Qt project: TARGET = FileCon TEMPLATE = app SOURCES += main.cpp \ dialog.cpp HEADERS += dialog.h \ ConvertFunction.h FORMS += dialog.ui LIBS += "C:/Program Files/Autodesk/Maya2008/lib" \ -lOpenMaya.lib \ -lFoundation.lib \ -lOpenMayalib INCLUDEPATH += "C:/Program Files/Autodesk/Maya2008/include" DEFINES = _BOOL \ WIN32 \ REQUIRE_IOSTREAM How is it possible to use Maya classes with Qt?
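
    Two things in that LIBS line are likely to be the immediate problem: the library directory is listed without -L (so the linker treats the quoted path as a file to link), and the -l entries should not carry the .lib suffix (on MSVC, qmake expands -lOpenMaya to OpenMaya.lib itself). A sketch of the corrected section; using += for DEFINES is also safer than = so nothing set elsewhere is discarded:

        INCLUDEPATH += "C:/Program Files/Autodesk/Maya2008/include"
        LIBS        += -L"C:/Program Files/Autodesk/Maya2008/lib" \
                       -lOpenMaya \
                       -lFoundation
        DEFINES     += _BOOL WIN32 REQUIRE_IOSTREAM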

    Read the article

  • How to get the Drive Letter for the DevicePath

    - by new
    Dear All, I am using the Win32 API. I do not understand how to get the drive letter for the DevicePath of a USB stick; can you please explain it to me? What I have is the SP_DEVICE_INTERFACE_DETAIL_DATA DevicePath; using this device path I get the VID and PID of the USB device. My device path looks like this: "\\?\usb#vid_1a8d&pid_1000#358094020874450#{a5dcbf10-6530-11d2-901f-00c04fb951ed}". Is there any way to map a DRIVE LETTER to my DEVICE PATH? Please help me map the drive letter to the DevicePath. Thanks for any help.
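
    A common way to make that mapping is through the storage device number rather than string manipulation: open the stick's disk interface (the GUID_DEVINTERFACE_DISK path or \\.\PhysicalDriveN - note the path in the question is the plain USB device interface, which first has to be related to its disk child), ask for IOCTL_STORAGE_GET_DEVICE_NUMBER, then do the same for each \\.\X: volume and compare. A rough C++ sketch; a multi-partition device would also need PartitionNumber compared:

        #include <windows.h>
        #include <winioctl.h>

        // Return the storage device number behind any open-able device path
        // (a "\\.\X:" volume or a disk interface path), or -1 on failure.
        static int GetDeviceNumber(const wchar_t* path)
        {
            HANDLE h = CreateFileW(path, 0, FILE_SHARE_READ | FILE_SHARE_WRITE,
                                   NULL, OPEN_EXISTING, 0, NULL);
            if (h == INVALID_HANDLE_VALUE)
                return -1;

            STORAGE_DEVICE_NUMBER sdn;
            DWORD bytes = 0;
            int number = -1;
            if (DeviceIoControl(h, IOCTL_STORAGE_GET_DEVICE_NUMBER,
                                NULL, 0, &sdn, sizeof(sdn), &bytes, NULL))
                number = (int)sdn.DeviceNumber;

            CloseHandle(h);
            return number;
        }

        // Compare the stick's disk device number against every mounted volume.
        wchar_t DriveLetterForDevice(const wchar_t* diskInterfacePath)
        {
            int target = GetDeviceNumber(diskInterfacePath);
            if (target < 0)
                return 0;

            wchar_t volume[] = L"\\\\.\\A:";      // template, letter patched below
            for (wchar_t letter = L'A'; letter <= L'Z'; ++letter)
            {
                volume[4] = letter;
                if (GetDeviceNumber(volume) == target)
                    return letter;                // e.g. L'E'
            }
            return 0;                             // no matching volume found
        }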

    Read the article

  • How can I refactor this to work without breaking the pattern horribly?

    - by SnOrfus
    I've got a base class object that is used for filtering. It's a template method object that looks something like this. public class Filter { public void Process(User u, GeoRegion r, int countNeeded) { List<account> selected = this.Select(u, r, countNeeded); // 1 List<account> filtered = this.Filter(selected, u, r, countNeeded); // 2 if (filtered.Count > 0) { /* do businessy stuff */ } // 3 if (filtered.Count < countNeeded) this.SendToSuccessor(u, r, countNeeded - filtered.Count); // 4 } } Select(...) and Filter(...) are protected abstract methods implemented by the derived classes. Select(...) finds objects based on x criteria, Filter(...) filters those selected further. If the remaining filtered collection has anything in it, we do some business stuff with it (unimportant to the problem here). SendToSuccessor(...) is called if there weren't enough objects found after filtering (it's a composite where the next class in succession will also be derived from Filter but have different filtering criteria). All has been OK, but now I'm building another set of filters, which I was going to subclass from this. The filters I'm building, however, would require different params, and I don't want to just implement those methods and not use the params, or add to the param list the ones I need and have them unused in the existing filters. They still perform the same logical process though. I also don't want to complicate the consumer code for this (which looks like this) Filter f = new Filter1(); Filter f2 = new Filter2(); Filter f3 = new Filter3(); f.Sucessor = f2; f2.Sucessor = f3; /* and so on adding filters as successors to previous ones */ foreach (User u in users) { foreach (GeoRegion r in regions) { f.Process(u, r, ##); } } How should I go about it?
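
    One low-friction way out is a parameter object: Process keeps a single stable signature, and each filter family reads whatever it needs from the context, so adding inputs for the new filters never touches the existing ones. A rough sketch (type names follow the snippet above; the filtering method is renamed ApplyFilter because C# does not allow a member named after its enclosing type, and the successor/count bookkeeping is simplified):

        using System.Collections.Generic;

        // Carries every input the chain might need; new filter families add
        // fields here, and older filters simply never read them.
        public class FilterContext
        {
            public User User { get; set; }
            public GeoRegion Region { get; set; }
            public int CountNeeded { get; set; }
            // extra inputs for the newer filters go here
        }

        public abstract class Filter
        {
            public Filter Successor { get; set; }

            public void Process(FilterContext ctx)
            {
                List<Account> selected = Select(ctx);                      // 1
                List<Account> filtered = ApplyFilter(selected, ctx);       // 2
                if (filtered.Count > 0) { /* do businessy stuff */ }       // 3
                if (filtered.Count < ctx.CountNeeded && Successor != null) // 4
                {
                    ctx.CountNeeded -= filtered.Count;    // pass the shortfall along
                    Successor.Process(ctx);
                }
            }

            protected abstract List<Account> Select(FilterContext ctx);
            protected abstract List<Account> ApplyFilter(List<Account> selected, FilterContext ctx);
        }

    The consumer loop stays the same shape; it just builds one FilterContext per (user, region) pair and hands it to f.Process.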

    Read the article

  • How to effectively execute this cron job?

    - by Lost_in_code
    I have a table with 200 rows. I'm running a cron job every 10 minutes to perform some kind of insert/update operation on the table. The operation needs to be performed on only 5 rows at a time, every time the cron job runs. So in the first 10 minutes records 1-5 are updated, records 6-10 in the 20th minute, and so on. When the cron job runs for the 40th time, all the records in the table will have been updated exactly once. That is what is to be achieved, at least. And the next cron job should repeat the process again. The problem is that every time a cron job runs, the insert/update operation should be performed on N rows (not just 5 rows). So, if N is 100, all records would have been updated by just 2 cron jobs. And the next cron job would repeat the process again. Here's an example: This is the table I currently have (200 records). Every time a cron job executes, it needs to pick N records (which I set as a variable in PHP) and update the time_md5 field with the current time's MD5 value.
    +---------+-------------------------------------+
    | id      | time_md5                            |
    +---------+-------------------------------------+
    | 10      | 971324428e62dd6832a2778582559977    |
    | 72      | 1bd58291594543a8cc239d99843a846c    |
    | 3       | 9300278bc5f114a290f6ed917ee93736    |
    | 40      | 915bf1c5a1f13404add6612ec452e644    |
    | 599     | 799671e31d5350ff405c8016a38c74eb    |
    | 56      | 56302bb119f1d03db3c9093caf98c735    |
    | 798     | 47889aa559636b5512436776afd6ba56    |
    | 8       | 85fdc72d3b51f0b8b356eceac710df14    |
    | ..      | .......                             |
    | ..      | .......                             |
    | ..      | .......                             |
    | ..      | .......                             |
    | 340     | 9217eab5adcc47b365b2e00bbdcc011a    |  <-- 200th record
    +---------+-------------------------------------+
    So, the first record (id 10) should not be updated more than once until all 200 records have been updated once - the process should start over once every record has been updated. I have some idea of how this could be achieved, but I'm sure there are more efficient ways of doing it. Any suggestions?
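
    One way to make N a plain configuration value is to stop thinking in row ranges and instead stamp each row when it is processed; every run then takes the N least-recently-stamped rows, which automatically cycles through the whole table once before any row is touched a second time. A sketch in MySQL (table and column names are placeholders, and N would be interpolated from the PHP setting):

        -- one-time schema change: remember when each row was last processed
        ALTER TABLE items ADD COLUMN processed_at DATETIME NOT NULL DEFAULT '1970-01-01';

        -- one cron run (N = 100 here): take the rows that have waited longest
        UPDATE items
           SET time_md5     = MD5(NOW()),
               processed_at = NOW()
         ORDER BY processed_at ASC, id ASC
         LIMIT 100;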

    Read the article
