Search Results

Search found 29949 results on 1198 pages for 'large scale website'.


  • quartz2d translating the origin

    - by qwertyp96
    My understanding of quartz2d is that the code CGContextTranslateCTM(context, x, y); translates the coordinate system. I have a quartz2d view with lots of shapes on it, and the user needs to be able to pan around and zoom it. However, when using the CGContextScaleCTM(context, scaleX, scaleY); code, everything scales around the origin, not the center of the viewport the user is looking at. My solution to this was to use the following code:

        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextTranslateCTM(context, 512.0+offset.x, 384.0+offset.y); // (512, 384) is the center of the iPad screen
        CGContextScaleCTM(context, scale, scale);

    You can translate around fine, but things still scale into the corner. What's wrong? EDIT: Oh. Wow. Duh. If you move the origin, the shapes move too, so you can't move it relative to the shapes. Now I know what's wrong, but how do I fix it? (That is: how do I move the origin independently of the shapes?)
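
    A common fix (a sketch, not from the original post): after translating to the screen-space pivot and scaling, translate back by the world-space point that should sit at that pivot; that last step is what moves the origin "independently" of the shapes. Here centerX/centerY are hypothetical names for that world-space point:

        CGContextRef context = UIGraphicsGetCurrentContext();
        // 1. Put the origin at the screen point the zoom should pivot around.
        CGContextTranslateCTM(context, 512.0 + offset.x, 384.0 + offset.y);
        // 2. Scale about that pivot.
        CGContextScaleCTM(context, scale, scale);
        // 3. Step back by the world-space point (centerX, centerY) that should
        //    appear at the pivot; the shapes themselves stay in world coordinates.
        CGContextTranslateCTM(context, -centerX, -centerY);
        // ...draw the shapes exactly as before...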

    Read the article

  • Zoom in Java Swing application

    - by Shirky
    Hi there, I am looking for ways to zoom in a Java Swing application. That means I would like to resize all components in a given JPanel by a given factor, as if I took a screenshot of the UI and applied an "Image scale" operation. The font size as well as the size of checkboxes, textboxes, cursors etc. has to be adjusted. It is possible to scale a component by applying transforms to a graphics object:

        protected Graphics getComponentGraphics(Graphics g) {
            Graphics2D g2d = (Graphics2D) g;
            g2d.scale(2, 2);
            return super.getComponentGraphics(g2d);
        }

    That works as long as you don't care about self-updating components. If you have a textbox in your application, this approach ceases to work, since the textbox updates itself every second to show the (blinking) cursor, and since it doesn't use the modified graphics object this time, the component appears at the old location. Is there a possibility to change a component's graphics object permanently? There is also a problem with the mouse click event handlers. The other possibility would be to resize all child components of the JPanel (setPreferredSize) to a new size. That doesn't work for checkboxes, since the displayed picture of the checkbox doesn't change its size. I also thought of programming my own layout manager, but I don't think that will work, since layout managers only change the position (and size) of objects but are not able to zoom into checkboxes (see previous paragraph). Or am I wrong with this hypothesis? Do you have any ideas how one could achieve a zoomable Swing GUI without programming custom components? I looked at rotatable user interfaces because the problem seems similar, but I didn't find any satisfying solution there either. Thanks for your help, Chris
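
    One avenue worth sketching (not from the original thread, assuming Java 7+): wrap the panel in a JLayer, whose LayerUI is invoked whenever the wrapped components repaint, including a text field's blinking caret, so the scale is applied on every paint. Mouse coordinates would still need to be inverse-transformed, which this sketch omits:

        import javax.swing.*;
        import javax.swing.plaf.LayerUI;
        import java.awt.*;

        class ZoomUI extends LayerUI<JComponent> {
            private final double zoom = 2.0;

            @Override
            public void paint(Graphics g, JComponent c) {
                Graphics2D g2 = (Graphics2D) g.create();
                g2.scale(zoom, zoom);   // applied on every repaint, caret blinks included
                super.paint(g2, c);     // paints the wrapped component tree
                g2.dispose();
            }
        }

        // Usage sketch: frame.setContentPane(new JLayer<>(mainPanel, new ZoomUI()));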

    Read the article

  • Compress Large Video Files with DivX / Xvid and AutoGK

    - by DigitalGeekery
    Have you ever recorded home video on a camcorder only to find the video size is enormous? What if you wanted to share a video clip on YouTube or another video sharing site, but the file size was bigger than the maximum upload size? Today we’ll look at a way to compress certain video files, such as MPEG and AVI, with Auto Gordian Knot (AutoGK).

    AutoGK is a free application that runs on Windows. It supports Mpeg1, Mpeg2, Transport Streams, Vobs, and virtually any codec used for an .AVI file. AutoGK will accept the following input file types: MPG, MPEG, VOB, VRO, M2V, DAT, IFO, TS, TP, TRP, M2T, and AVI. Files are output as .AVI files and are converted using the DivX or XviD codecs.

    Installing and Using AutoGK

    Download and install AutoGK (link below), then open it. You’ll need to navigate a few wizard screens, but you can just accept the defaults.

    Choose your video file by clicking on the folder to the right of the Input file text box, then browse for and select your video file and click “Open.” For this example, we’ll be working with an .AVI file that’s 167MB in size.

    The output file is copied into the same directory as the input file by default, but you can change this if you choose. If the input file is also .AVI, AutoGK will append an _agk to the output file name so that the original is not overwritten.

    Next, you’ll see any audio tracks listed. You can unselect the check box if you’d like to remove an audio track.

    You can choose one of the predefined size options, or select a custom size in MB or a target quality in percentage. For our example, we’ll be compressing our 167MB file to 35MB.

    Click on Advanced Settings. Here you can choose your codec, if you have a preference, as well as output resolution and output audio. If you’d like to use the DivX codec, you’ll need to download and install it separately (see link below). Typically you’ll want to keep the defaults. Click “OK.”

    Now you’re ready to add your file conversion job to the job queue. Click Add Job to add it to the queue. You can add multiple file conversions to the job queue and convert them in one batch. Click Start to begin the conversion process.

    You’ll be able to see the progress in the Log window on the bottom left. When the conversion is complete you’ll see a “Job finished” message and the total time in the log window. Check your output file to see its compressed size, and test your video just to make sure the output quality is satisfactory.

    Note: Conversion times can vary greatly depending on the size of the file and your computer hardware. Files that are several GBs in size may take several hours to compress.

    AutoGK is no longer being actively developed but is still a wonderful DivX/XviD conversion tool. It can also be used to compress and convert non-copy-protected DVDs.
    Downloads: AutoGordianKnot, DivX (optional)

    Read the article

  • Large File Upload in SharePoint 2010


    Read the article

  • AWS Large Instance: /mnt does not show all the space that should be available

    - by Emile Baizel
    I just created a Large (m1.large) 64-bit instance, which comes with 850 GB of instance storage (see the Large Instance specs at http://aws.amazon.com/ec2/instance-types/). A 'df -h' from the root folder gives me the output below. /mnt is where I'm thinking the instance storage is, but here it is only showing me 414G. I have set up two servers and both are showing the same numbers.

        root@ip-11-11-11-11:/# df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda1             7.9G  1.1G  6.5G  14% /
        none                  3.7G  112K  3.7G   1% /dev
        none                  3.7G     0  3.7G   0% /dev/shm
        none                  3.7G   48K  3.7G   1% /var/run
        none                  3.7G     0  3.7G   0% /var/lock
        /dev/sdb              414G  199M  393G   1% /mnt

    Read the article

  • What's the best license for my website?

    - by John Maxim
    I have developed a unique website but do not have a lot of funds to protect it with trademarks or patents. I'm looking for suggestions so that when my supervisor gets my code, some license terms restrict anyone from copying it and claiming it as their own work. I'm torn between making the application a commercial one and never allowing it to be copied at all. What steps do I need to take so that my application is fully protected? I've come across a few licenses; one under my consideration is the MIT license. Some say we can have a mixture of both commercial and MIT. I would also like to be able to distribute some functions so they can be modified by others, but I'd still retain ownership. Last but not least, it's confusing when I think of protecting the whole website versus protecting the code piece by piece. How should we go about this? Thanks. N.B. I have to pass it over to the supervisor as this is a Uni project. I need this to be done within 2 weeks, so the time to get my app protected is a factor here.

    Read the article

  • Oracle’s FY14 Partner Kickoff Recap & New OPN Website

    - by Kristin Rose
    There is no doubt that we are off to a strong FY14! Now that Oracle’s Global Partner Kickoff has come and gone, it’s time to take what we have learned and focus on having the strongest year ever! To quote Oracle pilot, Sean D. Tucker, “FY14, it’s all about growth baby!” Here are some of the ways you can grow with Oracle:

    - Sell into accounts where Oracle isn’t selling directly
    - Offer customers added-value solutions leveraging our technology
    - Offer deep market capabilities that leverage transformative technology
    - Be aggressive, sell the entire stack, engage with Oracle in the marketplace, and get engineered for growth!

    With this being said, we also know that to have the strongest year ever, you also need the strongest tools ever! Ladies and gentlemen, in case you missed its debut during Oracle’s Global Partner Kickoff, let OPN introduce you to the newly redesigned Oracle PartnerNetwork website, providing easy access to key business processes, systems, and resources! We took your advice and implemented the following enhancements:

    - A new OPN home page, highlighting paths to top tasks
    - Streamlined top navigation
    - New business-process-focused pages
    - Restructured Knowledge Zone areas (currently applied to select pages)

    Learn more about the new Oracle PartnerNetwork website and all that Oracle has to offer by watching the FY14 Global Partner Kickoff replay video below! Thank you for your hard work and partnership in FY13; here’s to an even stronger FY14!

    Good Selling,
    The OPN Communications Team

    Read the article

  • How to grow from single server setup

    - by Jenkz
    I'm looking for resources on how to grow our server setup. We currently have one dedicated server with Rackspace in the UK of the following spec:

    - HPDL385_G2_PrevGen HP single dual-core Opteron 2214 (2.2GHz)
    - 4GB RAM
    - 2x 10,000 SCSI drives in RAID 1

    Our traffic is up to 550,000 UVs per month. The site runs off a PHP and MySQL setup. The database gets an absolute hammering; we have many complex queries joining multiple tables. We are using APC for PHP caching. I'm getting to the stage where I've done as much DB and query optimisation as I can and wonder what the next step should be. I've looked at memcached, but I've got the impression that it requires a large amount of RAM and ideally a dedicated box. So is the next step to have two boxes, one for the database and one for Apache? Or is there a step I've overlooked? Our load is usually around the 2 mark, but right now it's up at 20!

    Read the article

  • How to pause/resume transfer of large files?

    - by Olivier Lalonde
    I recently had to copy about 20 GB of data split between about 20 files from my laptop to an external hard drive. Since this operation takes quite a while (at ~560kb/s), I was wondering if there was any way to pause the transfer and resume it later (in case I need to interrupt the transfer). As a side question, is there any performance difference between copying from the terminal vs copying from Nautilus?
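
    Not part of the original question, but a hedged sketch of the idea: command-line tools like rsync can restart interrupted copies (e.g. rsync --partial keeps partially transferred files), and the same resume logic is easy to sketch by hand; if the destination already exists, skip the bytes that were already written. All names here are hypothetical:

        import os

        def resumable_copy(src, dst, chunk=1024 * 1024):
            # Resume from however many bytes the destination already has.
            done = os.path.getsize(dst) if os.path.exists(dst) else 0
            with open(src, 'rb') as fin, open(dst, 'ab') as fout:
                fin.seek(done)              # skip the part copied on a previous run
                while True:
                    buf = fin.read(chunk)
                    if not buf:
                        break
                    fout.write(buf)         # interrupt any time; rerun to resume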

    Read the article

  • Designing a large database with multiple sources

    - by CatchingMonkey
    I have been tasked with redesigning, or at worst optimising, the structure of a database for a data warehouse. Currently, the database has 4 source databases (which is due to expand to X number of others), all of which have their own data structures, naming conventions etc. At the moment an overnight SSIS package pulls the data from the various sources and then, for each source, converts the data into a standardised, usable format. These tables are then appended to each other, creating a 60m row, 40 column beast! This table is then used in a variety of ways, from an OLAP cube to a web front end. The structure has been in place for a very long time, and through the work I have done I have been able to prove the advantages of normalisation, and this is the way I would like to go. The problem for me is that the overnight process takes so long that I don't then want to spend additional time normalising the last table into something usable. Can anyone offer any insight or ideas into the best way to restructure or optimise the database efficiently? Edit: All the databases are MS SQL Server 2008 R2. Thanks in advance, CM

    Read the article

  • Reducing brightness of large areas containing bright colours

    - by intuited
    I do most of my work in either a terminal or a web browser. I prefer my terminals to use bright colours on dark. I would really prefer that web pages tended to look this way as well, but that's not under my control. The problem is that when I switch from a light-on-dark terminal to a dark-on-light web page (like this one), my eyes have to adjust to the overall rise in screen brightness. Apparently this is bad for your eyes, in addition to being painful and annoying. It would seem to be possible for some layer of the interface to adjust the displayed colours for parts of the screen, or perhaps for particular windows, to reduce the brightness of the brighter areas of the screen. Can this be done, possibly with a Compiz extension?

    Read the article

  • How do you dive into large code bases?

    - by miku
    What tools and techniques do you use for exploring and learning an unknown code base? I am thinking of tools like grep, ctags, unit tests, functional tests, class-diagram generators, call graphs, code metrics like sloccount, and so on. I'd be interested in your experiences, the helpers you used or wrote yourself, and the size of the codebases you worked with. I realize that this is also a process (happening over time) and that learning can mean anything from "can give a ten minute intro" to "can refactor and shrink this to 30% of the size". Let's leave that open for now.

    Read the article

  • Our New Website Header (& Other Tweaks)

    - by justin.kestelyn
    Last week, the Oracle Technology Network Website went fixed-width. There are several reasons for this, most relating to providing a consistent user experience, easier management of Website content, etc. Furthermore, it's fairly standard for developer portals these days - java.sun.com, MSDN, and IBM DeveloperWorks are also all fixed-width sites. (My apologies to everyone who is unhappy about this change, but it really is an overall positive one.) Today, we have rolled out a brand-new header, the first step in what we call the "Mosaic" project - which is an effort to make the user experience across all Oracle Websites more consistent. To summarize the impact:

    - The "pull-down" menus on the OTN site disappear; most of them move into a "flyout" button in the header.
    - You can access the OTN flyout from any page on Oracle.com or the OTN site. Great for our page views. :)
    - You also have direct access to the Downloads index from anywhere on Oracle.com.
    - If you so desire, you can directly access product overviews, Oracle University and Support info, Oracle Store, etc. from the OTN site now.
    - Due to limited space in the flyout we cannot accommodate *all* the pull-down items, but they are all no more than 1 or 2 clicks away.

    This approach has been validated in extensive user testing over the last few months; I welcome your feedback now in comments. There are many other changes in train, with the next one being a major homepage redesign, the first in 4 or 5 years.

    Read the article

  • Best method to organize/manage dependencies in the VCS within a large solution

    - by SnOrfus
    A simple scenario: two projects are in version control:

    - The application
    - The test(s)

    A significant number of checkins are made to the application daily. CI builds and runs all of the automation nightly. In order to write and/or run tests you need to have built the application (to reference/load instrumented assemblies). Now, consider the application to be massive, such that building it is prohibitive in time (an entire day to compile). The obvious side effect here is that once you've performed a build locally, it is immediately inconsistent with latest. For instance: if I were to sync with latest and open up one of the test projects, it would not build locally until I built the application. This is the same when syncing to another branch/build/tag. So, in order to even start working, I need to wait a day to build the application locally, so that the assemblies can be loaded; and then those assemblies wouldn't be latest. How do you organize the repository or (ideally) your development environment such that you can continually develop tests against whatever the current build is, or a given specific build, while minimizing building the application as much as possible?

    Read the article

  • Rendering large and high poly meshes

    - by Aurus
    Consider a huge terrain that has a lot of polygons. To render this terrain I thought of the following techniques:

    - Using a height-map instead of raw meshes: Yes, but I want to create a lot of caves and stuff that simply won't work with height-maps.
    - Using voxels: Yes, but I think that this would be too much, since I don't even want to support changing terrain.
    - Splitting into multiple chunks and doing some sort of LOD with the mesh: Yes, but how would I do that? Tessellation usually creates more detail, not less.
    - Precomputing the same mesh in a lower-poly version (like Mudbox does) and, depending on the distance, rendering one of these meshes: Graphics memory is limited, and uploading only the chunks won't solve that problem since the traffic would be too high.

    IMO the last one sounds really good, but imagine the following process:

    1. Upload and render the chunks depending on the current player position. [No problem]
    2. The player walks straight forward.
    3. Now we may have to swap one of the low-poly chunks with the high-poly one.
    4. So: remove the low-poly chunk and load the high-poly chunk. [Already too much traffic here, I think]

    I am not very experienced in graphics programming, and maybe the above process is totally okay, but somehow I think it is too much. And how about the disk space it would require? I think 3 levels would be fine, but isn't that also too much? (I am using OpenGL, but I don't think that this is important.)
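
    For what it's worth, the distance-based swap in the last option usually reduces to a per-chunk lookup each frame, and only chunks whose LOD actually changed need re-uploading, so the per-frame traffic stays small. A minimal sketch in C (all types and thresholds are hypothetical, not from the original post):

        #include <math.h>

        typedef struct { float x, y, z; } Vec3;
        typedef struct Mesh Mesh;                  /* vertex/index buffers live elsewhere */
        typedef struct { Mesh *lods[3]; Vec3 center; } Chunk;

        static float dist(Vec3 a, Vec3 b) {
            float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
            return sqrtf(dx * dx + dy * dy + dz * dz);
        }

        /* Pick one of three precomputed LOD meshes by distance to the camera. */
        Mesh *select_lod(const Chunk *c, Vec3 camera) {
            float d = dist(c->center, camera);
            if (d < 100.0f) return c->lods[0];     /* near: full detail */
            if (d < 400.0f) return c->lods[1];     /* medium */
            return c->lods[2];                     /* far: lowest poly */
        }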

    Read the article

  • Storing lots of large strings with frequent "appends" and few reads

    - by Thiago Moraes
    In my current project, I need to store a very long ASCII string for each instance of a given object. This string will receive 2 appends per minute and will not be retrieved very frequently. The worst case scenario is a 5-10MB string. I'll have thousands of instances of my object, and I'm worried that storing all those strings in the filesystem would not be optimal, but I can't think of a better solution. Can anyone suggest an alternative? Maybe a key-value store? In this case, which one? Any other thoughts?
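
    One hedged sketch of the filesystem route (not from the original question): since the workload is almost all appends, each instance can own an append-only file, which makes every append a seek-free O(1) write with no read-modify-write. Names here are illustrative:

        import os

        class StringStore:
            """Append-heavy, rarely-read storage: one append-only file per object id."""
            def __init__(self, root):
                self.root = root
                os.makedirs(root, exist_ok=True)

            def _path(self, obj_id):
                return os.path.join(self.root, f"{obj_id}.log")

            def append(self, obj_id, text):
                # 'a' mode lets the OS do the positioning; nothing is re-read.
                with open(self._path(obj_id), "a", encoding="ascii") as f:
                    f.write(text)

            def read(self, obj_id):
                with open(self._path(obj_id), encoding="ascii") as f:
                    return f.read()

    If thousands of files in one directory ever becomes a problem, the usual tweak is to shard into subdirectories by a prefix of the id.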

    Read the article

  • Reading a large SQL Errorlog

    - by steveh99999
    I came across an interesting situation recently where a SQL instance had been configured with the audit of successful and failed logins being written to the errorlog. This meant that every time a user or the application connected to the SQL instance, an entry was written to the errorlog, which in turn meant huge SQL Server errorlogs. Opening an errorlog in the usual way, using SQL Management Studio, was extremely slow. Luckily, I was able to use xp_readerrorlog to work around this; here are some example queries.

    To show errorlog entries from the currently active log, just for today:

        DECLARE @now DATETIME
        DECLARE @midnight DATETIME
        SET @now = GETDATE()
        SET @midnight = DATEADD(d, DATEDIFF(d, 0, GETDATE()), 0)
        EXEC xp_readerrorlog 0, 1, NULL, NULL, @midnight, @now

    To find out how big the current errorlog actually is, and what the earliest and most recent entries are in the errorlog:

        CREATE TABLE #temp_errorlog (Logdate DATETIME, ProcessInfo VARCHAR(20), Text VARCHAR(4000))
        INSERT INTO #temp_errorlog EXEC xp_readerrorlog 0 -- for current errorlog
        SELECT COUNT(*) AS 'Number of entries in errorlog',
               MIN(logdate) AS 'ErrorLog Starts',
               MAX(logdate) AS 'ErrorLog Ends'
        FROM #temp_errorlog
        DROP TABLE #temp_errorlog

    To show just DBCC history information in the current errorlog:

        EXEC xp_readerrorlog 0, 1, 'dbcc'

    To show backup errorlog entries in the current errorlog:

        CREATE TABLE #temp_errorlog (Logdate DATETIME, ProcessInfo VARCHAR(20), Text VARCHAR(4000))
        INSERT INTO #temp_errorlog EXEC xp_readerrorlog 0 -- for current errorlog
        SELECT * FROM #temp_errorlog WHERE ProcessInfo = 'Backup' ORDER BY Logdate
        DROP TABLE #temp_errorlog

    xp_readerrorlog is an undocumented system stored procedure, so there is no official Microsoft link describing the parameters it takes; however, there's a good blog on this here. And if you do have a problem with huge errorlogs, please consider running the system stored procedure sp_cycle_errorlog on a nightly or regular basis. But if you do this, remember to change the number of errorlogs you retain; the default of 6 might not be sufficient for you.

    Read the article

  • Scalable solution for website polling

    - by Tom Irving
    I'm looking to add push notifications to one of my iOS apps. The app is a client for a website which doesn't offer push notifications. What I've come up with so far:

    1. The app sends a message to the home server when transitioning to the background, asking the server to start polling the website for the logged-in user.
    2. The home server starts a new process to poll for that user. Polling happens every so many seconds/minutes.
    3. When the user returns to the iOS app, the app sends a message to the home server to stop polling. The home server kills the process polling for that user.
    4. Repeat.

    The problem is that this soon becomes stupid: 100s of users means 100s of different processes. It's just not scalable in the slightest. What I've written so far is in PHP, using cURL to do the polling, and I started with PHP a few days ago, so maybe I'm missing something obvious that could help me with this. Some advice would be great.
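
    Not from the original post, but a hedged sketch of the usual fix: replace process-per-user with one worker (or a small fixed pool) that owns the set of backgrounded users and sweeps them on a timer; function names here are hypothetical:

        import time

        POLL_INTERVAL = 60           # seconds between sweeps
        watched_users = set()        # user ids whose apps are backgrounded

        def poll(user_id):
            # placeholder: fetch the website for this user and push a
            # notification if something changed
            pass

        def worker():
            while True:
                start = time.time()
                for user_id in list(watched_users):
                    poll(user_id)    # one process sweeps every user
                # sleep out the remainder of the interval
                time.sleep(max(0, POLL_INTERVAL - (time.time() - start)))

    The worker count then scales with how much polling fits in one interval, not with the number of users.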

    Read the article

  • How to deal with large open worlds?

    - by Mr. Beast
    In most games the whole world is small enough to fit into memory; however, there are games where this is not the case. How is this achieved? How can the game still run fluidly even though the world is so big and maybe even dynamic? How does the world change in memory while the player moves? Examples of this include the TES games (Skyrim, Oblivion, Morrowind), MMORPGs (World of Warcraft), Diablo, Titan Quest, Dwarf Fortress, and Far Cry.

    Read the article

  • Tracking state of a one time event on a big website

    - by Mattis
    Assume a website with 250 million active users. I add a new feature to the website. Once a user visits, I want to use a short tutorial to teach them how to use said feature. I only want them to complete the tutorial once (or actively click it away). What is the smart way to code the verification check for this? How do I track the progress in the database? Having a separate table with something like NewTutorial_completed = 1 for user_id = 21312315 would just snowball. It also feels intuitively bad to check for every one-time event for every user on every page view. While writing the question I got one idea: have a separate event log that is checked periodically for any new action the user needs to see or perform. I push events to this log, and once they are completed they are removed from the log. No need to store NewTutorial_completed = 1-type variables this way. I am sure this is a common problem. I would appreciate any input on what best practice is.
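
    A hedged sketch of that event-log idea in SQL (all table and column names are made up): seed a pending-event row per user, check it with one indexed lookup, and delete it on completion, so the table stays proportional to unfinished events rather than to all users:

        CREATE TABLE pending_event (
            user_id  BIGINT      NOT NULL,
            event_id VARCHAR(50) NOT NULL,   -- e.g. 'new_tutorial'
            PRIMARY KEY (user_id, event_id)
        )

        -- On page view: one indexed point lookup, empty for most users.
        SELECT event_id FROM pending_event WHERE user_id = 21312315

        -- When the user finishes (or dismisses) the tutorial:
        DELETE FROM pending_event
        WHERE user_id = 21312315 AND event_id = 'new_tutorial'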

    Read the article

  • Large sparse (stiff) ODE system needed for testing

    - by macydanim
    I hope this is the right place for this question. I have been working on a sparse stiff implicit ODE solver and have finished the code so far. I have now tested the solver with the Van der Pol equation and another stiff problem, which is of dimension 4. But to perform better tests I am searching for a bigger system, of order N = 100...1000, if possible stiff and sparse. Does anybody have an example I could use? I really don't know where to search.
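
    One standard family of test problems (a hedged suggestion, not from the original post): discretising a 1D diffusion equation by the method of lines gives a stiff ODE system with a tridiagonal (hence sparse) Jacobian of whatever size N you like, and it gets stiffer as N grows. A minimal sketch in Python:

        import numpy as np

        def make_heat_ode(n, kappa=1.0):
            """u_t = kappa * u_xx on [0,1], zero Dirichlet BCs, n interior points.
            Returns f(t, u) for the sparse, stiff system du/dt = A u."""
            h = 1.0 / (n + 1)
            def f(t, u):
                du = np.empty_like(u)
                du[0] = kappa * (u[1] - 2.0 * u[0]) / h**2
                du[-1] = kappa * (u[-2] - 2.0 * u[-1]) / h**2
                du[1:-1] = kappa * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
                return du
            return f

        # N = 500: the Jacobian's eigenvalues range from about -pi^2 down to
        # -4/h^2, so the stiffness ratio grows like N^2.
        n = 500
        f = make_heat_ode(n)
        u0 = np.sin(np.pi * np.arange(1, n + 1) / (n + 1))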

    Read the article

  • More Efficient Data Structure for Large Layered Tile Map

    - by Stupac
    It seems like the popular method is to break the map up into regions and load them as needed, my problem is that in my game there are many AI entities other than the player out performing actions in virtually all the regions of the map. Let's just say I have a 5000x5000 map, when I use a 2D array of byte's to render it my game uses around 17 MB of memory, as soon as I change that data structure to a my own defined MapCell class (which only contains a single field: byte terrain) my game's memory consumption rockets up to 400+ MB. I plan on adding layering, so an array of byte's won't cut it and I figure I'd need to add a List of some sort to the MapCell class to provide objects in the layers. I'm only rendering tiles that are on screen, but I need the rest of the map to be represented in memory since it is constantly used in Update. So my question is, how can I reduce the memory consumption of my map while still maintaining the above requirements? Thank you for your time! Here's a few snippets my C# code in XNA4: public static void LoadMapData() { // Test map generations int xSize = 5000; int ySize = 5000; MapCell[,] map = new MapCell[xSize,ySize]; //byte[,] map = new byte[xSize, ySize]; Terrain[] terrains = new Terrain[4]; terrains[0] = grass; terrains[1] = dirt; terrains[2] = rock; terrains[3] = water; Random random = new Random(); for(int x = 0; x < xSize; x++) { for(int y = 0; y < ySize; y++) { //map[x,y] = new MapCell(terrains[random.Next(4)]); map[x,y] = new MapCell((byte)random.Next(4)); //map[x, y] = (byte)random.Next(4); } } testMap = new TileMap(map, xSize, ySize); // End test map setup currentMap = testMap; } public class MapCell { //public TerrainType terrain; public byte terrain; public MapCell(byte itsTerrain) { terrain = itsTerrain; } // the type of terrain this cell is treated as /*public Terrain terrain { get; set; } public MapCell(Terrain itsTerrain) { terrain = itsTerrain; }*/ }

    Read the article

  • Writing Large Portions Of Code Then Debugging?

    - by The Floating Brain
    Lately I have been writing a game engine, and I have been writing a lot of "foundation stuff" (standard interfaces, modules, a message system etc.), but I have noticed a pattern: a lot of the stuff is interdependent and I cannot debug until everything is done, hence I do not debug for about 3 to 5 hours at a time. I am wondering if this is an acceptable practice for this part of the project, and if not, can anyone give me some advice? -----Update-----: I downloaded some code metrics tools, and my program's cyclomatic complexity is 1.52, which as I understand it is good and should correlate to high cohesion. If I am wrong, please correct me.

    Read the article
