Search Results

Search found 23634 results on 946 pages for 'performance management'.


  • Installed Ubuntu on a low-spec PC and it is too slow, even with LXDE

    - by Herudae
    I'm new to Linux and started with Ubuntu 11.10. I installed it on my PC (Core2Duo 2 GHz, 512 MB DDR2 RAM, video integrated on the motherboard). I know the requirement for Unity is 1 GB of RAM, so I decided to download a more lightweight desktop environment and installed LXDE. It loads very fast, compared to the 3.5 minutes from login screen to open desktop in Unity, but it freezes every time I open a single program. I can't even browse the Internet; it freezes, sometimes for a couple of minutes, and the graph at the bottom right is all green, as if it were using 100% CPU. It happens with every program. As additional data, it takes 3+ minutes to get from the boot selection screen to the login screen, and 3.5 minutes more to get into Ubuntu with Unity; with LXDE that drops to approximately 30 seconds. Is Ubuntu + the LXDE desktop environment package = Lubuntu, or should I download Lubuntu directly instead? I installed some other desktop environments, such as GNOME, but it doesn't log in; the screen just turns grey. Should I get an older Ubuntu version? I'm thinking about uninstalling Ubuntu, but I'll try to exhaust the options first. Thanks for your support.

    Read the article

  • Determining cause of random latency and loading issues

    - by Sherwin Flight
    I'm not sure exactly what details to post in regards to my issue, because I'm not sure what is relevant. Prior to the end of September my websites all loaded quickly, in almost all cases; loading time wasn't usually more than a few seconds. Since the end of September, however, I have noticed a big increase in page loading times. In some cases pages were taking 30 seconds or more to load. I have a remote monitoring service monitoring some of the sites as well, and the image below shows the response times over the past month. The response times shown at the beginning of this graph are what the usual response times were prior to this issue occurring. You can see that there has been a significant increase in response times from the beginning to the end of this graph. The thing is, the problem is not happening 100% of the time. If I click through the site, or even just keep refreshing the page, about 25% of the time the pages load quickly; the remaining 75% of the time they load slowly. Sometimes the pages take so long to load that they time out and don't load at all. I have contacted my hosting provider, and they said things at their end were fine. I don't believe the problem is my home Internet provider, because all other websites load without a problem. The server is located in Texas, USA. This also raises another interesting point. My remote monitor checks my site from two locations: California, USA, and London, England. As you can see in the chart below, the response time is actually shorter when checked from London, which doesn't seem to make sense, since the server is physically closer to the California monitoring location. I would have expected the London monitoring location to have higher response times, since it is physically farther away. I should also point out that in some traceroute tests I've done, the first connection to the server seems to take the longest; after that, the rest of the page loads quickly. Below is a little chart showing the times for the first connection to the server. So, what could be causing this problem, and what steps can I take to resolve it, or at least narrow it down? Sending the request to the server is very quick, and receiving the reply back seems pretty quick, but the WAIT time is really long: it connects, sends the request, but then waits close to 30 seconds before it starts receiving data back. I am also aware that there are things I can do to speed up page loading times, like reducing the number of CSS and JS files used on a page, compressing images, etc. That is not really the source of the problem, though, because nothing has changed on the site since before the problem started, and other sites on the same server are loading slowly as well.

    Read the article

  • High RAM usage by a wxPython GUI, and advice needed to reduce it

    - by user67024
    I've recently developed a GUI in wxPython for the Windows platform. It contains five tabs; four of them are just richTextCtrl boxes, and the other one has controls for uploading files, buttons, textctrls, a slider, etc. As I was new to GUI development in Python, I used wxFormBuilder to generate some of the code, using a good number of sizers. The problem is that the GUI starts off with an initial memory footprint of around 40 MB, which is too much for such a simple application (or so I think). Also, the functions handling the data use huge lists, since the program is for debugging large data logs and identifying the problems in them, which means I can't afford to spend memory on the GUI. So, how can I reduce that starting memory size? Is this a general issue in wxPython? I'm currently trying to use profilers, but I'm not sure they are going to help.

    Read the article

  • When to start thinking about scalability?

    - by Rits
    I'm having a funny but also terrible problem. I'm about to launch a new (iPhone) app. It's a turn-based multiplayer game running on my own custom backend. But I'm afraid to launch. For some reason, I think it might become something big and that its popularity will kill my poor lonely single server + MySQL database. On one hand I'm thinking that if it's growing, I'd better be prepared and have a scalable infrastructure already in place. On the other hand I just feel like getting it out into the world and see what happens. I often read stuff like "premature optimization is the root of all evil" or people saying that you should just build your killer game now, with the tools at hand, and worry about other stuff like scalability later. I'd love to hear some opinions on this from experts or people with experience with this. Thanks!

    Read the article

  • EPM Planning 11.1.2 - MassGridStatistics

    - by Keith Rosenthal
    A utility is available for Oracle Hyperion Planning that determines web form load times. This utility, MassGridStatistics, opens all web forms within the Planning application. After the forms are opened, an HTML page will appear showing the form options, suppression, number of row, column, and page members, and load times. Any form having a load time longer than one second could potentially have scalability issues in a multi-user environment and should be considered for re-design. Adding suppression (especially block suppression) and reducing the number of rows and columns are potential fixes that will reduce load times. The MassGridStatistics utility is located in a .7z file called MassGridStatistics.7z. Extract the file using 7-Zip. A readme file is provided listing the installation instructions and the steps to run the utility. MassGridStatistics is included with the 11.1.2.1.101 patch set and will also be in all future releases starting with 11.1.2.2. For earlier Planning releases, an SR will be necessary to have Support provide the utility.

    Read the article

  • What is the benefit of triple buffering?

    - by user782220
    I read everything written in a previous question. From what I understand, in double buffering the program must wait until the finished drawing is copied or swapped before starting the next drawing. In triple buffering the program has two back buffers and can immediately start drawing in the one that is not involved in such copying. But with triple buffering, if you're in a situation where you can take advantage of the third buffer, doesn't that suggest that you are drawing frames faster than the monitor can refresh? So you don't actually get a higher frame rate. What, then, is the benefit of triple buffering?
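
    With vsync enabled, the benefit is latency rather than frame rate: the renderer never blocks waiting for the swap, and each refresh presents the newest completed frame. Below is a minimal, hypothetical Java sketch of the buffer rotation; the class and method names are illustrative, not from the original question.

      /** Hypothetical sketch of triple-buffer rotation. The renderer never
       *  blocks: a finished frame simply becomes the "ready" one, and each
       *  vsync presents the newest ready frame. */
      public class TripleBufferSketch {
          private int front = 0;   // index of the buffer being scanned out
          private int ready = 1;   // index of the newest completed frame
          private int drawing = 2; // index the renderer is writing into
          private boolean newFrameReady = false;

          /** Renderer finished a frame: publish it without waiting for vsync. */
          synchronized void frameFinished() {
              int t = ready;
              ready = drawing;     // the just-drawn buffer is now the newest frame
              drawing = t;         // recycle the stale one; rendering continues at once
              newFrameReady = true;
          }

          /** Vertical refresh: present the newest completed frame. */
          synchronized void presentOnVsync() {
              if (!newFrameReady) return; // no new frame: keep showing the old one
              int t = front;
              front = ready;
              ready = t;
              newFrameReady = false;
          }
      }

    Because frameFinished never waits, the renderer can begin the next frame immediately; the displayed rate is still capped by the refresh, but the frame on screen is always the most recently completed one, which is the latency advantage over vsynced double buffering.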

    Read the article

  • Java alphabetizing algorithm: insertion sort vs. bubble sort

    - by Chris Okyen
    I am supposed to "Develop a program that alphabetizes three strings. The program should allow the user to enter the three strings, and then display the strings in alphabetical order." I am instructed to use the String library's compareTo()/charAt()/toLowerCase() to make all the characters lowercase, so that the lexicographic comparison is also an alphabetical comparison. Input pseudo code: String[] input = new String[3]; Scanner keyboard = new Scanner(System.in); System.out.println("Enter three strings: "); for (int i = 0; i < 3; i++) input[i] = keyboard.next(); The sorting would be: insertion sort: 321 -> 231 -> 123; bubble sort: 321 -> 231 -> 213 -> 123. Which would be more efficient in this case? Bubble sort seems to be more efficient here, though the two seem to have equal worst, best, and average cases, but I have read that insertion sort is quicker for small amounts of data, as in my case.
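
    A minimal runnable sketch of the insertion-sort version under the assignment's constraints (toLowerCase() plus compareTo()); the class and variable names are illustrative:

      import java.util.Scanner;

      public class Alphabetize {
          public static void main(String[] args) {
              String[] input = new String[3];
              Scanner keyboard = new Scanner(System.in);
              System.out.println("Enter three strings: ");
              for (int i = 0; i < 3; i++)
                  input[i] = keyboard.next();

              // Insertion sort, comparing lowercased copies so the
              // lexicographic order is also alphabetical order.
              for (int i = 1; i < input.length; i++) {
                  String key = input[i];
                  int j = i - 1;
                  while (j >= 0 && input[j].toLowerCase().compareTo(key.toLowerCase()) > 0) {
                      input[j + 1] = input[j];
                      j--;
                  }
                  input[j + 1] = key;
              }

              for (String s : input)
                  System.out.println(s);
              keyboard.close();
          }
      }

    For three elements the asymptotic difference is irrelevant: insertion sort performs at most 3 comparisons here, and bubble sort at most 3 as well, so either satisfies the assignment.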

    Read the article

  • EXALYTICS - If Oracle BI Server Does Not Fail Over to the TimesTen Instance

    - by Ahmed Awan
    If the BI Server does not fail over to the second TimesTen instance on the scaled-out node, ensure that the logical table source (LTS) in the repository maps both TimesTen physical data sources. This mapping ensures that, at the logical table source level, a path exists to both TimesTen instances; if one TimesTen instance is not available, the BI Server's failover logic at the DSN level tries to connect to the other TimesTen instance. Reference: http://docs.oracle.com/cd/E23943_01/bi.1111/e24706/toc.htm

    Read the article

  • Ways to break the "Syndrome of the perfect programmer"

    - by Rushino
    I am probably not the only one who feels this way, but I have what I tend to call "the syndrome of the perfect programmer", which many might say is the same as being a perfectionist, only in the domain of programming. The domain of programming is a bit problematic for such a syndrome, though. Have you ever felt that, when you are programming, you're never confident enough that your code is clean, good code that follows most of the best practices? There are so many rules to follow that I feel somehow overwhelmed. It's not that I don't like to follow the rules; I am a programmer, I love programming, I see it as an art, and I want to follow the rules. But I only wish I could have everything a bit more under "control" regarding best practices and good code. Maybe it's a lack of organization? A lack of experience? A lack of practice? Maybe it's a lack of something else someone could point out? Is there any way to get rid of this syndrome?

    Read the article

  • SQL Azure Federations and Semantic Search by the SQL product team in London tonight (Monday)

    - by simonsabin
    Don’t forget that tonight we have Michael Rys from the SQL Server product team presenting on the federation support coming to SQL Azure and the semantic search coming in SQL Server Denali. This is a must-attend evening for anyone who is serious about scaling SQL or doing search in SQL Server. Michael also wears a few other hats, including being Microsoft’s representative on the W3C XML Query Working Group. To register go to http://sqlsocial20110613.eventbrite.com/   PS: Beer and pizza will be laid on...(read more)

    Read the article

  • What is the best way to check for overlap between the player and static, non-collidable items in the Bullet physics engine

    - by tigrou
    I'd like to add non-collidable objects (e.g. power-ups, items, ...) to a game world using the Bullet physics engine, and to know when there is a collision between the player and them. Some info: there are a lot of items (1000), all are box shapes, and they don't overlap. Here is what I have tried: btDbvt* bvtItems = new btDbvt(); //btDbvt is a hierarchical AABB tree, used by Bullet foreach(var item ...) { btDbvtVolume volume = ... //compute item AABB; bvtItems->insert(volume, (void*)someExtraData); } Then, to find collisions between items and the player: playerRigidBody->getAabb(min, max); btDbvtVolume playervolume = ... //compute player AABB bvtItems->collideTV(bvtItems->m_root, playervolume, *someCollisionHandler); This works fairly well (and it's very fast); however, there is a problem: it only checks item AABBs against the player AABB. That loss of precision is acceptable for the items but not for the player, which is not a box. It would need another check to make sure the player really collides with the item, but I don't know how to do this in Bullet. It would have been nice to have a function like playerRigidBody->checkCollisionWithAABB(). After trying that, I discovered that a btGhostObject exists and seems to have been made for this. I changed my code like so: foreach(var item...) { btCollisionObject* ghostObject = new btGhostObject(); ghostObject->setCollisionShape(boxShape); ghostObject->setCollisionFlags(ghostObject->getCollisionFlags() | btCollisionObject::CF_NO_CONTACT_RESPONSE); startTransform.setOrigin(...); //item position ghostObject->setWorldTransform(startTransform); dynamicsWorld->addCollisionObject(ghostObject, btBroadphaseProxy::SensorTrigger, btBroadphaseProxy::CharacterFilter); } This also works OK, but there is a huge FPS drop (almost ten times slower), which is not acceptable. Maybe something is missing (a forgotten flag?) and Bullet is doing extra work for nothing, or maybe all those ghost objects are polluting the broad phase and btGhostObject is not the right tool for this. Any help would be appreciated.
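
    Independent of Bullet's API, the missing narrow phase can stay cheap because the items are boxes: if the player is approximated by a bounding sphere (an assumption of mine, not something from the question), an exact sphere-vs-AABB test needs only a clamp and a squared distance. A minimal Java sketch with hypothetical names, intended to run on the few candidates the coarse AABB pass already reported:

      /** Hypothetical narrow-phase check, independent of Bullet: exact
       *  sphere-vs-AABB overlap, assuming the player is approximated by a
       *  sphere and each item by its axis-aligned box. */
      final class PickupOverlap {
          static final class Box {
              double cx, cy, cz; // center
              double hx, hy, hz; // half extents
          }

          static boolean sphereOverlapsBox(Box b, double px, double py, double pz, double r) {
              // Component-wise distance from the sphere center to the box,
              // clamped to zero inside the box, then compared squared.
              double dx = Math.max(Math.abs(px - b.cx) - b.hx, 0);
              double dy = Math.max(Math.abs(py - b.cy) - b.hy, 0);
              double dz = Math.max(Math.abs(pz - b.cz) - b.hz, 0);
              return dx * dx + dy * dy + dz * dz <= r * r;
          }
      }

    Running this only on the handful of items the btDbvt broad phase returns keeps the cost negligible while removing the false positives caused by testing the player's AABB alone.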

    Read the article

  • Shared to dedicated, or Amazon CloudFront, to improve performance and stay secure?

    - by user978548
    I have a WordPress site which currently takes about 1.8 to 2.5 seconds for the home page to load completely in my country. The page weight is about 700 KB (static content included). In order to increase performance, I'm considering two solutions: switching to a dedicated host, or using Amazon S3 with CloudFront to serve static content. My current shared hosting has servers in a neighboring country but not exactly in mine, and both Amazon and the dedicated hosting do, so that's already an advantage. Considering all that, I still have three questions: Currently having low traffic (100 unique visitors/day, but growing), will there be a huge difference between my shared hosting and a dedicated server? Knowing that I already use a cookie-less domain to deliver static content (but via a redirection to the same server), would using Amazon S3 make a real difference? Regarding the cons of dedicated vs. Amazon S3: if I choose for the dedicated server something like Ubuntu Server, do daily package updates, and keep only port 80 open, would that be sufficient in terms of security (compared with my current shared hosting, which manages everything for me)?

    Read the article

  • What is the best way to keep track of the median?

    - by Steven Mou
    I read a question in a book: numbers are randomly generated and stored in an (expanding) array; how would you keep track of the median? There are two data structures that can solve the problem. One is a balanced binary tree; the other is a pair of heaps that keep track of the largest half and the smallest half of the elements. I think these two solutions have the same running time, O(n lg n), but I am not sure of my judgement. In your opinion, what is the best way to keep track of the median?
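
    A minimal sketch of the two-heap approach, assuming Java's PriorityQueue: a max-heap holds the smaller half and a min-heap the larger half, kept within one element of each other in size, so each insert costs O(lg n) and reading the median is O(1).

      import java.util.Collections;
      import java.util.PriorityQueue;

      public class MedianTracker {
          private final PriorityQueue<Integer> lower =
              new PriorityQueue<>(Collections.reverseOrder()); // max-heap: smaller half
          private final PriorityQueue<Integer> upper =
              new PriorityQueue<>();                           // min-heap: larger half

          public void add(int x) {
              if (lower.isEmpty() || x <= lower.peek()) lower.add(x);
              else upper.add(x);
              // Rebalance so the size difference never exceeds one.
              if (lower.size() > upper.size() + 1) upper.add(lower.poll());
              else if (upper.size() > lower.size() + 1) lower.add(upper.poll());
          }

          public double median() {
              if (lower.isEmpty() && upper.isEmpty())
                  throw new IllegalStateException("no elements yet");
              if (lower.size() == upper.size())
                  return (lower.peek() + upper.peek()) / 2.0;
              return lower.size() > upper.size() ? lower.peek() : upper.peek();
          }
      }

    Both structures give O(lg n) inserts, but the heap version has smaller constants and a trivial constant-time median read, which is why it is the usual answer to this exercise.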

    Read the article

  • Demantra Partitioning and the First PK Column

    - by user702295
    We have found that it is necessary in Demantra to have an index that matches the partition key, although it does not have to be the PK. It is OK to create a new index instead of changing the PK. For example, if my PK on SALES_DATA is (ITEM_ID, LOCATION_ID, SALES_DATE) and I decide to partition by SALES_DATE, then I should add an index starting with the partition key, like this: (SALES_DATE, ITEM_ID, LOCATION_ID). Note that the first column of the new index matches the partition key. It might also be helpful to create a second index with the other PK columns reversed: (SALES_DATE, LOCATION_ID, ITEM_ID). Again, the first column matches the partition key.

    Read the article

  • Java Spotlight Episode 98: Cliff Click on Benchmarking

    - by Roger Brinkley
    Interview with Cliff Click of 0xdata on benchmarking, recorded live at JFokus 2012. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes. Show Notes. News: Bean Validation 1.1; Java EE 7 Roadmap; Java JRE Updates 7u7 and 6u35 available; Change to Java SE 7 and Java SE 6 Update Release Numbers; JCP 2012 Award Nominations Announced; Griffon JavaFX Plugin. Events: Sep 3-6, Herbstcampus, Nuremberg, Germany; Sep 10-15, IMTS 2012 Conference, Chicago; Sep 12, The Coming M2M Revolution: Critical Issues for End-to-End Software and Systems Development, webinar; Sep 30-Oct 4, JavaOne, San Francisco; Oct 3-4, Java Embedded @ JavaOne, San Francisco; Oct 15-17, JAX London; Oct 30-Nov 1, ARM TechCon, Santa Clara; Oct 22-23, Freescale Technology Forum - Japan, Tokyo; Nov 2-3, JMaghreb, Morocco; Nov 13-17, Devoxx, Belgium. Feature Interview: Cliff Click is the CTO and co-founder of 0xdata, a firm dedicated to creating a new way to think about web-scale data storage and real-time analytics. In his own words: "I wrote my first compiler when I was 15 (Pascal to TRS Z-80!), although my most famous compiler is the HotSpot Server Compiler (the Sea of Nodes IR). I helped Azul Systems build an 864-core pure-Java mainframe that keeps GC pauses on 500 GB heaps to under 10 ms, and worked on all aspects of that JVM. Before that I worked on HotSpot at Sun Microsystems, and am at least partially responsible for bringing Java into the mainstream. I am invited to speak regularly at industry and academic conferences and have published many papers about HotSpot technology. I hold a PhD in Computer Science from Rice University and about 15 patents." What’s Cool: Shaun Smith’s Devoxx 2011 talk "JPA Multi-Tenancy & Extensibility" is now freely available at Parleys.

    Read the article

  • Cat 6 cable only reaching 100 Mbit speeds

    - by Stu2000
    I tried two different Cat 6 cables directly connected between my two Ubuntu machines. This one I ordered online: http://www.amazon.co.uk/gp/product/B002SQPDXS/ref=wms_ohs_product only achieves 100 Mbit speeds, though it does appear to support a crossover (direct PC-to-PC) connection; the other Cat 6 cable worked perfectly and gets the full 1 gigabit speed. Both tests were performed using FTP and checking the network monitor, with a direct PC-to-PC connection. Did the product from Amazon lie to me, or do I need to manually change a setting somewhere in Ubuntu for some cables? I had thought 10 quid for 20 m of gigabit Ethernet cable was a bit cheap; you get what you pay for... Regards, Stu. Update: It seems that after rebooting, the device is set to 1000 Mbit/s when looking it up with sudo ethtool eth0. However, after a while this will drop down to just 100, after which, to reset it to 1000, I have to reboot; simply unplugging and re-plugging the cable doesn't do it. I tried setting this in the networking config file as suggested here: auto eth0 iface eth0 inet static pre-up /usr/sbin/ethtool -s eth0 speed 1000 duplex full but that resulted in my networking failing to start. Is there a problem with my auto-negotiation or something? Can I manually override a setting to 1000 Mbit?

    Read the article

  • How to become a "faster" programmer?

    - by Nick Gotch
    My last job evaluation included just one weak point: timeliness. I'm already aware of some things I can do to improve this, but what I'm looking for are some more. Does anyone have tips or advice on what they do to increase the speed of their output without sacrificing its quality? How do you estimate timelines and stick to them? What do you do to get more done in shorter time periods? Any feedback is greatly appreciated. Thanks!

    Read the article

  • Current trends in Random Access Memory

    - by Nutel
    As far as I know, because of the laws of physics there will not be any tangible improvements in CPU cycles per second in the near future. However, because of the von Neumann bottleneck, that seems not to be an issue for non-server applications. So what about RAM: are there any upcoming technologies that promise to improve memory speed, or are we stuck with the current situation until quantum computers come out of the labs?

    Read the article

  • Add-ons for Firefox - Java plugin blocked for JRE versions below 1.6.0_31 or between 1.7.0 and 1.7.0_2

    - by user702295
    As Java 1.6u31 is not certified for use with EBS or Demantra, you may notice issues in relation to the Java plug-in. Demantra Development is currently working to certify Java 1.6u31, the version you are being recommended to upgrade to. EBS customers should not install 1.6u31, as it is not certified. If you do upgrade your browser, you will either need to downgrade to a lower release of Firefox or find a way of allowing Firefox to use the older version of the Java plug-in.

    Read the article

  • Spatial data from shapefiles (for T-SQL Tuesday #006)

    - by Rob Farley
    I’m giving a presentation on May 12th at the Adelaide .NET User Group on the topic of spatial data and, in particular, the visualization of said data. Given that it’s about one of the larger types, this post should also count towards Michael Coles’ T-SQL Tuesday on BLOB data. I wrote recently about my experience with exploded data, but what I didn’t go on to talk about was how using a shapefile like this would translate into a scenario with a much larger number of shapes, such as all the postcode...(read more)

    Read the article

  • Designing a flexible tile-based engine

    - by Vee
    I'm trying to create a flexible tile-based game engine for making all sorts of non-realtime puzzle games, such as Bejeweled, Civilization, Sokoban, and so on. The first approach I had was to have a 2D array of Tile objects, and then have classes inheriting from Tile that represented the game objects. Unfortunately, that way I couldn't stack more game elements on the same Tile without having a 3D array. Then I did something different: I still had the 2D array of Tile objects, but every Tile object contained a List where I put the different entities. This worked fine until 20 minutes ago, when I realized that it makes many things too expensive. Look at this example: I have a Wall entity. Every update I have to check the 8 adjacent Tiles, then check all of the entities in each Tile's List, check whether any of those entities is a Wall, and then finally draw the correct sprite. (This is done to draw walls that are next to each other seamlessly.) The only solution I see now is having a 3D array, with many layers, that could suit every situation. But that way I can't stack two entities that share the same layer on the same tile; whenever I want to do that, I have to create a new layer. Is there a better solution? What would you do?
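
    One common alternative (not from the original post) keeps the 2D-array-of-lists layout but caches adjacency-dependent state, such as a per-tile wall bitmask, and recomputes it only when a neighboring tile actually changes rather than on every update. A minimal Java sketch with hypothetical names:

      import java.util.ArrayList;
      import java.util.List;

      /** Hypothetical sketch: tiles keep entity lists, but each tile's
       *  8-neighbor wall bitmask is cached and refreshed only when a wall
       *  is added nearby, instead of being rescanned every frame. */
      class TileGrid {
          interface Entity {}
          static class Wall implements Entity {}

          private final List<Entity>[][] tiles;
          private final int[][] wallMask; // cached bitmask; drawing picks sprites from it
          private final int w, h;

          @SuppressWarnings("unchecked")
          TileGrid(int w, int h) {
              this.w = w; this.h = h;
              tiles = new List[w][h];
              wallMask = new int[w][h];
              for (int x = 0; x < w; x++)
                  for (int y = 0; y < h; y++)
                      tiles[x][y] = new ArrayList<>();
          }

          boolean hasWall(int x, int y) {
              if (x < 0 || y < 0 || x >= w || y >= h) return false;
              for (Entity e : tiles[x][y]) if (e instanceof Wall) return true;
              return false;
          }

          /** Adding a wall refreshes only the 3x3 neighborhood's cached masks. */
          void add(int x, int y, Entity e) {
              tiles[x][y].add(e);
              if (e instanceof Wall)
                  for (int dx = -1; dx <= 1; dx++)
                      for (int dy = -1; dy <= 1; dy++)
                          recomputeMask(x + dx, y + dy);
          }

          private void recomputeMask(int x, int y) {
              if (x < 0 || y < 0 || x >= w || y >= h) return;
              int mask = 0, bit = 0;
              for (int dx = -1; dx <= 1; dx++)
                  for (int dy = -1; dy <= 1; dy++) {
                      if (dx == 0 && dy == 0) continue;
                      if (hasWall(x + dx, y + dy)) mask |= 1 << bit;
                      bit++;
                  }
              wallMask[x][y] = mask;
          }
      }

    The per-frame cost then drops to an array read; the list scan happens only when the grid actually mutates.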

    Read the article

  • Regulating how much to draw based on how much was drawn last frame

    - by Mike Howard
    I have a 3D game world on an iPhone (limited graphics speed), and I'm already regulating whether I draw each shape on the screen based on its size and distance from the camera. Something like... if (how_big_it_looks_from_the_camera > constant) then draw. What I want to do now is also take into account how many shapes are being drawn, so that in busier areas of the game world I can draw less than I otherwise would. I tried to do this by dividing how_big_it_looks by the number of shapes that were drawn last frame (well, the square root of this, but I'm simplifying; the problem is the same): if (how_big_it_looks / shapes_drawn > constant2) then draw. But the check happens at the level of objects which represent many drawn shapes, and if an object containing many shapes is switched on, it increases shapes_drawn a lot and switches itself back off the next frame. It flickers on and off. I tried keeping a kind of weighted average of previous values, by each frame doing something like shapes_drawn_recently = 0.9 * shapes_drawn_recently + 0.1 * shapes_just_drawn, but of course it only slows the flickering down, because of the nature of the feedback loop. Is there a good way of solving this? My project is in Objective-C, but a general algorithm or pseudo-code is good too. Thanks.
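
    One standard remedy for this kind of on/off flicker, not mentioned in the post, is hysteresis: use a higher threshold to switch an object on and a lower one to switch it off, so small feedback-driven changes in the load metric no longer toggle visibility every frame. A minimal Java sketch with hypothetical names and constants:

      /** Hypothetical sketch: hysteresis thresholds stop objects from
       *  flickering when a shared load metric feeds back into the
       *  visibility decision. The constants are illustrative. */
      final class DrawRegulator {
          private static final double ON_THRESHOLD  = 1.2; // must clear this to turn on
          private static final double OFF_THRESHOLD = 0.8; // must fall below this to turn off

          /**
           * @param apparentSize   how big the object looks from the camera
           * @param loadFactor     e.g. sqrt(shapes drawn recently)
           * @param currentlyDrawn whether the object was drawn last frame
           */
          boolean shouldDraw(double apparentSize, double loadFactor, boolean currentlyDrawn) {
              double score = apparentSize / loadFactor;
              // The on and off bars differ, so the jump in loadFactor caused
              // by switching a big object on can no longer push that same
              // object straight back off on the next frame.
              return currentlyDrawn ? score > OFF_THRESHOLD : score > ON_THRESHOLD;
          }
      }

    The gap between the two thresholds just needs to exceed the jump a single object can cause in the load metric; combined with the weighted average already being used, the loop settles instead of oscillating.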

    Read the article

  • OpenGL profiling with AMD PerfStudio 2

    - by Aurus
    I'm rendering only a really small number of polygons for my UI; however, I still tried to increase the FPS, and in the end I removed redundant calls, which helped. I really don't want to lose FPS for nothing, so I keep looking for more improvements. The first thing I noticed is the "huge" stretch of time where no calls are made before SwapBuffers (the black one). I know that OpenGL works asynchronously, so SwapBuffers has to wait until everything is done. But shouldn't PerfStudio mark that time as black too? Correct me if I am wrong. The second thing I noticed is that some glUniform2f calls just take longer (the brown ones). They should all upload two floats to the GPU; how can the time be so different from call to call? The program isn't even changed or anything like that. I also tried other tools, like gDebugger and CodeXL, but they often crashed, and they show fewer statistics (only number of calls, redundant calls, etc.). EDIT: I also realized that the draw calls have different durations too, which was obvious to me, but sometimes drawing more vertices is faster than drawing fewer vertices.

    Read the article

  • Good DBAs Do Baselines

    - by Louis Davidson
    One morning, you wake up and feel funny. You can’t quite put your finger on it, but something isn’t quite right. What now? Unless you happen to be a hypochondriac, you likely drag yourself out of bed, get on with the day and gather more “evidence”. You check your symptoms over the next few days; do you feel the same, better, worse? If better, then great, it was some temporal issue, perhaps caused by an allergic reaction to some suspiciously spicy chicken. If the same or worse, then you go to the doctor for some health advice, but armed with some data to share, and having ruled out certain possible causes that are fixed with a bit of rest and perhaps an antacid. Whether you realize it or not, in comparing how you feel one day to the next, you have taken baseline measurements. In much the same way, a DBA uses baselines to gauge the health of their database servers. Of course, while SQL Server is very willing to share data regarding its health and activities, it has almost no idea of the difference between good and bad. Over time, experienced DBAs develop “mental” baselines with which they can gauge the health of their servers almost as easily as their own body. They accumulate knowledge of the daily, natural state of each part of their database system, and so know instinctively when one of their databases “feels funny”. Equally, they know when an “issue” is just a passing tremor. They see their SQL Server with all of its four CPU cores running close to 100% and don’t panic anymore. Why? It’s 5PM and every day the same thing occurs when the end-of-day reports, which are very CPU intensive, are running. Equally, they know when they need to respond in earnest when it is the first time they have heard about an issue, even if it has been happening every day. Nevertheless, no DBA can retain mental baselines for every characteristic of their systems, so we need to collect physical baselines too. In my experience, surprisingly few DBAs do this very well. Part of the problem is that SQL Server provides a lot of instrumentation. If you look, you will find an almost overwhelming amount of data regarding user activity on your SQL Server instances, and use and abuse of the available CPU, I/O and memory. It seems like a huge task even to work out which data you need to collect, let alone start collecting it on a regular basis, managing its storage over time, and performing detailed comparative analysis. Without baselines, though, it is very difficult to pinpoint what ails a server just by looking at a single snapshot of the data, or to spot retrospectively what caused the problem by examining aggregated data for the server, collected over many months. It isn’t as hard as you think to get started. You’ve probably already established some troubleshooting queries of the type SELECT Value FROM SomeSystemTableOrView. Capturing a set of baseline values for such a query can be as easy as changing it as follows: INSERT into BaseLine.SomeSystemTable (value, captureTime) SELECT Value, SYSDATETIME() FROM SomeSystemTableOrView; Of course, there are monitoring tools that will collect and manage this baseline data for you, automatically, and allow you to perform comparison of metrics over different periods. However, to get yourself started and to prove to yourself (or perhaps the person who writes the checks for tools) the value of baselines, stick something similar to the above query into an agent job, running every hour or so, and you are on your way with no excuses!
Then, the next time you investigate a slow server, and see x open transactions, y users logged in, and z rows added per hour in the Orders table, compare to your baselines and see immediately what, if anything, has changed!

    Read the article

  • Slides and demo code for Columnstore Index session

    - by Hugo Kornelis
    Almost a week has passed since SQLBits X in London, so I guess it’s about time for me to share the slides and demo code of my session on columnstore indexes. After all, I promised people I would do that – especially when I found out that I had enough demos prepared to fill two sessions! I made some changes to the demo code. I added extra comments, not only to the demos I could not explain and run during the session, but also to the rest, so that people who missed the session will also be able to...(read more)

    Read the article
