Search Results

Search found 15377 results on 616 pages for 'socket programming'.


  • Starting to make 2D games in C++

    - by Ashley
    I'm fairly experienced with C and C#, but I've only ever created console/Windows applications. I'm also experienced with AS3 and I've made some Flash games. I want to make proper 2D games in C++, but I have no idea where to begin with graphics. There are entire books devoted to game development in C++ that only work with console applications, and I'm finding the lack of resources and tutorials for proper 2D games frustrating... I'm also not particularly interested in using existing engines because I want total control over what I create. I've heard of the Allegro library; is it something important/popular that I should look into? How would I use DirectX? Any resources or links to tutorials or information are greatly appreciated.
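
    For a sense of how little code a dedicated 2D library needs before you can start drawing, here is a minimal sketch of opening a window and clearing it with Allegro 5 (the window size and timing are arbitrary; this is only a starting point, not a full game loop):

        // Minimal Allegro 5 sketch: initialise, open a window, clear it, wait, shut down.
        #include <allegro5/allegro.h>

        int main() {
            if (!al_init()) return 1;                                // initialise the core library

            ALLEGRO_DISPLAY *display = al_create_display(640, 480);  // open a 640x480 window
            if (!display) return 1;

            al_clear_to_color(al_map_rgb(30, 30, 60));               // fill the backbuffer with a colour
            al_flip_display();                                       // present it on screen

            al_rest(2.0);                                            // keep the window up for two seconds

            al_destroy_display(display);
            return 0;
        }

    A real game would replace al_rest with an event/timer loop that polls input and redraws every frame; SDL2 and SFML offer essentially the same "init, create window, clear, present" flow if Allegro turns out not to be to your taste.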

    Read the article

  • Scaling Down Pixel Art?

    - by Michael Stum
    There are plenty of algorithms for scaling up pixel art (I prefer hqx personally), but are there any notable algorithms for scaling it down? In my case, the game is designed to run at 1280x720, but if someone plays at a lower resolution I want it to still look good. Most pixel art discussions center around 320x200 or 640x480 and upscaling for use in console emulators, but I wonder how modern 2D games like the Monkey Island remake manage to look good at lower resolutions. (Ignoring the option of having multiple versions of the assets (essentially, mipmapping).)
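
    For a baseline (not a pixel-art-aware filter, just plain area averaging), a box filter already downscales far more gracefully than nearest-neighbour, because every source pixel contributes to the result. A rough C++ sketch, assuming tightly packed 8-bit RGB pixels; the Image type and function name are invented for illustration:

        #include <cstdint>
        #include <vector>

        // Hypothetical image type: tightly packed 8-bit RGB, row-major.
        struct Image {
            int width = 0, height = 0;
            std::vector<std::uint8_t> rgb;  // size == width * height * 3
        };

        // Box-filter (area-average) downscale: each destination pixel is the mean
        // of all source pixels that fall inside its footprint.
        Image downscaleBox(const Image &src, int dstW, int dstH) {
            Image dst{dstW, dstH, std::vector<std::uint8_t>(dstW * dstH * 3)};
            for (int y = 0; y < dstH; ++y) {
                const int sy0 = y * src.height / dstH, sy1 = (y + 1) * src.height / dstH;
                for (int x = 0; x < dstW; ++x) {
                    const int sx0 = x * src.width / dstW, sx1 = (x + 1) * src.width / dstW;
                    int sum[3] = {0, 0, 0}, count = 0;
                    for (int sy = sy0; sy < sy1; ++sy)
                        for (int sx = sx0; sx < sx1; ++sx, ++count)
                            for (int c = 0; c < 3; ++c)
                                sum[c] += src.rgb[(sy * src.width + sx) * 3 + c];
                    for (int c = 0; c < 3; ++c)
                        dst.rgb[(y * dstW + x) * 3 + c] =
                            static_cast<std::uint8_t>(count ? sum[c] / count : 0);
                }
            }
            return dst;
        }

    For pixel art specifically, many games instead render at the largest integer scale that fits and letterbox the remainder, since fractional scaling of chunky pixels tends to smear them no matter which filter is used.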

    Read the article

  • Audio Panning using RtAudio

    - by user1801724
    I use the RtAudio library. I would like to implement an audio program where I can control the panning (e.g. shifting the sound from the left channel to the right channel). In my specific case, I use duplex mode (you can find an example here: duplex mode), which means the microphone input is routed to the speaker output. I searched the web, but I did not find anything useful. Should I apply a filter to the output buffer? What kind of filter? Can anyone help me? Thanks
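
    Panning does not normally need a filter at all, only per-channel gains. A common choice is the constant-power pan law: the left channel gets cos(theta) of the signal and the right gets sin(theta), with theta running from 0 to pi/2 as the pan moves from hard left to hard right. A rough C++ sketch of what could run inside the duplex callback, assuming a mono float input and an interleaved stereo float output (the function name and buffer layout are assumptions, not part of the RtAudio API):

        #include <cmath>

        // Constant-power pan: pan = -1.0 (hard left) .. 0.0 (centre) .. +1.0 (hard right).
        // Assumes a mono float input and an interleaved stereo float output, which is one
        // common way of configuring a duplex stream; adjust to your actual buffer format.
        void panMonoToStereo(const float *in, float *out, unsigned int nFrames, float pan) {
            const float kPi   = 3.14159265358979f;
            const float angle = (pan + 1.0f) * 0.25f * kPi;   // maps -1..+1 to 0..pi/2
            const float leftGain  = std::cos(angle);
            const float rightGain = std::sin(angle);
            for (unsigned int i = 0; i < nFrames; ++i) {
                out[2 * i]     = in[i] * leftGain;   // left channel
                out[2 * i + 1] = in[i] * rightGain;  // right channel
            }
        }

    At centre both gains are about 0.707, which keeps the perceived loudness roughly constant as the sound moves across the stereo field; a plain linear crossfade works too but dips in level at the middle.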

    Read the article

  • Difference between the terms Material & Effect

    - by codey
    I'm making an effect system right now (I think, because it may be a material system... or both!). The effects system follows the common (e.g. COLLADA, DirectX) effect framework abstraction of Effects have Techniques, Techniques have Passes, Passes have States & Shader Programs. An effect, according to COLLADA, defines the equations necessary for the visual appearance of geometry and screen-space image processing. Keeping with the abstraction, effects contain techniques. Each effect can contain one or many techniques (i.e. ways to generate the effect), each of which describes a different method for rendering that effect. The technique could relate to quality (e.g. high precision, high LOD, etc.), or to the in-game situation (e.g. night/day, power-up mode, etc.). Techniques hold a description of the textures, samplers, shaders, parameters, & passes necessary for rendering this effect using one method. Some algorithms require several passes to render the effect. Pipeline descriptions are broken into an ordered collection of Pass objects. A pass provides a static declaration of all the render states, shaders, & settings for "one rendering pipeline" (i.e. one pass). Meshes usually contain a series of materials that define the model. According to the COLLADA spec (again), a material instantiates an effect, fills its parameters with values, & selects a technique. But I see material defined differently in other places, such as just the Lambert, Blinn, Phong "material types/shaded surfaces", or as Metal, Plastic, Wood, etc. In game dev forums, people often talk about implementing a "material/effect system". Is the material not an instance of an effect? Ergo, if I had effect objects stored in a collection, & each effect instance object with its own parameter settings, then there is no need for the concept of a material... Or am I interpreting it wrong? Please help by contributing your interpretations as I want to be clear on a distinction (if any), & don't want to miss out on the concept of a material if it should be implemented to follow the abstraction of the DirectX FX framework & COLLADA definitions closely.
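
    The COLLADA reading is easier to see in code. A rough C++ sketch of that split, with all type names invented for illustration, in which a Material is nothing more than "a reference to an Effect, a chosen technique, and the parameter values filled in":

        #include <cstddef>
        #include <map>
        #include <string>
        #include <vector>

        // Rough sketch of the COLLADA-style split: an Effect describes *how* geometry
        // can be rendered; a Material instantiates it with concrete choices and values.
        struct Pass      { /* render states, shader program handle, settings ... */ };
        struct Technique { std::string name; std::vector<Pass> passes; };
        struct Effect    { std::string name; std::vector<Technique> techniques; };

        struct Material {
            const Effect*                effect;      // which effect is instantiated
            std::size_t                  technique;   // which of its techniques is selected
            std::map<std::string, float> parameters;  // parameter values filled in per object
        };

    On that reading, an "effect instance object with its own parameter settings" is exactly what COLLADA calls a material, so a material/effect system is really one system: a shared collection of effects plus a lightweight per-object instance record. The Lambert/Blinn/Phong or Metal/Plastic/Wood usages are just conventional names for particular effects or parameter sets.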

    Read the article

  • What could be the Java successor Oracle wants to invest in?

    - by deamon
    I've read that Oracle wants to invest into another language than Java: "On the other hand, Oracle has been particularly supportive of alternative JVM languages. Adam Messinger ( http://www.linkedin.com/in/adammessinger ) was pretty blunt at the JVM Languages Summit this year about Java the language reaching it's logical end and how Oracle is looking for a 'higher level' language to 'put significant investment into.'" But what language could be the one Oracle wants to invest in? Is there another candidate than Scala?

    Read the article

  • Nothing drawing on screen OpenGL with GLSL

    - by codemonkey
    I hate to be asking this kind of question here, but I am at a complete loss as to what is going wrong, so please bear with me. I am trying to render a single cube (voxel) in the center of the screen, through OpenGL with GLSL, on Mac.

    I begin by setting up everything using GLUT:

        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_RGBA|GLUT_ALPHA|GLUT_DOUBLE|GLUT_DEPTH);
        glutInitWindowSize(DEFAULT_WINDOW_WIDTH, DEFAULT_WINDOW_HEIGHT);
        glutCreateWindow("Cubez-OSX");
        glutReshapeFunc(reshape);
        glutDisplayFunc(render);
        glutIdleFunc(idle);
        _electricSheepEngine=new ElectricSheepEngine(DEFAULT_WINDOW_WIDTH, DEFAULT_WINDOW_HEIGHT);
        _electricSheepEngine->initWorld();
        glutMainLoop();

    Then inside the engine I init the camera & projection matrices:

        cameraPosition=glm::vec3(2,2,2);
        cameraTarget=glm::vec3(0,0,0);
        cameraUp=glm::vec3(0,0,1);
        glm::vec3 cameraDirection=glm::normalize(cameraPosition-cameraTarget);
        cameraRight=glm::cross(cameraDirection, cameraUp);
        cameraRight.z=0;
        view=glm::lookAt(cameraPosition, cameraTarget, cameraUp);
        lensAngle=45.0f;
        aspectRatio=1.0*(windowWidth/windowHeight);
        nearClippingPlane=0.1f;
        farClippingPlane=100.0f;
        projection=glm::perspective(lensAngle, aspectRatio, nearClippingPlane, farClippingPlane);

    Then I init the shaders, check that compilation succeeded, and check that the attributes & uniforms are correctly bound (my previous question). These are my two shaders. Vertex:

        #version 120
        attribute vec3 position;
        attribute vec3 inColor;
        uniform mat4 mvp;
        varying vec3 fragColor;

        void main(void){
            fragColor = inColor;
            gl_Position = mvp * vec4(position, 1.0);
        }

    and fragment:

        #version 120
        varying vec3 fragColor;

        void main(void) {
            gl_FragColor = vec4(fragColor,1.0);
        }

    Init the cube:

        setPosition(glm::vec3(0,0,0));
        struct voxelData data[]={
            //front face
            {{-1.0, -1.0,  1.0}, {0.0, 0.0, 1.0}},
            {{ 1.0, -1.0,  1.0}, {0.0, 1.0, 1.0}},
            {{ 1.0,  1.0,  1.0}, {0.0, 0.0, 1.0}},
            {{-1.0,  1.0,  1.0}, {0.0, 1.0, 1.0}},
            //back face
            {{-1.0, -1.0, -1.0}, {0.0, 0.0, 1.0}},
            {{ 1.0, -1.0, -1.0}, {0.0, 1.0, 1.0}},
            {{ 1.0,  1.0, -1.0}, {0.0, 0.0, 1.0}},
            {{-1.0,  1.0, -1.0}, {0.0, 1.0, 1.0}}
        };
        glGenBuffers(1, &modelVerticesBufferObject);
        glBindBuffer(GL_ARRAY_BUFFER, modelVerticesBufferObject);
        glBufferData(GL_ARRAY_BUFFER, sizeof(data), data, GL_STATIC_DRAW);
        glBindBuffer(GL_ARRAY_BUFFER, 0);

        const GLubyte indices[] = {
            // Front
            0, 1, 2,
            2, 3, 0,
            // Back
            4, 6, 5,
            4, 7, 6,
            // Left
            2, 7, 3,
            7, 6, 2,
            // Right
            0, 4, 1,
            4, 1, 5,
            // Top
            6, 2, 1,
            1, 6, 5,
            // Bottom
            0, 3, 7,
            0, 7, 4
        };
        glGenBuffers(1, &modelFacesBufferObject);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, modelFacesBufferObject);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);

    and then the render call:

        glClearColor(0.52, 0.8, 0.97, 1.0);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glEnable(GL_DEPTH_TEST);

        //use the shader
        glUseProgram(shaderProgram);

        //enable attributes in program
        glEnableVertexAttribArray(shaderAttribute_position);
        glEnableVertexAttribArray(shaderAttribute_color);

        //model matrix using model position vector
        glm::mat4 mvp=projection*view*voxel->getModelMatrix();
        glUniformMatrix4fv(shaderAttribute_mvp, 1, GL_FALSE, glm::value_ptr(mvp));

        glBindBuffer(GL_ARRAY_BUFFER, voxel->modelVerticesBufferObject);
        glVertexAttribPointer(shaderAttribute_position,  // attribute
                              3,                         // number of elements per vertex, here (x,y)
                              GL_FLOAT,                  // the type of each element
                              GL_FALSE,                  // take our values as-is
                              sizeof(struct voxelData),  // coord every (sizeof) elements
                              0                          // offset of first element
                              );

        glBindBuffer(GL_ARRAY_BUFFER, voxel->modelVerticesBufferObject);
        glVertexAttribPointer(shaderAttribute_color,     // attribute
                              3,                         // number of colour elements per vertex, here (x,y)
                              GL_FLOAT,                  // the type of each element
                              GL_FALSE,                  // take our values as-is
                              sizeof(struct voxelData),  // coord every (sizeof) elements
                              (GLvoid *)(offsetof(struct voxelData, color3D)) // offset of colour data
                              );

        //draw the model by going through its elements array
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, voxel->modelFacesBufferObject);
        int bufferSize;
        glGetBufferParameteriv(GL_ELEMENT_ARRAY_BUFFER, GL_BUFFER_SIZE, &bufferSize);
        glDrawElements(GL_TRIANGLES, bufferSize/sizeof(GLushort), GL_UNSIGNED_SHORT, 0);

        //close up the attribute in program, no more need
        glDisableVertexAttribArray(shaderAttribute_position);
        glDisableVertexAttribArray(shaderAttribute_color);

    but on screen all I get is the clear color :$ I generate my model matrix using:

        modelMatrix=glm::translate(glm::mat4(1.0), position);

    which in debug turns out to be, for the position of (0,0,0):

        |1, 0, 0, 0|
        |0, 1, 0, 0|
        |0, 0, 1, 0|
        |0, 0, 0, 1|

    Sorry for such a question, I know it is annoying to look at someone's code, but I promise I have tried to debug around and figure it out as much as I can, and can't come to a solution. Help a noob please?

    EDIT: Full source here, if anyone wants

    Read the article

  • Using XNA to learn to build a game, but wanna move on [closed]

    - by Daniel Ribeiro
    I've been building a 2D isometric game (for learning purposes) in C# using XNA. I found it's really easy to manage sprite sheet loading, collision, basic physics and such with the XNA API. The thing is, I want to move on. My real goal is to learn C++ and develop a game using that language. What engine/library would you guys recommend so I can keep going in that same 2D isometric direction, using mostly sprite sheets for the graphics?
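
    For a feel of how a typical C++ 2D library handles the sprite-sheet part, here is a minimal SFML 2.x sketch that draws one 32x32 frame out of a sheet (the file name, frame size and position are placeholders):

        #include <SFML/Graphics.hpp>

        int main() {
            sf::RenderWindow window(sf::VideoMode(800, 600), "Isometric test");

            sf::Texture sheet;
            if (!sheet.loadFromFile("tiles.png")) return 1;   // placeholder sprite sheet

            sf::Sprite tile(sheet);
            tile.setTextureRect(sf::IntRect(0, 0, 32, 32));   // pick one frame off the sheet
            tile.setPosition(400.f, 300.f);

            while (window.isOpen()) {
                sf::Event event;
                while (window.pollEvent(event))
                    if (event.type == sf::Event::Closed) window.close();

                window.clear();
                window.draw(tile);
                window.display();
            }
            return 0;
        }

    SDL2 (SDL_RenderCopy with a source rectangle) and Allegro 5 (al_draw_bitmap_region) cover the same ground; which one fits best is mostly a matter of taste and of how much beyond sprites (audio, input, fonts) you want from a single library.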

    Read the article

  • How to handle multiple API calls using JavaScript/jQuery

    - by James Privett
    I need to build a service that will call multiple APIs at the same time and then output the results on the page (think of how a price comparison site works, for example). The idea is that as each API call completes, its results are sent to the browser immediately, so the page gets progressively bigger until all processes are complete. Because these API calls may take several seconds each to return, I would like to do this via JavaScript/jQuery in order to create a better user experience. I have never done anything like this before using JavaScript/jQuery, so I was wondering if there are any frameworks or advice that anyone would be willing to share.

    Read the article

  • Basis of definitions

    - by Yttrill
    Let us suppose we have a set of functions which characterise something: in the OO world, methods characterising a type. In mathematics these are propositions, and we have two kinds: axioms and lemmas. Axioms are assumptions; lemmas are easily derived from them. In C++, axioms are pure virtual functions. Here's the problem: there's more than one way to axiomatise a system. Given a set of propositions or methods, a subset of the propositions which is necessary and sufficient to derive all the others is called a basis. So too for methods or functions: we have a desired set which must be defined, typically each one has one or more definitions in terms of the others, and we require the programmer to provide instance definitions which are sufficient to allow all the others to be defined and which, if there is an overspecification, are consistent. Let me give an example (in Felix; Haskell code would be similar):

        class Eq[t] {
            virtual fun ==(x:t, y:t):bool => eq(x,y);
            virtual fun eq(x:t, y:t) => x == y;
            virtual fun != (x:t, y:t):bool => not (x == y);

            axiom reflex(x:t): x == x;
            axiom sym(x:t, y:t): (x == y) == (y == x);
            axiom trans(x:t, y:t, z:t): implies(x == y and y == z, x == z);
        }

    Here it is clear: the programmer must define either == or eq or both. If both are defined, the definitions must be equivalent. Failing to define one doesn't cause a compiler error, it causes an infinite loop at run time. Defining both inequivalently doesn't cause an error either, it is just inconsistent. Note that the axioms constrain the semantics of any definition. Given a definition of ==, either directly or via a definition of eq, != is defined automatically, although the programmer might replace the default with something more efficient; clearly such an overspecification has to be consistent. Please note, == could also be defined in terms of !=, but we didn't do that. A characterisation of a partial or total order is more complex, and much more demanding, since there is a combinatorial explosion of possible bases. There is a reason to desire overspecification: performance. There is also another reason: choice and convenience. So here there are several questions. One is how to check that the semantics are obeyed, and I am not looking for an answer here (way too hard!). The other question is: how can we specify, and check, that an instance provides at least a basis? And a much harder question: how can we provide several default definitions which depend on the basis chosen?
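
    The same "supply a basis, derive the rest" pattern exists in mainstream OO languages too, just without any axiom checking. A small C++ analogue of the Eq class above (names invented for illustration): equals and eq default to each other, so either one is an acceptable basis, and exactly as described above, defining neither compiles cleanly but recurses forever at run time, while defining both inconsistently is not detected at all:

        #include <string>
        #include <utility>

        // C++ analogue of the Felix Eq class: 'equals' and 'eq' are each given a default
        // definition in terms of the other, so the implementer may define either one (or
        // both, consistently). 'notEquals' is always derived but may be overridden with
        // something more efficient, as long as it stays consistent.
        struct Eq {
            virtual ~Eq() = default;
            virtual bool equals(const Eq& other) const { return eq(other); }
            virtual bool eq(const Eq& other) const { return equals(other); }
            virtual bool notEquals(const Eq& other) const { return !equals(other); }
        };

        struct Name : Eq {
            std::string value;
            explicit Name(std::string v) : value(std::move(v)) {}
            bool equals(const Eq& other) const override {   // 'equals' chosen as the basis
                const Name* n = dynamic_cast<const Name*>(&other);
                return n != nullptr && n->value == value;
            }
        };

    Nothing here checks that at least one member of the basis was actually supplied, which is precisely the hard part the question asks about.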

    Read the article

  • Overloading interface buttons, what are the best practices?

    - by XMLforDummies
    Imagine you will always have a button labeled "Continue" in the same position in your app's GUI. Would you rather make a single button instance that takes different actions depending on the current state?

        private State currentState = State.Step1;

        private void ContinueButton_Click()
        {
            switch (currentState)
            {
                case State.Step1:
                    DoThis();
                    currentState = State.Step2;
                    break;
                case State.Step2:
                    DoThat();
                    break;
            }
        }

    Or would you rather have something like this?

        public Form()
        {
            this.ContinueStep2Button.Visible = false;
        }

        private void ContinueStep1Button_Click()
        {
            DoThis();
            this.ContinueStep1Button.Visible = false;
            this.ContinueStep2Button.Visible = true;
        }

        private void ContinueStep2Button_Click()
        {
            DoThat();
        }

    Read the article

  • What attracts software developers such as yourselves to choose to program for the Android mobile platform?

    - by Hasnan Karim
    Dear programmers, as part of my final-year university project, I am conducting research into what makes programmers prefer to program for Android as opposed to other mobile operating systems. The description does not need to be detailed; however, I am trying to find patterns among programmers to determine what properties (other than money) a platform such as Android must have in order to attract programmers and therefore grow.

    Read the article

  • Is Linq having a mind-numbing effect on .NET programmers?

    - by Aaronaught
    A lot of us started seeing this phenomenon with jQuery about a year ago when people started asking how to do absolutely insane things like retrieve the query string with jQuery. The difference between the library (jQuery) and the language (JavaScript) is apparently lost on many programmers, and results in a lot of inappropriate, convoluted code being written where it is not necessary. Maybe it's just my imagination, but I swear I'm starting to see an uptick in the number of questions where people are asking to do similarly insane things with Linq, like find ranges in a sorted array. I can't get over how thoroughly inappropriate the Linq extensions are for solving that problem, but more importantly the fact that the author just assumed that the ideal solution would involve Linq without actually thinking about it (as far as I can tell). It seems that we are repeating history, breeding a new generation of .NET programmers who can't tell the difference between the language (C#/VB.NET) and the library (Linq). What is responsible for this phenomenon? Is it just hype? Magpie tendencies? Has Linq picked up a reputation as a form of magic, where instead of actually writing code you just have to utter the right incantation? I'm hardly satisfied with those explanations but I can't really think of anything else. More importantly, is it really a problem, and if so, what's the best way to help enlighten these people?

    Read the article

  • Writing Large Portions Of Code Then Debugging?

    - by The Floating Brain
    Lately I have been writing a game engine, and I have been writing a lot of "foundation stuff" (standard interfaces, modules, a message system, etc.), but I have noticed a pattern: a lot of the stuff is interdependent and I cannot debug until everything is done, hence I do not debug for about 3 to 5 hours at a time. I am wondering if this is an acceptable practice for this part of the project, and if not, whether anyone can give me some advice. -----Update-----: I downloaded some code metrics tools, and my program's cyclomatic complexity is 1.52, which as I understand it is good and should correlate with high cohesion; if I am wrong, please correct me.

    Read the article

  • Is reference to bug/issue in commit message considered good practice?

    - by Christian P
    I'm working on a project where we have the source control set up to automatically write notes in the bug tracker. We simply write the bug issue ID in the commit message, and the commit message is added as a note to the bug tracker. I can see only a few downsides to this practice: if sometime in the future the source code gets separated from the bug tracking software (or the reported bugs/issues are somehow lost), or if someone is looking through the commit history but doesn't have access to our bug tracker. My question is whether having a bug/issue reference in the commit message is considered good practice. Are there other downsides?

    Read the article

  • if/else statements or exceptions

    - by Thaven
    I'm not sure whether this question fits better on this board or on Stack Overflow, but my question is about practices rather than a specific problem. So, consider an object that does something, and this something can (but should not!) go wrong. This situation can be resolved in two ways. First, with exceptions:

        DoSomethingClass exampleObject = new DoSomethingClass();
        try
        {
            exampleObject.DoSomething();
        }
        catch (ThisCanGoWrongException ex)
        {
            [...]
        }

    And second, with an if statement:

        DoSomethingClass exampleObject = new DoSomethingClass();
        if (!exampleObject.DoSomething())
        {
            [...]
        }

    The second case in a more sophisticated form:

        DoSomethingClass exampleObject = new DoSomethingClass();
        ErrorHandler error = exampleObject.DoSomething();
        if (error.HasError)
        {
            if (error.ErrorType == ErrorType.DivideByPotato)
            {
                [...]
            }
        }

    Which way is better? On the one hand, I have heard that exceptions should be used only for truly unexpected situations, and that if the programmer knows something may happen, they should use if/else. On the other hand, Robert C. Martin wrote in his book Clean Code that exceptions are far more object-oriented and simpler to keep clean.

    Read the article

  • Would you use UML if it kept stakeholders from requesting changes frequently?

    - by Huperniketes
    As much as programmers hate to document their code/system and draw UML (especially Sequence, Activity and State machine diagrams) or other diagramming notation, would you agree to do it if it kept managers from requesting a "minor change" every couple of weeks? IOW, would you put together visual models to document the system if it helped you demonstrate to managers what the effects of changes are and why it takes so long to implement them? (Edited to help programmers understand what type of answer I'm looking for.) 2nd edit: Restating my question again, "Would you be willing to use some diagramming notation, against your better nature as a programmer, if it helped you manage change requests?" This question isn't asking if there might be something wrong with the process. It's a given that there's something wrong with the process. Would you be willing to do more work to improve it?

    Read the article

  • Appropriate level of granularity for component-based architecture

    - by Jon Purdy
    I'm working on a game with a component-based architecture. An Entity owns a set of Component instances, each of which has a set of Slot instances with which to store, send, and receive values. Factory functions such as Player produce entities with the required components and slot connections. I'm trying to determine the best level of granularity for components. For example, right now Position, Velocity, and Acceleration are all separate components, connected in series. Velocity and Acceleration could easily be rewritten into a uniform Delta component, or Position, Velocity, and Acceleration could be combined alongside such components as Friction and Gravity into a monolithic Physics component. Should a component have the smallest responsibility possible (at the cost of lots of interconnectivity) or should related components be combined into monolithic ones (at the cost of flexibility)? I'm leaning toward the former, but I could use a second opinion.
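
    To make the two extremes concrete, here is a rough C++ sketch (all names invented for illustration) of the same motion data sliced fine-grained versus folded into one monolithic Physics component:

        struct Vec2 { float x = 0.0f, y = 0.0f; };

        // Fine-grained: each concern is its own component; a system chains them per frame.
        struct Position     { Vec2 value; };
        struct Velocity     { Vec2 value; };
        struct Acceleration { Vec2 value; };

        void integrateFine(Position& p, Velocity& v, const Acceleration& a, float dt) {
            v.value.x += a.value.x * dt;   v.value.y += a.value.y * dt;
            p.value.x += v.value.x * dt;   p.value.y += v.value.y * dt;
        }

        // Coarse-grained: one Physics component owns all the related state and its update.
        struct Physics {
            Vec2  position, velocity, acceleration;
            float friction = 0.0f, gravity = 9.8f;

            void integrate(float dt) {
                velocity.x += acceleration.x * dt;
                velocity.y += (acceleration.y - gravity) * dt;
                velocity.x *= 1.0f - friction * dt;
                velocity.y *= 1.0f - friction * dt;
                position.x += velocity.x * dt;
                position.y += velocity.y * dt;
            }
        };

    The fine-grained version lets an entity have a Position with no Velocity at all (a static prop) and lets Friction or Gravity be attached per entity, at the price of more slot wiring; the coarse one keeps the data and the update together but forces every physical entity to carry the whole block.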

    Read the article

  • Why Adobe Air is so underrated for building mobile apps?

    - by Marcelo de Assis
    I have worked with Adobe Flash-related technologies for the last 5 years, although I am not a big fan of Adobe. I see some little bugs happening in some apps, but I cannot imagine why a lot of big companies do not even consider Adobe AIR as a good technology for their mobile apps. I see a lot of mobile developer positions asking for experts in Android or iOS, but far fewer positions asking for Adobe AIR, even though AIR apps have the advantage of being multi-platform, with the same app working on BlackBerry, iOS and Android. It is so much easier to develop a game using Flash than using the Android SDK, for example. Does AIR really have flaws (that I have never seen), or is it just some kind of mass prejudice? I would also like to hear what a project manager or an indie developer considers when choosing a platform for building apps.

    Read the article

  • What's a nice explanation for pointers?

    - by Macneil
    In your own studies (on your own, or for a class) did you have an "ah ha" moment when you finally, really understood pointers? Do you have an explanation you use for beginner programmers that seems particularly effective? For example, when beginners first encounter pointers in C, they might just add &s and *s until it compiles (as I myself once did). Maybe it was a picture, or a really well motivated example, that made pointers "click" for you or your student. What was it, and what did you try before that didn't seem to work? Were any topics prerequisites (e.g. structs, or arrays)? In other words, what was necessary to understand the meaning of &s and *, when you could use them with confidence? Learning the syntax and terminology or the use cases isn't enough, at some point the idea needs to be internalized. Update: I really like the answers so far; please keep them coming. There are a lot of great perspectives here, but I think many are good explanations/slogans for ourselves after we've internalized the concept. I'm looking for the detailed contexts and circumstances when it dawned on you. For example: I only somewhat understood pointers syntactically in C. I heard two of my friends explaining pointers to another friend, who asked why a struct was passed with a pointer. The first friend talked about how it needed to be referenced and modified, but it was just a short comment from the other friend where it hit me: "It's also more efficient." Passing 4 bytes instead of 16 bytes was the final conceptual shift I needed.
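
    For what it's worth, the detail in that last anecdote (passing an address instead of the whole struct) is also the easiest thing to show concretely when pointers finally need to "click". A small C++ sketch, with the struct and the numbers invented for illustration (a pointer is 4 bytes on 32-bit platforms, 8 on 64-bit):

        #include <cstdio>

        struct Vec4 { float x, y, z, w; };            // 16 bytes of payload

        // By value: the whole 16-byte struct is copied; changes inside stay local.
        void moveByValue(Vec4 v, float dx)   { v.x += dx; }

        // By pointer: only an address (4 or 8 bytes) is copied; '->' follows it back
        // to the caller's object, so the change is visible outside.
        void moveByPointer(Vec4* v, float dx) { v->x += dx; }

        int main() {
            Vec4 v{1.0f, 2.0f, 3.0f, 4.0f};
            Vec4* p = &v;                              // '&' takes the address of v

            std::printf("sizeof(Vec4)=%zu, sizeof(p)=%zu\n", sizeof(Vec4), sizeof p);

            moveByValue(v, 10.0f);                     // v.x is still 1 afterwards
            moveByPointer(&v, 10.0f);                  // v.x is now 11
            std::printf("v.x=%f, (*p).x=%f\n", v.x, (*p).x);  // '*' dereferences the pointer
            return 0;
        }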

    Read the article

  • Get Hands On with Raspberry Pi via Free OS-Building Course

    - by Jason Fitzpatrick
    Cambridge University is now offering a free 12-segment course that will guide you through building an OS from scratch for the tiny Raspberry Pi development board–learn the ins and outs of basic OS design on the cheap. You’ll need a Raspberry Pi board, a computer running Windows, OS X, or Linux, and an SD card, as well as a small amount of free software. The 12-part tutorial starts you off with basic OS theory and then walks you through basic control of the board, graphics manipulation, and, finally, creating a command line interface for your new operating system. Hit up the link below to read more and check out the lessons. Baking Pi – Operating Systems Development

    Read the article

  • developers-designers-testers interaction [closed]

    - by user29124
    Sorry for my bad English; you may not want to read this and waste your time, because it is just the lament of a layman developer... It seems no one wants to learn anything at my workplace. We have the Mantis bug tracker, but our testers use Google Docs for reports, and only developers and the team lead report bugs in Mantis. We have SVN for version control and use Smarty as a template system, but our designers give us pure HTML in archives (sometimes it's ugly for programmers, but mostly it's OK), and changes to the design made by programmers go nowhere (I mean the designers use their own obsolete HTML and CSS most of the time). We have a testing environment, but designers don't even have restricted accounts for it, so we can only ask them where to look for the problem and then investigate it ourselves (and make changes to the CSS ourselves, which go nowhere most of the time...). I will not mention legacy code without documentation, tests, or any requirements, just the absence of real interaction in the programmers-designers-testers triangle. I'm not talking about using HAML, SASS, continuous integration, or anything else, just about all participants in the development process using the basic tools. Maybe the absence of communication is not a problem in short projects that finish in 2 months, but it is in the kinds of projects that last for years. Any comments please...

    Read the article

  • Why not have a High Level Language based OS? Are Low Level Languages more efficient?

    - by rtindru
    Without being presumptuous, I would like you to consider this possibility. Most OSes today are based on pretty low-level languages (mainly C/C++). Even the new ones such as Android use JNI, and the underlying implementation is in C. In fact (this is a personal observation), many programs written in C run a lot faster than their high-level counterparts (e.g. Transmission, a BitTorrent client on Ubuntu, is a whole lot faster than Vuze (Java) or Deluge (Python)). Even Python interpreters are written in C, although PyPy is an exception. So is there a particular reason for this? Why is it that all our so-called "high-level languages" with the great "OOP" concepts can't be used to make a solid OS? So I basically have two questions. Why are applications written in low-level languages more efficient than their HLL counterparts? Do low-level languages perform better simply because they are low-level and translate to machine code more easily? And why do we not have a full-fledged OS based entirely on a high-level language?

    Read the article

  • Exploring your database schema with SQL

    In the second part of Phil's series of articles on finding stuff (such as objects, scripts, entities, metadata) in SQL Server, he offers some scripts that should be handy for the developer faced with tracking down problem areas and potential weaknesses in a database.

    Read the article

  • How to have a maintainable and manageable Javascript code base

    - by dade
    I am starting a new job soon as a frontend developer. The app I would be working on is 100% JavaScript on the client side; all the server returns is an index page that loads all the JavaScript files needed by the app. Now here is the problem: the whole application is built around functions wrapped in different namespaces, and from what I see, something as simple as rendering the HTML of a page can involve calls to 2 or more functions across different namespaces... My initial thought was "this does not feel like the perfect solution", and I can envisage a lot of issues with maintaining the code and extending it down the line. I will soon start working on taking the project forward and would like suggestions on good practices for writing and managing a relatively large amount of JavaScript code.

    Read the article
