Search Results

Search found 24037 results on 962 pages for 'every'.


  • Is it possible to add hardware on a remote machine and use it as if it was installed on the local machine?

    - by that0th3rGuy
    Probably a silly question. We have two computers in the apartment, one running Windows 7 and one running Windows Vista. 99% of the time we use headphones, but every now and then we want some ambiance, so we got a set of 2.1 speakers. Now only one of us has access to the speakers, unless we move them around every now and then. So I was wondering: is it possible to add the other computer's sound card as a hardware device on my computer, so that I can configure, for instance, Winamp to play through the other computer's sound card, and hence through the speakers connected to that computer?

    Read the article

  • Should we test all our methods?

    - by Zenzen
    So today I had a talk with my teammate about unit testing. The whole thing started when he asked me, "Hey, where are the tests for that class? I see only one." The whole class was a manager (or a service, if you prefer to call it that), and almost all of its methods simply delegate to a DAO, similar to:

        SomeClass getSomething(parameters) {
            return myDao.findSomethingBySomething(parameters);
        }

    A kind of boilerplate with no logic (or at least I do not consider such simple delegation to be logic), but a useful boilerplate in most cases (layer separation etc.). We had a rather lengthy discussion about whether or not I should unit test it (it is worth mentioning that I did fully unit test the DAO). His main arguments were that it was not TDD (obviously), that someone might want to see the test to check what the method does (I do not know how it could be more obvious), and that in the future someone might want to change the implementation and add new (or rather "any") logic to it (in which case, I guess, someone should simply test that logic).

    This made me think, though. Should we strive for the highest test coverage percentage? Or is it simply art for art's sake? I simply do not see any reason behind testing things like:

    - getters and setters (unless they actually have some logic in them)
    - "boilerplate" code

    Obviously a test for such a method (with mocks) would take me less than a minute to write, but I guess that is still time wasted, plus a millisecond longer for every CI run. Are there any rational, non-"flammable" reasons why one should test every single line of code (or as many as one can)?
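
    For concreteness, here is roughly what the contested test would look like; a minimal sketch assuming JUnit 4 and Mockito, where SomeManager, MyDao and constructor injection of the DAO are placeholders taken from the snippet above, not real code:

        import static org.junit.Assert.assertSame;
        import static org.mockito.Mockito.*;

        import org.junit.Test;

        public class SomeManagerTest {
            @Test
            public void getSomethingDelegatesToDao() {
                // Mock the DAO and stub the call the manager is expected to forward.
                MyDao myDao = mock(MyDao.class);
                SomeClass expected = new SomeClass();
                when(myDao.findSomethingBySomething("key")).thenReturn(expected);

                // Assumes the manager takes its DAO via constructor injection.
                SomeClass actual = new SomeManager(myDao).getSomething("key");

                // The test pins down nothing but the delegation itself.
                assertSame(expected, actual);
                verify(myDao).findSomethingBySomething("key");
            }
        }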

    Read the article

  • Example of DOD design

    - by Jeffrey
    I can't seem to find a nice explanation of Data-Oriented Design for a generic zombie game (it's just an example, a pretty common one). Could you give an example of Data-Oriented Design applied to a generic zombie class? Is the following good?

    Zombie list class:

        class ZombieList {
            GLuint vbo;                        // shared zombie vertex model
            std::vector<color> colors;         // per-object color
            std::vector<texture> textures;     // per-object texture
            std::vector<vector3D> positions;   // per-object position
        public:
            unsigned int create();             // returns an object id
            void move(unsigned int objId, vector3D offset);
            void rotate(unsigned int objId, float angle);
            void setColor(unsigned int objId, color c);
            void setPosition(unsigned int objId, vector3D pos);
            void setTexture(unsigned int objId, texture t);
            ...
            void update(Player*);              // move towards player, attack if near
        };

    Example:

        Player p;
        ZombieList zl;
        unsigned int first = zl.create();
        zl.setPosition(first, vector3D(50, 50));
        zl.setTexture(first, texture("zombie1.png"));
        ...
        while (running) {  // main loop
            ...
            zl.update(&p);
            zl.draw();     // draw every zombie
        }

    Or would creating a generic World container be a good example? It would contain every action, from bite(zombieId, playerId) to moveTo(playerId, vector) to createPlayer() to shoot(playerId, vector) to face(radians)/face(vector), and hold:

        std::vector<zombie> zombies;
        std::vector<player> players;
        ...
        std::vector<mapchunk> chunks;
        ...
        std::vector<vbobufferid> player_run_animation;
        ...

    What's the proper way to organize a game with DOD?

    Read the article

  • VeriSign certificate into JBoss server SSL

    - by rfders
    I'm trying to enable JBoss to use SSL with a previously generated certificate from VeriSign. I imported both certificates (the server certificate and the CA certificate) into the keystore file, and I configured server.xml to use that keystore and activate SSL. When I run JBoss, I get this error: "No available certificate or key corresponds to the SSL cipher suites which are enabled."

    Question: reading some posts on the internet, I found that every example starts by generating a Certificate Signing Request (CSR). Is it strictly necessary to do that if I already have the server certificate, and does the CSR have to go through the keystore as well? At this point I'm very confused about this issue. I have tried almost every solution posted in several forums, but so far I haven't had any luck. Can you give me some tips to solve this problem? Thanks in advance.

    This is my keystore file:

        Keystore type: jks
        Keystore provider: SUN

        Your keystore contains 2 entries

        j2ee, Dec 29, 2009, trustedCertEntry,
        Certificate fingerprint (MD5): 69:CC:2D:2A:2D:EF:C4:DB:A2:26:35:57:06:29:7D:4C
        ugent, Dec 29, 2009, trustedCertEntry,
        Certificate fingerprint (MD5): AC:D8:0E:A2:7B:B7:2C:E7:00:DC:22:72:4A:5F:1E:92

    And my server.xml configuration:
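
    For reference, a sketch of the keytool flow those examples assume; alias and file names below are placeholders, not values from this setup. The reason the CSR step matters is that generating the key pair for it is what puts a private-key entry into the keystore, and the signed certificate must then be imported onto that same alias. A keystore whose listing shows only trustedCertEntry entries, as above, contains no private key (keytool would show one as keyEntry or PrivateKeyEntry), which matches the error:

        keytool -genkey -alias server -keyalg RSA -keystore server.keystore      # creates the private-key entry
        keytool -certreq -alias server -file server.csr -keystore server.keystore  # CSR to send to VeriSign
        keytool -import -trustcacerts -alias verisignca -file ca.cer -keystore server.keystore
        keytool -import -alias server -file server.cer -keystore server.keystore   # certificate reply, same alias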

    Read the article

  • A new client comes into my web agency. How do I configure email and social accounts to work better? [on hold]

    - by Marco Panichi
    I have been creating websites for many years, but I still have not found the right way to organize all the email and social accounts of every client. Every web agency follows dozens of customers. Each client needs at least Google Analytics, AdWords, a Facebook page, a Twitter profile, a YouTube channel, probably a listing on Google Places, and maybe a MailChimp (or similar) account.

    The web agency, in my opinion, must own these accounts, use them to deliver results to the customer, and of course make them available to the customer, for two reasons:

    - The customer must be able to see how things are going
    - The client must be able to change web agency without suffering

    The web agency, however, has many problems in holding all of these accounts. For example, I like the idea of having a Gmail account for each client and using all of Google's products from that account. But it is not possible to create more than a few Gmail accounts from the same IP address and with the same phone number. The web agency could invite the customer to create his own accounts, but:

    - This is not necessarily a value for the customer (indeed...)
    - The web agency would still manage them from the same IP address, running into the same problems
    - If phone verification occurs, the web agency has to disturb the customer for verification

    Do you have the same problem? How do you solve it?

    Read the article

  • How do I remove a URL from Google without having a Google e-mail account?

    - by PP
    Really simple question: I do not want a Google account. I just want Google to stop making requests every 2 minutes for a URL it should never have known about (apparently Google harvests URLs from search requests as well as private e-mails, not just from actual web pages). But when I search Google's help for removing URLs, it appears I have to use their "webmaster tools", which require logging into a GMail account! How do I tell Google not to index my URL without becoming a customer?

    Note: I already return 404 for the URLs in question using a rewrite rule. This appears to make zero difference to the crawler, which continually attempts to fetch the page every 2 minutes.
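
    For reference, a sketch of that kind of rewrite rule, assuming Apache mod_rewrite; the path is a placeholder. Google is generally reported to drop URLs that return 410 (Gone) faster than ones that return 404, so the [G] flag may be worth trying:

        # Answer requests for the unwanted URL with "410 Gone" instead of 404.
        RewriteEngine On
        RewriteRule ^unwanted/page$ - [G,L]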

    Read the article

  • Tomcat + Spring + CI workflow

    - by ex3v
    We're starting our very first project with Spring and the Java web stack. The project is mainly a rewrite of a rather large ERP/CRM from Zend Framework to Java. An important factor in my question is that I come from PHP territory, where things (in terms of quality) tend to look different than in the Java world.

    Facts:

    - there will be 2-3 developers
    - at least one developer uses Windows, the rest use Linux
    - there is one remote Linux-based machine, which should host the test and production instances
    - after struggling with buggy legacy code, we want to introduce good programming and development practices (CI, tests, clean code and so on)
    - the client is internal, business logic changes frequently, we use scrum and deploy daily

    What I want to achieve is a good workflow across as many development stages as possible (coding - committing - testing - deploying). The problem is that I've never done this before, so I don't know the best practices. What I have so far:

    - developers code locally
    - there is a Vagrant instance on every development machine, managed by Puppet; it contains the same Linux, Jenkins and Tomcat versions as the production machine
    - while coding, a developer deploys to the Vagrant machine
    - after a local merge to the test branch, Jenkins on the Vagrant box runs the tests
    - when everything is fine, the developer pushes the commits and merges
    - Jenkins on the remote machine pulls from the test branch, runs the tests and so on; if everything looks green, Jenkins deploys to the test Tomcat instance

    Deployment to production is manual (although it can be done with helper scripts), once the business logic has been tested by other divisions and everything looks fine to the client.

    Now, the real question: does the above make any sense? Things I'm not sure about:

    - Remote machine: won't there be problems with two (or even three, as Jenkins might need one) instances of the same app on Tomcat?
    - Using Vagrant to develop in a PHP environment is simply wise. Isn't it overkill with Tomcat? I mean, isn't there a higher probability that Tomcat will behave the same on every machine anyway?
    - Is there any sense in having a local Jenkins on Vagrant?

    Read the article

  • Import Google Voice Contacts Into iPhone

    - by Nalandial
    What I'd like to do is have my Google Voice contacts available on my iPhone, not the other way around. I recently had to restore the phone to factory defaults, and it's a pain to enter them all manually. When I add a new GMail account on my iPhone, it won't let me import contacts from my Google account; but even if it did, I don't want every single contact on my phone. Google, for some reason, adds every single person I've ever sent e-mail to into my contacts list, which, as you can imagine, is quite a large list by now. Does anyone have any suggestions for how to do this?

    Read the article

  • Is it a good idea to require committing only working code?

    - by Astronavigator
    Sometimes I hear people say things like "all committed code must be working". In some articles people even describe how to create svn or git hooks that compile and test the code before a commit. In my company we usually create one branch per feature, and one programmer usually works in that branch. I often make non-compilable commits (about 1 per 100, I think, and with good reason).

    It seems to me that the requirement of "always compilable/stable" commits conflicts with the idea of frequent commits. A programmer would rather make one commit a week than test the whole project's stability/compilability ten times a day. For compilable code only, I use tags and selected branches (trunk etc.).

    I see these reasons to commit code that is not fully working or not compilable:

    - If I am developing a new feature, it is hard to make it work by writing only a few lines of code.
    - If I am editing a feature, it is again sometimes hard to keep the code working the whole time.
    - If I am changing a function's prototype or interface, I may have to make hundreds of changes, not mechanical changes but intellectual ones. Any one of them could warrant its own commit (but if I want all commits to be stable, I should make 1 commit instead of 100).

    In all these cases, making only stable commits means making commits containing very many changes, and it becomes very hard to find out "what happened in this commit?". Another aspect of the problem is that compiling code gives no guarantee of proper behavior.

    So, is it a good idea to require every commit to be stable/compilable? Does it depend on the branching model or the VCS? In your company, is it forbidden to make non-compilable commits? Is it (and why) a bad idea to use only selected branches (including trunk) and tags for stable versions?
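
    For reference, the hook approach mentioned above can be as small as this; a sketch assuming git and a Maven build (swap in whatever build command the project actually uses):

        #!/bin/sh
        # .git/hooks/pre-commit: reject the commit if the build or tests fail.
        # Note: this checks the working tree, not only the staged changes.
        mvn -q clean test || {
            echo "Build or tests failed; commit rejected." >&2
            exit 1
        }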

    Read the article

  • How do I left-click a Java application on a WeTab running Ubuntu 12.10? (workaround for a defect in Onboard)

    - by Kat Amsterdam
    I installed Ubuntu 12.10 on my WeTab. Everything works perfectly (albeit slowly), and I can touch and use every application except ones written in Java. When I start any Java application, the touchscreen does not recognize the left click. I believe it's a problem in Onboard (the on-screen keyboard), because when I touch the mouse icon on Onboard and then the Java application, the left click works. It is very cumbersome, for every click, to first hit the Onboard mouse icon and then the button in the Java app I want to click; it defeats the purpose of a touchscreen. The Java application is definitely touchable, as it runs on 10 other machines with Elo touchscreens.

    How do I get Ubuntu to recognize the left click in a Java application automatically when I touch the screen? Or is there a way to diagnose this so I can file a clear bug report? This happens in all the desktop environments (GNOME/Unity, XFCE4 and LXDE), and I tried both openjdk-6-* and openjdk-7-*.

    Stats:

    - WeTab 32GB 3G, 2GB RAM
    - Intel(R) Atom(TM) CPU N450 @ 1.66GHz, 64-bit
    - Ubuntu 12.10, 64-bit
    - Unity, Xubuntu and Lubuntu desktop environments
    - The real touchscreen driver from EETI (eGalaxy); it also didn't work with the standard Ubuntu touchscreen driver

    Read the article

  • Server-side Architecture for Online Game

    - by Draiken
    Basically, I have a game client that has to communicate with a server for almost every action it takes. The game is in Java (using LWJGL), and now I will start writing the server. Normally one client communicates with the server alone, but later on I will need several clients to work together for some functionality. I've already read that the authentication server should be separated, and I intend to do that.

    The problem is that I am completely inexperienced in this kind of server-side programming; all I've ever written are JSF web applications. I imagine I'll use socket connections for pretty much all game communication, since HTTP is very slow, but I still don't really know where to start on my server. I would appreciate reading material or guidelines on where to start, what architecture a game server should have, and maybe some suggestions on frameworks that could help with the client-server communication. I've looked into JNAG, but I have no experience with this kind of thing, so I can't really tell whether it is a solid and well-built messaging layer. Any help is appreciated. Thanks!

    EDIT: A little more information about the game. It is intended to be a very complex game with several functionalities, making some functionalities a "program" inside the program. It is not a usual game, like an FPS or RPG, but I intend on having a lot of users using these many different "programs" inside the game. If I wasn't clear enough, I'd really appreciate hearing from people who have already developed games with a Java client/server architecture: how they communicated, any frameworks, APIs, messaging systems, etc. It is not a question of lack of knowledge of the language; it is more a request for advice, so I don't end up creating something that works but later has to be rewritten for some limiting reason.

    PS: sorry if my English is not perfect...
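
    Since the question is where to start: below is a bare-bones sketch of the blocking-socket starting point, one thread per client with a line-based protocol. This illustrates the general shape only, under the assumption of a plain TCP echo-style exchange; a real server for many users would move to java.nio or a library such as Netty.

        import java.io.*;
        import java.net.*;

        public class GameServer {
            public static void main(String[] args) throws IOException {
                try (ServerSocket server = new ServerSocket(5000)) {
                    while (true) {
                        Socket client = server.accept();          // one connection per player
                        new Thread(() -> handle(client)).start(); // thread-per-client for simplicity
                    }
                }
            }

            private static void handle(Socket client) {
                try (BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()));
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        out.println("ack: " + line);  // placeholder for real game messages
                    }
                } catch (IOException ignored) {
                    // client disconnected; a real server would clean up its state here
                }
            }
        }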

    Read the article

  • High load on a Nagios server -- how many service checks for a Nagios server is too many?

    - by Josh
    I have a Nagios server running Ubuntu with a 2.0 GHz Intel processor, a RAID10 array, and 400 MB of RAM. It monitors a total of 42 services across 8 hosts, most of them checked using the check_http plugin every 5 minutes, some every minute. Recently the load on the Nagios server has been above 4, often as high as 6. The server also runs Cacti, gathering statistics every minute for 6 hosts. I wonder, how many services should hardware like this be able to handle? Is the load so high because I am pushing the limits of the hardware, or should this hardware be able to handle 42 service checks plus Cacti? If the hardware is inadequate, should I look to add more RAM, more cores, or faster cores? What hardware, and how many service checks, are others running?

    Read the article

  • Knowing the state of a game in real time

    - by evthim
    I'm trying to code a tic-tac-toe game in Java, and I need help figuring out how to check efficiently, and without freezing the program, whether someone has won. I'm only in the design stage now; I haven't started programming anything, but I'm wondering how I can know the state of the game at all times and detect exactly when someone wins.

    Response to MarkR (note: I had to place this here, it was too long for the comment section): It's not a homework problem. I'm trying to get more practice programming GUIs, which I've only done once, as a freshman in my second introductory programming course. I understand I'll have a 2D array; I plan to have a 2D integer array where X would equal 1 and O would equal 0. However, won't it take too much time to check after every move whether someone has won? Is there a data structure or algorithm I can use so that the program will know the state of the game at all times (by "state" I mean not just every position on the board, which the int array takes care of, but knowing, say, that user 1 will win if he places an X on this square) and thus know automatically when someone has won?
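
    For a sense of scale, here is a minimal sketch of the per-move check, assuming the 1/0 encoding described above plus a third value (say -1) for empty cells; these assumptions are mine, not part of the original design. After a move at (row, col), only that row, that column, and the two diagonals can have completed a line, so the check touches at most a dozen cells on a 3x3 board and cannot freeze anything:

        // Returns true if `player` (1 for X, 0 for O) has a completed line
        // after moving at (row, col); empty cells are assumed to hold -1.
        static boolean wins(int[][] board, int player, int row, int col) {
            int n = board.length;  // 3 for classic tic-tac-toe
            boolean rowWin = true, colWin = true, diagWin = true, antiWin = true;
            for (int i = 0; i < n; i++) {
                rowWin  &= board[row][i] == player;        // the move's row
                colWin  &= board[i][col] == player;        // the move's column
                diagWin &= board[i][i] == player;          // main diagonal
                antiWin &= board[i][n - 1 - i] == player;  // anti-diagonal
            }
            return rowWin || colWin || diagWin || antiWin;
        }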

    Read the article

  • PDFtk Password Protection Help

    - by Dave W.
    I am using Ubuntu 11.10 and am looking for a way to password-protect a bunch of PDF files in a directory in batch. I came across PDFtk, and it looks like it might do what I need, but I've reviewed the PDFtk command-line examples and can't figure out whether there is a way to do it in batch without individually specifying the output file name for every file. I'm hoping a command-line guru can look at the PDFtk syntax and tell me if there is some trick or command that will let me password-protect a directory of PDF files (e.g., *.pdf) and either overwrite the existing files using the same names or consistently rename the output files, without specifying each output name individually. Here's a link to the PDFtk command-line examples page: http://www.pdflabs.com/tools/pdftk-the-pdf-toolkit/ Thanks for your help.

    I think I've answered my own question. Here's a bash script that appears to do the trick. (My first draft's directory check only echoed whether the directory existed and never actually created it; the version below creates the directory only when it is missing.)

        #!/bin/bash
        # Created by Dave, 2012-02-23
        # This script uses PDFtk to password-protect every PDF file
        # passed on the command line. It creates a directory named
        # "protected_[DATE]" to hold the password-protected versions.
        #
        # I'm using the "user_pw" parameter, which means no one will be
        # able to open or view a file without the password.
        #
        # PDFtk must be installed for this script to work.
        #
        # Usage: ./protect_with_pdftk.bsh [FILE(S)]
        # [FILE(S)] can use wildcard expansion (e.g., *.pdf)

        TARGET_DIR="protected_$(date +%F)"

        # Create the output directory only if it does not already exist.
        if [ ! -d "$TARGET_DIR" ]; then
            mkdir "$TARGET_DIR"
        fi

        for i in "$@"; do
            pdftk "$i" output "$TARGET_DIR/$i" user_pw [PASSWORD]
        done

        echo "Complete. Output is in the directory: $TARGET_DIR"

    Read the article

  • "log on as a batch job" user rights removed by what GPO?

    - by LarsH
    I am not much of a server administrator, but I get my feet wet when I have to. Right now I'm running some COTS software on a Windows 2008 Server machine. The software installer creates a few user accounts for running its processes, and then gives those users the right to "log on as a batch job". Every so often (e.g. yesterday at 2:52pm and this morning at 7:50am), those rights disappear, and the software stops working. I can verify that the user rights are gone by running:

        secedit /export /cfg e:\temp\uraExp.inf /areas USER_RIGHTS

    I have a script that does this every 30 seconds and logs the results with a timestamp, so I know when the rights disappear. What I see from the export is that in the "good" state, i.e. after I install the software and it is working correctly, the line for SeBatchLogonRight includes the user accounts created by the software. But every few hours (sometimes more often), those user accounts are removed from that line. The same thing can be seen with the GUI tool under Local Security Policy > Security Settings > Local Policies > User Rights Assignment > Log on as a batch job: in the "good" state, that policy includes the needed user accounts, and in the bad state it does not.

    Based on the above-mentioned logging script and the timestamps at which the user rights are being removed, I can see clearly that some GPO is causing the change. The GPO Operational log shows GPOs being processed at exactly the right times, e.g.:

        Starting Registry Extension Processing.
        List of applicable GPOs: (Changes were detected.)
        Local Group Policy

    I have run GPO processing on demand using gpupdate /force, and was able to verify that this caused the user rights to be removed. We have looked over local group policies till our eyes are crossed, trying to figure out which one might be stripping these user rights to "log on as a batch job". We have not configured any local group policies on this machine, that we know of; so is there a default local group policy that might typically do such a thing? Are there typical domain policies that would do this? I have been working with our IT staff colleagues to troubleshoot the problem, but none of them are really GPO experts; they wear many hats, and they do what they need to do to keep most things running. Any suggestions would be greatly appreciated!

    Read the article

  • Read ahead buffering while playing video file from optical disc

    - by Saxtus
    I was wondering if there is a program for Windows 7 (64-bit) that reads the file to be played back (usually MKV files in my case) ahead into RAM, so that the disc the video file resides on won't spin for the entire duration of the playback, but only whenever the cache is almost empty (with a cache big enough that the drive isn't needed for long periods). A program I used in the past (called DVDIdle) did exactly that, universally, for every video player I wanted, but it stopped being updated 6 years ago and no longer seems to work with Windows 7 (I tried compatibility mode too). The method I am using now is to either let the drive wear down and buzz all the time, or manually copy the entire file to an HDD, SSD or RAM disk and play it from there. The closest thing I've found is software that slows down the drive's spin speed, but I was looking for something more convenient and automated, without waiting for an entire file to be copied before starting playback or needing any HDD space, like I was used to in the past. Thanks in advance.

    Read the article

  • Is it possible to use WebMatrix with pure IIS?

    - by Mike Christensen
    I'd like to check out WebMatrix for publishing our site to IIS automatically (right now, I have to zip it up, copy it out, Remote Desktop into the server, unzip it, etc.). However, every example I can find on how to set up WebMatrix involves Azure, or a .publishsettings file that you'd get from your hosting provider. I'm curious whether I can publish to a normal, everyday IIS server running on Windows Server 2008. So far, all I've done to the IIS server is install Web Deploy, which I believe is the protocol WebMatrix uses to publish. When I enter the Remote Site Settings screen, I select Enter settings. I select Web Deploy as the protocol, type in my NT domain credentials (I'm an admin on that server), and put the site URL in the Site Name and Destination URL fields. When I click Validate Connection, I get:

        [screenshot of the validation error]

    Am I doing something wrong, or is this just not possible to do?

    Read the article

  • WordPress - Open External Links in a New Window by Default

    - by ninjaboi21
    Hey SuperUsers, is it possible to make some change or add a plugin that makes every external link open in a new window? Just an example: if my blog were at http://timmy.com/ and I linked to http://tom.com/, it would open in a new window instead of the same window; but if I linked from http://timmy.com/ to http://timmy.com/somewhere/, it would open in the same window. So I want every external link from my website to another site to open in a new window/tab, but every internal link, from my website to somewhere else on my website, to open in the same window/tab. Is this possible?

    Read the article

  • apt-get update stuck on "Waiting for Headers"

    - by crasic
    I'm setting up a Maverick server on a spare PC. The install completes fine and the system boots into the shell. However, when I try to do an apt-get update, apt hangs on almost every entry with the message:

        99% [Waiting for headers]

    Sometimes a rate of 96 B/s appears on the far right; the actual percentage it reports also varies. Searching around online suggested a potential workaround, using the option:

        Acquire::http::Pipeline-Depth="0"

    This somewhat alleviates the problem, i.e. it stalls only on every other entry with the same message as above. If you wait it out (the whole update took about 4 hours), the update still fails, as a good portion of the hits show "unable to connect" or similar messages, despite the fact that I can ping the server from the PC just fine. The problem is also unrelated to the mirror used, since I've tried about a dozen mirrors with no success; I've even tried commenting out everything but the main entry in sources.list, and it still refuses to update. The network connection is fine, since I can ping and wget just fine (apt won't let me install lynx until I run a successful update). I've also reinstalled the distro with no luck. The only thing unusual about the setup is that the PC connects to the internet through my Windows laptop with ICS configured properly, but as I said, the network connection is fine.

    Read the article

  • Why do we have to use break in switch statements?

    - by trejder
    Who decided, and based on what concepts, that the switch construction (in many languages) has to be the way it is? Why do we have to use break in each case? Why do we have to write something like this:

        switch (a) {
            case 1:
                result = 'one';
                break;
            case 2:
                result = 'two';
                break;
            default:
                result = 'not determined';
                break;
        }

    I've noticed this construction in PHP and JS, but there are probably many other languages that use it. If switch is an alternative to if, why can't we use the same construction for switch as for if? I.e.:

        switch (a) {
            case 1: {
                result = 'one';
            }
            case 2: {
                result = 'two';
            }
            default: {
                result = 'not determined';
            }
        }

    It is said that break prevents execution of the blocks following the current one. But does anyone really run into a situation where there is a need to execute the current block and the following ones? I haven't. For me, break is always there. In every block. In every piece of code.
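
    For the record, the situation the last paragraph asks about does come up: several case labels sharing one block is the classic deliberate use of fall-through, and the break-based design is what makes it possible. A short sketch (Java here, but PHP and JS behave the same way):

        // Deliberate fall-through: each group of labels falls into the same
        // statement, so one block serves several cases.
        int month = 4, days;
        switch (month) {
            case 1: case 3: case 5: case 7: case 8: case 10: case 12:
                days = 31;
                break;
            case 4: case 6: case 9: case 11:
                days = 30;
                break;
            default:
                days = 28;  // February; leap years ignored in this sketch
                break;
        }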

    Read the article

  • Collision detection with non-rectangular images

    - by Adam Smith
    I'm creating a game, and I need to detect collisions between a character and parts of the environment. Since my character's frames are taken from a sprite sheet with a transparent background, I'm wondering how I should go about detecting collisions between a wall and my character only when the colliding parts are non-transparent in both images. I thought about first checking whether the rectangle the character is in touches the rectangle a tile is in, and then comparing the alpha channels, but then I have another choice to make. Either I test every single pixel against every single pixel in the other image and report a collision if one pair matches; that would be terribly inefficient. The other option would be to keep the x, y positions of the leftmost, rightmost, etc. non-transparent pixels of each image and compare those instead. The problem with this is that, for instance, the character's hand could be above a tile (in a transparent zone of the tile), while a pixel that is not the rightmost could touch part of the tile without being detected. Another problem is that in different frames, the leftmost, rightmost, etc. pixels might not be at the same positions. Should I not bother with all that and just check collisions on the rectangles? It would be simpler, but I'm afraid people will feel that collisions sometimes happen when they shouldn't.
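
    A middle ground between the two options is to run the per-pixel test only inside the overlap of the two bounding rectangles: the rectangle check rejects most pairs outright, and the pixel scan then touches only the overlapping region rather than every pixel against every pixel. A sketch, assuming java.awt.image.BufferedImage sprites with an alpha channel (the class and method names are mine, for illustration):

        import java.awt.Rectangle;
        import java.awt.image.BufferedImage;

        public final class PixelCollision {
            // True if sprites a (at ax, ay) and b (at bx, by) have at least one
            // pair of overlapping pixels that are both non-transparent.
            public static boolean collide(BufferedImage a, int ax, int ay,
                                          BufferedImage b, int bx, int by) {
                Rectangle overlap = new Rectangle(ax, ay, a.getWidth(), a.getHeight())
                        .intersection(new Rectangle(bx, by, b.getWidth(), b.getHeight()));
                if (overlap.isEmpty()) return false;  // bounding boxes don't even touch

                for (int y = overlap.y; y < overlap.y + overlap.height; y++) {
                    for (int x = overlap.x; x < overlap.x + overlap.width; x++) {
                        int alphaA = a.getRGB(x - ax, y - ay) >>> 24;  // alpha of sprite a
                        int alphaB = b.getRGB(x - bx, y - by) >>> 24;  // alpha of sprite b
                        if (alphaA > 0 && alphaB > 0) return true;     // both solid here
                    }
                }
                return false;
            }
        }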

    Read the article

  • Can Windows XP remember file icon position?

    - by g .
    Windows allows you to arrange icons however you like in a window. However, after some time, when I go back to the window, it has forgotten the icon positions and completely rearranged the icons. Is there a way to preserve the icon positions? In the previous version of Windows I used, this would happen on rare occasions (about every 6 months), and it might only move a couple of icons. Now it is happening practically every day, and it affects every single icon. Attempts to rearrange them manually are futile, and it is increasingly maddening. Searching Google, I have found a number of programs that can save and restore icon positions on the desktop; but this is a regular folder window, so I am not sure whether any of those would even work.

    Some details:

    - Windows XP SP3 running in a Parallels 5 virtual machine
    - Auto Arrange and Align to Grid are NOT turned on
    - Fresh OS install, and I am only setting icon positions in one window

    Read the article

  • SEO - different data with the same title and keywords

    - by Junaid Saeed
    Here is my scenario. I have a website where I redirect users based on the device they are using. Let's say a user is visiting from an iPad: I take him directly to the page of iPad wallpapers. The user selects his iPad version, and I take him to the gallery of wallpapers, where he can select and download any wallpaper, each at the required resolution (I have my reasons for doing this). Now, the thing is, there are different resolution versions of an image appearing in 5 different sections of my website, each with its own view page.

    There is only one record in the database table for the image, and based on my consistent naming convention for the images, I pick the required one. This means that when 5 different pages are generated in the 5 categorized sections of the website, then, because of the shared DB record, the keywords, the title and every single detail of the 5 pages are the same, besides the resolution of the image and the section-specific details each page has. The pages also have different paths, like:

        wallpapers.com\ipad-1\cars\Ferrari-dino.html
        wallpapers.com\ipad-2\cars\Ferrari-dino.html
        wallpapers.com\ipad-3\cars\Ferrari-dino.html
        wallpapers.com\ipad-4\cars\Ferrari-dino.html
        wallpapers.com\ipad-5\cars\Ferrari-dino.html

    Now this is my scenario. How do search engines see it, and how do they rank it? Is it a good, normal or bad SEO practice? If bad, how dangerous is it for my site's SEO? I need your comments on my scenario.
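
    One mechanism worth knowing about here (a sketch of the standard approach, not a claim about what this particular site should do): the canonical link element tells search engines that several near-duplicate URLs share one authoritative version, so only that one competes in rankings. Using the poster's example path, written as a URL:

        <!-- Placed in the <head> of each of the five duplicate pages; the
             href is the variant chosen as the authoritative copy. -->
        <link rel="canonical" href="http://wallpapers.com/ipad-1/cars/Ferrari-dino.html" />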

    Read the article
