Search Results

Search found 11703 results on 469 pages for 'dev to production'.

  • KVM to Xen migration

    - by qweet
    I've recently been appointed to create some VMs for production use, and went gung-ho into making a KVM-based VM instead of finding out what our production server uses. I've only recently found out that our own servers use XenSource, and it doesn't look like they're going to be upgraded in the near future. So for the moment I'm stuck with one of two choices: attempting to convert the KVM VM into a Xen VM, or rebuilding what I have as a new Xen VM. Being the lazy person I am, I would rather not have to rebuild the VM. I've looked for documentation on a procedure to do this, but the only thing I can come up with is an ancient article with some vague instructions. So this is my question, Server Fault: can one migrate a VM from a KVM kernel to a Xen kernel? And if so, how?
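
    For what it's worth, a rough sketch of one common route - assuming the guest disk is a qcow2 image, and with all paths and names below as placeholders - is to convert the image with qemu-img and point a Xen HVM domain config at the result:

        # Convert the KVM disk image to a raw image Xen can use (paths hypothetical)
        qemu-img convert -O raw /var/lib/libvirt/images/myvm.qcow2 /srv/xen/myvm.img

        # Minimal Xen HVM config for the converted guest, e.g. /etc/xen/myvm.cfg:
        #   name    = "myvm"
        #   builder = "hvm"
        #   memory  = 2048
        #   disk    = ['file:/srv/xen/myvm.img,xvda,w']
        #   vif     = ['bridge=xenbr0']

    A paravirtualized guest would additionally need a Xen-aware kernel inside the VM, so HVM is usually the path of least resistance for a conversion like this.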

  • Is Rsync like subversion, but for a server?

    - by johnlai2004
    I'm trying to learn how to use rsync. I want to create daily backups of my production server. Right now I run the command rsync -azr /var/www/* [email protected]:/var/www. Now let's say that one day I want to roll back the /var/www/ directory on my production server to last month's version. How do I tell rsync to retrieve version N? On reading that rsync only copies differences between src and dest, I assumed rsync works like Subversion, where you commit changes to a destination and keep track of every version, with the option to check out any version at any time. Is that the way rsync works - like Subversion, but for an entire server? That would be great, because then it means I don't have to do full ssh copies for my nightly backups.
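
    rsync by itself keeps no history; it only synchronizes the destination with the source. A common way to get Subversion-like point-in-time copies is --link-dest, which builds dated snapshots that hard-link unchanged files against the previous one. A sketch, with the host and paths as placeholders:

        # Nightly snapshot; unchanged files are hard links, so each snapshot
        # costs only the space of what actually changed that day
        TODAY=$(date +%F)
        YESTERDAY=$(date -d yesterday +%F)
        rsync -az --delete \
            --link-dest=/backups/www/$YESTERDAY \
            /var/www/ user@backuphost:/backups/www/$TODAY/

    Rolling back to last month is then just restoring from the snapshot directory for that date.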

  • symfony 1.4: doctrine build model warning

    - by tigerstyle
    Hi folks, I copied my sources from my local dev (where everything works fine) to my repository, and from there I did a checkout on my remote dev. Now when I try to build everything I get this error:

        devel:/var/www/myproject# ./symfony doc:build-model
        doctrine  generating model classes
        file+  /tmp/doctrine_schema_48726.yml
        Warning: file_get_contents(/var/www/myproject/lib/model/doctrine//base/BaseAdvert.class.php): failed to open stream: No such file or directory in /var/www/myproject/lib/vendor/symfony/lib/plugins/sfDoctrinePlugin/lib/task/sfDoctrineBuildModelTask.class.php on line 77

    Do you know what the problem could be? Thx for your answers :)

  • Oracle Logical Standby redo generation

    - by DCookie
    Oracle 10.2.0.4 database with a logical standby, on Win2K3. Recently a rather large delete operation was carried out on the production instance. I'm experiencing difficulty with the logical standby, in that it gets a couple of hundred archive logs (58M each) into the operation and then the apply process fails with an out-of-memory error. Unfortunately, every time it fails it has to restart the apply from the beginning of the transaction, which takes a couple of days each time. Anyway, in trying to resolve this problem, I've noticed that each archive log from the production system generates 5 or 6 log switches on the standby, and I don't understand why this should be. Anyone have any ideas? A related question that I've not found the answer for: does anyone know if the logical standby must be running in archivelog mode? I really don't have a need to keep the logs.
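
    On the archivelog question: a Data Guard logical standby applies redo and must itself run in ARCHIVELOG mode, though nothing stops you from deleting archived logs once SQL Apply is past them. On the out-of-memory failures, SQL Apply's memory is capped by its MAX_SGA setting, so raising it is worth trying. A sketch using the standard views and package (stop and restart SQL Apply around the change; the 1024 is an illustrative value):

        -- Confirm the standby's log mode
        SELECT log_mode FROM v$database;

        -- Watch the SQL Apply processes while a large transaction is applied
        SELECT type, status_code, status FROM v$logstdby_process;

        -- Raise the SQL Apply memory cap (value in MB)
        EXEC DBMS_LOGSTDBY.APPLY_SET('MAX_SGA', 1024);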

  • How to detect that an Azure application is running in the development fabric?

    - by Hasan Khan
    How can I reliably detect whether my Azure application is running in the development fabric and not in 'the cloud'? RoleEnvironment.IsAvailable is true for both; I want something that is true in only one case. I'm asking because I want users of my library to be able to use it for free in the dev fabric, so manually putting a separate identifier or flag in the config file and keeping two configs for dev and deploy is not feasible.
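
    One heuristic that has been used for this - observed behavior rather than a documented contract, so treat it as an assumption to verify against your SDK version - is that the development fabric issues deployment IDs of the form "deployment(n)", while cloud deployments get opaque hex identifiers. A C# sketch:

        using Microsoft.WindowsAzure.ServiceRuntime;

        public static class FabricDetector
        {
            // Dev-fabric IDs look like "deployment(15)"; cloud IDs do not.
            // Undocumented behavior, so keep this isolated and easy to change.
            public static bool IsDevFabric()
            {
                return RoleEnvironment.IsAvailable
                    && RoleEnvironment.DeploymentId.StartsWith("deployment(");
            }
        }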

  • What constitutes valid justification for more IP addresses?

    - by David
    I host a small website with a well known VPS service. They provided me with one IPv4 address upon registering and said additional addresses would require justification. I requested one additional IPv4 address so as to have one for a production environment and one for a testing/QA environment. They said this was unnecessary as I could just use alternative TCP ports for the test environment. I can live with using a non-standard port for non-production hosting, but it got me thinking, what would be valid justification? (I asked them and they didn't want to answer). Is there an industry standard for what counts as "valid" justification for additional IPv4 addresses?

  • Some Memory Slots Not Working on MSI FM2-A85XA-G65 Motherboard

    - by Mike Ciaraldi
    Short version of question: Does anyone have an MSI FM2-A85XA-G65 motherboard who can confirm that all four memory slots work?

    Long version: Several months ago I bought an MSI FM2-A85XA-G65 motherboard at Newegg. At that point I installed an AMD A8-5500 processor and two sticks of Corsair Vengeance 8 GB DDR3-1866 memory, and put it into my file server. I installed the RAM in slots 1 and 3, as directed in the manual, to enable dual-channel memory access. It seemed to work fine, so I bought a second identical mobo (which arrived dead, but was quickly replaced by Newegg) and set of RAM, installed an A10-5800K, and put that into my production Linux machine. Again, it seemed to work well.

    Eventually I happened to notice that on the server only 8 GB of RAM appeared in the BIOS. I tried each of the slots and memory modules individually and in various combinations. I even swapped processors with the production machine. The result was that putting memory in slots 1 and 2 worked (showing a total of 16 GB), but any memory in slots 3 or 4 was not recognized. However, all four memory slots in the production machine worked, and I confirmed this with both processors.

    I contacted MSI and arranged to ship the defective mobo back to them for replacement under warranty. I did not want my file server to be down in the interim, and I had another machine I wanted to upgrade, so I bought a third identical mobo to use. That one had the same problem - only memory slots 1 and 2 worked. I tested it thoroughly with multiple processors and memory sticks. I sent the defective mobo back to MSI and they sent me a new one. This has the same memory slot problem. So I sent it back. The replacement arrived the other day and shows the same problem. I contacted MSI yet again and they said that nobody else has reported memory slot problems on this board and it must be my processor.

    So my score so far, out of six boards of this model:
    - One where all four slots work.
    - One which was dead on arrival.
    - Four where only memory slots 1 and 2 work.

    Before I tear my other machines apart and start swapping processors again, I thought I would ask if anyone else has this exact model motherboard and could confirm that all four memory slots either do or do not work. According to MSI you should be able to just plug a single memory module into any of the slots and it will work (and it does on the one mobo I have which works correctly). If you have not yet used all four slots, this is a good time to test them so you know if you can expand your memory in the future. Thanks in advance to anyone who can help.

  • apt-get install fuse - MAKEDEV not installed, skipping device node creation

    - by holms
    This happened with apt-get dist-upgrade when upgrading to Debian jessie, after which I tried to remove fuse and install it again. Same error:

        root@msgapp:/dev# apt-get install fuse
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following NEW packages will be installed:
          fuse
        0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
        Need to get 0 B/69.9 kB of archives.
        After this operation, 191 kB of additional disk space will be used.
        Selecting previously unselected package fuse.
        (Reading database ... 39354 files and directories currently installed.)
        Preparing to unpack .../fuse_2.9.3-10_amd64.deb ...
        Unpacking fuse (2.9.3-10) ...
        Processing triggers for man-db (2.6.7.1-1) ...
        Setting up fuse (2.9.3-10) ...
        MAKEDEV not installed, skipping device node creation.
        device node not found
        dpkg: error processing package fuse (--configure):
         subprocess installed post-installation script returned error exit status 2
        Errors were encountered while processing:
         fuse
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    UPDATE: Reinstalling makedev gives another problem:

        root@msgapp:/dev# apt-get install makedev
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following NEW packages will be installed:
          makedev
        0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
        Need to get 0 B/42.6 kB of archives.
        After this operation, 129 kB of additional disk space will be used.
        Selecting previously unselected package makedev.
        (Reading database ... 39347 files and directories currently installed.)
        Preparing to unpack .../makedev_2.3.1-93_all.deb ...
        Unpacking makedev (2.3.1-93) ...
        Processing triggers for man-db (2.6.7.1-1) ...
        Setting up makedev (2.3.1-93) ...
        /run/udev or .udevdb or .udev presence implies active udev. Aborting MAKEDEV invocation.
        /run/udev or .udevdb or .udev presence implies active udev. Aborting MAKEDEV invocation.
        /run/udev or .udevdb or .udev presence implies active udev. Aborting MAKEDEV invocation.

    There's a ticket raised for this, and its suggested fix gives no result either:

        root@msgapp:/dev# cd /dev && ./MAKEDEV fuse
        /run/udev or .udevdb or .udev presence implies active udev. Aborting MAKEDEV invocation.
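
    Since an active udev makes MAKEDEV bail out, one workaround worth trying - an assumption based on the standard fuse device numbers, not an official fix - is to create the node by hand and let the postinst finish:

        # /dev/fuse is a character device, major 10, minor 229
        mknod /dev/fuse c 10 229
        chmod 666 /dev/fuse

        # Re-run the failed configure step
        dpkg --configure -a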

  • SQL Server 2008: Table Insert and Range Check?

    - by LB .
    I'm using the table value constructor to insert a bunch of rows at a time. However, since I'm using SQL replication, I run into a range check constraint on the publisher, on my automatically managed id column. The reason seems to be that the id range is not increased during an insert of several values, meaning that the max id is reached before the actual range expansion (or the id threshold) can kick in. It looks like this known problem, for which the solution is either running the merge agent or running the sp_adjustpublisheridentityrange stored procedure. I'm literally doing something like:

        INSERT INTO dbo.MyProducts (Name, ListPrice)
        VALUES
            ('Helmet', 25.50),
            ('Wheel', 30.00),
            ((SELECT Name FROM Production.Product WHERE ProductID = 720),
             (SELECT ListPrice FROM Production.Product WHERE ProductID = 720));
        GO

    What are my options if I don't want to (or can't) adopt either of the proposed solutions? Expand the range? Decrease the threshold? Can I programmatically modify my request to circumvent this problem? Thanks.
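
    If the batch can exhaust the current range faster than the automatic adjustment runs, one programmatic option is to refresh the publisher's identity range immediately before the big insert - a sketch, assuming you can execute it in the publication database and with the names as placeholders:

        -- Re-seed the publisher's identity range for the article's base table
        EXEC sys.sp_adjustpublisheridentityrange
            @table_name  = N'MyProducts',
            @table_owner = N'dbo';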

  • Maven Ant BuildException with maven-antrun-plugin ... unable to find javac compiler

    - by robsbobs
    I'm trying to make Maven call an Ant build for some legacy code. The Ant build runs correctly through Ant itself; however, when I call it using the maven-antrun-plugin it fails with the following error:

        [ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (default) on project CoreServices: An Ant BuildException has occured: The following error occurred while executing this line:
        [ERROR] C:\dev\projects\build\build.xml:158: The following error occurred while executing this line:
        [ERROR] C:\dev\projects\build\build.xml:62: The following error occurred while executing this line:
        [ERROR] C:\dev\projects\build\build.xml:33: The following error occurred while executing this line:
        [ERROR] C:\dev\projects\ods\build.xml:41: Unable to find a javac compiler;
        [ERROR] com.sun.tools.javac.Main is not on the classpath.
        [ERROR] Perhaps JAVA_HOME does not point to the JDK.
        [ERROR] It is currently set to "C:\bea\jdk150_11\jre"

    My javac lives at C:\bea\jdk150_11\bin and works for everything else, so I'm not sure where Maven is getting this version of JAVA_HOME. JAVA_HOME in the Windows environment variables is set to C:\bea\jdk150_11\ as it should be. The Maven configuration I'm using to call build.xml is:

        <build>
          <plugins>
            <plugin>
              <groupId>org.apache.maven.plugins</groupId>
              <artifactId>maven-antrun-plugin</artifactId>
              <version>1.6</version>
              <executions>
                <execution>
                  <phase>install</phase>
                  <configuration>
                    <target>
                      <ant antfile="../build/build.xml" target="deliver"></ant>
                    </target>
                  </configuration>
                  <goals>
                    <goal>run</goal>
                  </goals>
                </execution>
              </executions>
            </plugin>
          </plugins>
        </build>
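
    The usual explanation is that Ant runs inside Maven's JVM and therefore sees that JVM's java.home, which for a Sun JDK points at the jre subdirectory, where com.sun.tools.javac.Main does not live. A commonly used fix (a sketch, not verified against this exact setup; the version number is a placeholder) is to hand the plugin the JDK's tools.jar explicitly:

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-antrun-plugin</artifactId>
          <version>1.6</version>
          <dependencies>
            <!-- java.home ends in \jre, so step up one level to lib/tools.jar -->
            <dependency>
              <groupId>com.sun</groupId>
              <artifactId>tools</artifactId>
              <version>1.5.0</version>
              <scope>system</scope>
              <systemPath>${java.home}/../lib/tools.jar</systemPath>
            </dependency>
          </dependencies>
          <!-- executions as above -->
        </plugin>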

  • g++ compiler complains about conversions between related types (from int to enum, from void* to class*, etc.)

    - by Slav
    The g++ compiler complains about conversions between related types (from int to enum, from void* to class*, from const char* to unsigned char*, etc.). The compiler treats such conversions as errors and won't compile any further. It occurs only when I compile using the Dev-C++ IDE; when I compile the same code from the command line (using the same compiler Dev-C++ uses), no such errors (or even warnings) appear. How do I mute errors of this kind?
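
    Those conversions are valid C but ill-formed C++, so the likely difference - an assumption about the project setup, worth checking in the IDE's compile log - is that Dev-C++ compiles the files as C++ while the command line compiles them as C. Rather than muting the errors, the portable fix is an explicit cast, as in this short C++ sketch:

        enum Color { Red, Green, Blue };

        void example(void* p, int n) {
            Color c  = static_cast<Color>(n);   // int -> enum requires a cast in C++
            int*  ip = static_cast<int*>(p);    // void* -> T* requires a cast in C++
            const char* s = "text";
            const unsigned char* u =
                reinterpret_cast<const unsigned char*>(s);  // unrelated pointer types
        }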

  • How do I query delegation properties of an active directory user account?

    - by Mark J Miller
    I am writing a utility to audit the configuration of a WCF service. In order to properly pass credentials from the client, through the WCF service, back to the SQL back end, the domain account used to run the service must be configured in Active Directory with the setting "Trust this user for delegation" (Properties - "Delegation" tab). Using C#, how do I access the settings on this tab in Active Directory? I've spent the last 5 hours trying to track this down on the web and can't seem to find it. Here's what I've done so far:

        using (Domain domain = Domain.GetCurrentDomain())
        {
            Console.WriteLine(domain.Name);

            // get domain "dev" from MSSQLSERVER service account
            DirectoryEntry ouDn = new DirectoryEntry("LDAP://CN=Users,dc=dev,dc=mydomain,dc=lcl");
            DirectorySearcher search = new DirectorySearcher(ouDn);

            // get sAMAccountName "dev.services" from MSSQLSERVER service account
            search.Filter = "(sAMAccountName=dev.services)";
            search.PropertiesToLoad.Add("displayName");
            search.PropertiesToLoad.Add("userAccountControl");
            SearchResult result = search.FindOne();
            if (result != null)
            {
                Console.WriteLine(result.Properties["displayName"][0]);
                DirectoryEntry entry = result.GetDirectoryEntry();
                int userAccountControlFlags = (int)entry.Properties["userAccountControl"].Value;

                if ((userAccountControlFlags & (int)UserAccountControl.TRUSTED_FOR_DELEGATION)
                        == (int)UserAccountControl.TRUSTED_FOR_DELEGATION)
                    Console.WriteLine("TRUSTED_FOR_DELEGATION");
                else if ((userAccountControlFlags & (int)UserAccountControl.TRUSTED_TO_AUTH_FOR_DELEGATION)
                        == (int)UserAccountControl.TRUSTED_TO_AUTH_FOR_DELEGATION)
                    Console.WriteLine("TRUSTED_TO_AUTH_FOR_DELEGATION");
                else if ((userAccountControlFlags & (int)UserAccountControl.NOT_DELEGATED)
                        == (int)UserAccountControl.NOT_DELEGATED)
                    Console.WriteLine("NOT_DELEGATED");

                foreach (PropertyValueCollection pvc in entry.Properties)
                {
                    Console.WriteLine(pvc.PropertyName);
                    for (int i = 0; i < pvc.Count; i++)
                    {
                        Console.WriteLine("\t{0}", pvc[i]);
                    }
                }
            }
        }

    The "userAccountControl" property does not seem to be the correct one. I think it is tied to the "Account Options" section on the "Account" tab, which is not what we're looking for, but it is the closest I've gotten so far. The justification for all this: we do not have permission to set up the service in QA or in production, so along with our written instructions (which are notoriously only followed in part) I am creating a tool that will audit the setup (WCF and SQL) and determine whether it is correct. This will let the person deploying the service verify everything is set up correctly, saving us hours of headaches and reducing downtime during deployment.
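
    For the Delegation tab specifically: unconstrained delegation does live in userAccountControl (the TRUSTED_FOR_DELEGATION bit), but the constrained "Trust this user for delegation to specified services" variant is stored in the account's msDS-AllowedToDelegateTo attribute, with TRUSTED_TO_AUTH_FOR_DELEGATION set only for the "use any authentication protocol" option. A sketch of reading that attribute with the searcher above:

        // Also request the constrained-delegation target list
        search.PropertiesToLoad.Add("msDS-AllowedToDelegateTo");
        SearchResult r = search.FindOne();
        if (r != null && r.Properties.Contains("msDS-AllowedToDelegateTo"))
        {
            Console.WriteLine("Constrained delegation allowed to:");
            foreach (object spn in r.Properties["msDS-AllowedToDelegateTo"])
                Console.WriteLine("\t{0}", spn);   // e.g. MSSQLSvc/sqlhost:1433
        }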

  • IT merger - self-sufficient site with domain controller VS thin clients outpost with access to terminal server

    - by imagodei
    SITUATION: A larger company acquires a smaller one. IT infrastructure has to be merged. There are no immediate plans to change the current size or role of the smaller company - the offices and production remain. It has a Win 2003 SBS domain server, a Win 2000 file server, a Linux server for SVN and an internal wiki, 2 or 3 production machines, and an LTO backup solution. The servers are approx. 5 years old. Cisco network equipment (switches, wireless, ASA). Mail is a hosted Exchange solution. There are approx. 35 desktops and laptops in the company.

    IT infrastructure unification - there are two merging proposals:

    1) Replace the old servers, install a Win Server 2008 domain controller, and set up either a subdomain or a domain trust to the larger company. The file server and other servers remain local, and synchronization is set up to a centralized location in the larger company. Similarly with the backup - it remains local and, if needed, is replicated to a centralized location. Licensing is managed by the smaller company.

    2) All servers are moved to a centralized location in the larger company. As many desktop machines as possible are replaced by thin clients. The actual machines are virtualized and hosted by a terminal server at the same central location. Citrix solutions will be used. Only a router and a site-to-site VPN connection remain at the smaller company. A backup internet line to ensure near-100% availability is needed. Licensing is mainly managed by the larger company; only specialized software for PCs that will not be virtualized is managed by the smaller company.

    I'd like to ask you to discuss both solutions a bit. In your opinion, which is better from the operational point of view? Which is more reliable and cheaper in the long run? Easier to manage from the system administrator's point of view? Easier on the budget and easier to maintain from the IT department's point of view? Does anybody have experience with the second option, and how does it perform in a production environment? Pros and cons of both? Your input will be of great significance to me. Thank you very much!

  • What would cause a query run from SSMS on the local box to run slower than from a remote box

    - by Racter
    When I run a simple query such as "Select Column1, Column2 from Table A" from within SSMS running on my production SQL Server, the results take extremely long (45 min). If I run the same query from my dev system's SSMS connecting to the production SQL Server, the results return within a few seconds (<60 sec). One thing I have noticed is that if the system was just rebooted, performance is good for a bit. It is hard to put a time on it, as I have had it start running slowly very quickly after a reboot, but at most it performed well for 20 min before acting up. Also, just restarting the SQL service does not resolve the issue or provide a temporary performance boost. Specs for the server: Windows Server 2003 Enterprise Edition SP2; 4 x Intel Xeon 3.6 GHz; 6 GB system memory; active/active cluster; SQL Server 2005 SP2 (9.0.3239).
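
    One way to narrow it down the next time it happens - a sketch using standard SQL Server 2005 DMVs, nothing specific to this server - is to connect from the fast remote SSMS while the local query crawls and see what the slow session is waiting on. (A classic culprit for the good-after-reboot pattern is a local SSMS competing with the server for memory once the buffer pool has grown.)

        -- Run from the remote SSMS while the local query is slow
        SELECT r.session_id, r.status, r.wait_type, r.wait_time,
               r.cpu_time, r.reads
        FROM sys.dm_exec_requests AS r
        JOIN sys.dm_exec_sessions AS s ON s.session_id = r.session_id
        WHERE s.is_user_process = 1;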

  • Daemonize() issues on Debian

    - by djTeller
    Hi, I'm currently writing a multi-process client and a multi-threaded server for a project I have. The server is a daemon. To accomplish that, I'm using the following daemonize() code:

        static void daemonize(void)
        {
            pid_t pid, sid;

            /* already a daemon */
            if (getppid() == 1)
                return;

            /* Fork off the parent process */
            pid = fork();
            if (pid < 0) {
                exit(EXIT_FAILURE);
            }

            /* If we got a good PID, then we can exit the parent process. */
            if (pid > 0) {
                exit(EXIT_SUCCESS);
            }

            /* At this point we are executing as the child process */

            /* Change the file mode mask */
            umask(0);

            /* Create a new SID for the child process */
            sid = setsid();
            if (sid < 0) {
                exit(EXIT_FAILURE);
            }

            /* Change the current working directory. This prevents the current
               directory from being locked; hence not being able to remove it. */
            if ((chdir("/")) < 0) {
                exit(EXIT_FAILURE);
            }

            /* Redirect standard files to /dev/null */
            freopen("/dev/null", "r", stdin);
            freopen("/dev/null", "w", stdout);
            freopen("/dev/null", "w", stderr);
        }

        int main(int argc, char *argv[])
        {
            daemonize();
            /* Now we are a daemon -- do the work for which we were paid */
            return 0;
        }

    I get a strange side effect when testing the server on Debian (Ubuntu): the accept() function always fails to accept connections; the value returned is -1. I have no idea what is causing this, since on RedHat and CentOS it works well. When I remove the call to daemonize(), everything works well on Debian; when I add it back, the same accept() error reproduces. I've been monitoring /proc/<pid>/fd and everything looks good. Something in daemonize() and the Debian release just doesn't seem to get along. (Debian GNU/Linux 5.0, Linux 2.6.26-2-286 #1 SMP.) Any idea what is causing this? Thank you
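
    With stderr redirected to /dev/null, whatever accept() is reporting is being thrown away. A first debugging step - a sketch for inside your accept loop, with listen_fd standing in for your listening socket - is to log errno through syslog, which keeps working after daemonizing:

        #include <errno.h>
        #include <string.h>
        #include <sys/socket.h>
        #include <syslog.h>

        /* stderr is /dev/null after daemonize(), so report through syslog */
        int client = accept(listen_fd, NULL, NULL);
        if (client < 0) {
            syslog(LOG_ERR, "accept failed: %s", strerror(errno));
        }

    The resulting errno (EBADF, EINVAL, ...) should point at which descriptor or setup step the daemonization is disturbing.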

  • Can't get ZSH working on CentOS

    - by waveslider
    I've been using zsh for a couple of years now on Ubuntu and really like it a lot. I've installed it on our production server as well, which is running CentOS 5.2. However, I just installed it via yum on a new VM I created to use as a development box, to replicate our production box as closely as possible. Although yum shows that it is definitely installed (/bin/zsh) and that it is set as my shell, it does not appear to be working. Instead of creating the .zshrc and .profile files in my home directory, it created a .tcshrc file. Also, I did not receive the default configuration menu that is always displayed once you begin using zsh, and none of the features (like advanced tab completion) work.
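
    The .tcshrc and the missing first-run configuration menu both suggest the login shell was never actually switched. A quick check, using standard commands:

        # Is zsh registered, and what does the account's shell field say?
        grep zsh /etc/shells
        getent passwd $USER | cut -d: -f7

        # If the field still shows tcsh or bash, switch it and log in again
        chsh -s /bin/zsh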

  • IIS hosting, ASP.NET MVC

    - by tomasz
    Hi. I have a site that uses Flex and calls controller actions which return JSON to the Flex app. This works fine on the dev server, where the folder holding the Flex app lives inside the web project and calls go to a hostname, i.e. www.someurl.com. In the actual live scenario this will be an intranet, so there is no hostname to call, and the Flex app seems to have trouble calling http://localhost/<virtual directory name> - it seems to totally miss the virtual directory name. I am obviously missing something basic; any help?

  • multiline sed using backreferences...

    - by pagid
    Hi, I'm converting patch scripts using a command-line script. Within these scripts there's a combination of two lines like:

        --- /dev/null
        +++ filename.txt

    which needs to be converted to:

        --- filename.txt
        +++ filename.txt

    Initially I tried:

        less file.diff | sed -e "s/---\/dev\null\n+++ \(.*\)/--- \1\n+++ \1/"

    But I had to find out that multiline handling is much more complex in sed :( Any help is appreciated...
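
    sed normally works one line at a time, so the \n in the pattern never matches; the N command appends the next input line to the pattern space so both lines can be rewritten together. A sketch, assuming GNU sed:

        # Join each "--- /dev/null" line with the "+++" line after it, then rewrite both
        sed -e '/^--- \/dev\/null$/{N;s|^--- /dev/null\n+++ \(.*\)$|--- \1\n+++ \1|}' file.diff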

  • OpenCL books/tutorials?

    - by StfnoPad
    Are there any OpenCL books out there, or in the pipeline? Any online OpenCL tutorials? I've already looked at the usual pages (Khronos, NVIDIA dev, opengl.org, ATI dev, SIGGRAPH slides). Any other help or pointers welcome :-D Thanks!

  • perforce exclude subfolder from diff

    - by PeanutPower
    I have added an exclusion to my workspace, which is called "dev-machine": -//depot/DotNetProject/bin/... //dev-machine/bin/... The bin folder now shows as excluded in p4diff, but the differences are still showing (i.e. diff 1 of 11), and the next and previous arrows are still taking me into those excluded folders. Am I missing something obvious here?

  • debugging JBoss 100% CPU usage

    - by Nate
    We are using JBoss to run two of our WARs. One is our web app, the other is our web service. The web app accesses a database on another machine and makes requests to the web service. The web service makes JMS requests to other machines, aggregates the data, and returns it. At our biggest client, about once a month the JBoss Java process takes 100% of all CPUs. The machine running JBoss has 8 CPUs. Our web app is still accessible during this time, however pages take about 3 minutes to load. Restarting JBoss restores everything to normal. The database machine and all the other machines are fine, only the machine running JBoss is affected. Memory usage is normal. Network utilization is normal. There are no suspect error messages in the JBoss logs. I have set up a test environment as close as possible to the client's production environment and I've done load testing with as much as 2x the number of concurrent users. I have not gotten my test environment to replicate the problem. Where do we go from here? How can we narrow down the problem? Currently the only plan we have is to wait until the problem occurs in production on its own, then do some debugging to determine the cause. So far people have just restarted JBoss when the problem occurred to minimize down time. Next time it happens they will get a developer to take a look. The question is, next time it happens, what can be done to determine the cause? We could setup a separate JBoss instance on the same box and install the web app separately from the web service. This way when the problem next occurs we will know which WAR has the problem (assuming it is our code). This doesn't narrow it down much though. Should I enable JMX remote? This way the next time the problem occurs I can connect with VisualVM and see which threads are taking the CPU and what the hell they are doing. However, is there a significant down side to enabling JMX remote in a production environment? Is there another way to see what threads are eating the CPU and to get a stacktrace to see what they are doing? Any other ideas? Thanks!
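
    On the last question: thread-level CPU plus a stack dump is available without enabling JMX remote, using the OS and the JDK's jstack. A sketch, assuming a Sun JDK on the box and that the pgrep pattern matches your JBoss launch command:

        # 1. Per-thread CPU for the JBoss process (H = show threads)
        top -H -p $(pgrep -f org.jboss.Main)

        # 2. Dump all Java thread stacks
        jstack <jboss-pid> > /tmp/jboss-stacks.txt

        # 3. Convert the hot thread id from top to hex and find it in the
        #    dump, where it appears as nid=0x...
        printf 'nid=0x%x\n' <tid-from-top>

    If jstack is unavailable, kill -3 <jboss-pid> writes the same dump to the JVM's console log.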

  • Need simple Twitter API v1.1 example to show timeline using jQuery or C# ASP.NET

    - by Ken Palmer
    With Twitter turning off the API 1.0 faucet on 6/11/2013, we have several sites that now fail to display timelines. I've been looking for an "if you did that, now do this" example. Here was Twitter's announcement: https://dev.twitter.com/blog/api-v1-is-retired

    Here is what we were originally doing to show the Twitter timeline via API 1.0:

        <div id="twitter">
          <ul id="twitter_update_list"></ul>
          <script type="text/javascript" src="http://twitter.com/javascripts/blogger.js"></script>
          <script type="text/javascript" src="http://api.twitter.com/1/statuses/user_timeline/companytwitterhandle.json?callback=twitterCallback2&amp;count=1"></script>
          <div style="float:left;"><a href="https://twitter.com/companytwitterhandle" target="_blank">@companytwitterhandle</a> | </div>
          <div class="twitterimg">&nbsp;</div>
        </div>

    Initially I tried changing the version in the JavaScript reference URL like so, which did not work:

        <script type="text/javascript" src="http://api.twitter.com/1.1/statuses/user_timeline/companytwitterhandle.json?callback=twitterCallback2&amp;count=1"></script>

    Then I looked at the Twitter API documentation (https://dev.twitter.com/docs/api/1.1/overview), which lacks a clear transition example, and I don't have 4 or 5 hours to delve into it or into this disheveled FAQ (https://dev.twitter.com/docs/faq#17750). Then I found the API documentation for the user timeline (https://dev.twitter.com/docs/api/1.1/get/statuses/user_timeline), so I changed the URL again as shown below:

        <script type="text/javascript" src="https://api.twitter.com/1.1/statuses/user_timeline.json?screen_name=companytwitterhandle&amp;count=1"></script>

    That did not work either. Using jQuery or C# ASP.NET MVC, how can I transition this interface from Twitter API 1.0 to Twitter API 1.1? My first preference would be a browser client-side implementation if that is possible. Please include a code example. Thanks.
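
    The reason the URL tweaks fail is that v1.1 removed unauthenticated JSONP entirely: every request must be OAuth-signed, so a bare client-side <script> tag cannot work (Twitter's hosted embedded-timeline widget is the browser-only option). Otherwise the call has to move server-side. A C# sketch using application-only (bearer token) auth, assuming .NET 4.5 and with the key and secret as placeholders:

        using System;
        using System.Net.Http;
        using System.Net.Http.Headers;
        using System.Text;

        class TwitterTimeline
        {
            // Placeholders: create these at dev.twitter.com, keep them server-side
            const string ConsumerKey = "YOUR_KEY";
            const string ConsumerSecret = "YOUR_SECRET";

            static void Main()
            {
                var client = new HttpClient();

                // 1. Exchange key/secret for an application-only bearer token
                var credentials = Convert.ToBase64String(
                    Encoding.UTF8.GetBytes(Uri.EscapeDataString(ConsumerKey) + ":" +
                                           Uri.EscapeDataString(ConsumerSecret)));
                var tokenRequest = new HttpRequestMessage(HttpMethod.Post,
                    "https://api.twitter.com/oauth2/token");
                tokenRequest.Headers.Authorization =
                    new AuthenticationHeaderValue("Basic", credentials);
                tokenRequest.Content = new StringContent("grant_type=client_credentials",
                    Encoding.UTF8, "application/x-www-form-urlencoded");
                var tokenJson = client.SendAsync(tokenRequest).Result
                                      .Content.ReadAsStringAsync().Result;
                // Crude extraction of access_token; use a JSON library in real code
                var bearer = tokenJson.Split('"')[7];

                // 2. Call the v1.1 timeline endpoint with the bearer token
                client.DefaultRequestHeaders.Authorization =
                    new AuthenticationHeaderValue("Bearer", bearer);
                var timeline = client.GetStringAsync(
                    "https://api.twitter.com/1.1/statuses/user_timeline.json" +
                    "?screen_name=companytwitterhandle&count=1").Result;
                Console.WriteLine(timeline);
            }
        }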

  • TFS: Branching. How to map a branch to IIS for local test

    - by DarkJackO
    Hi, I think there's something I don't understand about Branching How can I run my website from localhost to test my changes made on a Branch Let's say my branch structure is -Dev -UI -App Main -UI -App The project UI and App from the main are map in my IIS, it's all working well Now I want to make some changes in the UI project from Dev branch, and I want to test these changes before I merge them to Main Thanks
