Search Results

Search found 26214 results on 1049 pages for 'farm solution'.

Page 152 of 1049

  • SQL Server 2008: Table Insert and Range Check?

    - by LB .
    I'm using the table value constructor to insert a bunch of rows at a time. However, with SQL replication in use, I run into the range check constraint on the publisher for my automatically managed identity column. The reason seems to be that the identity range is not expanded during an insert of several values, so the maximum id is reached before the range expansion (or the id threshold) can kick in. It looks like this known problem, for which the solution is either running the merge agent or running the sp_adjustpublisheridentityrange stored procedure. I'm literally doing something like:

        INSERT INTO dbo.MyProducts (Name, ListPrice)
        VALUES ('Helmet', 25.50),
               ('Wheel', 30.00),
               ((SELECT Name FROM Production.Product WHERE ProductID = 720),
                (SELECT ListPrice FROM Production.Product WHERE ProductID = 720));
        GO

    What are my options if I don't want to (or can't) adopt either of the proposed solutions? Expand the range? Decrease the threshold? Can I programmatically modify my request to work around this problem? Thanks.
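
    As a hedged sketch of the stored-procedure workaround mentioned above (only an illustration, not necessarily the right option here): calling sp_adjustpublisheridentityrange from client code before a large table-value-constructor insert. This assumes pyodbc and a reachable publisher; the connection string is a placeholder and the parameter name should be checked against the documentation for your SQL Server version.

        import pyodbc  # assumes a SQL Server ODBC driver is installed

        CONN = "DRIVER={SQL Server};SERVER=mypublisher;DATABASE=MyDb;Trusted_Connection=yes"  # placeholder

        with pyodbc.connect(CONN, autocommit=True) as conn:
            cur = conn.cursor()
            # Ask replication for a fresh identity range before the multi-row insert
            # (parameter name taken from the sp_adjustpublisheridentityrange docs; verify it for your version).
            cur.execute("EXEC sp_adjustpublisheridentityrange @table_name = ?", "MyProducts")
            cur.execute(
                "INSERT INTO dbo.MyProducts (Name, ListPrice) "
                "VALUES ('Helmet', 25.50), ('Wheel', 30.00)"
            )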

    Read the article

  • implementing dynamic query handler on historical data

    - by user2390183
    EDIT: Refined question to focus on the core issue.
    Context: I have historical data about property (house) sales, collected from various sources into a centralized/cloud data source (assume the collection itself is handled by a third party). I am planning to develop an application to query and retrieve data from this centralized source. Example queries:
    - Simple: for a given XYZ post code, what is the average price of a 3-bedroom house?
    - Complex: what is the estimated price for a house at "DD, Some Street, XYZ Post Code" (worked out from the averages of historic data filtered by various characteristics of the house: post code, number of bedrooms, total area, and deeper attributes such as building type, year built, and features)?
    In addition to the average price, the application should support other statistics (maximum or minimum price, etc.) and trends (a graph of a selected property attribute over a period of time). Hence, the queries cannot be restricted to a primary key or a few fixed fields. In other words, queries can be:
    - What is the change in 3-bedroom house prices (irrespective of location) over the last 30 days?
    - What kind of properties can I get for price X (irrespective of location or house type)?
    The challenge I have is identifying which domain this problem (dynamic queries over historic data) belongs to - BI/data analytics, DB design, DB query interfaces, data warehousing, or something else - so that I can explore it further. My findings so far (I could be wrong on any of these, so please correct me):
    - BI/data analytics: this seems a heavyweight solution for my problem and has scalability issues.
    - DB design: as I understand it, an RDBMS works well if you know the data model at design time. I expect the attributes of a property, or of other entities (such as users) that I am going to bring in, to evolve quickly, so maintenance would be an issue; and with multiple users executing queries at the same time, performance would be a bottleneck.
    - Graph DBs (http://www.tinkerpop.com/) seem a bit complex; they are good, but using such general-purpose tools for this problem feels like solving it in assembly.
    - Big-data solutions seem intended for analysing data from multiple unrelated domains.
    So, any suggestion on the space this problem fits in? (Especially if you have design/implementation experience with the back end of a property-listing or similar portal.)
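
    To make the "dynamic query" requirement concrete, here is a minimal sketch (plain Python over an in-memory list of records, with made-up field names and values) of the kind of arbitrary-attribute filtering and aggregation described above:

        from statistics import mean

        # Hypothetical records; in practice these would come from the centralized store.
        sales = [
            {"postcode": "XYZ", "bedrooms": 3, "price": 250000},
            {"postcode": "XYZ", "bedrooms": 3, "price": 265000},
            {"postcode": "ABC", "bedrooms": 2, "price": 180000},
        ]

        def query(records, aggregate=mean, field="price", **filters):
            """Aggregate `field` over records matching arbitrary attribute filters."""
            matched = [r[field] for r in records
                       if all(r.get(k) == v for k, v in filters.items())]
            return aggregate(matched) if matched else None

        print(query(sales, postcode="XYZ", bedrooms=3))  # average price of 3-bed houses in XYZ
        print(query(sales, aggregate=max))               # maximum price, no filters

    Whatever back end is chosen, the point is that the filter set and the aggregate are supplied at query time rather than fixed in the schema.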

    Read the article

  • AppleTV - itunes store is temporarily unavailable - please check back later

    - by Ken
    When attempting to rent a movie on ATV, my wife received the error message above, or alternatively "server unavailable". When your wife is sick, the amount of IT support she needs goes up exponentially. One piece of the puzzle was that she had changed her Apple ID password. On her PC I ran iTunes and, under her account, there was only one device listed (not the ATV). Even after signing out and back in on the ATV under Settings > iTunes, it still gave the same error message. What I suspect is that it thinks she is trying to authorize the device to another Apple ID. A new 90-day rule limits when a device can be associated with another Apple ID. Your iTunes store account will show your devices, and how long before each can be associated with a different Apple ID, on the Account Information page in iTunes on your computer. Apple must have no freaking idea why someone would want to know which ID is associated with the ATV (i.e. the reverse lookup), because it can't be done. Solution: try ATV Settings > Reset. I swapped out ATV 1 for ATV 2 (used for music streaming downstairs). I know it's a cop-out solution, but remember I had a sick wife breathing down my neck.

    Read the article

  • Using iTunes within Terminal Services 2008 R2 - Pitfalls etc

    - by Kristiaan
    I was hoping to get some further information on any possible do's and don'ts when it comes to installing, using and maintaining iTunes within a Terminal Server environment. We have come across a situation in our company whereby some of our users who are on thin clients now need the ability to sync, update and manage their devices; previously they used either standard desktop systems or laptops, so there was no issue with running iTunes. I have not found much information on the web about using iTunes within a Terminal Server farm, so I'd like to find out whether iTunes works in this environment, and about any known or common issues that occur when running it like this.

    Read the article

  • F# Application Entry Point

    - by MarkPearl
    Up to now I have been looking at F# for modular solutions, but have never considered writing an end-to-end application. Today I was wondering how one would even start to write an end-to-end application and realized that I didn't even know where the entry point of an F# application is. After browsing MSDN a bit I got a basic example of an F# application with an entry point:

        [<EntryPoint>]
        let main args =
            printfn "Arguments passed to function : %A" args
            // Return 0. This indicates success.
            0

    Pretty simple stuff… but what happens when you have a few modules in a program? So I created an F# project with two modules and a main module, as illustrated in the image below… When I try to compile my program I get a build error:

        A function labeled with the 'EntryPointAttribute' attribute must be the last declaration in the last file in the compilation sequence, and can only be used when compiling to a .exe…

    What does this mean? After some more reading I discovered that Program.fs needs to be the last file in the F# application - the order of the files in an F# solution is important. How do I move a source file up or down? I tried dragging the Program.fs file below ModuleB.fs, but it wouldn't allow me to. Then I right-clicked on a source file and got the following menu. Voila… to move a source file to the bottom of the solution you can select the "Move Up" or "Move Down" option. Now that I had this right I decided to put some code in ModuleA and ModuleB, and I have the start of a basic application structure.

    ModuleA code:

        namespace MyApp

        module ModuleA =
            let PrintModuleA =
                printf "hello a \n"
                ()

    ModuleB code:

        namespace MyApp

        module ModuleB =
            let PrintModuleB =
                printf "hello b \n"
                ()

    Program code:

        // Learn more about F# at http://fsharp.net
        #light
        namespace MyApp

        module Main =
            open System

            [<EntryPoint>]
            let main args =
                ModuleA.PrintModuleA
                let endofapp = Console.ReadKey()
                0

    Read the article

  • Outlook + Exchange 2007: is it possible to get rid of local OST files?

    - by kdl
    I am looking for a solution that would let me use the convenience of Outlook as a mail client app while having no PST or OST files on the local computer. Even in 'non-cached' mode, Outlook creates an OST file into which it downloads everything from the Exchange server. OWA does not create any local files (except cookies, I believe) but lacks some of the nice features Outlook has. Would it be feasible to place OST files on a network share? Maybe a solution exists for some other client+server pair?

    Read the article

  • FP for simulation and modelling

    - by heaptobesquare
    I'm about to start a simulation/modelling project. I already know that OOP is used for this kind of project. However, studying Haskell made me consider using the FP paradigm for modelling a system of components. Let me elaborate: let's say I have a component of type A, characterised by a set of data (a parameter like temperature or pressure, a PDE and some boundary conditions, etc.) and a component of type B, characterised by a different set of data (a different or the same parameter, a different PDE and boundary conditions). Let's also assume that the functions/methods that are going to be applied to each component are the same (a Galerkin method, for example). If I were to use an OOP approach, I would create two objects that encapsulate each type's data, the methods for solving the PDE (inheritance would be used here for code reuse) and the solution to the PDE. On the other hand, if I were to use an FP approach, each component would be broken down into data parts and the functions that act upon the data in order to get the solution to the PDE. This approach seems simpler to me, assuming that linear operations on data are trivial and that the parameters are constant. What if the parameters are not constant (for example, temperature increases suddenly and therefore cannot be immutable)? In OOP, the object's (mutable) state can be used. I know that Haskell has monads for that. To conclude, would implementing the FP approach actually be simpler, less time consuming, and easier to manage (e.g. adding a different type of component or a new method to solve the PDE) compared to the OOP one? I come from a C++/Fortran background, plus I'm not a professional programmer, so correct me on anything that I've got wrong.
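
    As a small, hedged illustration of the FP-style split discussed above (immutable component data plus pure functions, with a parameter change expressed as a new value rather than mutation), here is a Python sketch; the component fields and the trivial stand-in "solver" are invented for the example:

        from dataclasses import dataclass, replace

        @dataclass(frozen=True)      # immutable: fields cannot be reassigned after construction
        class ComponentA:
            temperature: float       # example parameter
            pressure: float          # example parameter

        def solve(component: ComponentA) -> float:
            """Stand-in for a Galerkin-style solver: a pure function of the component data."""
            return 0.5 * component.temperature + 0.1 * component.pressure

        a = ComponentA(temperature=300.0, pressure=101.3)
        print(solve(a))

        # A "sudden temperature increase" is modelled by constructing a new value,
        # not by mutating the old one (roughly what a State monad threads in Haskell).
        a_hot = replace(a, temperature=350.0)
        print(solve(a_hot))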

    Read the article

  • Extend university wifi network [migrated]

    - by asfasdoiuh ouhouhouh
    I live in a university campus and can get a wifi signal outside my window, but not inside the room. The solution I use at the moment is a USB wifi dongle outside, connected to my laptop, but the lack of an internal antenna makes the connection quite unreliable at times. So I am trying to find another solution to improve the reception of my network. One idea is to set up a router outside (in a place with a stronger signal) and bring the connection inside with an Ethernet cable. The problem is that our university wifi is managed by a captive portal (BlueSocket with DNS redirection to a login page) and the authentication is tied to the MAC address of the device that connects (the client appliance in this case). If I use a router with MAC-clone capability, will I be redirected through the captive portal to my laptop and be able to log in from there, or do I need to set up the router to fill in the login page by itself? Are there other hardware/software solutions I can use to get what I want? Thank you all.

    Read the article

  • HP 4530s: Fan is Always On even When Laptop/CPU is Idle

    - by tolitius
    Just bought an HP 4530s from Newegg. The laptop is great, but the fan is always on. I did some googling and found it is a known problem, but with no easily googlable solution. Things I have tried:
    - Updated the BIOS to the latest version
    - Disabled the "CPU fan always on when plugged in" BIOS setting
    - Installed Windows 7 Home (which it came with), live-CDed Ubuntu, and Windows XP
    - Spent 2 hours with horrible HP support
    - Some other things that I can't recall anymore = spent too much time on it
    The laptop is not refundable (learned that the hard way, after the fact, by looking at the Newegg policy cleverly hidden in the "details"). I would really appreciate a workable solution / workaround / hack. The laptop is for a friend who will most likely be running Windows XP/7.

    Read the article

  • Unable to run Microsoft Office 2010 install file

    - by Len
    This problem began when I noticed that the icons in the Windows 7 task bar for MS Word and Outlook were generic. I rebuilt the icon cache. Still not the right icons, but not the generic "document" icons either, and both are identical (to each other). The two programs seem to be working OK. So then I tried to repair MS Office. I ran the setup file. It extracts the files, I get the splash screen, and then the message, "Setup has stopped working. A problem caused the program to stop working correctly. Windows will close the program and notify you if a solution is available." with a "Close program" button. Microsoft does not notify me about a solution. What I have tried: 1. running two other copies of the setup program; 2. doing an in-place re-install of Windows 7.

    Read the article

  • Backing up data (including mysqldumps) to S3

    - by seengee
    We have a web app on a number of servers, and we want to add an additional layer of redundancy by backing up the key data to S3. The key data is the MySQL database and a folder containing dynamically created site assets - predominantly images. Some kind of rsync-based solution would initially seem the best plan. A couple of years ago we played with s3cmd (in particular s3cmd sync) with some success, but we didn't find it particularly reliable, although this may have changed since. It's occurred to me, though, that an rsync solution might not work particularly well with a single db.sql file created with mysqldump; I assume this means the whole database gets transferred each time, and with multiple databases of over 1 GB this is going to add up to a lot of traffic (and dollars) very quickly. With the image files I could simply transfer the files modified within the last day, which would be far simpler. What approach should I look at?
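
    A minimal sketch of the kind of nightly dump-and-upload job described above, assuming boto3, a local mysqldump binary, and AWS credentials already configured in the environment; the bucket and database names are placeholders:

        import datetime
        import gzip
        import subprocess

        import boto3  # assumes AWS credentials are configured in the environment

        BUCKET = "my-backup-bucket"   # placeholder bucket name
        DB_NAME = "webapp"            # placeholder database name

        def dump_and_upload():
            stamp = datetime.date.today().isoformat()
            dump_path = f"/tmp/{DB_NAME}-{stamp}.sql.gz"

            # Compress the mysqldump output before uploading to keep the transfer small.
            dump = subprocess.run(["mysqldump", DB_NAME], check=True, capture_output=True)
            with gzip.open(dump_path, "wb") as f:
                f.write(dump.stdout)

            # Each day gets its own key, so older dumps are retained on S3.
            boto3.client("s3").upload_file(dump_path, BUCKET, f"mysql/{DB_NAME}/{stamp}.sql.gz")

        if __name__ == "__main__":
            dump_and_upload()

    This still transfers the whole dump each run, which is exactly the traffic concern raised above; compressing the dump and pruning old keys helps, but it is not an incremental solution.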

    Read the article

  • A Web Service to collect data from local servers every hour

    - by anilerduran
    I'm trying to find a way to collect data from different servers around the world. Here are the details: there is only a single PowerShell script on the servers, which encrypts data (a simple CSV file) and sends it using the preferred method (an HTTP/HTTPS POST would work). I have no further control over those servers: I can't install any service, process, etc. All I can do is configure the script to execute every hour. The script will also hold an encrypted username/password/license key for each server, and it will compress the data and send it to me along with that information. So I need a service in the cloud (I'm not sure if a web service is the right solution) that will:
    - receive the data sent from the servers;
    - authenticate each request, recognizing the sender by license key/username/password; and, most importantly,
    - redirect/send the uploaded file to my SQL Server in the cloud (Azure), separating the data according to the customer information in the license key, so that each customer's data is stored in dedicated DBs/tables on my SQL Server.
    All of the processes above should be completed automatically, with no manual steps. Question: is a web service (SOAP or RESTful) the right solution for this?
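
    For the RESTful option, here is a minimal sketch of what the receiving endpoint might look like. Flask is used purely as an illustration; the framework choice, the header name, the license-key lookup, and the store_for_customer helper are all assumptions, not part of the question:

        from flask import Flask, request, abort

        app = Flask(__name__)

        # Hypothetical store of valid license keys mapped to customer identifiers.
        LICENSES = {"ABC-123": "customer_a", "DEF-456": "customer_b"}

        def store_for_customer(customer, payload):
            pass  # placeholder for the load into that customer's dedicated tables on Azure SQL

        @app.route("/collect", methods=["POST"])
        def collect():
            key = request.headers.get("X-License-Key", "")
            customer = LICENSES.get(key)
            if customer is None:
                abort(401)  # unknown sender: reject the upload

            payload = request.get_data()  # compressed, encrypted CSV posted by the PowerShell script
            store_for_customer(customer, payload)
            return "", 204

        if __name__ == "__main__":
            app.run()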

    Read the article

  • Why S3 website redirect location is not followed by CloudFront?

    - by ychaze
    I have a website hosted on Amazon S3. It is the new version of an old website hosted on WordPress. I have set up some files with the Website Redirect Location metadata to handle the old locations and redirect them to the new website pages. For example: I had http://www.mysite.com/solution, which I want to redirect to http://mysite.s3-website-us-east-1.amazonaws.com/product.html, so I created an empty file named solution inside my bucket with the metadata Website Redirect Location = /product.html. The S3 redirect metadata is equivalent to a 301 Moved Permanently, which is great for SEO. This works fine when accessing the URL directly on the S3 domain. I have also set up a CloudFront distribution based on the website bucket, but when I try to access it through the distribution, the redirect does not work; i.e. http://xxxx123.cloudfront.net/solution does not redirect but downloads the empty file instead. So my question is: how do I keep the redirection when going through the CloudFront distribution? Or any idea on how to handle the redirection without hurting SEO? Thanks
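
    For reference, the redirect objects described above can also be created programmatically; here is a hedged boto3 sketch (the bucket name is a placeholder) that reproduces the same Website Redirect Location metadata - it does not, by itself, change how CloudFront serves the object:

        import boto3  # assumes AWS credentials are configured in the environment

        s3 = boto3.client("s3")
        s3.put_object(
            Bucket="mysite-bucket",                   # placeholder bucket name
            Key="solution",                           # old URL path being redirected
            WebsiteRedirectLocation="/product.html",  # where S3's website endpoint should send visitors
        )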

    Read the article

  • Today's Well Connected Companies

    - by Michael Snow
    Statoil Fuel & Retail and their partner L&T Infotech, our recent winners of the Oracle Excellence Award for Fusion Middleware Innovation in the WebCenter category, are featured this month in both the print and online versions of Profit Magazine's November issue. The online version has significantly more detail about their "Connect" project. Statoil Fuel & Retail is a leading Scandinavian road transport fuel retailer that operates in 8 different countries and delivers aviation fuel at 85 airports. The company produces and sells 750 different lubricant products for B2B and B2C customers. Statoil won the 2013 Oracle Excellence Award for Oracle Fusion Middleware Innovation: Oracle WebCenter based on a stellar Oracle implementation, created with implementation partner L&T Infotech, which used Oracle's JD Edwards and Oracle Fusion Middleware to replace and consolidate 10 SAP portals into a single, integrated, personalized enterprise portal for partners, station managers, and support staff. Utilizing Oracle WebCenter Portal, Oracle WebCenter Content, Oracle Identity Management, Oracle SOA Suite, JD Edwards applications, and Oracle CRM On Demand, Statoil is now able to offer a completely redesigned portal with an easy and user-friendly web experience, delivering a fast, secure, robust, and scalable solution that will help the company remain competitive in its industry. The solution has increased Statoil Fuel & Retail's web footprint and expanded its online business. Read the complete article for the full story of Statoil Fuel & Retail's implementation of Oracle Fusion Middleware technology.

    Read the article

  • Map Ctrl and Alt to mouse thumb buttons

    - by murphyslaw
    I'm running Ubuntu 12.04 and have a multi-button Microsoft mouse. I would like to map the CTRL and ALT modifier keys to the left and right thumb buttons of my mouse, respectively, so I can ctrl-click and alt-click without touching the keyboard. My thumb buttons are buttons 8 and 9. I tried the solution in this question: How do I configure a mouse thumb button?, which explained how to map a double click to a thumb button. This worked for the double-click, but I couldn't figure out how to modify the solution for CTRL and ALT. I also tried this: How to map Ctrl/Shift to thumb buttons of Mouse?, which used xdotool and xbindkeys. I modified the script to this ~/.xbindkeysrc:

        "xdotool keydown alt"
          b:9
        "xdotool keyup alt"
          release + alt + b:9
        "xdotool keydown ctrl"
          b:8
        "xdotool keyup ctrl"
          release + control + b:8

    This ALMOST works. It simulates a CTRL key press when I click the left thumb button, but I can't actually hold the button and click at the same time - holding the thumb button seems to prevent the mouse from listening to other input until it is released. Does anyone know how I can make my mouse thumb button actually work as a modifier key, so I can use thumb_button+click instead of CTRL-click?

    Read the article

  • SharePoint doesn't support this authentication scheme.

    - by EtherDragon
    I have a new Windows Phone 7 phone, and I'm trying to work out how to connect the Office application to our SharePoint site(s). In the Office application on the phone, I flip to the SharePoint page, go to Open URL, and enter the URL of one of my sites, which uses default authentication (Windows auth). I get the message: "Can't open. SharePoint doesn't support this authentication scheme. For assistance, contact the person who manages this SharePoint site (that would be me). You can try opening the content in your web browser instead." When opening it in my browser, I can access the content without any problem (Windows auth passes). Does anyone have any source material on what I should do to my SharePoint site to "support this authentication scheme"? Note: I am the administrator of our SharePoint server farm(s).

    Read the article

  • Problems loading Hilva tutorials

    - by Beska
    I'm a newcomer to XNA, and I'm evaluating some libraries. The Hilva Graphics Engine looks interesting, and I'm trying to run their tutorials. However, all of them give me errors. For example, if I download the ParallaxMappingSample demo and try to build it, I get:

        Error 1  Error loading pipeline assembly "C:\Users\Me\Desktop\ParallaxMappingSample\Hilva.Content.dll".  ParallaxMappingSample

    I get similar errors for all of the samples. Unfortunately, this error isn't very enlightening. I can see Hilva.Content.dll in the appropriate directory. I tried removing and re-adding the reference from the content project, but I get the same error. I'm not sure it's relevant, but I'm on Windows 7, using Microsoft Visual Studio 2010 and XNA 4.0. Is there an easy (or difficult) solution? EDIT: If you happen to try this, even if you don't have a solution, let me know in a comment. Whether it works for you, or you get the same problem - either result would help me know whether it's just a problem with the tutorial or something on my end.

    Read the article

  • linking Office 365 and ADFS on Azure

    - by Gaurav
    I am trying to link my ADFS to Office 365 in order to set up single sign-on for Office 365. I am going over this blog post. At step 5, I am supposed to run the following PowerShell commands to connect ADFS to Office 365:

        Set-MsolAdfscontext -Computer <AD FS server FQDN>
        Convert-MsolDomainToFederated -DomainName <domain name>

    I am unable to determine the FQDN of the ADFS server. I searched around a lot on the web but was not able to find a solution. I am just getting started with configuring ADFS and Office 365. Any answers?

    Read the article

  • Auto-scaling EC2 Servers and Updating Code

    - by jstats
    We've come to the point where we need to set up auto-scaling for our web servers, and I'm unsure how to go about scaling servers and updating the existing code without building a new AMI and changing the autoscale config to use it. I've read a bit about people bundling the new code, uploading it to S3, and having new servers grab the bundle on boot, but that doesn't seem all that pleasant either. Currently the web app's files live in a git repo, and when we update the code, we push it to GitHub, SSH into the web app, and run a hook to bring down the latest code. So I was thinking that another option could be to just run that hook on an hourly or daily cron task. Unfortunately that doesn't cover every type of update (for example, new blog posts' images and such, which aren't included in the git repo), but it's something. Could anyone provide some advice on what a common solution is, or on why my proposed solution is a bad idea? Thanks all
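
    A minimal sketch of the cron/boot-time hook idea mentioned above, assuming the instance already has read access to the git remote and that the assets outside the repo live in an S3 bucket (the AWS CLI, the paths, and the bucket name are placeholders/assumptions):

        import subprocess

        REPO_DIR = "/var/www/webapp"          # placeholder: deployed checkout on the instance
        ASSET_BUCKET = "s3://my-app-assets"   # placeholder: bucket for images not kept in git

        def update_code_and_assets():
            # Pull the latest application code on the tracked branch.
            subprocess.run(["git", "-C", REPO_DIR, "pull", "--ff-only"], check=True)

            # Pull down dynamically created assets (e.g. blog images) kept outside git.
            subprocess.run(
                ["aws", "s3", "sync", ASSET_BUCKET, f"{REPO_DIR}/public/assets"],
                check=True,
            )

        if __name__ == "__main__":
            # Run from a cron entry, or from user data at instance boot.
            update_code_and_assets()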

    Read the article

  • TechEd 2012: Fast SQL Server

    - by Tim Murphy
    While I spend a certain amount of my time creating databases (coding around SQL Server and setting up a server when I have to), it isn't my bread and butter. Since I have run into a number of times when SQL Server needed to be tuned, I figured I would step out of my comfort zone and see what I could learn. Brent Ozar packed a mountain of information into his session on making SQL Server faster. I'm not sure how he found time to hit all of his points, since he was allowing the audience to abuse him on Twitter instead of asking questions, but he managed it. I also questioned his sanity since he appeared to be using a fruit laptop. He had my attention, though, when he stated that he had given up on telling people not to use "select *". He posited that it could be fixed with hardware by caching the data in memory. He continued by cautioning that having too many indexes could defeat this approach. His logic was sound if not always practical, but it was a good place to start when determining the trade-offs you need to balance. He was moving pretty fast, but I believe he was prescribing this solution predominantly for OLTP databases before moving on to data warehouse solutions. Much of the advice he gave for data warehouses is contained in the Microsoft Fast Track guidance, so I won't rehash it here. To summarize, the solution seems to be the proper balance of memory, disk access speed, and the speed of the pipes that get the data from storage to the CPU. It appears to be sound guidance, and the session gave enough information that going forward we should be able to find the details we need easily. Just what the doctor ordered. del.icio.us Tags: SQL Server, TechEd, TechEd 2012, Database, Performance Tuning

    Read the article

  • Monitor status while using VNC

    - by kumar
    After connecting remotely to my desktop's VNC server via VNC Viewer, is it possible to know whether the monitor connected to the machine is switched on or not? Simply put: from a command prompt, how do you tell whether the monitor is on or off? I am a bit worried about privacy here, as my screen can be viewed by anyone at the machine while it is being accessed remotely. Any solution? Obviously there is the option of switching off the monitor when starting the VNC server at the remote side, but I am looking for a better way to control the monitor (if that is possible) remotely. Thanks!

    Read the article

  • Mozilla Persona to the login rescue?

    - by Matt Watson
    A lot of websites now allow us to log in or create accounts via OAuth or OpenID. We can use our Facebook, Twitter, Google, Windows Live accounts and others. The problem with a lot of these is that we have to grant the websites access to account and profile data that they shouldn't really have. Below is a Twitter authorization screen, for example, shown when signing in via Technorati. Now Technorati can follow new people, update my profile and post tweets? All I wanted to do was log in to Technorati.com to comment on a post! Mozilla has just released their new solution for this, called Persona. First thought: oh great, another solution! But they are actually providing something a little different and better. It is based on an email address and isn't linked to our personal social networks or their information. Persona exists only to help with logging in to websites. No strings attached. Persona is based on a new standard called BrowserID, and you can read more about it here: How BrowserID Works. The goal is to integrate BrowserID into the browser at a deeper level so that no password entry is required at all. You can tell your web browser to just sign you in automatically. I am really hoping this takes off and will look at implementing it in current projects! I would recommend researching it, and let's hope it, or something like it, becomes a widespread reality in the future.

    Read the article

  • PeopleSoft CRM 9.2 Release Value Proposition

    - by Race Bannon
    Oracle's PeopleSoft Customer Relationship Management (CRM) delivers solutions that have been tailored to fit your industry business processes, your customer strategies, and your success criteria. With PeopleSoft CRM 9.2, organizations will be able to deploy a solution that delivers built-in best practices specific to your industry on a highly configurable, tightly integrated platform, ensuring that solutions will be fast to implement. The result is less configuration, less customization, and less integration. PeopleSoft Customer Relationship Management (CRM) is a world-class solution for organizations of every size, and Oracle's planned product roadmap for PeopleSoft applications is to deliver valuable, needed features for all of an organization's constituents along three design principles (Simplicity, Productivity, and Lowered Total Cost of Ownership), as well as new application functionality as prioritized by our customers. The upcoming 9.2 release of PeopleSoft Customer Relationship Management focuses on these themes of Simplicity, Productivity, and Lower Total Cost of Ownership while also delivering robust new functionality to help your organization succeed. The recently published PeopleSoft CRM 9.2 Release Value Proposition provides overviews of the new features and enhancements planned for these applications in Release 9.2. This document offers customers a road map intended to help them assess the business benefits of upgrading to the 9.2 release while also helping them plan their IT projects and investments. (Link is to a My Oracle Support page, available to customers and partners.) Oracle continues to deliver enterprise-wide features that enhance our customers' ownership experience and help them run their businesses more efficiently and profitably. With the CRM 9.2 release, we continue to abide by this firm commitment we've made to our customers.

    Read the article

  • Making a class pseudo-immutable by setting a flag

    - by scott_fakename
    I have a Java project that involves building some pretty complex objects. There are quite a lot (dozens) of different ones, and some of them have a HUGE number of parameters. They also need to be immutable. So I was thinking the builder pattern would work, but it ends up requiring a lot of boilerplate. Another potential solution I thought of was to make a mutable class but give it a "frozen" flag, a la Ruby. Here is a simple example:

        public class EqualRule extends Rule {
            private boolean frozen;
            private int target;

            public EqualRule() {
                frozen = false;
            }

            public void setTarget(int i) {
                if (frozen)
                    throw new IllegalStateException("Can't change frozen rule.");
                target = i;
            }

            public int getTarget() {
                return target;
            }

            public void freeze() {
                frozen = true;
            }

            @Override
            public boolean checkRule(int i) {
                return (target == i);
            }
        }

    Here "Rule" is just an abstract class with an abstract "checkRule" method. This cuts way down on the number of objects I need to write, while also giving me an object that becomes immutable for all intents and purposes. It kind of acts as if the object were its own builder... but not quite. I'm not too excited, however, about having an immutable object disguised as a bean. So I have two questions:
    1. Before I go too far down this path, are there any huge problems that anyone sees right off the bat? For what it's worth, it is planned that this behavior will be well documented...
    2. If so, is there a better solution?
    Thanks

    Read the article

  • How do I find broken symlinks automatically on Windows?

    - by HughG
    Not sure if this is bad style, but I'm asking this question here because I couldn't find the answer elsewhere, and then I worked out one solution on my own. I'd be interested to see other people's solutions, but after a few days I'll post my own. In my specific case I'm running Windows 7, but I'd be interested in answers for other/older versions of Windows. I realise that one answer is "install a version of Unix find, and then solve as for Unix", but I wanted a more "native" solution. EDIT 2012-07-17: Clarification: by "automatically" I ideally mean something I can run as part of a script, rather than a GUI tool which does all the work at the press of a button, because I'll want to do this unattended.
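
    Not a Windows-native answer, but as an illustration of the scriptable, unattended check being asked for, here is a minimal Python sketch (the root path is a placeholder); it relies on the fact that os.path.exists() follows symlinks while os.path.islink() does not:

        import os

        ROOT = r"C:\path\to\scan"  # placeholder: directory tree to check

        def broken_symlinks(root):
            """Yield paths of symlinks whose targets no longer exist."""
            for dirpath, dirnames, filenames in os.walk(root):
                for name in dirnames + filenames:
                    path = os.path.join(dirpath, name)
                    # islink() is true for any symlink; exists() follows the link,
                    # so a link with a missing target fails the second test.
                    if os.path.islink(path) and not os.path.exists(path):
                        yield path

        if __name__ == "__main__":
            for link in broken_symlinks(ROOT):
                print(link)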

    Read the article
