Search Results

Search found 15376 results on 616 pages for 'once'.

  • How to safely shutdown Guest OS in VirtualBox using command line

    - by Bakhtiyor
    I have Ubuntu 10.10 and use VirtualBox 3.2. As a Guest OS I have another Ubuntu inside VirtualBox. I start the Guest Ubuntu automatically with the following command once my Host Ubuntu boots: VBoxHeadless -startvm Ubuntu --vrdp on Then I can access it with ssh or tsclient. Now I need to automatically shut down the Guest Ubuntu once I shut down my Host Ubuntu. Does anybody know a safe method to shut down the Guest Ubuntu automatically from the command line? I have found two ways one can shut down a Guest OS, but I am not sure whether they are safe. Here they are: VBoxManage controlvm Ubuntu acpipowerbutton or VBoxManage controlvm Ubuntu poweroff
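    A minimal sketch of a safer shutdown sequence, assuming the VM is registered as "Ubuntu" and that the guest handles ACPI events (the Ubuntu default); the 120-second timeout is an arbitrary choice to tune:

        #!/bin/sh
        # Ask the guest to shut down cleanly via the ACPI power button,
        # then wait; only pull the (virtual) plug if it never stops.
        VM="Ubuntu"
        VBoxManage controlvm "$VM" acpipowerbutton
        for i in $(seq 1 120); do
            VBoxManage list runningvms | grep -q "\"$VM\"" || exit 0
            sleep 1
        done
        VBoxManage controlvm "$VM" poweroff   # hard stop, like cutting power

    In short, acpipowerbutton is the "safe" option of the two, since it lets the guest run its normal shutdown sequence, while poweroff is equivalent to yanking the power cord.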

    Read the article

  • TFS Hosting: discountasp.net TFS

    - by Enrique Lima
    In the last month or so I have been able to test and experience first hand the offering from discountasp.net for hosted TFS 2010. This first part is a description of the setup process for the account itself, plus some additional information on what you will find through the portal on their site.

    Not long ago, I posted a little tidbit on hosting TFS. Through it I also did a shameless plug for my employer, our services, and the type of hosting we recommend. So, wouldn't me running on discountasp.net be an issue? Actually? NO.

    Ok, enough rambling. Let's get some details here. It is a Software as a Service model. Through it we get Source Control, Version Control, Work Item Tracking and such. What about Build? If your need includes Build Management and such, you may need to look at some other options. But this is still a great offering for those who are moving from SourceSafe, or for organizations with 3 to 5 developers on staff who do not foresee getting larger anytime soon. Can it support more than 5 developers? Yes, but then we need to get into how you are using TFS. Do you need more than just Basic? For example, SharePoint and Reporting Services integration.

    The signup process was seamless: very easy to follow, complete, and transition to Visual Studio to start working. An email followed the signup process; it contained details on how to get to the Team Foundation Server Control Panel login. Once there, after the initial setup step of naming my Team Project Collection, I clicked the area to get my server info. Then it was a matter of getting the first user in there, and then on to connecting Visual Studio to my hosted TFS.

    With the server information and the user account created, I configured those options in Visual Studio. Using Team Explorer, I added a new server configuration. Once this is provided, click OK; you will be challenged for a username and password. Provide them, then click Close. You will now be connected to your server and Team Project Collection. Since this will likely be the first time connecting, you will have no Projects (I already have 2 going). Click Connect, and you will be back in Team Explorer.

    My next post on the topic will be on creating your first Team Project and uploading a Project Template to the server.

    Read the article

  • Google Analytics showing more unique visitors than there are pages on an intranet site

    - by DDEX
    I take care of a company intranet and measure the traffic with GA. I am absolutely sure that there are no more than 5000 URLs in our company, and it is impossible to reach the intranet from outside the company network. Yet when I check the number of Unique Visitors (UV) for the last year, GA says there were 36,500 of them. How is that possible? I thought UV should count each visitor only once in the given time period. Could anybody explain how this actually works? Can it be that the tracking cookies expire after some time, so visitors are counted more than once?

    Read the article

  • The future is looking brighter – debugging Windows Azure in the cloud with IntelliTrace

    - by Eric Nelson
    One of the “warts” on Windows Azure development has been that once your application was deployed to the cloud, if things went wrong it was pretty tough to figure out the root problem. I knew for some time that we had a solution coming for Visual Studio 2010 users, and I couldn’t wait to tell folks about it once it became public. I planned to do a detailed post after briefly mentioning it when I talked about the 1.2 SDK release. However … other stuff just keeps on getting in the way. Hence I have decided to point at Soma's blog post on just that. Enjoy. Check out Peering into the cloud with IntelliTrace. NB: You will need the Ultimate Edition of Visual Studio 2010 to use this feature. Sorry.

    Read the article

  • No 'Hardware' tab in audio and no profiles

    - by Gene
    If I run the 12.x Ubuntu (latest, May 2012) from the CD, I get full audio settings and sound playing from the speakers. Profiles let me change analog to digital in/out. Once I run the install from the same CD onto the laptop HD, after it boots the first time and I open the audio settings, there is no 'Hardware' tab and no way to change profiles. Worst part is the audio device is set to SPDIF, so nothing comes out of the speakers. Very odd how booting off the CD gives me analog audio, while installing to the HD and booting seems to limit the profile to something useless. The laptop is a 5-year-old Dell D820 with Nvidia 128MB video on a 1920x1200 screen and a T7200 CPU. I suspect if I could get the damn Hardware tab back in the audio settings, I could just select the proper analog profile - just as is the case when running from the boot CD. Searched the web, no similar problems found... any help appreciated!
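    A possible command-line workaround while the Hardware tab is missing, assuming PulseAudio is in use; the card index and profile name below are assumptions, so copy the exact strings the list command prints on your machine:

        # Show each card and the profiles it supports
        pactl list cards
        # Force the analog profile by hand (index and name are examples)
        pactl set-card-profile 0 output:analog-stereo+input:analog-stereo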

    Read the article

  • How to totally remove the blank screen screensaver?

    - by Xamidovic
    I have no screensaver installed, but when I watch movies, after a while a blank screen comes on and I have to get up every time and move the mouse to continue watching the movie. It really pisses me off. I found the command "gsettings set org.gnome.desktop.screensaver idle-activation-enabled false" which allegedly disables the blank screen... I put that command in once, and the blank screen keeps doing its crap. I did it again, and the blank screen still kicks in on its own and will not stop. Did it thrice, and nothing. Please, someone help me with this; I'm freaking out when trying to watch a decent movie. I should mention that I once had xscreensaver installed but removed it after a while. Don't know if that has something to do with the blank screen still coming on; maybe someone else would know. PLEASE HELP!
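    Blanking can come from several layers, so a hedged set of commands covering the usual ones; the gsettings keys are the GNOME 3/Unity ones, and xset handles blanking done by the X server itself:

        gsettings set org.gnome.desktop.session idle-delay 0   # never go idle
        gsettings set org.gnome.desktop.screensaver idle-activation-enabled false
        xset s off     # disable the X screen saver
        xset -dpms     # disable monitor power management (standby/suspend/off)

    Note that the xset changes last only for the current session unless repeated at login.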

    Read the article

  • Using a SQL Prompt snippet with template parameters

    - by SQLDev
    As part of my product management role I regularly attend trade shows and man the Red Gate booth in the vendor exhibition hall. Amongst other things this involves giving product demos to customers. Our latest demo involves SQL Source Control and SQL Test in a continuous integration environment. To demonstrate quite how easy it is to set up our tools from scratch, we start the demo by creating an entirely new database to link to source control, using an individual database name for each conference attendee. In SQL Server Management Studio this can be done either by selecting New Database from the Object Explorer or by executing "CREATE DATABASE DemoDB_John" in a query window.

    We recently extended the demo to include SQL Test. This uses an open source SQL Server unit testing framework called tSQLt (www.tsqlt.org), which has a CLR object that requires EXTERNAL_ACCESS to be set as follows:

        ALTER DATABASE DemoDB_John SET TRUSTWORTHY ON

    This isn't hard to do, but if you're giving demo after demo, this two-step process soon becomes tedious. This is where SQL Prompt snippets come into their own. I can create a snippet named create_demo_db for the following:

        CREATE DATABASE DemoDB_John
        GO
        USE DemoDB_John
        GO
        ALTER DATABASE DemoDB_John SET TRUSTWORTHY ON

    Now I just have to type the first few characters of the snippet name, select the snippet from SQL Prompt's candidate list, and execute the code. Simple! The problem is that this can only work once, due to the hard-coded database name. Luckily I can leverage a nice feature in SQL Server Management Studio called Template Parameters. I modify my snippet to be:

        CREATE DATABASE <DBName,, DemoDB_>
        GO
        USE <DBName,, DemoDB_>
        GO
        ALTER DATABASE <DBName,, DemoDB_> SET TRUSTWORTHY ON

    Once I've invoked the snippet, I can press Ctrl-Shift-M, which calls up the Specify Values for Template Parameters dialog, where I can type in my database name just once. Then click OK and run the query. Easy. Ideally I'd like SQL Prompt to auto-invoke the Template Parameters dialog for all snippets where it detects the angled-bracket syntax, but typing the keyboard shortcut is a small price to pay for the time savings.

    Read the article

  • Code refactoring with Visual Studio 2010 Part-4

    - by Jalpesh P. Vadgama
    I have been writing a few posts on the code refactoring features in Visual Studio 2010. This post is part of that series and will be the last of it. In it I am going to explain two features: 1) Encapsulate Field and 2) Extract Interface. Let's explore both features in detail.

    Encapsulate Field: This is a nice code refactoring feature provided by Visual Studio 2010. With its help we can create properties from the existing private fields of a class. Let's take a simple example, a Customer class with two private fields called firstName and lastName. Below is the code for the class.

        public class Customer
        {
            private string firstName;
            private string lastName;

            public string Address { get; set; }
            public string City { get; set; }
        }

    Now let's encapsulate the first field, firstName. Select the field, go to the Refactor menu in Visual Studio 2010, and click Encapsulate Field; a dialog box will appear. Once you click OK, a preview dialog opens, since we selected the option to preview reference changes - I think it's a good option to check, so you can preview the code the IDE is about to change. Once you click Apply, it creates a new property called FirstName. After doing the same for lastName, my Customer class looks like the following.

        public class Customer
        {
            private string firstName;
            public string FirstName
            {
                get { return firstName; }
                set { firstName = value; }
            }

            private string lastName;
            public string LastName
            {
                get { return lastName; }
                set { lastName = value; }
            }

            public string Address { get; set; }
            public string City { get; set; }
        }

    So you can see it is very easy to create properties from existing fields, and you don't have to change anything in the code yourself - the IDE changes it all for you.

    Extract Interface: When you are writing a software prototype and don't yet know the future implementation, it is good practice to code against an interface. Here is how we can extract an interface from existing code without writing a single line, with the help of Visual Studio 2010's code refactoring feature. I have created a simple repository class called CustomerRespository with three methods, as follows.

        public class CustomerRespository
        {
            public void Add()
            {
                // Some code to add customer
            }
            public void Update()
            {
                //some code to update customer
            }
            public void Delete()
            {
                //some code delete customer
            }
        }

    In the above class there are three methods - Add, Update, and Delete - where we will implement some code for each one. Now I want to create an interface that I can reuse for other entities in the project. Select the class, go to the Refactor menu, and click Extract Interface. It opens a dialog box where I selected all three methods for the interface. Once I click OK, it creates a new file called ICustomerRespository containing the interface.

        using System;

        namespace CodeRefractoring
        {
            interface ICustomerRespository
            {
                void Add();
                void Delete();
                void Update();
            }
        }

    Now let's see the code for our class. It is also changed as follows to implement the interface.

        public class CustomerRespository : ICustomerRespository
        {
            public void Add()
            {
                // Some code to add customer
            }
            public void Update()
            {
                //some code to update customer
            }
            public void Delete()
            {
                //some code delete customer
            }
        }

    Isn't that great? We created an interface and implemented it without writing a single line of code. Hope you liked it. Stay tuned for more. Till then, Happy Programming.

    Read the article

  • Drag and drop feature for a website

    - by gpuguy
    I have to design a website that will have drag and drop features for creating an e-card. You select items from a toolbox and drag and drop them onto the card area. Once you have completed the design, you can publish the e-card on the web by clicking the "Save and publish" button. What are the possible technologies for implementing this feature? The requirement is that the application should not degrade the performance of the website, and should not take much time to publish once the user clicks "Save and publish".
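    One possible building block is the native HTML5 drag-and-drop API; a minimal browser-side sketch, where the element ids and the draggable attribute are assumptions about the page markup:

        // Toolbox items need draggable="true" in the markup.
        const item = document.getElementById('toolbox-item');  // hypothetical id
        const card = document.getElementById('card-area');     // hypothetical id

        item.addEventListener('dragstart', function (e) {
            e.dataTransfer.setData('text/plain', item.id);
        });
        card.addEventListener('dragover', function (e) {
            e.preventDefault();               // required to allow a drop
        });
        card.addEventListener('drop', function (e) {
            e.preventDefault();
            const id = e.dataTransfer.getData('text/plain');
            const copy = document.getElementById(id).cloneNode(true);
            card.appendChild(copy);           // place a copy on the card
        });

    Libraries such as jQuery UI (draggable/droppable) wrap the same idea. "Save and publish" then only has to serialize the dropped elements and their positions, which keeps the publish step fast.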

    Read the article

  • How can the maximum number of simultaneous users to log in to Ubuntu server be increased?

    - by nixnotwin
    I use Ubuntu Server 10.04 on a fairly good machine, with a 2.40 dual-core processor and 2GB RAM. My users log in with ssh or samba. I have set up LDAP with PAM to sync user accounts between unix and samba. When I allowed about 90 users to log in over ssh at once, the server refused login for many of them. I am using dropbear as the ssh server. Even samba logins failed for many users. I need to allow at least 100 users to log in at once. Is there any way to do this?
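    Some hedged places to look, since refused logins around 90 users usually point to a resource cap rather than an ssh bug; check /var/log/auth.log first to confirm which limit is actually hit:

        # /etc/security/limits.conf - raise per-user caps enforced by PAM
        *   soft  nofile  8192
        *   hard  nofile  16384
        *   soft  nproc   4096

    If you ever switch to OpenSSH, MaxStartups and MaxSessions in sshd_config also cap concurrent logins; dropbear is configured through command-line options rather than a config file and, as far as I know, has no equivalent cap, so with it the limits usually come from PAM and ulimit rather than the daemon itself.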

    Read the article

  • Application Composer: Exposing Your Customizations in BI Analytics and Reporting

    - by Richard Bingham
    Introduction

    This article explains in simple terms how to ensure the customizations and extensions you have made to your Fusion Applications are available for use in reporting and analytics. It also includes four embedded demo videos from our YouTube channel (if they don't appear, check the browser address bar for a blocking shield icon). If you are new to Business Intelligence, consider first reviewing our getting started article; you can read more about the topic of custom subject areas in the documentation book Extending Sales.

    There are essentially four sections to this post. First we look at how custom fields added to standard objects are made available for reporting. Secondly we look at creating custom subject areas on the standard objects. Next we consider reporting on custom objects, starting with simple standalone objects, then child custom objects, and finally custom objects with relationships. Finally we review how flexfields are exposed for reporting. Whilst this article applies to both Cloud/SaaS and on-premises deployments, if you are an on-premises developer then you can also use the BI Administration Tool to customize your BI metadata repository (the RPD) and create new subject areas. Whilst this is not covered here, you can read more in Chapter 8 of the Extensibility Guide for Developers.

    Custom Fields on Standard Objects

    If you add a custom field to your standard object then it's likely you'll want to include it in your reports. This is very simple, since all new fields are instantly available in the "[objectName] Extension" folder in existing subject areas. The following two-minute video demonstrates this.

    Custom Subject Areas for Standard Objects

    You can create your own subject areas for use in analytics and reporting via Application Composer. An example use-case could be to simplify the seeded subject areas, since they sometimes contain complex data fields and internal values that could confuse business users. One thing to note is that you cannot create subject areas in a sandbox, as that is not supported by BI, so once your custom object is tested and complete you'll need to publish the sandbox before moving forward.

    The subject area creation process is essentially two-fold. Once the request is submitted, the ADF artifacts are generated; secondly, the related metadata is sent to the BI presentation server APIs to make the updates there. Note that this second step may take up to ten minutes to complete. Once finished, the status of the custom subject area request should show as 'OK' and it is then ready for use.

    Within the creation process's wizard-like steps there are three concepts worth highlighting:

    Date Flattening - this feature permits the roll-up of reports at various date levels, such as data by week, month, quarter, or year. You simply check the box to enable it for that date field.

    Measures - these are your own functions that you can build into the custom subject area. They are related to the field data type and include min/max for dates, and sum(), avg(), and count() for numeric fields.

    Implicit Facts - used to make the BI metadata join between your object fields and the calculated measure fields. The advice is to choose the most frequently used measure to ensure consistency.

    This video shows a simple example, where a simplified subject area is created for the customer 'Contact' standard object, picking just a few fields upon which users can then create reports.
    Custom Objects

    Custom subject areas support three types of custom objects. First is a simple standalone custom object, for which the same process mentioned above applies. The next is a custom child object created on a standard object parent, and finally a custom object that is related to a parent object - usually through a dynamic choice list. Whilst the steps for the last two are mostly the same, there are differences in the way you choose the objects and their fields, as illustrated in the videos below. The first video shows the process for creating a custom subject area for a simple standalone custom object. The second video demonstrates how to create custom subject areas for custom objects that are of parent:child type, as well as those with dynamic-choice-list relationships.

    Flexfields

    Dynamic and Extensible Flexfields satisfy a similar requirement as custom fields (for Application Composer), with flexfields common across the Fusion Financials, Supply Chain and Procurement, and HCM applications. The basic principle is that when you enable and configure your flexfields, the edit page under each segment region (for both global and context segments) has a BI Enabled check box. Once this is checked and you've completed your configuration, you run the Scheduled Process job named 'Import Oracle Fusion Data Extensions for Transactional Business Intelligence' to generate and migrate the related BI artifacts and data. This applies for dynamic, key, and extensible flexfields. Of course there is more to consider in terms of how you wish your flexfields to be implemented and exposed in your reports; details are given in Chapter 4 of the Extending Applications guide.

    Read the article

  • How do we provide valid time estimates during Sprint Planning without doing "too much" design?

    - by Michael Edenfield
    My team is getting up to speed with Scrum, but most of us are more familiar with non-agile or "pseudo-"agile methodologies. The part that is the biggest hurdle for us is running an efficient Sprint Planning meeting where we break our backlog items into tasks, and estimate hours. (I'm using the terminology from the VS2010 Scrum Template; apologies if I use the wrong word somewhere.) When we try to figure out how long a task is going to take, we often fall into the trap of designing the feature at the code level -- table layout, interfaces, etc -- in order to figure out how long that's going to take. I'm pretty sure this is not the appropriate place to be doing that kind of design. We should be scheduling tasks for these design meetings during the sprint. However, we are having trouble figuring out how else to come up with meaningful estimates for the tasks. Are there any practical habits/techniques/etc. for making a judgement call about how long a feature is going to take, without knowing how you plan to implement it? If our time estimates are going to change significantly once the design has been completed, how can we properly budget our Sprint backlog ahead of time? EDIT: Just to clarify, since some of the comments/answers are very valid but I think are addressing the wrong question. We know that what we're doing is not right, and that we should be building time into the sprint for this design. Conceptually all of the developers understand that. We are also bringing in a team member with Scrum experience to keep us on track if we start going off into the weeds. The problem is that, without going through this design process, we are finding it difficult to provide concrete time estimates for anything. We are constantly saying things like "well if we design it this way it might take 8 hours but if we end up having to do this other way instead that will take about 32 but it might not be as bad once we start trying to write it...". I also assume that this process will get better once we have some historical velocity to work from, but many of the technologies and architectural patterns we are using are new to us. But if potentially-wildly-wrong estimates are just a natural part of adopting this process then we will just need to recondition ourselves to accept that :)

    Read the article

  • Software Tuned to Humanity

    - by Phil Factor
    I learned a great deal from a cynical old programmer who once told me that the ideal length of time for a compiler to do its work was the same time it took to roll a cigarette. For development work, this is oh so true. After intently looking at the editing window for an hour or so, it was a relief to look up, stretch, focus the eyes on something else, and roll the possibly-metaphorical cigarette. This was software tuned to humanity. Likewise, a user’s perception of the “ideal” time that an application will take to move from frame to frame, to retrieve information, or to process their input has remained remarkably static for about thirty years, at around 200 ms. Anything else appears, and always has, to be either fast or slow. This could explain why commercial applications, unlike games, simulations and communications, aren’t noticeably faster now than they were when I started programming in the Seventies. Sure, they do a great deal more, but the SLAs that I negotiated in the 1980s for application performance are very similar to what they are nowadays. To prove to myself that this wasn’t just some rose-tinted misperception on my part, I cranked up a Z80-based Jonos CP/M machine (1985) in the roof-space. Within 20 seconds from cold, it had loaded Wordstar and I was ready to write. OK, I got it wrong: some things were faster 30 years ago. Sure, I’d now have had all sorts of animations, wizzy graphics, and other comforting features, but it seems a pity that we have used all that extra CPU and memory to increase the scope of what we develop, and the graphical prettiness, but not to speed the processes needed to complete a business procedure. Never mind the weight, the response time’s great! To achieve 200 ms response times on a Z80, or similar, performance considerations influenced everything one did as a developer. If it meant writing an entire application in assembly code, applying every smart algorithm and shortcut imaginable to get the application to perform to spec, then so be it. As a result, I’m a dyed-in-the-wool performance freak and find it difficult to change my habits. Conversely, many developers now seem to feel quite differently. While all will acknowledge that performance is important, it’s no longer the virtue it once was, and other factors such as user-experience now take precedence. Am I wrong? If not, then perhaps we need a new school of development technique to rival Agile, dedicated once again to producing applications that smoke the rear wheels rather than pootle elegantly to the shops; that forgo skeuomorphism, cute animation, or architectural elegance in favor of the smell of hot rubber. I struggle to name an application I use that is truly notable for its blistering performance, and would dearly love one to do my everyday work – just as long as it doesn’t go faster than my brain.

    Read the article

  • Help identify the pattern for reacting to updates

    - by Mike
    There's an entity that gets updated from external sources. Update events arrive at random intervals, and the entity has to be processed once updated. Multiple updates may be multiplexed; in other words, only the most current state of the entity needs to be processed. There's a point of no return during processing where the current state of the entity (and the state is consistent, i.e. no partial update is included) is saved somewhere else, and processing goes on independently of any arriving updates. Every subsequent set of updates has to trigger processing, i.e. the system should not forget about updates. And for each entity there should be no more than one processing run before the point of no return, i.e. the entity state should not be processed more than once. So what I'm looking for is a pattern to cancel the current processing before the point of no return, or abandon the processing results if an update arrives. The main challenge is to minimize race conditions and maintain integrity. The entity lives mainly in a database, with some files on disk, and the system is in .NET with web services and message queues. What comes to my mind is a database queue-like table. An arriving update inserts a row into that table and the processing is launched. The processing gathers the necessary data before the point of no return, and once it reaches this barrier it looks into the queue table and checks whether there are more recent updates for the entity. If there are new updates, the processing simply shuts down and its data is discarded. Otherwise the processing data is persisted and it goes beyond the point of no return. Though it looks like a solution to me, it is not quite elegant, and I believe this scenario may be supported by some sort of middleware. If I were to use message queues for this, I would need to access the queue API at the point of no return to check for the existence of new messages, and that approach also lacks elegance. Is there a name for this pattern, and an existing solution?
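    A sketch of the queue-table variant described above; table and column names are illustrative only, and the version check would run inside the same transaction that persists the results:

        -- Every arriving update bumps a monotonically increasing version.
        CREATE TABLE entity_updates (
            entity_id INT    NOT NULL,
            version   BIGINT NOT NULL,
            PRIMARY KEY (entity_id, version)
        );

        -- Processing starts by recording the version it works from ...
        DECLARE @working_version BIGINT;
        SELECT @working_version = MAX(version)
        FROM entity_updates WHERE entity_id = @entity_id;

        -- ... and re-checks just before the point of no return: if a newer
        -- version exists, discard this run; that update's own processing
        -- pass will pick up the current state instead.
        SELECT CASE WHEN MAX(version) > @working_version
                    THEN 1 ELSE 0 END AS must_abandon
        FROM entity_updates WHERE entity_id = @entity_id;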

    Read the article

  • More about JuJu cache

    - by koxon
    I would like to know more about the JuJu cache. After I start a charm and JuJu provisions a machine for the first time, the charm is downloaded and all dependencies are installed (apt-get, etc.). This process can be very long. Once the charm has been built, configured, and deployed once, can JuJu provision more instances of the same charm pre-built and configured? I presume that's what the cache is for, but the documentation is not very explicit: https://juju.ubuntu.com/docs/charms-deploying.html How does JuJu track the state of the charm, if it works that way? Thanks Nic
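    A sketch assuming the Juju 1.x CLI of the time: the downloaded charm archive is cached per environment, so later units skip the download, but install hooks (apt-get and friends) still run on each newly provisioned machine:

        juju deploy mysql          # first deploy: fetches and caches the charm
        juju add-unit -n 2 mysql   # new units reuse the cached charm archive

    If the slow part is hook execution rather than the download, the usual answers are to make the hooks faster (e.g. a local apt mirror) or to provision from a pre-built image, since the cache alone does not skip hook runs.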

    Read the article

  • What do you call the process of converting line breaks into html elements?

    - by Ben Lee
    On sites with user-created content (such as Programmers SE) or blogging software back-ends, line breaks entered by the user in the content area are frequently converted into <br> and/or <p> tags when rendered on the front-end. For example, this:

        A limerick

        There once was a man from Nantucket
        Who kept all his cash in a bucket.

    might render HTML like this:

        <p>
        A limerick
        </p>
        <p>
        There once was a man from Nantucket<br>
        Who kept all his cash in a bucket.
        </p>

    What is the standard name for this process of converting line breaks into HTML?
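    A tiny sketch of the conversion itself (template engines often expose nl2br-style helpers for the <br> half); hedged, since the exact paragraph-splitting rules vary by platform:

        // Double newlines become paragraph breaks, single ones become <br>.
        function lineBreaksToHtml(text) {
            return text
                .split(/\n{2,}/)                 // blank lines split paragraphs
                .map(function (p) {
                    return '<p>' + p.replace(/\n/g, '<br>') + '</p>';
                })
                .join('\n');
        }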

    Read the article

  • What is a good solution for UA testing multiple projects simultaneously?

    - by Eric Belair
    My client often has several projects/tasks going at once that sometimes need to be tested simultaneously on one website. They are often separate applications on the website, but sometimes share UDFs, etc. We currently have three public-facing environment websites - i.e. dev.website.com, test.website.com, and www.website.com. As the programmer, I'm trying to find a good solution to allow for UA testing of multiple projects/tasks at once. Currently, I'm finding myself switching between code branches (using Subversion). What are some of my options?

    Read the article

  • Partitioning with preseed help

    - by kostasp
    I have a server that has 4 HDs inside, all in standalone configuration (no hardware RAID). Using preseed, I want to create a "regular" partition on disk 1, on which I'll install Ubuntu, and create a RAID 0 array with the remaining three disks. Is this possible? Can I use partman-auto/method twice inside the preseed file, once for regular and once for RAID? I need this for unattended provisioning, so I have to set up the disks inside the preseed file. Thanking you all in advance for your time. Costas
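    An untested sketch of the usual approach: a debconf key such as partman-auto/method holds a single value, so it cannot be set twice in one preseed; instead the raid method is used once, with a recipe that also describes the plain partition:

        d-i partman-auto/disk string /dev/sda /dev/sdb /dev/sdc /dev/sdd
        d-i partman-auto/method string raid
        # partman-auto/expert_recipe and partman-auto-raid/recipe would then
        # define the regular layout on sda and the RAID 0 set over sdb/sdc/sdd;
        # the exact recipe grammar is in the installer's preseed appendix.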

    Read the article

  • Advice: should I focus on PHP + MySQL, or split my time for more JS and CSS? [closed]

    - by fakaff
    I started learning web development about three months ago (in between working my regular job), and I'm finally starting to get some vague, distant notion of understanding. I find the server-side stuff the most interesting, though I've not gone anywhere near Apache quite yet, which I assume will be necessary at some point. As cool as toying around with visuals and UI is, programming and database stuff inspires me with new ideas and possibilities every minute (I've even bought, on a whim, a wonderfully dry bunch of books on database theory and relational algebra). And whatever CSS or Javascript tutorial I'm doing, it often feels like a distraction from the PHP/MySQL stuff I'd rather be playing with. For someone like me who's just starting out, which is the more advisable course of action (in terms of being marketable as a programmer): to focus on PHP and SQL exclusively, and only diversify my skills once I master those; or to first learn all three (PHP/MySQL, Javascript, CSS and design) and only focus on PHP and databases once I'm fluent in all three?

    Read the article
