Search Results

Search found 11277 results on 452 pages for 'jeff certain'.

Page 50/452 | < Previous Page | 46 47 48 49 50 51 52 53 54 55 56 57  | Next Page >

  • .NET development on a “Retina” MacBook Pro

    - by Jeff
    The rumor that Apple would release a super high resolution version of its 15” laptop has been around for quite a while, and it’s one I watched closely. After more than three years with a 17” MacBook Pro, and all of the screen real estate it offered, I was ready to replace it with something much lighter. It was a fantastic machine, still getting 6 or 7 hours of battery life after 460 charge cycles, but I wanted lighter and faster. With the SSD I put in it, I was able to sell it for $750.

    The appeal of higher resolution goes way back to when I would plug into a projector and scale up. Consolas, as it turns out, is a nice looking font for code when it’s bigger. While I’m mostly indifferent to iOS, I have to admit that a higher dot pitch on the iPhone and iPad is pretty to look at. So I ordered the new 15” “Retina” model as soon as the Apple Store went live with it, and got it seven days later.

    I’ve primarily been using Parallels as my VM of choice on OS X for about five years. They recently put out an update for compatibility with the display, though I’m not entirely sure what that means. I figured there would have to be some messing around to get the VM to look right. The combination that seems to work best is this: set the display in OS X to “more room,” which is roughly the equivalent of the 1920x1200 that my 17” did. It’s not as stunning as the text at the default 1440x900 equivalent (in OS X), but it’s still quite readable. Parallels still doesn’t entirely know what to do with the high resolution, though what it should do is somehow treat it as native. That flaw aside, I set the Windows 7 scaling to 125%, and it generally looks pretty good. It’s not really taking advantage of the display for sharpness, but hopefully that’s something that Parallels will figure out.

    Screen tweaking aside, I got the base model with 16 gigs of RAM, so I give the VM 8. I can boot a Windows 7 VM in 9 seconds. Nine seconds! The Windows Experience Index scores are all 7 and above, except for graphics, which are both at 6. Again, that’s in a VM. It’s hard to believe there’s something so fast in a little slim package like that. Hopefully this one gets me at least three years, like the last one.

    Read the article

  • AJI Report 14 – Brian Lagunas on XAML and Windows 8

    - by Jeff Julian
    We sat down with Brian at the Iowa Code Camp to talk about his sessions, WPF, Application Design, and what Infragistics has to offer developers. Infragistics is a huge supporter of regional events like Iowa Code Camp and we want to thank them for their support of the Midwest region. Brian is a sharp guy and it was great to meet him and learn more about what makes him tick. Brian Lagunas is an INETA Community Speaker, co-leader of the Boise .Net Developers User Group (NETDUG), and original author of the Extended WPF Toolkit. He is a multi-recipient of the Microsoft Community Contributor Award and can be found speaking at a variety of user groups and code camps around the nation. Brian currently works at Infragistics as a Product Manager for the award winning NetAdvantage for WPF and Silverlight components. Before geeking out, Brian served his country in the United States Army as an infantryman and later served his local community as a deputy sheriff.   Listen to the Show   Site: http://brianlagunas.com Twitter: @BrianLagunas

    Read the article

  • AJI Report with Nat Ryan – Discussion about Game Development with Corona Labs SDK

    - by Jeff Julian
    We sat down with Nat Ryan of Fully Croisened to talk about Game Development and the Corona Labs framework. The Corona SDK is a platform that allows you to write mobile games or applications using the Lua language and deploy to the iOS and Android platforms. One of the great features of Corona is that the compilation output is a native application and not a hybrid application. Corona is very centered on its developer community, and there are quite a few local meetups focused on helping other developers use the platform. The community and the Corona site offer a great number of resources and samples that will help you get started in a matter of a few days. If you are into Game Development and want to move toward mobile, or you are a business developer looking to turn your craft back into a hobby, check out this recording and Corona Labs to get started.   Download the Podcast   Site: AJI Report – @AJISoftware Site: Fully Croisened Twitter: @FullyCroisened Site: Corona Labs

    Read the article

  • Can't get wireless on macbook pro 8,2

    - by Jeff
    I'm a Linux newb, and I have tried several of the fixes listed to try to get my wifi drivers to work, but to no avail. Does anyone here know why this isn't working for me, or better yet, how to fix it? Under lspci -vvv I get the following output:

    03:00.0 Network controller: Broadcom Corporation BCM4331 802.11a/b/g/n (rev 02)
        Subsystem: Apple Inc. AirPort Extreme
        Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast TAbort- SERR-
        Kernel modules: bcma

    With sudo lshw -class network I get this output:

    *-network UNCLAIMED
        description: Network controller
        product: BCM4331 802.11a/b/g/n
        vendor: Broadcom Corporation
        physical id: 0
        bus info: pci@0000:03:00.0
        version: 02
        width: 64 bits
        clock: 33MHz
        capabilities: pm msi pciexpress bus_master cap_list
        configuration: latency=0
        resources: memory:b0600000-b0603fff

    Any help would be greatly appreciated!

    Read the article

  • Personal | First Stop on our trip, St. Louis

    - by Jeff Julian
    St. Louis is definitely a cool city. I have always looked at it as Kansas City’s big brother. I love the Arch, wonder what it would be like to have pro hockey, really like the downtown area, and have some great friends who live there. The reason we left for St. Louis on Thursday evening was to get a head start on our journey. Since we were doing a Diners, Drive-ins, and Dives tour, it made sense to have the journey start there. We picked the Hyatt Downtown as our hotel because they had an Arch Package that was supposed to get you tickets to the Arch so you didn’t need to arrive early and wait in line. That ended up not working because the Arch had been selling out every day and they were no longer accepting the hotel’s tickets. No biggie, and the hotel did try very hard to get us tickets, but we just took our chances in the line and waited. We walked over to the park, waited about 20 minutes for the doors to open, had tickets after another 20 minutes of waiting in line, and at that point walked right up and were able to get to the elevators.

    I want to stop here to have a little aside. I don’t know who started the rumor that the Arch ride is scary, but it is not. You do sit in a small pod, but it’s like the ascent of a roller coaster to the top of the first drop, combined with an elevator that has no windows to the outside. Nothing to be afraid of here if you aren’t claustrophobic. If you are afraid of small spaces, stay clear of this ride. Once you get to the top, you walk up 10 to 30 stairs depending on which car you were in (the lower the number, the fewer stairs you climb), and you are then at the top in a decent-sized room where you look out the windows. Beautiful view of the city. I don’t typically like heights, but this felt like being inside a building and not hanging out on a roof. Here is the view from the Arch:

    Related Tags: Diners, Drive-ins, and Dives, St. Louis, Vacation

    Read the article

  • Site too large to officially use Google Analytics?

    - by Jeff Atwood
    We just got this email from the Google Analytics team:

    We love that you love our product and use it as much as you do. We have observed however, that a website you are tracking with Google Analytics is sending over 1 million hits per day to Google Analytics servers. This is well above the "5 million pageviews per month per account" limit specified in the Google Analytics Terms of Service. Processing this amount of data multiple times a day takes up valuable resources that enable us to continue to develop the product for all Google Analytics users. As such, starting August 23rd, 2010, the metrics in your reports will be updated once a day, as opposed to multiple times during the course of the day. You will continue to receive all the reports and features in Google Analytics as usual. The only change will be that data for a given day will appear the following day. We trust you understand the reasons for this change.

    I totally respect this decision, and I think it's very generous to not kick us out. But how do we do this the right way -- what's the official, blessed Google way to use Google Analytics if you're a "whale" website with lots of hits per day? Or, are there other analytics services that would be more appropriate for very large websites?

    Read the article

  • Lessons from rewriting POP Forums for MVC, open source-like

    - by Jeff
    It has been a ton of work, interrupted over the last two years by unemployment, moving, a baby, failing to sell houses and other life events, but it's really exciting to see POP Forums v9 coming together. I'm not even sure when I decided to really commit to it as an open source project, but working on the same team as the CodePlex folks probably had something to do with it. Moving along the roadmap I set for myself, the app is now running on a quasi-production site... we launched MouseZoom last weekend. (That's a post-beta 1 build of the forum. There's also some nifty Silverlight DeepZoom goodness on that site.)

    I have to make a point to illustrate just how important starting over was for me. I started this forum thing for my sites in old ASP more than ten years ago. What a mess that stuff was, including SQL injection vulnerabilities and all kinds of crap. It went to ASP.NET in 2002, but even then, it felt a little too much like script. More than a year later, in 2003, I did an honest to goodness rewrite. If you've been in this business of writing code for any amount of time, you know how much you hate what you wrote a month ago, so just imagine that with seven years in between. The subsequent versions still carried a fair amount of crap, and that's why I had to start over, to make a clean break. Mind you, much of that crap is still running on some of my production sites in a stable manner, but it's a pain in the ass to maintain.

    So with that clean break, there is much that I have learned. These are a few of those lessons, in no particular order...

    Avoid shiny object syndrome
    Over the years, I've embraced new things without bothering to ask myself why. I remember spending the better part of a year trying to adapt this app to use the membership and profile APIs in ASP.NET, just because they were there. They didn't solve any known problem. Early on in this version, I dabbled in exotic ORMs, even though I already had the fundamental SQL that I knew worked. I bloated up the client side code with all kinds of jQuery UI and plugins just because, and it got in the way. All the new shiny can be distracting, and I've come to realize that I've allowed it to be a distraction most of my professional life.

    Just query what you need
    I've spent a lot of time over-thinking how to query data. In the SQL world, this means exotic joins, special caches, the read-update-commit loop of ORMs, etc. There are times when you have to remind yourself that you aren't Facebook, you'll never be Facebook, and that databases are in fact intended to serve data. In a lot of projects, back in the day, I used to have these big, rich data objects and pass them all over the place, through various application tiers, when in reality, all I needed was some ID from the entity. I try to be mindful of how many queries hit the database on a given request, but I don't obsess over it. I just get what I need.

    Don't spend too much time worrying about your unit tests
    If you've looked at any of the tests for POP Forums, you might offer an audible WTF. That's OK. There's a whole lot of mocking going on. In some cases, it points out where you're doing too much, and that's good for improving your design. In other cases it shows where your design sucks. But the biggest trap of unit testing is that you worry it should be prettier. That's a waste of time. When you write a test, in many cases before the production code, the important part is that you're testing the right thing. If you have to mock up a bunch of stuff to test the outcome, so be it, but it's not wasted time. You're still doing the typical arrange-act-assert deal, and you'll be able to read that later if you need to.

    Get back to your HTTP roots
    ASP.NET Webforms did a reasonably decent job at abstracting us away from the stateless nature of the Web. A lot of people criticize it, but I think it all worked pretty well. These days, with MVC, jQuery, REST services, and what not, we've gone back to thinking about the wire. The nuts and bolts passing between our Web browser and server matter. This doesn't make things harder, in my opinion, it makes them easier. There is something incredibly freeing about how we approach development of Web apps now. HTTP is a really simple protocol, and the stuff we push through it, in particular HTML and JSON, is pretty simple too. The debugging points are really easy to trap and trace.

    Premature optimization is premature
    I'll go back to the data thing for a moment. I've been known to look at a particular action or use case and stress about the number of calls that are made to the database. I'm not suggesting that it's a bad thing to keep these in mind, but if you worry about it outside of the context of the actual impact, you're wasting time. For example, I query the database for last read times in a forum separately from the user and the list of forums. The impact on performance barely exists. If I put it under load, exceeding the kind of load I expect, it still barely has an impact. Then consider it only counts for logged in users. The context of this "inefficient" action is that it doesn't matter. Did I mention I won't be Facebook?

    Solve your own problems first
    This is another trap I've fallen into. I've often thought about what other people might need for some feature or aspect of the app. In other words, I was willing to make design decisions based on non-existent data. How stupid is that? When I decided to truly open source this thing, building for myself first was a stated design goal. This app has to serve the audiences of CoasterBuzz, MouseZoom and other sites first. In this development scenario, you don't have access to mountains of usability studies or user focus groups. You have to start with what you know.

    I'm sure there are other points I could make too. It has been a lot of fun to work on, and I look forward to evolving the UI as time goes on. That's where I hope to see more magic in the future.
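    To make that arrange-act-assert point concrete, here is a minimal sketch of a heavily mocked test in C#. The repository interface, the service, and the test below are hypothetical stand-ins invented for illustration rather than actual POP Forums types, and the sketch assumes the Moq and xUnit packages are referenced.

```csharp
// Hypothetical types for illustration -- not the actual POP Forums API.
using Moq;
using Xunit;

public interface IPostRepository
{
    int GetPostCount(int topicID);
}

public class TopicStatsService
{
    private readonly IPostRepository _postRepository;

    public TopicStatsService(IPostRepository postRepository)
    {
        _postRepository = postRepository;
    }

    // Reply count excludes the original post.
    public int GetReplyCount(int topicID)
    {
        var count = _postRepository.GetPostCount(topicID);
        return count > 0 ? count - 1 : 0;
    }
}

public class TopicStatsServiceTests
{
    [Fact]
    public void GetReplyCount_ExcludesOriginalPost()
    {
        // Arrange: mock the repository so no database is involved.
        var repo = new Mock<IPostRepository>();
        repo.Setup(r => r.GetPostCount(123)).Returns(5);
        var service = new TopicStatsService(repo.Object);

        // Act
        var replies = service.GetReplyCount(123);

        // Assert: 5 posts means 4 replies.
        Assert.Equal(4, replies);
    }
}
```

    The mock keeps the database out of the test entirely; it looks like a lot of setup, but it is still testing the right thing.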

    Read the article

  • Unleash AutoVue on Your Unmanaged Data

    - by [email protected]
    Over the years, I've spoken to hundreds of customers who use AutoVue to collaborate on their "managed" data stored in content management systems, product lifecycle management systems, etc. via our many integrations. Through these conversations I've also learned a harsh reality - we will never fully move away from unmanaged data (desktops, file servers, emails, etc). If you use AutoVue today you already know that even if your primary use is viewing content stored in a content management system, you can still open files stored locally on your computer. But did you know that AutoVue actually has - built-in - a great solution for viewing, printing and redlining your data stored on file servers? Using the 'Server protocol' you can point AutoVue directly to a top-level location on any networked file server and provide your users with a link or shortcut to access an interface similar to the sample page shown below. Many customers link to pages just like this one from their internal company intranets. Through this webpage, users can easily search and browse through file server data with a 'click-and-view' interface to find the specific image, document, drawing or model they're looking for. Any markups created on a document will be accessible to everyone else viewing that document and of course real-time collaboration is supported as well. Customers on maintenance can consult the AutoVue Admin guide or My Oracle Support Doc ID 753018.1 for an introduction to the server protocol. Contact your local AutoVue Solutions Consultant for help setting up the sample shown above.

    Read the article

  • Solving Euler Project Problem Number 1 with Microsoft Axum

    - by Jeff Ferguson
    Note: The code below applies to version 0.3 of Microsoft Axum. If you are not using this version of Axum, then your code may differ from that shown here. I have just solved Problem 1 of Project Euler using Microsoft Axum. The problem statement is as follows: If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find the sum of all the multiples of 3 or 5 below 1000. My Axum-based solution is as follows:

    namespace EulerProjectProblem1
    {
      // http://projecteuler.net/index.php?section=problems&id=1
      //
      // If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9.
      // The sum of these multiples is 23.
      // Find the sum of all the multiples of 3 or 5 below 1000.

      channel SumOfMultiples
      {
        input int Multiple1;
        input int Multiple2;
        input int UpperBound;
        output int Sum;
      }

      agent SumOfMultiplesAgent : channel SumOfMultiples
      {
        public SumOfMultiplesAgent()
        {
          int Multiple1 = receive(PrimaryChannel::Multiple1);
          int Multiple2 = receive(PrimaryChannel::Multiple2);
          int UpperBound = receive(PrimaryChannel::UpperBound);
          int Sum = 0;
          for(int Index = 1; Index < UpperBound; Index++)
          {
            if((Index % Multiple1 == 0) || (Index % Multiple2 == 0))
              Sum += Index;
          }
          PrimaryChannel::Sum <-- Sum;
        }
      }

      agent MainAgent : channel Microsoft.Axum.Application
      {
        public MainAgent()
        {
          var SumOfMultiples = SumOfMultiplesAgent.CreateInNewDomain();
          SumOfMultiples::Multiple1 <-- 3;
          SumOfMultiples::Multiple2 <-- 5;
          SumOfMultiples::UpperBound <-- 1000;
          var Sum = receive(SumOfMultiples::Sum);
          System.Console.WriteLine(Sum);
          System.Console.ReadLine();
          PrimaryChannel::ExitCode <-- 0;
        }
      }
    }

    Let’s take a look at the various parts of the code. I begin by setting up a channel called SumOfMultiples that accepts three inputs and one output. The first two of the three inputs will represent the two possible multiples, which are three and five in this case. The third input will represent the upper bound of the problem scope, which is 1000 in this case. The lone output of the channel represents the sum of all of the matching multiples:

    channel SumOfMultiples
    {
      input int Multiple1;
      input int Multiple2;
      input int UpperBound;
      output int Sum;
    }

    I then set up an agent that uses the channel. The agent, called SumOfMultiplesAgent, receives the three inputs sent to it over the channel, stores the results in local variables, and performs the for loop that iterates from 1 to the received upper bound. The agent keeps track of the sum in a local variable and stores the sum in the output portion of the channel:

    agent SumOfMultiplesAgent : channel SumOfMultiples
    {
      public SumOfMultiplesAgent()
      {
        int Multiple1 = receive(PrimaryChannel::Multiple1);
        int Multiple2 = receive(PrimaryChannel::Multiple2);
        int UpperBound = receive(PrimaryChannel::UpperBound);
        int Sum = 0;
        for(int Index = 1; Index < UpperBound; Index++)
        {
          if((Index % Multiple1 == 0) || (Index % Multiple2 == 0))
            Sum += Index;
        }
        PrimaryChannel::Sum <-- Sum;
      }
    }

    The application’s main agent, therefore, simply creates a new SumOfMultiplesAgent in a new domain, prepares the channel with the inputs that we need, and then receives the Sum from the output portion of the channel:

    agent MainAgent : channel Microsoft.Axum.Application
    {
      public MainAgent()
      {
        var SumOfMultiples = SumOfMultiplesAgent.CreateInNewDomain();
        SumOfMultiples::Multiple1 <-- 3;
        SumOfMultiples::Multiple2 <-- 5;
        SumOfMultiples::UpperBound <-- 1000;
        var Sum = receive(SumOfMultiples::Sum);
        System.Console.WriteLine(Sum);
        System.Console.ReadLine();
        PrimaryChannel::ExitCode <-- 0;
      }
    }

    The result of the calculation (which, by the way, is 233,168) is sent to the console using good ol’ Console.WriteLine().
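    For comparison, here is a plain C# sketch of the same computation for anyone who doesn't have the Axum bits handy. It drops the channel-and-agent plumbing and just runs the loop sequentially; the class and method names are my own choices, not part of the original post.

```csharp
// A plain C# version of the same calculation: sum the multiples of 3 or 5
// below the upper bound. Prints 233168 for an upper bound of 1000.
using System;

class EulerProblem1
{
    static int SumOfMultiples(int multiple1, int multiple2, int upperBound)
    {
        int sum = 0;
        for (int index = 1; index < upperBound; index++)
        {
            if ((index % multiple1 == 0) || (index % multiple2 == 0))
                sum += index;
        }
        return sum;
    }

    static void Main()
    {
        Console.WriteLine(SumOfMultiples(3, 5, 1000)); // 233168
    }
}
```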

    Read the article

  • Calculating Screen Resolutions Using WPF

    - by Jeff Ferguson
    WPF measures all elements in device independent pixels (DIPs). These DIPs equate to device pixels if the current display monitor is set to the default of 96 DPI. However, for monitors set to a DPI setting that is different from 96 DPI, WPF DIPs will not correspond directly to monitor pixels. Consider, for example, the WPF properties SystemParameters.PrimaryScreenHeight and SystemParameters.PrimaryScreenWidth. If your monitor resolution is set to 1024 pixels wide by 768 pixels high, and your monitor is set to 96 DPI, then WPF will report the value of SystemParameters.PrimaryScreenHeight as 768 and the value of SystemParameters.PrimaryScreenWidth as 1024. No problem. This aligns nicely because the WPF device independent pixel value (96) matches your monitor's DPI setting (96). However, if your monitor is not set to display pixels at 96 DPI, then SystemParameters.PrimaryScreenHeight and SystemParameters.PrimaryScreenWidth will not return what you expect. The values returned by these properties may be greater than or less than what you expect, depending on whether your monitor's DPI value is less than or greater than 96. Since the SystemParameters.PrimaryScreenHeight and SystemParameters.PrimaryScreenWidth properties are WPF properties, their values are measured in WPF DIPs, rather than taking monitor DPI into account. Once again: WPF measures all elements in device independent pixels (DIPs). To combat this issue, you must take your monitor's DPI setting into account if you're looking for the monitor's actual pixel width and height. The handy code block below will help you calculate these values regardless of the DPI setting on your monitor:

    Window MainWindow = Application.Current.MainWindow;
    PresentationSource MainWindowPresentationSource = PresentationSource.FromVisual(MainWindow);
    Matrix m = MainWindowPresentationSource.CompositionTarget.TransformToDevice;
    double DpiWidthFactor = m.M11;
    double DpiHeightFactor = m.M22;
    double ScreenHeight = SystemParameters.PrimaryScreenHeight * DpiHeightFactor;
    double ScreenWidth = SystemParameters.PrimaryScreenWidth * DpiWidthFactor;

    The values of ScreenHeight and ScreenWidth should, after this code is executed, match the resolution that you see in the display's Properties window.
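    For a self-contained version, here is a sketch that wraps the same calculation in a helper. One assumption worth calling out: it must run after the main window has been shown (for example, from the window's Loaded event handler), because PresentationSource.FromVisual returns null before the window has a presentation source.

```csharp
// A self-contained sketch of the approach described above, assuming it runs
// inside a WPF application after the main window has been shown.
using System.Windows;
using System.Windows.Media;

public static class ScreenResolution
{
    public static Size GetPhysicalPixelSize()
    {
        Window mainWindow = Application.Current.MainWindow;
        PresentationSource source = PresentationSource.FromVisual(mainWindow);

        // M11 and M22 are the X and Y scale factors from DIPs to device
        // pixels, e.g. 1.25 on a monitor configured for 120 DPI.
        Matrix m = source.CompositionTarget.TransformToDevice;
        double dpiWidthFactor = m.M11;
        double dpiHeightFactor = m.M22;

        return new Size(
            SystemParameters.PrimaryScreenWidth * dpiWidthFactor,
            SystemParameters.PrimaryScreenHeight * dpiHeightFactor);
    }
}
```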

    Read the article

  • Misunderstanding Scope in JavaScript?

    - by Jeff
    I've seen a few other developers talk about binding scope in JavaScript but it has always seemed to me like this is an inaccurate phrase. The Function.prototype.call and Function.prototype.apply don't pass scope around between two methods; they change the caller of the function - two very different things. For example:

    function outer() {
      var item = { foo: 'foo' };
      var bar = 'bar';
      inner.apply(item, null);
    }

    function inner() {
      console.log(this.foo); // foo
      console.log(bar);      // ReferenceError: bar is not defined
    }

    If the scope of outer was really passed into inner, I would expect that inner would be able to access bar, but it can't. bar was in scope in outer and it is out of scope in inner. Hence, the scope wasn't passed. Even the Mozilla docs don't mention anything about passing scope: Calls a function with a given this value and arguments provided as an array. Am I misunderstanding scope or specifically scope as it applies to JavaScript? Or is it these other developers that are misunderstanding it?

    Read the article

  • Oracle Solaris: Zones on Shared Storage

    - by Jeff Victor
    Oracle Solaris 11.1 has several new features. At oracle.com you can find a detailed list. One of the significant new features, and the most significant new feature releated to Oracle Solaris Zones, is casually called "Zones on Shared Storage" or simply ZOSS (rhymes with "moss"). ZOSS offers much more flexibility because you can store Solaris Zones on shared storage (surprise!) so that you can perform quick and easy migration of a zone from one system to another. This blog entry describes and demonstrates the use of ZOSS. ZOSS provides complete support for a Solaris Zone that is stored on "shared storage." In this case, "shared storage" refers to fiber channel (FC) or iSCSI devices, although there is one lone exception that I will demonstrate soon. The primary intent is to enable you to store a zone on FC or iSCSI storage so that it can be migrated from one host computer to another much more easily and safely than in the past. With this blog entry, I wanted to make it easy for you to try this yourself. I couldn't assume that you have a SAN available - which is a good thing, because neither do I! What could I use, instead? [There he goes, foreshadowing again... -Ed.] Developing this entry reinforced the lesson that the solution to every lab problem is VirtualBox. Oracle VM VirtualBox (its formal name) helps here in a couple of important ways. It offers the ability to easily install multiple copies of Solaris as guests on top of any popular system (Microsoft Windows, MacOS, Solaris, Oracle Linux (and other Linuxes) etc.). It also offers the ability to create a separate virtual disk drive (VDI) that appears as a local hard disk to a guest. This virtual disk can be moved very easily from one guest to another. In other words, you can follow the steps below on a laptop or larger x86 system. Please note that the ability to use ZOSS to store a zone on a local disk is very useful for a lab environment, but not so useful for production. I do not suggest regularly moving disk drives among computers. In the method I describe below, that virtual hard disk will contain the zone that will be migrated among the (virtual) hosts. In production, you would use FC or iSCSI LUNs instead. The zonecfg(1M) man page details the syntax for each of the three types of devices. Why Migrate? Why is the migration of virtual servers important? Some of the most common reasons are: Moving a workload to a different computer so that the original computer can be turned off for extensive maintenance. Moving a workload to a larger system because the workload has outgrown its original system. If the workload runs in an environment (such as a Solaris Zone) that is stored on shared storage, you can restore the service of the workload on an alternate computer if the original computer has failed and will not reboot. You can simplify lifecycle management of a workload by developing it on a laptop, migrating it to a test platform when it's ready, and finally moving it to a production system. Concepts For ZOSS, the important new concept is named "rootzpool". You can read about it in the zonecfg(1M) man page, but here's the short version: it's the backing store (hard disk(s), or LUN(s)) that will be used to make a ZFS zpool - the zpool that will hold the zone. This zpool: contains the zone's Solaris content, i.e. 
the root file system does not contain any content not related to the zone can only be mounted by one Solaris instance at a time Method Overview Here is a brief list of the steps to create a zone on shared storage and migrate it. The next section shows the commands and output. You will need a host system with an x86 CPU (hopefully at least a couple of CPU cores), at least 2GB of RAM, and at least 25GB of free disk space. (The steps below will not actually use 25GB of disk space, but I don't want to lead you down a path that ends in a big sign that says "Your HDD is full. Good luck!") Configure the zone on both systems, specifying the rootzpool that both will use. The best way is to configure it on one system and then copy the output of "zonecfg export" to the other system to be used as input to zonecfg. This method reduces the chances of pilot error. (It is not necessary to configure the zone on both systems before creating it. You can configure this zone in multiple places, whenever you want, and migrate it to one of those places at any time - as long as those systems all have access to the shared storage.) Install the zone on one system, onto shared storage. Boot the zone. Provide system configuration information to the zone. (In the Real World(tm) you will usually automate this step.) Shutdown the zone. Detach the zone from the original system. Attach the zone to its new "home" system. Boot the zone. The zone can be used normally, and even migrated back, or to a different system. Details The rest of this shows the commands and output. The two hostnames are "sysA" and "sysB". Note that each Solaris guest might use a different device name for the VDI that they share. I used the device names shown below, but you must discover the device name(s) after booting each guest. In a production environment you would also discover the device name first and then configure the zone with that name. Fortunately, you can use the command "zpool import" or "format" to discover the device on the "new" host for the zone. The first steps create the VirtualBox guests and the shared disk drive. I describe the steps here without demonstrating them. Download VirtualBox and install it using a method normal for your host OS. You can read the complete instructions. Create two VirtualBox guests, each to run Solaris 11.1. Each will use its own VDI as its root disk. Install Solaris 11.1 in each guest.Install Solaris 11.1 in each guest. To install a Solaris 11.1 guest, you can either download a pre-built VirtualBox guest, and import it, or install Solaris 11.1 from the "text install" media. If you use the latter method, after booting you will not see a windowing system. To install the GUI and other important things, login and run "pkg install solaris-desktop" and take a break while it installs those important things. Life is usually easier if you install the VirtualBox Guest Additions because then you can copy and paste between the host and guests, etc. You can find the guest additions in the folder matching the version of VirtualBox you are using. You can also read the instructions for installing the guest additions. To create the zone's shared VDI in VirtualBox, you can open the storage configuration for one of the two guests, select the SATA controller, and click on the "Add Hard Disk" icon nearby. Choose "Create New Disk" and specify an appropriate path name for the file that will contain the VDI. The shared VDI must be at least 1.5 GB. Note that the guest must be stopped to do this. 
Add that VDI to the other guest - using its Storage configuration - so that each can access it while running. The steps start out the same, except that you choose "Choose Existing Disk" instead of "Create New Disk." Because the disk is configured on both of them, VirtualBox prevents you from running both guests at the same time. Identify device names of that VDI, in each of the guests. Solaris chooses the name based on existing devices. The names may be the same, or may be different from each other. This step is shown below as "Step 1." Assumptions In the example shown below, I make these assumptions. The guest that will own the zone at the beginning is named sysA. The guest that will own the zone after the first migration is named sysB. On sysA, the shared disk is named /dev/dsk/c7t2d0 On sysB, the shared disk is named /dev/dsk/c7t3d0 (Finally!) The Steps Step 1) Determine the name of the disk that will move back and forth between the systems. root@sysA:~# format Searching for disks...done AVAILABLE DISK SELECTIONS: 0. c7t0d0 /pci@0,0/pci8086,2829@d/disk@0,0 1. c7t2d0 /pci@0,0/pci8086,2829@d/disk@2,0 Specify disk (enter its number): ^D Step 2) The first thing to do is partition and label the disk. The magic needed to write an EFI label is not overly complicated. root@sysA:~# format -e c7t2d0 selecting c7t2d0 [disk formatted] FORMAT MENU: ... format fdisk No fdisk table exists. The default partition for the disk is: a 100% "SOLARIS System" partition Type "y" to accept the default partition, otherwise type "n" to edit the partition table. n SELECT ONE OF THE FOLLOWING: ... Enter Selection: 1 ... G=EFI_SYS 0=Exit? f SELECT ONE... ... 6 format label ... Specify Label type[1]: 1 Ready to label disk, continue? y format quit root@sysA:~# ls /dev/dsk/c7t2d0 /dev/dsk/c7t2d0 Step 3) Configure zone1 on sysA. root@sysA:~# zonecfg -z zone1 Use 'create' to begin configuring a new zone. zonecfg:zone1 create create: Using system default template 'SYSdefault' zonecfg:zone1 set zonename=zone1 zonecfg:zone1 set zonepath=/zones/zone1 zonecfg:zone1 add rootzpool zonecfg:zone1:rootzpool add storage dev:dsk/c7t2d0 zonecfg:zone1:rootzpool end zonecfg:zone1 exit root@sysA:~# oot@sysA:~# zonecfg -z zone1 info zonename: zone1 zonepath: /zones/zone1 brand: solaris autoboot: false bootargs: file-mac-profile: pool: limitpriv: scheduling-class: ip-type: exclusive hostid: fs-allowed: anet: ... rootzpool: storage: dev:dsk/c7t2d0 Step 4) Install the zone. This step takes the most time, but you can wander off for a snack or a few laps around the gym - or both! (Just not at the same time...) root@sysA:~# zoneadm -z zone1 install Created zone zpool: zone1_rpool Progress being logged to /var/log/zones/zoneadm.20121022T163634Z.zone1.install Image: Preparing at /zones/zone1/root. AI Manifest: /tmp/manifest.xml.RXaycg SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml Zonename: zone1 Installation: Starting ... Creating IPS image Startup linked: 1/1 done Installing packages from: solaris origin: http://pkg.us.oracle.com/support/ DOWNLOAD PKGS FILES XFER (MB) SPEED Completed 183/183 33556/33556 222.2/222.2 2.8M/s PHASE ITEMS Installing new actions 46825/46825 Updating package state database Done Updating image state Done Creating fast lookup database Done Installation: Succeeded Note: Man pages can be obtained by installing pkg:/system/manual done. Done: Installation completed in 1696.847 seconds. Next Steps: Boot the zone, then log into the zone console (zlogin -C) to complete the configuration process. 
Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T163634Z.zone1.install Step 5) Boot the Zone. root@sysA:~# zoneadm -z zone1 boot Step 6) Login to zone's console to complete the specification of system information. root@sysA:~# zlogin -C zone1 Answer the usual questions and wait for a login prompt. Then you can end the console session with the usual "~." incantation. Step 7) Shutdown the zone so it can be "moved." root@sysA:~# zoneadm -z zone1 shutdown Step 8) Detach the zone so that the original global zone can't use it. root@sysA:~# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - zone1 installed /zones/zone1 solaris excl root@sysA:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 17.6G 11.2G 6.47G 63% 1.00x ONLINE - zone1_rpool 1.98G 484M 1.51G 23% 1.00x ONLINE - root@sysA:~# zoneadm -z zone1 detach Exported zone zpool: zone1_rpool Step 9) Review the result and shutdown sysA so that sysB can use the shared disk. root@sysA:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 17.6G 11.2G 6.47G 63% 1.00x ONLINE - root@sysA:~# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - zone1 configured /zones/zone1 solaris excl root@sysA:~# init 0 Step 10) Now boot sysB and configure a zone with the parameters shown above in Step 1. (Again, the safest method is to use "zonecfg ... export" on sysA as described in section "Method Overview" above.) The one difference is the name of the rootzpool storage device, which was shown in the list of assumptions, and which you must determine by booting sysB and using the "format" or "zpool import" command. When that is done, you should see the output shown next. (I used the same zonename - "zone1" - in this example, but you can choose any valid zonename you want.) root@sysB:~# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - zone1 configured /zones/zone1 solaris excl root@sysB:~# zonecfg -z zone1 info zonename: zone1 zonepath: /zones/zone1 brand: solaris autoboot: false bootargs: file-mac-profile: pool: limitpriv: scheduling-class: ip-type: exclusive hostid: fs-allowed: anet: linkname: net0 ... rootzpool: storage: dev:dsk/c7t3d0 Step 11) Attaching the zone automatically imports the zpool. root@sysB:~# zoneadm -z zone1 attach Imported zone zpool: zone1_rpool Progress being logged to /var/log/zones/zoneadm.20121022T184034Z.zone1.attach Installing: Using existing zone boot environment Zone BE root dataset: zone1_rpool/rpool/ROOT/solaris Cache: Using /var/pkg/publisher. Updating non-global zone: Linking to image /. Processing linked: 1/1 done Updating non-global zone: Auditing packages. No updates necessary for this image. Updating non-global zone: Zone updated. Result: Attach Succeeded. Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T184034Z.zone1.attach root@sysB:~# zoneadm -z zone1 boot root@sysB:~# zlogin zone1 [Connected to zone 'zone1' pts/2] Oracle Corporation SunOS 5.11 11.1 September 2012 Step 12) Now let's migrate the zone back to sysA. Create a file in zone1 so we can verify it exists after we migrate the zone back, then begin migrating it back. 
root@zone1:~# ls /opt root@zone1:~# touch /opt/fileA root@zone1:~# ls -l /opt/fileA -rw-r--r-- 1 root root 0 Oct 22 14:47 /opt/fileA root@zone1:~# exit logout [Connection to zone 'zone1' pts/2 closed] root@sysB:~# zoneadm -z zone1 shutdown root@sysB:~# zoneadm -z zone1 detach Exported zone zpool: zone1_rpool root@sysB:~# init 0 Step 13) Back on sysA, check the status. Oracle Corporation SunOS 5.11 11.1 September 2012 root@sysA:~# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - zone1 configured /zones/zone1 solaris excl root@sysA:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 17.6G 11.2G 6.47G 63% 1.00x ONLINE - Step 14) Re-attach the zone back to sysA. root@sysA:~# zoneadm -z zone1 attach Imported zone zpool: zone1_rpool Progress being logged to /var/log/zones/zoneadm.20121022T190441Z.zone1.attach Installing: Using existing zone boot environment Zone BE root dataset: zone1_rpool/rpool/ROOT/solaris Cache: Using /var/pkg/publisher. Updating non-global zone: Linking to image /. Processing linked: 1/1 done Updating non-global zone: Auditing packages. No updates necessary for this image. Updating non-global zone: Zone updated. Result: Attach Succeeded. Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T190441Z.zone1.attach root@sysA:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 17.6G 11.2G 6.47G 63% 1.00x ONLINE - zone1_rpool 1.98G 491M 1.51G 24% 1.00x ONLINE - root@sysA:~# zoneadm -z zone1 boot root@sysA:~# zlogin zone1 [Connected to zone 'zone1' pts/2] Oracle Corporation SunOS 5.11 11.1 September 2012 root@zone1:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 1.98G 538M 1.46G 26% 1.00x ONLINE - Step 15) Check for the file created on sysB, earlier. root@zone1:~# ls -l /opt total 1 -rw-r--r-- 1 root root 0 Oct 22 14:47 fileA Next Steps Here is a brief list of some of the fun things you can try next. Add space to the zone by adding a second storage device to the rootzpool. Make sure that you add it to the configurations of both zones! Create a new zone, specifying two disks in the rootzpool when you first configure the zone. When you install that zone, or clone it from another zone, zoneadm uses those two disks to create a mirrored pool. (Three disks will result in a three-way mirror, etc.) Conclusion Hopefully you have seen the ease with which you can now move Solaris Zones from one system to another.

    Read the article

  • Silverlight Cream for November 20, 2011 -- #1169

    - by Dave Campbell
    In this Issue: Andrea Boschin, Michael Crump, Michael Sync, WindowsPhoneGeek, Jesse Liberty, Derik Whittaker, Sumit Dutta, Jeff Blankenburg(-2-), and Beth Massi. Above the Fold: WP7: "Silver VNC 1.0 for Windows Phone "Mango"" Andrea Boschin Metro/WinRT/W8: "Lighting up your C# Metro apps by being a Share Source" Derik Whittaker LightSwitch: "Using the Save and Query Pipeline to “Archive” Deleted Records" Beth Massi Shoutouts: Michael Palermo's latest Desert Mountain Developers is up Michael Washington's latest Visual Studio #LightSwitch Daily is up From SilverlightCream.com: Silver VNC 1.0 for Windows Phone "Mango" Andrea Boschin published the first release of his "Silver VNC" version 1.0 on CodePlex. Check out the video on the blog post to see the capabilities, then go grab it from CodePlex. Fixing a broken toolbox (In Visual Studio 2010 SP1) Not Silverlight or Metro, but near to us all is Visual Studio... read how Michael Crump resolves the 'broken' toolbox that we all get now and then Windows Phone 7 – USB Device Not Recognized Error Michael Sync is looking for ideas about an error he gets any time he updates his phone. Windows Phone Toolkit MultiselectList in depth| Part2: Data Binding WindowsPhoneGeek has up the second part of his tutorial series on the MultiselectList from the Windows Phone Toolkit... this part is about data binding, complete with lots of code, discussion, pictures, and project to download New Mini-Tutorial Video Series Jesse Liberty started a new video series based on his Mango Mini tutorials. They will be on Channel 9, and he has a link on this post to the index. The firs of the series is on animation without code Lighting up your C# Metro apps by being a Share Source Derik Whittaker continues investigating Metro with this post about how to set your app up to share its content with other apps Part 21 - Windows Phone 7 - Toast Push Notification Sumit Dutta has part 21 of his WP7 series up and is talking about Toast Notification by creating a Windows form app for sending notifications to the WP7 app for viewing 31 Days of Mango | Day #6: Motion Jeff Blankenburg's Day 6 in his Mango series is about the Motion class which combines the data we get from the Accelerometer, Compass, and Gyroscope of the last couple days of posts 31 Days of Mango | Day #7: Raw Camera Data In Day 7, Jeff Blankenburg talks about the Camera on the WP7 and how to use the raw data in your own application Using the Save and Query Pipeline to “Archive” Deleted Records Beth Massi's latest LightSwith post is this one on tapping into the Save and Query pipelines to perform some data processing prior to saving or pulling data Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Temporarily share/deploy a python (flask) application

    - by Jeff
    Goal Temporarily (1 month?) deploy/share a python (flask) web app without expensive/complex hosting. More info I've developed a basic mobile web app for the non-profit I work for. It's written in python and uses flask as its framework. I'd like to share this with other employees and beta testers (<25 people). Ideally, I could get some sort of simple hosting space/service and push regular updates to it while we test and iterate on this app. Think something along the lines of dropbox, which of course would not work for this purpose. We do have a website, and hosting services for it, but I'm concerned about using this resource as our website is mission critical and this app is very much pre-alpha at this point. Options I've researched / considered Self host from local machine/network (slow, unreliable) Purchase hosting space (with limited non-profit resources, I'm concerned this is overkill) Using our current web server / hosting (not appropriate for testing) Thanks very much for your time!

    Read the article

  • Demantra 7.3.1 Upgrade Path (Doc ID 1286000.1)

    - by Jeff Goulette
    Applies to: Oracle Demantra Demand Management - Version: 6.2.6 to 7.3.1 - Release: 6 to 7.3.0. Information in this document applies to any platform.

    What is being announced?
    Customers on v7.3.0 and v7.3.0.1 can go directly to 7.3.1 with no further steps. Customers on the v7.1.0, v7.1.1, and v7.2 branches can upgrade to v7.3.1, but there is an issue with the workflow; you will need to apply patch <11068174>. Customers on older versions like 6.2.6 and 7.0.2 should upgrade to v7.1.1 and then upgrade to v7.3.1. You must apply patch <11068174> for this upgrade path.

    Who to contact for more information?
    Please contact [email protected] for additional information.

    Read the article

  • Unknown CSS font-family oddity with IE7-10 on Win Vista-8

    - by Jeff
    I am seeing the following "oddity" with IE7-10 on Win Vista-8: When declaring font-family: serif; I am seeing an old bitmapped serif font that I can't identify (see screenshot below) instead of the expected font Times New Roman. I know it's an old bitmapped font because it displays aliased, without any font smoothing, with IE7-10 on Win Vista-8 (just like Courier on every version of Win). Screenshot: I would like to know (1) can anyone else confirm my research and (2) BONUS: which font is IE displaying? Notes: IE6 and IE7 on Win XP display Times New Roman, as they should. It doesn't matter if font-family: serif; is declared in an external stylesheet or inline on the element. Quoting the CSS attribute makes no difference. Adding "Unknown Font" to the stack also makes no difference. New Screenshot: The answer from Jukka below is correct. Here is a new screenshot with Batang (not BatangChe) to illustrate. Hope this helps someone.

    Read the article

  • GWB | Administrator Blog Is Back To Life

    - by Jeff Julian
    We are bringing back the administrator’s blog for Geekswithblogs.net as a place to get information about what is going on with GWB. There are a couple of reasons we are doing this. One, I post a lot of information on my blog that is not Geekswithblogs.net related. Most of the time it isn’t even developer related, and I know I need to work on that too, but in an effort to keep the signal much higher than the noise, we are moving that information over there. The blog URL is http://geekswithblogs.net/administrator. The other reason we are doing it is that I am not the only member of the GWB staff. So please subscribe to that blog and let us know what you think about Geekswithblogs.net and how we can make the site better. http://geekswithblogs.net/administrator

    Read the article

  • Is meta description still relevant?

    - by Jeff Atwood
    I received this bit of advice about the meta description tag recently: Meta descriptions are used by Google probably 80% of the time for the snippet. They don’t help with rankings but you should probably use them. You could just auto generate them from the first part of the question. The description tag exists in the header, like so: <meta name="Description" content="A brief summary of the content on the page."> I'm not sure why we would need this field, as Google seems perfectly capable of showing the relevant search terms in context in the search result pages, like so (I searched for c# list performance): In other words, where would a meta description summary improve these results? We want the page to show context around the actual search hits, not a random summary we inserted! Google Webmaster Central has this advice: For some sites, like news media sources, generating an accurate and unique description for each page is easy: since each article is hand-written, it takes minimal effort to also add a one-sentence description. For larger database-driven sites, like product aggregators, hand-written descriptions are more difficult. In the latter case, though, programmatic generation of the descriptions can be appropriate and is encouraged -- just make sure that your descriptions are not "spammy." Good descriptions are human-readable and diverse, as we talked about in the first point above. The page-specific data we mentioned in the second point is a good candidate for programmatic generation. I'm struggling to think of any scenario when I would want the Google-generated summary, that is, actual context from the page for the search terms, to be replaced by a hard-coded meta description summary of the question itself.
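    For what it's worth, if you did want to follow the "auto generate them from the first part of the question" suggestion quoted above, a minimal server-side sketch might look like this. The helper name, the regex-based tag stripping, and the 155-character budget are illustrative choices, not an established API.

```csharp
// A hedged sketch of programmatically generating a meta description from the
// first part of a page's body text. Names and limits are illustrative only.
using System.Text.RegularExpressions;

public static class MetaDescription
{
    public static string FromBody(string html, int maxLength = 155)
    {
        // Strip tags, collapse whitespace, and encode for the attribute value.
        string text = Regex.Replace(html ?? string.Empty, "<[^>]+>", " ");
        text = Regex.Replace(text, @"\s+", " ").Trim();

        if (text.Length > maxLength)
        {
            // Cut at the last word boundary before the limit.
            int cut = text.LastIndexOf(' ', maxLength);
            text = text.Substring(0, cut > 0 ? cut : maxLength) + "…";
        }
        return System.Net.WebUtility.HtmlEncode(text);
    }
}

// Hypothetical usage in a Razor layout:
// <meta name="description" content="@MetaDescription.FromBody(Model.QuestionBody)">
```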

    Read the article

  • XSLT is not the solution you're looking for

    - by Jeff
    I was very relieved to see that Umbraco is ditching XSLT as a rendering mechanism in the forthcoming v5. Thank God for that. After working in this business for a very long time, I can't think of any other technology that has been inappropriately used, time after time, and without any compelling reason.The place I remember seeing it the most was during my time at Insurance.com. We used it, mostly, for two reasons. The first and justifiable reason was that it tweaked data for messaging to the various insurance carriers. While they all shared a "standard" for insurance quoting, they all had their little nuances we had to accommodate, so XSLT made sense. The other thing we used it for was rendering in the interview app. In other words, when we showed you some fancy UI, we'd often ditch the control rendering and straight HTML and use XSLT. I hated it.There just hasn't been a technology hammer that made every problem look like a nail (or however that metaphor goes) the way XSLT has. Imagine my horror the first week at Microsoft, when my team assumed control of the MSDN/TechNet forums, and we saw a mess of XSLT for some parts of it. I don't have to tell you that we ripped that stuff out pretty quickly. I can't even tell you how many performance problems went away as we started to rip it out.XSLT is not your friend. It has a place in the world, but that place is tweaking XML, not rendering UI.

    Read the article

  • Resource Graphs in top panel?

    - by Jeff Welling
    I'm running Gnome Classic in Ubuntu 11.10, and in previous versions of Ubuntu it was fairly easy to get resource graphs to appear in the top menu, but now the regular way of getting said graphs into the top menu bar doesn't work (right clicking on the top menu produces no result unless you click on an icon, e.g. the sound, wifi, or battery indicators). Is it not possible to get resource graphs in the top menu bar in Gnome Classic on Ubuntu 11.10? If not Gnome Classic, is it possible in KDE? I've tried googling, but the only results I'm getting are related to adding the panel, which I can't do because I can't right click on the top menu. Thanks in advance.

    Read the article

  • Learning Objective-C for iPad/iPhone/iPod Development

    - by Jeff Julian
    I am learning how to write apps for the iPad/iPhone/iPod!  Why? Well, several reasons.  One reason: I have 5 devices in my house on the platform.  I had an iPad and iPhone, Michelle has an iPhone, and each of the kids has an iPod Touch.  They are excellent devices for life management, entertainment, and learning.  I am amazed at how well the kids pick up on it and how much it affects the way they learn.  My two year old knows how to use it better than any other device we own, and she is learning new words and letters so quickly. Because of this saturation at home, it would be fun to write some apps my family could use.  Some games to bring the hobby of development back into my life.  The second reason is that we want to have a Geekswithblogs app for the iPhone and iPad.  We are not sure if it is purely informational (blog posts and tweets) or if members want to be able to publish from the app.  Creating a blog editor would be tough stuff, but could be just the right challenge. There are so many more reasons, but the last one that really makes me excited is that it is a new domain of development where I get excited when I think about writing apps.  That excitement level where I want to see if there are User Groups, and where, if we are just watching TV, I break out the MBP and start working on it.  That excitement level where I could really read a development book cover to cover and not just use it as a reference.  I really do like this feeling. Who knows how long this will last, and I am definitely not leaving .NET.  Microsoft software will always be my main focus, but for the time being, my hobby is changing and I am getting excited about development again.   Technorati Tags: Apple,iPad Development,Objective-C,New Frontiers Image: Courtesy of Apple

    Read the article

  • Tidbits of goodness - Podcasts, REST, JSON

    - by jeff.x.davies
    I've been quiet for a while, busy with a variety of projects. I did want to let you all know about a couple of things going on. First, I have been participating in architectural podcasts with Bob Rhubart. If you are interested in hearing these short (about 10 minutes each) recordings where a group of us discuss enterprise architecture and its future, check out http://blogs.oracle.com/archbeat/2010/05/podcast_show_notes_evolving_en.html

    Next, I have been working on the public sample code for the Oracle Service Bus 11g release. I'm now expanding my samples to include SCA, BPEL and the Oracle Adapters. This is really great experience for me because I have been learning these other tools to a deeper level, and this provides insight into developing better solutions. You know the old saying, "If the only tool you have is a hammer, you tend to approach every problem as if it were a nail." However, I'm not the only one working on these samples. We have a lot of our best and brightest working on sample code for the 11g release. Take a look at https://soasamples.samplecode.oracle.com/ to see all of the samples for SOA Suite 11g.

    A reader wrote to me and asked me about using OSB to return information in JSON format. I don't have a sample posted for this yet, but I am working on getting one packaged up. In the meantime I can tell you that it is dead simple to do in OSB. Use the instructions I gave in an earlier blog entry on creating REST services using OSB, specify Messaging Service as the service type that takes a Text message and returns a Text message, and then have the OSB proxy service return a JSON formatted string (by replacing the contents of the $body variable with the JSON text) and you're done! This approach allows you to use OSB services from within Javascript/AJAX seamlessly. As I get more samples posted to the OTN site, I'll let you know. I have lots of interesting stuff on the way.

    Read the article

  • Addressing a variable in VB

    - by Jeff
    Why doesn't Visual Basic .NET have an address-of operator like C#'s? In C#, one can write: int i = 123; int* addr = &i; But VB has no equivalent counterpart. It seems like it should be important. UPDATE: Since there's some interest, I'm copying my response to Strilanc below. The case I ran into didn't necessitate pointers by any means, but I was trying to troubleshoot a unit test that was failing, and there was some confusion over whether or not an object being used at one point in the stack was the same object as an object several methods away.
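    For the object-identity question in the update, pointers aren't actually needed: Object.ReferenceEquals answers it directly, and it is callable from VB as well. Below is a hedged C# sketch showing both the C# address-of operator (which requires an unsafe context and the /unsafe compiler option) and the reference-equality check; the variable names are illustrative.

```csharp
using System;

class AddressDemo
{
    // Requires compiling with /unsafe (AllowUnsafeBlocks) for the pointer part.
    static unsafe void Main()
    {
        int i = 123;
        int* addr = &i;                  // C# address-of on a local value
        Console.WriteLine((IntPtr)addr);

        var a = new object();
        var b = a;                       // same instance
        var c = new object();            // different instance

        // Identity check without pointers; VB can call this the same way:
        // Object.ReferenceEquals(a, b)
        Console.WriteLine(Object.ReferenceEquals(a, b)); // True
        Console.WriteLine(Object.ReferenceEquals(a, c)); // False
    }
}
```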

    Read the article

  • AJI Report #18 | Patrick Delancy On Code Smells and Anti-Patterns

    - by Jeff Julian
    Patrick Delancy, the first person we interviewed on the AJI Report, is joining us again for Episode 18. This time around, Patrick explains what Code Smells and Anti-Patterns are and how developers can learn from these issues in their code. Patrick takes the approach of addressing your own code in his presentations instead of pointing fingers at others. We spend a lot of time talking about how to approach a developer whose bad practices show up on the radar as a Code Smell or Anti-Pattern without making them feel inferior. Patrick also lists out a few open-source frameworks that use good patterns and practices, as well as how he continues his education through interacting with other developers. Listen to the Show Site: http://patrickdelancy.com Twitter: @PatrickDelancy LinkedIn: LinkedIn Google+: Profile

    Read the article

  • GWB | What is the next feature you want to see?

    - by Jeff Julian
    We want to know what you are thinking bloggers and visitors of Geekswithblogs.net.  If you were able to add items to the product backlog for this site, what would they be? New skins? Better search? Organic tag system? Better twitter integration? More ways to link other social media outlets to your blog like LinkedIn, Plaxo, Flickr? …. What would you like to see?   You can leave feedback on this post or email me at [email protected].  We love this community and want to see how we can continue to make Geekswithblogs.net relevant to developers in 2010. Technorati Tags: Geekswithblogs,2010,Next Features

    Read the article
