Search Results

Search found 39290 results on 1572 pages for 'even numbers'.


  • Determining whether a visitor reached two different pages in one visit

    - by Shaun
    I have a funnel that I would like to track. Tracking this funnel won't work with the default "goal funnel" tracking in Google due to the fact that I am mixing events and pageviews. As such, I've created a series of reports: Visits to demo pages - An inclusion filter on "Page". Triggers an Event on these pages - An inclusion filter on "Page" and "Event Category". Does not bounce - An inclusion filter on "Page" and an exclusion filter on "Exit Page" for these same pages. Reach our storefront - ?? Purchase something - An inclusion filter on "Page" and a report that shows "Transactions". At a basic level, I need to track users who reached demo pages, then reached any page on our store. Intuitively, I created a segment, used two inclusive "Page" filters (one for the demo pages and one for any page in our store), and combined them with an "AND" operator. I thought this was working until I tried to do the same thing in a dashboard widget and on a custom report. When I tried the same thing in those areas, I got zero results. I figured this might be because widgets and custom report filters function differently from segment filters (the options are different for all of them), so I tried applying my "demo page && store page" segment to a report that gave me a general page list. All I saw was a list of the specific pages. I tried simplifying things by creating a custom report that showed all visits to store pages, then applied a segment that filtered for users who visited demo pages. This got me the same numbers as my "demo page && store page" segment, but showed a list of demo pages. This has led me to believe that the "demo page && store page segment" approach and the "demo segment && store report" functionally behave the same. However, this experience has left me questioning whether they're giving me what I want. Are these methods showing me all users who reached both sets of pages? Is there a better/easier/more standard way of doing this aside from looking at visitor flow reports? I'm trying to avoid a combination of custom variables/events and using the horizontal funnel approach since it would consume a large number of our limited goals and seems more complicated than is necessary for tracking this funnel.

    Read the article

  • Making CopySourceAsHtml add-on work with VS2010

    - by DigiMortal
    As there are still bloggers who use the CopySourceAsHtml add-on for Visual Studio to get syntax-highlighted code into their blog posts, and the CSAH site offers no guidance on making it work with Visual Studio 2010, I will give my guidance here. Almost all code in this blog is syntax highlighted by this add-on (read more in my post Visual Studio add-in: CopySourceAsHTML). The latest version of CSAH is built for VS2008, but it is easy to make it work with VS2010. Just follow these steps:
    1. Close VS2010 if it is open.
    2. Go to the folder MyDocuments\Visual Studio 2010.
    3. Move to the AddIns subfolder (create it if there is no such subfolder).
    4. Create a file called CopySourceAsHtml.AddIn and open it in a text editor.
    5. Paste the following XML into the editor:
       <?xml version="1.0" encoding="utf-8" standalone="no"?>
       <Extensibility xmlns="http://schemas.microsoft.com/AutomationExtensibility">
         <HostApplication>
           <Name>Microsoft Visual Studio Macros</Name>
           <Version>10.0</Version>
         </HostApplication>
         <HostApplication>
           <Name>Microsoft Visual Studio</Name>
           <Version>10.0</Version>
         </HostApplication>
         <Addin>
           <FriendlyName>CopySourceAsHtml</FriendlyName>
           <Description>Adds support to Microsoft Visual Studio 2010 for copying source code, syntax highlighting, and line numbers as HTML.</Description>
           <Assembly>JTLeigh.Tools.Development.CopySourceAsHtml, Version=3.0.3215.1, Culture=neutral, PublicKeyToken=bb2a58bdc03d2e14, processorArchitecture=MSIL</Assembly>
           <FullClassName>JTLeigh.Tools.Development.CopySourceAsHtml.Connect</FullClassName>
           <LoadBehavior>1</LoadBehavior>
           <CommandPreload>0</CommandPreload>
           <CommandLineSafe>0</CommandLineSafe>
         </Addin>
       </Extensibility>
    6. Save the file and close it.
    7. Run VS2010 and activate the add-on if it is not activated yet.
    That's it. If you are a heavy user of CSAH, then I recommend bookmarking this post. :)

    Read the article

  • Using NSpec at various architectural layers

    - by nono
    Having read the quick start at nspec.org, I realized that NSpec might be a useful tool in a scenario which was becoming a bit cumbersome with NUnit alone. I'm adding OAuth (or rather, DotNetOpenAuth) to a website and quickly made a mess of writing test methods such as
      [Test]
      public void UserIsLoggedInLocallyPriorToInvokingExternalLoginAndExternalLoginSucceedsAndExternalProviderIdIsNotAlreadyAssociatedWithUserAccount()
      {
          ...
      }
    ... and I wound up with maybe a dozen permutations of this theme, for the user already being logged in locally and not locally, the external login succeeding or failing, etc. Not only were the method names unwieldy, but every test needed a setup that contained parts in common with a different set of other tests. I realized that NSpec's incremental setup capabilities would work great for this, and for a while I was trucking along wonderfully, with code like
      act = () => { actionResult = controller.ExternalLoginCallback(returnUrl); };

      context["The user is already logged in"] = () =>
      {
          before = () => identity.Setup(x => x.IsAuthenticated).Returns(true);

          context["The external login succeeds"] = () =>
          {
              before = () => oauth.Setup(x => x.VerifyAuthentication(It.IsAny<string>()))
                  .Returns(new AuthenticationResult(true, providerName, "provideruserid", "username", new Dictionary<string, string>()));

              context["External login already exists for current user"] = () =>
              {
                  before = () => authService.Setup(x => x.ExternalLoginExistsForUser(It.IsAny<string>(), It.IsAny<string>(), It.IsAny<string>())).Returns(true);

                  it["Should add 'login sucessful' alert"] = () =>
                  {
                      var alerts = (IList<Alert>)controller.TempData[TempDataKeys.AlertCollection];
                      alerts[0].Message.should_be_same("Login successful");
                      alerts[0].AlertType.should_be(AlertType.Success);
                  };

                  it["Should return a redirect result"] = () => actionResult.should_cast_to<RedirectToRouteResult>();
              };

              context["External login already exists for another user"] = () =>
              {
                  before = () => authService.Setup(x => x.ExternalLoginExistsForAnyOtherUser(It.IsAny<string>(), It.IsAny<string>(), It.IsAny<string>())).Returns(true);

                  it["Adds an error alert"] = () =>
                  {
                      var alerts = (IList<Alert>)controller.TempData[TempDataKeys.AlertCollection];
                      alerts[0].Message.should_be_same("The external login you requested is already associated with a different user account");
                      alerts[0].AlertType.should_be(AlertType.Error);
                  };

                  it["Should return a redirect result"] = () => actionResult.should_cast_to<RedirectToRouteResult>();
              };
    This approach seemed to work magnificently until I prepared to write test code for my ApplicationServices layer, to which I delegate viewmodel manipulation from my MVC controllers, and which coordinates the operations of the lower data repository layer:
      public void CreateUserAccountFromExternalLogin(RegisterExternalLoginModel model)
      {
          throw new NotImplementedException();
      }

      public void AssociateExternalLoginWithUser(string userName, string provider, string providerUserId)
      {
          throw new NotImplementedException();
      }

      public string GetLocalUserName(string provider, string providerUserId)
      {
          throw new NotImplementedException();
      }
    I have no idea what in the world to name the test class, the test methods, or even if I should perhaps include the testing for this layer into the test class from my large code snippet above, so that a single feature or user action could be tested without regard to architectural layering.
I can't find any tutorials or blog posts which cover more than simple examples, so I would appreciate any recommendations or pointing in the right direction. I would even welcome "your question is invalid"-type answers as long as some explanation is provided.

    Read the article

  • AS3 - At exactly 23 empty alpha channels, images below stop drawing

    - by user46851
    I noticed, while trying to draw large numbers of circles, that occasionally there would be some kind of visual bug where some circles wouldn't draw properly. Well, I narrowed it down, and have noticed that if there are 23 or more objects with 00 for an alpha value on the same spot, then the objects below don't draw. It appears to be on a pixel-by-pixel basis, since parts of the image still draw. Originally, this problem was noticed with a class that inherited Sprite. It was confirmed to also be a problem with Sprites, and now Bitmaps, too. If anyone can find a lower-level class than Bitmap which doesn't have this problem, please speak up so we can try to find the origin of the problem. I prepared a small test class that demonstrates what I mean. You can change the integer value passed to Test() in order to see the three tests I came up with to clearly show the problem. Is there any kind of workaround, or is this just a limit that I have to work with? Has anyone experienced this before? Is it possible I'm doing something wrong, despite the bare-bones implementation?
      package
      {
          import flash.display.Sprite;
          import flash.events.Event;
          import flash.display.Bitmap;
          import flash.display.BitmapData;

          public class Main extends Sprite
          {
              public function Main():void
              {
                  if (stage) init();
                  else addEventListener(Event.ADDED_TO_STAGE, init);
              }

              private function init(e:Event = null):void
              {
                  removeEventListener(Event.ADDED_TO_STAGE, init);
                  // entry point
                  Test(3);
              }

              private function Test(testInt:int):void
              {
                  if (testInt == 1)
                  {
                      addChild(new Bitmap(new BitmapData(200, 200, true, 0xFFFF0000)));
                      for (var i:int = 0; i < 22; i++)
                      {
                          addChild(new Bitmap(new BitmapData(100, 100, true, 0x00000000)));
                      }
                  }
                  if (testInt == 2)
                  {
                      addChild(new Bitmap(new BitmapData(200, 200, true, 0xFFFF0000)));
                      for (var j:int = 0; j < 23; j++)
                      {
                          addChild(new Bitmap(new BitmapData(100, 100, true, 0x00000000)));
                      }
                  }
                  if (testInt == 3)
                  {
                      addChild(new Bitmap(new BitmapData(200, 200, true, 0xFFFF0000)));
                      for (var k:int = 0; k < 22; k++)
                      {
                          addChild(new Bitmap(new BitmapData(100, 100, true, 0x00000000)));
                      }
                      var M:Bitmap = new Bitmap(new BitmapData(100, 100, true, 0x00000000));
                      M.x += 50;
                      M.y += 50;
                      addChild(M);
                  }
              }
          }
      }

    Read the article

  • ADF page security - the untold password rule

    - by ankuchak
    I'm kinda new to Oracle ADF. So, in this blog post I'm going to share something with you that I faced (and recovered from) recently. Initially I wondered whether I should put up a blog post on this at all, because it's totally simple. Still, simplicity is a relative term. So without wasting further time, let's kick off. I was exploring the ADF security aspect to secure a page through HTML basic authentication. The idea is very simple and the credential store etc. come into the picture. But I was not able to run a successful test of this phenomenally simple thing even after trying for over 30 minutes. This is what I did. I created a simple JSF page and put a panel in it. And I put a simple EL expression to show the current user name. Next I created a user that I should test with. I set the password to myuser, just to keep it simple. Then I created an enterprise role and mapped the user that I just created. Then I created an application role and mapped the enterprise role to it. Then I mapped the resource, the simple JSF page in this case, to this application role. This way, only users with the given application role can access this page (as if you didn't know this, duh!). Of course, I had to create the page definition for the page before I could map it to an application role. What else! Done! Then I hit the run menu item and it all went well... Until... I got this message. I put in the correct credentials repeatedly 2-3 times. Still I got the same error. Why? I didn't get any error message during the deployment. Nope. Then, as I said before, I spent over 30 minutes trying different things out, things like mapping only the user (not the role) to the page, changing the context root etc. Nothing worked! Then of course, I bothered to look at the logs and found this. See the first red line. That says it all. So the problem was with that password. The password must have at least one special character and one digit in it. I think I was misled by the missing password hint/rule and the fact that the deployment didn't fail even if the user was not created properly. Well, yes, I agree that I was foolish enough not to look at the logs. Later I changed the password to something like myuser123#, and it worked. I hope this helps.

    Read the article

  • Using WKA in Large Coherence Clusters (Disabling Multicast)

    - by jpurdy
    Disabling hardware multicast (by configuring well-known addresses aka WKA) will place significant stress on the network. For messages that must be sent to multiple servers, rather than having a server send a single packet to the switch and having the switch broadcast that packet to the rest of the cluster, the server must send a packet to each of the other servers. While hardware varies significantly, consider that a server with a single gigabit connection can send at most ~70,000 packets per second. To continue with some concrete numbers, in a cluster with 500 members, that means that each server can send at most 140 cluster-wide messages per second. And if there are 10 cluster members on each physical machine, that number shrinks to 14 cluster-wide messages per second (or with only mild hyperbole, roughly zero). It is also important to keep in mind that network I/O is not only expensive in terms of the network itself, but also the consumption of CPU required to send (or receive) a message (due to things like copying the packet bytes, processing an interrupt, etc). Fortunately, Coherence is designed to rely primarily on point-to-point messages, but there are some features that are inherently one-to-many:
    - Announcing the arrival or departure of a member
    - Updating partition assignment maps across the cluster
    - Creating or destroying a NamedCache
    - Invalidating a cache entry from a large number of client-side near caches
    - Distributing a filter-based request across the full set of cache servers (e.g. queries, aggregators and entry processors)
    - Invoking clear() on a NamedCache
    The first few of these are operations that are primarily routed through a single senior member, and also occur infrequently, so they usually are not a primary consideration. There are cases, however, where the load from introducing new members can be substantial (to the point of destabilizing the cluster). Consider the case where the cluster in the first paragraph grows from 500 members to 1000 members (holding the number of physical machines constant). During this period, there will be 500 new member introductions, each of which may consist of several cluster-wide operations (for the cluster membership itself as well as the partitioned cache services, replicated cache services, invocation services, management services, etc). Note that all of these introductions will route through that one senior member, which is sharing its network bandwidth with several other members (which will be communicating to a lesser degree with other members throughout this process). While each service may have a distinct senior member, there's a good chance during initial startup that a single member will be the senior for all services (if those services start on the senior before the second member joins the cluster). It's obvious that this could cause CPU and/or network starvation. In the current release of Coherence (3.7.1.3 as of this writing), the pure unicast code path also has less sophisticated flow-control for cluster-wide messages (compared to the multicast-enabled code path), which may also result in significant heap consumption on the senior member's JVM (from the message backlog). This is almost never a problem in practice, but with sufficient CPU or network starvation, it could become critical. For the non-operational concerns (near caches, queries, etc), the application itself will determine how much load is placed on the cluster.
Applications intended for deployment in a pure unicast environment should be careful to avoid excessive dependence on these features. Even in an environment with multicast support, these operations may scale poorly since even with a constant request rate, the underlying workload will increase at roughly the same rate as the underlying resources are added. Unless there is an infrastructural requirement to the contrary, multicast should be enabled. If it can't be enabled, care should be taken to ensure the added overhead doesn't lead to performance or stability issues. This is particularly crucial in large clusters.
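    As a rough sanity check on the arithmetic above, the sketch below recomputes the cluster-wide message budget from the same assumptions (a NIC limited to roughly 70,000 packets per second, and every cluster-wide message fanned out as one unicast packet per other member). The numbers are illustrative only and will vary with hardware and packet size.
      # Back-of-the-envelope fan-out cost of cluster-wide messages over pure unicast (WKA).
      # Assumes ~70,000 packets/sec per gigabit NIC, as in the text; real limits vary.
      def cluster_wide_budget(members, members_per_machine=1, nic_pps=70_000):
          packets_per_message = members - 1           # one unicast packet per other member
          per_server = nic_pps / packets_per_message  # messages/sec if a server had the NIC to itself
          shared = per_server / members_per_machine   # the NIC is shared by co-located members
          return per_server, shared

      for size in (500, 1000):
          per_server, shared = cluster_wide_budget(size, members_per_machine=10)
          print(f"{size} members: ~{per_server:.0f}/sec per NIC, ~{shared:.0f}/sec with 10 members per machine")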

    Read the article

  • When should I use a Process Model versus a Use Case?

    - by Dave Burke
    This Blog entry is a follow-on to https://blogs.oracle.com/oum/entry/oum_is_business_process_and and addresses a question I sometimes get asked, i.e. “when I am gathering requirements on a Project, should I use a Process Modeling approach, or should I use a Use Case approach?” Not surprisingly, the short answer is “it depends”! Let’s take a scenario where you are working on a Sales Force Automation project. We’ll call the process that is being implemented “Lead-to-Order”. I would typically think of this type of project as being “Process Centric”. In other words, the focus will be on orchestrating a series of human and system related tasks that ultimately deliver value to the business in a cost effective way. Put in even simpler terms: implement an automated pre-sales system. For this type of (Process Centric) project, requirements would typically be gathered through a series of Workshops where the focal point will be on creating, or confirming, the Future-State (To-Be) business process. If pre-defined “best-practice” business process models exist, then of course they could and should be used during the Workshops, but even in their absence, the focus of the Workshops will be to define the optimum series of Tasks, their connections, sequence, and dependencies that will ultimately reflect a business process that meets the needs of the business. Now let’s take another scenario. Assume you are working on a Content Management project that involves automating the creation and management of content for User Manuals, Web Sites, Social Media publications etc. Would you call this type of project “Process Centric”? Well, you could, but it might also fall into the category of complex configuration, plus some custom extensions to a standard software application (COTS). For this type of project it would certainly be worth considering using a Use Case approach in order to 1) understand the requirements, and 2) capture the functional requirements of the custom extensions. At this point you might be asking “why couldn’t I use a Process Modeling approach for my Content Management project?” Well, of course you could, but you just need to think about which approach is the most effective. Start by analyzing the types of Tasks that will eventually be automated by the system, and consider which approach each one is best suited to (Process Model or Use Case), for example:
    - Manage outbound calls: a series of linked human and system tasks for calling and following up with prospects
    - Manage content revision: updating the content on a website
    - Update User Preferences: updating a user’s display preferences
    - Assign Lead: reviewing a lead, then assigning it to a sales person
    - Convert Lead to Quote: updating the status of a lead, and then converting it to a sales order
    As you can see, it’s not an exact science, and either approach is viable for the Tasks listed above. However, where you have a series of interconnected Tasks or Activities that, when combined, deliver value to the business, then that would be a good indicator to lead with a Process Modeling approach. On the other hand, when the Tasks or Activities in question are more isolated and/or do not cross traditional departmental boundaries, then a Use Case approach might be worth considering. Now let’s take one final scenario. As you capture the To-Be Process flows for the Sales Force automation project, you discover a “Gap” in terms of what the client requires, and what the standard COTS application can provide.
Let’s assume that the only way forward is to develop a Custom Extension. This would now be a perfect opportunity to document the functional requirements (behind the Gap) using a Use Case approach. After all, we will be developing some new software, and one of the most effective ways to begin the Software Development Lifecycle is to follow a Use Case approach. As always, your comments are most welcome.

    Read the article

  • The long road to bug-free software

    - by Tony Davis
    The past decade has seen a burgeoning interest in functional programming languages such as Haskell or, in the Microsoft world, F#. Though still on the periphery of mainstream programming, functional programming concepts are gradually seeping into the imperative C# language (for example, Lambda expressions have their root in functional programming). One of the more interesting concepts from functional programming languages is the use of formal methods, the lofty ideal behind which is bug-free software. The idea is that we write a specification that describes exactly how our function (say) should behave. We then prove that our function conforms to it, and in doing so have proved beyond any doubt that it is free from bugs. All programmers already use one form of specification, specifically their programming language's type system. If a value has a specific type then, in a type-safe language, the compiler guarantees that value cannot be an instance of a different type. Many extensions to existing type systems, such as generics in Java and .NET, extend the range of programs that can be type-checked. Unfortunately, type systems can only prevent some bugs. To take a classic problem of retrieving an index value from an array, since the type system doesn't specify the length of the array, the compiler has no way of knowing that a request for the "value of index 4" from an array of only two elements is "unsafe". We restore safety via exception handling, but the ideal type system will prevent us from doing anything that is unsafe in the first place and this is where we start to borrow ideas from a language such as Haskell, with its concept of "dependent types". If the type of an array includes its length, we can ensure that any index accesses into the array are valid. The problem is that we now need to carry around the length of arrays and the values of indices throughout our code so that it can be type-checked. In general, writing the specification to prove a positive property, even for a problem very amenable to specification, such as a simple sorting algorithm, turns out to be very hard and the specification will be different for every program. Extend this to writing a specification for, say, Microsoft Word and we can see that the specification would end up being no simpler, and therefore no less buggy, than the implementation. Fortunately, it is easier to write a specification that proves that a program doesn't have certain, specific and undesirable properties, such as infinite loops or accesses to the wrong bit of memory. If we can write the specifications to prove that a program is immune to such problems, we could reuse them in many places. The problem is the lack of specification "provers" that can do this without a lot of manual intervention (i.e. hints from the programmer). All this might feel a very long way off, but computing power and our understanding of the theory of "provers" advances quickly, and Microsoft is doing some of it already. Via their Terminator research project they have started to prove that their device drivers will always terminate, and in so doing have suddenly eliminated a vast range of possible bugs. This is a huge step forward from saying, "we've tested it lots and it seems fine". What do you think? What might be good targets for specification and verification? SQL could be one: the cost of a bug in SQL Server is quite high given how many important systems rely on it, so there's a good incentive to eliminate bugs, even at high initial cost. 
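    To make the array-index example concrete, here is a minimal Python sketch (my own illustration, not from the article): the annotations type-check because indices and lists are well-typed, yet the out-of-range access only fails at run time, which is exactly the class of bug a dependent type carrying the array's length could rule out statically.
      from typing import List

      def value_at(values: List[int], index: int) -> int:
          # Type-checks fine: 'index' is an int and 'values' is a list of ints,
          # but nothing in the types says that index < len(values).
          return values[index]

      pair = [10, 20]
      print(value_at(pair, 1))        # OK
      try:
          print(value_at(pair, 4))    # the "value of index 4" from a two-element array
      except IndexError as e:
          print("caught at run time only:", e)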
[Many thanks to Mike Williamson for guidance and useful conversations during the writing of this piece] Cheers, Tony.

    Read the article

  • How to Export a Contact to and Import a Contact from a vCard (.vcf) File in Outlook 2013

    - by Lori Kaufman
    vCard is the abbreviation for Virtual Business Card and is the standard format (.vcf files) for electronic business cards. vCards allow you to create and share contact information over the internet, such as in email messages and instant messaging. You can also use vCards to move contact information from one email or personal information management program to another, as long as both programs support the .vcf file format. vCards can contain name and address information, as well as phone numbers, email addresses, URLs, images, and audio clips. We will show you how to export a contact to and import a contact from a vCard, or .vcf file, in Outlook. First access the People section by clicking People at the bottom of the Outlook window. To view your contact in business card format, click Business Card in the Current View section of the Home tab. Select a contact by clicking on the name bar at the top of the business card. To export the selected contact as a vCard, click the File tab. On the Account Information screen, click Save As in the list of options on the left. The Save As dialog box displays. By default, the name of the contact is used to name the .vcf file in the File name edit box. Change the name, if desired, select a location for the file, and click Save. The contact is saved as a .vcf file. To import a vCard, or .vcf file, into Outlook, simply double-click on the .vcf file. By default, .vcf files are automatically associated with Outlook, so the file is opened in Outlook as a Contact. Make any changes or additions to the contact in the contact editing window. To save the contact, click Save & Close in the Actions section of the Contact tab. NOTE: Notice that because this contact is new, the full contact editing window displays rather than the Contact Card that displays when double-clicking on a contact. You can open the full contact editing window instead of the Contact Card when editing a contact or searching for a contact. The contact is added to the Contacts folder. You can add your contact information to a signature in business card format, and it will display as shown above in emails. We have covered how to create signatures and will be discussing more about signatures and business cards in Outlook.     
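    For reference, here is a hedged sketch of what such an exported card can look like on disk: the short script below writes a minimal vCard 3.0 file with made-up contact details; Outlook's own export may include additional fields.
      # Write a minimal vCard 3.0 (.vcf) file; the field values here are invented examples.
      vcard = "\r\n".join([
          "BEGIN:VCARD",
          "VERSION:3.0",
          "N:Doe;John;;;",
          "FN:John Doe",
          "ORG:Example Corp",
          "TEL;TYPE=WORK,VOICE:555-555-2222",
          "EMAIL;TYPE=INTERNET:john.doe@example.com",
          "ADR;TYPE=HOME:;;123 Main Street;Anytown;;;USA",
          "END:VCARD",
          "",
      ])

      with open("john_doe.vcf", "w", newline="") as f:
          f.write(vcard)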

    Read the article

  • Need Help with partitions and such Dual Booting 13.10 w/ windows 7

    - by Aymax
    I've spent the past couple of days freaking out about this and looking for answers, and I decided to resort to asking on forums. I probably should have before I wasted two entire days. I am trying to dual-boot install Ubuntu 13.10 on my x64 Windows 7 Home Premium computer. I have 6 GB of RAM, a 1 TB hard drive, and a 3.3 GHz dual core processor (just in case it matters). I've managed to figure some things out. I've burned the Ubuntu files onto a DVD, and I have been able to successfully run it off the disk. I also shrunk my Windows partition by 120 GB and partitioned that for Ubuntu (all using the Windows Disk Manager). Problems:
    1. When I turn my computer on with the DVD in the tray, the computer can't find Windows. It quickly flashes a screen that says something about not being able to find an operating system, and then goes to "grub" and asks what I want to do with Ubuntu. This scares me, because I don't know if that means I will not be able to boot Windows if I install Ubuntu.
    2. The Ubuntu 13.10 installer does not detect my Windows operating system. I only have the options to erase everything on my drive, or "something else." I choose that, which brings me to 3.
    3. I don't understand the partition table. I have no idea which drive I'm selecting to install stuff on, much less which one to select. I tried to tell by the amount of memory partitioned off, but none of the numbers seem to be accurate. Plus, all the names are dev/sda(#). I know Ubuntu knows the name of my partition, because on the sidebars it shows the names of the different drives, including the partition I made; so why don't they use the names? I have no idea what I'm going to be erasing. I've read that I should be able to tell which is which by the file system type, but they are all NTFS, including the one I made. My only other option was FAT, none for EXT2 or any of that like people said to do.
    My main concerns are accidentally erasing Windows or not being able to access Windows. Any feasible solution is helpful, whether it helps me with the install or makes Ubuntu see Windows. I realize this question has been asked a lot, but I have found no feasible answers so far. I am relatively new to this, and have never installed an operating system before, so I do not know most of the jargon. Please keep it relatively simple. I am not a programmer. Thanks.

    Read the article

  • High Resolution Timeouts

    - by user12607257
    The default resolution of application timers and timeouts is now 1 msec in Solaris 11.1, down from 10 msec in previous releases. This improves out-of-the-box performance of polling and event based applications, such as ticker applications, and even the Oracle rdbms log writer. More on that in a moment. As a simple example, the poll() system call takes a timeout argument in units of msec: System Calls poll(2) NAME poll - input/output multiplexing SYNOPSIS int poll(struct pollfd fds[], nfds_t nfds, int timeout); In Solaris 11, a call to poll(NULL,0,1) returns in 10 msec, because even though a 1 msec interval is requested, the implementation rounds to the system clock resolution of 10 msec. In Solaris 11.1, this call returns in 1 msec. In specification lawyer terms, the resolution of CLOCK_REALTIME, introduced by POSIX.1b real time extensions, is now 1 msec. The function clock_getres(CLOCK_REALTIME,&res) returns 1 msec, and any library calls whose man page explicitly mention CLOCK_REALTIME, such as nanosleep(), are subject to the new resolution. Additionally, many legacy functions that pre-date POSIX.1b and do not explicitly mention a clock domain, such as poll(), are subject to the new resolution. Here is a fairly comprehensive list: nanosleep pthread_mutex_timedlock pthread_mutex_reltimedlock_np pthread_rwlock_timedrdlock pthread_rwlock_reltimedrdlock_np pthread_rwlock_timedwrlock pthread_rwlock_reltimedwrlock_np mq_timedreceive mq_reltimedreceive_np mq_timedsend mq_reltimedsend_np sem_timedwait sem_reltimedwait_np poll select pselect _lwp_cond_timedwait _lwp_cond_reltimedwait semtimedop sigtimedwait aiowait aio_waitn aio_suspend port_get port_getn cond_timedwait cond_reltimedwait setitimer (ITIMER_REAL) misc rpc calls, misc ldap calls This change in resolution was made feasible because we made the implementation of timeouts more efficient a few years back when we re-architected the callout subsystem of Solaris. Previously, timeouts were tested and expired by the kernel's clock thread which ran 100 times per second, yielding a resolution of 10 msec. This did not scale, as timeouts could be posted by every CPU, but were expired by only a single thread. The resolution could be changed by setting hires_tick=1 in /etc/system, but this caused the clock thread to run at 1000 Hz, which made the potential scalability problem worse. Given enough CPUs posting enough timeouts, the clock thread could be a performance bottleneck. We fixed that by re-implementing the timeout as a per-CPU timer interrupt (using the cyclic subsystem, for those familiar with Solaris internals). This decoupled the clock thread frequency from timeout resolution, and allowed us to improve default timeout resolution without adding CPU overhead in the clock thread. Here are some exceptions for which the default resolution is still 10 msec. The thread scheduler's time quantum is 10 msec by default, because preemption is driven by the clock thread (plus helper threads for scalability). See for example dispadmin, priocntl, fx_dptbl, rt_dptbl, and ts_dptbl. This may be changed using hires_tick. The resolution of the clock_t data type, primarily used in DDI functions, is 10 msec. It may be changed using hires_tick. These functions are only used by developers writing kernel modules. A few functions that pre-date POSIX CLOCK_REALTIME mention _SC_CLK_TCK, CLK_TCK, "system clock", or no clock domain. These functions are still driven by the clock thread, and their resolution is 10 msec. 
They include alarm, pcsample, times, clock, and setitimer for ITIMER_VIRTUAL and ITIMER_PROF. Their resolution may be changed using hires_tick. Now back to the database. How does this help the Oracle log writer? Foreground processes post a redo record to the log writer, which releases them after the redo has committed. When a large number of foregrounds are waiting, the release step can slow down the log writer, so under heavy load, the foregrounds switch to a mode where they poll for completion. This scales better because every foreground can poll independently, but at the cost of waiting the minimum polling interval. That was 10 msec, but is now 1 msec in Solaris 11.1, so the foregrounds process transactions faster under load. Pretty cool.
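    A quick way to observe the reported resolution and the actual poll() timeout behaviour on a given box is sketched below (Python 3.3+ on a Unix-like system; my own illustration). On Solaris 11.1 the 1 msec request should return in about 1 msec, and on earlier releases closer to 10 msec.
      import select
      import time

      # Quick illustration; the observed numbers depend on the OS release and hardware.
      # Reported resolution of CLOCK_REALTIME, in seconds.
      print("clock_getres(CLOCK_REALTIME):", time.clock_getres(time.CLOCK_REALTIME))

      # Time an empty poll() with a 1 msec timeout, like poll(NULL, 0, 1) in C.
      p = select.poll()
      start = time.perf_counter()
      p.poll(1)                      # timeout argument is in milliseconds
      elapsed = time.perf_counter() - start
      print("poll with a 1 msec timeout returned after %.3f msec" % (elapsed * 1000))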

    Read the article

  • Are we queueing and serializing properly?

    - by insta
    We process messages through a variety of services (one message will touch probably 9 services before it's done, each doing a specific IO-related function). Right now we have a combination of the worst-case (XML data contract serialization) and best-case (in-memory MSMQ) for performance. The nature of the message means that our serialized data ends up about 12-15 kilobytes, and we process about 4 million messages per week. Persistent messages in MSMQ were too slow for us, and as the data grows we are feeling the pressure from MSMQ's memory-mapped files. The server is at 16GB of memory usage and growing, just for queueing. Performance also suffers when the memory usage is high, as the machine starts swapping. We're already doing the MSMQ self-cleanup behavior. I feel like there's a part we're doing wrong here. I tried using RavenDB to persist the messages and just queueing an identifier, but the performance there was very slow (1000 messages per minute, at best). I'm not sure if that's a result of using the development version or what, but we definitely need a higher throughput[1]. The concept worked very well in theory but performance was not up to the task. The usage pattern has one service acting as a router, which does all reads. The other services will attach information based on their 3rd party hook, and forward back to the router. Most objects are touched 9-12 times, although about 10% are forced to loop around in this system for awhile until the 3rd parties respond appropriately. The services right now account for this and have appropriate sleeping behaviors, as we utilize the priority field of the message for this reason. So, my question, is what is an ideal stack for message passing between discrete-but-LAN'ed machines in a C#/Windows environment? I would normally start with BinaryFormatter instead of XML serialization, but that's a rabbit hole if a better way is to offload serialization to a document store. Hence, my question. [1]: The nature of our business means the sooner we process messages, the more money we make. We've empirically proven that processing a message later in the week means we are less likely to make that money. While performance of "1000 per minute" sounds plenty fast, we really need that number upwards of 10k/minute. Just because I'm giving numbers in messages per week doesn't mean we have a whole week to process those messages.
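    To put the throughput targets in one place, here is a small back-of-the-envelope calculation based only on the figures quoted above (4 million messages of roughly 12-15 KB per week, the 1,000/minute trial result, and the desired 10,000/minute); it is just arithmetic, not a measurement.
      # Rough throughput arithmetic from the numbers in the question.
      messages_per_week = 4_000_000
      minutes_per_week = 7 * 24 * 60

      average_per_minute = messages_per_week / minutes_per_week
      print(f"average load: ~{average_per_minute:.0f} messages/minute")

      for rate_per_minute in (1_000, 10_000):
          hours_for_week = messages_per_week / rate_per_minute / 60
          print(f"at {rate_per_minute}/min, a week's volume takes ~{hours_for_week:.1f} hours to drain")

      # Payload volume at ~13.5 KB per message (midpoint of 12-15 KB).
      weekly_gb = messages_per_week * 13.5 / 1024 / 1024
      print(f"weekly serialized volume: ~{weekly_gb:.0f} GB")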

    Read the article

  • Oracle EBS ????“????”???????

    - by Steve He(???)
    Oracle E-Business ?????????????????????,???????????EBS?????????????????“????”,???????????????????????????????? ????????? notes ?????,?????????????????????,?notes??????????????????,????????????? note,???????? ???????????????????????????????,??????????????????????????????,“????”??????????????????????????,???????????????????????????? ?? EBS ????????“????”?????? Doc ID 1501724.1 ???? EBS “????”???? ??????Receivables Transactions?“????”: ??? "Entering / Updating Transactions"????,????????: ????? "Transaction numbers are not in sequence",????????????ID 197212.1: How To Setup Gapless Document Sequencing in Receivables. EBS?????????“????” ?: Advanced Pricing Applications Technology Configurator General Ledger Human Capital Management Inventory Management Order Management Payables Process Manufacturing Purchasing Receivables Shipping Value Chain Planning

    Read the article

  • Bad disk performance on HP DL360 with Smart Array P400i RAID controller

    - by sarge
    I have an HP DL360 server with 4x 146GB SAS disks and a Smart Array P400i RAID controller with 256MB cache. The disks are in RAID 5 (3 disks + 1 hot spare). The server is running VMware ESX 3i. The disk write performance is really bad. Here are some numbers:
      ns1:~# hdparm -tT /dev/sda
      /dev/sda:
       Timing cached reads:   3364 MB in 2.00 seconds = 1685.69 MB/sec
       Timing buffered disk reads:  18 MB in 3.79 seconds = 4.75 MB/sec
      ns1:~# time sh -c "dd if=/dev/zero of=ddfile bs=8k count=125000 && sync"
      125000+0 records in
      125000+0 records out
      1024000000 bytes (1.0 GB) copied, 282.307 s, 3.6 MB/s
      real    4m52.003s
      user    0m2.160s
      sys     3m10.796s
    Compared to another server, those numbers are terrible: Dell R200, 2x 500GB SATA disks, PERC RAID controller (disks are mirrored).
      web4:~# hdparm -tT /dev/sda
      /dev/sda:
       Timing cached reads:   6584 MB in 2.00 seconds = 3297.79 MB/sec
       Timing buffered disk reads:  316 MB in 3.02 seconds = 104.79 MB/sec
      web4:~# time sh -c "dd if=/dev/zero of=ddfile bs=8k count=125000 && sync"
      125000+0 records in
      125000+0 records out
      1024000000 bytes (1.0 GB) copied, 35.2919 s, 29.0 MB/s
      real    0m36.570s
      user    0m0.476s
      sys     0m32.298s
    The server isn't very loaded and the VMware Infrastructure Client performance monitor is showing 550KBps average read and 1208KBps average write for the last 30 minutes (highest write rate: 6.6MBps). This has been a problem from the start. Any ideas?

    Read the article

  • How many bits for sequence number using Go-Back-N protocol.

    - by Mike
    Hi Everyone, I'm a regular over at Stack Overflow (Software developer) that is trying to get through a networking course. I got a homework problem I'd like to have a sanity check on. Here is what I got. Q: A 3000-km-long T1 trunk is used to transmit 64-byte frames using Go-Back-N protocol. If the propagation speed is 6 microseconds/km, how many bits should the sequence numbers be? My Answer: For this questions what we need to do is lay the base knowledge. What we are trying to find is the size of the largest sequence number we should us using Go-Back-N. To figure this out we need to figure out how many packets can fit into our link at a time and then subtract one from that number. This will ensure that we never have two packets with the same sequence number at the same time in the link. Length of link: 3,000km Speed: 6 microseconds / km Frame size: 64 bytes T1 transmission speed: 1544kb/s (http://ckp.made-it.com/t1234.html) Propagation time = 6 microseconds / km * 3000 km = 18,000 microseconds (18ms). Convert 1544kb to bytes = 1544 * 1024 = 1581056 bytes Transmission time = 64 bytes / 1581056bytes / second = 0.000040479 seconds (0.4ms) So then if we take the 18ms propagation time and divide it by the 0.4ms transmission time we will see that we are going to be able to stuff ( 18 / 0.4) 45 packets into the link at a time. That means that our sequence number should be 2 ^ 45 bits long! Am I going in the right direction with this? Thanks, Mike
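    One way to sanity-check the arithmetic is the short sketch below. Note that it assumes the nominal 1.544 Mbps T1 line rate and sizes the window over the full round trip, which is the usual convention for Go-Back-N, so the frame count and bit count it prints will differ from the one-way figure worked out above.
      import math

      # Go-Back-N sizing sketch. Assumes the nominal T1 line rate of 1.544 Mbps and
      # uses the full round-trip time; the one-way calculation above gives a smaller window.
      line_rate_bps = 1_544_000
      frame_bits = 64 * 8
      prop_delay = 3000 * 6e-6              # 3000 km at 6 microseconds/km, one way, in seconds

      t_transmit = frame_bits / line_rate_bps
      window = 1 + math.ceil(2 * prop_delay / t_transmit)   # frames outstanding over one RTT
      bits = math.ceil(math.log2(window + 1))               # Go-Back-N needs window <= 2^m - 1

      print(f"transmission time: {t_transmit * 1e3:.3f} ms, one-way propagation: {prop_delay * 1e3:.0f} ms")
      print(f"window: about {window} frames -> {bits}-bit sequence numbers")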

    Read the article

  • Windows Server 2008 / SQL 2008 Licensing for Authenticated Web Application

    - by MikeM
    Hello, I'm trying to crunch some numbers to see what the software costs involved are for hosting an application we are developing. Users will not be anonymous - they will need to log in.
    SQL Server 2008: SQL Server licensing is easy - it will be licensed per-processor. No real fuss there. The cost of CALs would be much higher for the number of users as compared to the processor licenses.
    Windows Server 2008: This is where it gets trickier. We need to license the OS for both the web servers (there will be a couple) and the database servers (also a couple). The Web Servers could run on the Web Edition without a need for CALs, but if you continue reading, you will see that may not matter much because I will likely have user CALs for each user anyway. We can't use the "External Connector" for any of the Windows licenses, because that doesn't cover customers who are paying to access a hosted application. We can't use the Web Edition for the SQL Servers because that license only allows a database running on Web Edition to host data for the local web application (i.e. other web servers can't connect to it). So that leaves us with the "full" editions of Windows Server for the database server OS. I find this a little ridiculous, and I feel as though I must be missing something, but it looks to me like I will actually need to buy a CAL for every user who signs up to use our service. I feel like I'm missing something because that means that for every user, I have to shell out $40 for a CAL. That could be one or two years' worth of revenue from each user for an inexpensive service! Is there any way to serve a web application to authenticated users without paying for individual Windows Server CALs, if the web servers and SQL servers are separate boxes?

    Read the article

  • Problem running Apache-Tomcat on every web browser installed in Windows 7

    - by Kush
    Hello everyone, I'm working on a web application in JSP and my web container is Apache Tomcat 7.0.2 (its portable cross-platform version). Although I've made extensive use of HTML5/CSS3 and my target browser is Google Chrome, I'm able to open the Apache server only in the Opera web browser; none of the other installed browsers can run it. Here are the steps I followed to start the server on my Windows 7 machine:
    - Place the 'apache-tomcat-7.0.2' directory on my root partition (i.e. C: drive).
    - Execute 'startup.bat' from the 'bin' directory in it (startup.sh on Linux/Unix).
    - Then a console window opens that shows the log during server startup (separately from the Command Prompt), and I need to keep that console running in order to keep the Apache server running, so I minimize it.
    - Then I open 'http://localhost:8080/' in various web browsers, and I can see the Apache server homepage at that address only in Opera (11.01); none of the other installed browsers can open it (Chrome 9, Firefox 4 Beta 10, or IE8).
    - I also tried other port numbers, but none of them worked.
    What can I do to make Apache run in every browser installed on my computer? My computer dual boots Windows 7 and Ubuntu 10.10, and in Ubuntu every installed web browser can reach Apache once I start it, but the same is not working in Windows.

    Read the article

  • Strange stream of HTTP GET requests in Apache logs, from Amazon EC2 instances

    - by Alexandre Boeglin
    I just had a look at my Apache logs, and I see a lot of very similar requests:
      GET / HTTP/1.1
      User-Agent: curl/7.24.0 (i386-redhat-linux-gnu) libcurl/7.24.0 \
        NSS/3.13.5.0 zlib/1.2.5 libidn/1.18 libssh2/1.2.2
      Host: [my_domain].org
      Accept: */*
    There's a steady stream of those, about 2 or 3 per minute; they all request the same domain and resource (there are slight variations in user agent version numbers); they come from a lot of different IPv4 and IPv6 addresses, in blocks that belong to Amazon EC2 (in Singapore, Japan, Ireland and the USA). I tried to look for an explanation online, or even just similar stories, but couldn't find any. Has anyone got a clue as to what this is? It doesn't look malicious per se, but it's just annoying me, and I couldn't find any more information about it. I first suspected it could be a bot checking if my server is still up, but: I don't remember subscribing to such a service; why would it need to check my site twice every minute; and why doesn't it use a clearly identifying FQDN? Or should I send this question to Amazon, via their abuse contact? Thanks!

    Read the article

  • Remote Desktop Mobile mangles barcodes coming from scanner

    - by sfonck
    We have an application here that uses handhelds to scan barcodes. These handhelds open a remote desktop session to a server where the application runs. This works fine. Now we have bought some new Motorola MC55's running 'Windows Mobile 6.1 Classic', and when using the application over remote desktop it mangles the characters of the barcodes. I have already tried the following things:
    - When scanning a barcode on the MC55 itself, it is displayed correctly.
    - When scanning a barcode via the remote desktop into a Notepad session, it is incorrect.
    - Played with all options of 'Remote Desktop Mobile' - no result.
    - Disabled 'autocorrect' and 'suggest words when entering text' in the input settings - no result.
    The strange thing is:
    - A barcode that consists of only numbers gets scanned correctly.
    - The mangled characters come through in lower case.
    - For some codes the \t is mangled in between (it should normally be entered after the barcode), e.g.: 'PERIN4' becomes 'ERINp4', 'MGZB' becomes 'GZB m', 'BAK664' becomes 'AK664 b', 'MAGBFA01' becomes 'AGBFmA01', while '5021879949500' gets scanned correctly.
    Final solution: the supplier of the handhelds said the handheld was sending the characters too fast over the remote desktop connection. They changed the handheld to wait 50ms between sending each character, which now produces correct results. Scanning a barcode became somewhat slower, but it's barely noticeable to end users.

    Read the article

  • New Dell PE R710 - Storage Question

    - by rihatum
    Hi All, I have a Dell PE R710, received from Dell in the following state:
    - Windows Disk 0: 1800GB (Volumes C & D)
    - Windows Disk 1: 526GB (Volume E)
    - PERC 6/i Integrated RAID Controller
    - 6 x 500GB Nearline SAS 7200RPM HDDs
    - RAID 5 configuration with two virtual disks
    I have installed Dell OpenManage and it shows the following:
    - Virtual Disk 0 - State: Background Initialization (7%)
    - Virtual Disk 1 - State: Background Initialization (25%)
    Now when I click on Virtual Disk 0 it shows me all 6 disks, and the same happens when I click on Virtual Disk 1: it displays all 6 disks. But when I click on Storage Perc6i Connector 0 I get 4 physical disks with the following numbers: Physical Disk 0:0:0, Physical Disk 0:0:1, Physical Disk 0:0:2, Physical Disk 0:0:3. When I click on Storage Perc6i Connector 1 I get 2 physical disks listed in the following way: Physical Disk 1:0:4, Physical Disk 1:0:5. I am a little confused by this description: does 1:0:4 translate to Controller 1, Disk 4? Does this integrated RAID card have two controllers coming out of it? Also, when I first switched on the machine, the boot partition was showing 1GB available out of 40GB; now it's showing 38GB available out of 40GB. Is this because the virtual disks are still initializing? Any recommendations or suggestions? Also, this server has 6 x 500GB Nearline SAS hard drives; what would be a good RAID config? We are planning to use it for Hyper-V with quite a few (7 or 8) virtual servers, so your suggestions would be helpful. Also, while the virtual disks are in an initialization state, can I destroy and re-create the RAID configuration? Would I have to do it in the BIOS (CTRL-M)? Thanks and Regards

    Read the article

  • mdadm cron job sends email that cron has run

    - by Andrew
    I've got an Ubuntu 8.04 server using mdadm to create several RAID1 arrays. I created /etc/cron.hourly/mdadm as follows:
      #! /bin/sh
      set -e
      mdadm --monitor /dev/md0 /dev/md3 /dev/md4 --oneshot
    (Yes, the array numbers are not sequential, and I'm not using --scan because I have a degraded array that may or may not have been used as swap and that I can't delete, but I think that's a separate issue. If it's the underlying cause of this, I need to fix it.) mdadm sends me email (configured in /etc/mdadm/mdadm.conf) on DegradedArray etc. events. This is the desired behaviour. What is not desired, and what I can't work out, is why cron is sending me (relatively pointless) emails, via an alias in /etc/aliases:
      From: root@<hostname> (Cron Daemon)
      To: root@<hostname>
      Subject: Cron <root@<hostname>> cd / && run-parts --report /etc/cron.hourly
      Content-Type: text/plain; charset=ANSI_X3.4-1968
      X-Cron-Env: <SHELL=/bin/sh>
      X-Cron-Env: <PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin>
      X-Cron-Env: <HOME=/root>
      X-Cron-Env: <LOGNAME=root>
      Message-Id: <id@hostname>
      Date: Fri, 7 May 2010 13:17:01 +0930 (CST)
      /etc/cron.hourly/mdadm:
      mdadm: Monitor using email address "<root_alias@domain>" from config file
    I've got a dozen other servers behaving correctly (mdadm sends email, cron doesn't) with identical /etc/crontab files:
      # /etc/crontab: system-wide crontab
      # <snip comments>
      SHELL=/bin/sh
      PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
      # m h dom mon dow user command
      17 * * * * root cd / && run-parts --report /etc/cron.hourly
      <snip anacron jobs>
    Should I simply remove the --report, or is there something else in my cron config somewhere causing this?

    Read the article

  • Performance of file operations on thousands of files on NTFS vs HFS, ext3, others

    - by peterjmag
    [Crossposted from my Ask HN post. Feel free to close it if the question's too broad for superuser.] This is something I've been curious about for years, but I've never found any good discussions on the topic. Of course, my Google-fu might just be failing me... I often deal with projects involving thousands of relatively small files. This means that I'm frequently performing operations on all of those files or a large subset of them—copying the project folder elsewhere, deleting a bunch of temporary files, etc. Of all the machines I've worked on over the years, I've noticed that NTFS handles these tasks consistently slower than HFS on a Mac or ext3/ext4 on a Linux box. However, as far as I can tell, the raw throughput isn't actually slower on NTFS (at least not significantly), but the delay between each individual file is just a tiny bit longer. That little delay really adds up for thousands of files. (Side note: From what I've read, this is one of the reasons git is such a pain on Windows, since it relies so heavily on the file system for its object database.) Granted, my evidence is merely anecdotal—I don't currently have any real performance numbers, but it's something that I'd love to test further (perhaps with a Mac dual-booting into Windows). Still, my geekiness insists that someone out there already has. Can anyone explain this, or perhaps point me in the right direction to research it further myself?
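    If you want to turn the anecdote into numbers, a quick cross-platform micro-benchmark along these lines (my own sketch, not a rigorous methodology) times creating, copying, and deleting a few thousand small files on whatever filesystem hosts the temporary directory:
      import shutil
      import tempfile
      import time
      from pathlib import Path

      # Rough micro-benchmark sketch; adjust N and the payload size to taste.
      N = 5000
      PAYLOAD = b"x" * 1024

      def timed(label, fn):
          start = time.perf_counter()
          fn()
          print(f"{label}: {time.perf_counter() - start:.2f}s for {N} files")

      base = Path(tempfile.mkdtemp())
      src, dst = base / "src", base / "dst"
      src.mkdir()

      timed("create", lambda: [(src / f"f{i}.txt").write_bytes(PAYLOAD) for i in range(N)])
      timed("copy",   lambda: shutil.copytree(src, dst))
      timed("delete", lambda: shutil.rmtree(base))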

    Read the article

  • ipvsadm lists a few hosts by IP only, rest by name

    - by dmourati
    We use keepalived to manage our Linux Virtual Server (LVS) load balancer. The LVS VIPs are setup to use a FWMARK as configured in iptables. virtual_server fwmark 300000 { delay_loop 10 lb_algo wrr lb_kind NAT persistence_timeout 180 protocol TCP real_server 10.10.35.31 { weight 24 MISC_CHECK { misc_path "/usr/local/sbin/check_php_wrapper.sh 10.10.35.31" misc_timeout 30 } } real_server 10.10.35.32 { weight 24 MISC_CHECK { misc_path "/usr/local/sbin/check_php_wrapper.sh 10.10.35.32" misc_timeout 30 } } real_server 10.10.35.33 { weight 24 MISC_CHECK { misc_path "/usr/local/sbin/check_php_wrapper.sh 10.10.35.33" misc_timeout 30 } } real_server 10.10.35.34 { weight 24 MISC_CHECK { misc_path "/usr/local/sbin/check_php_wrapper.sh 10.10.35.34" misc_timeout 30 } } } http://www.austintek.com/LVS/LVS-HOWTO/HOWTO/LVS-HOWTO.fwmark.html [root@lb1 ~]# iptables -L -n -v -t mangle Chain PREROUTING (policy ACCEPT 182G packets, 114T bytes) 190M 167G MARK tcp -- * * 0.0.0.0/0 w1.x1.y1.4 multiport dports 80,443 MARK set 0x493e0 62M 58G MARK tcp -- * * 0.0.0.0/0 w1.x1.y2.4 multiport dports 80,443 MARK set 0x493e0 [root@lb1 ~]# ipvsadm -L IP Virtual Server version 1.2.1 (size=4096) Prot LocalAddress:Port Scheduler Flags -> RemoteAddress:Port Forward Weight ActiveConn InActConn FWM 300000 wrr persistent 180 -> 10.10.35.31:0 Masq 24 1 0 -> dis2.domain.com:0 Masq 24 3 231 -> 10.10.35.33:0 Masq 24 0 208 -> 10.10.35.34:0 Masq 24 0 0 At the time the realservers were setup, there was a misconfigured dns for some hosts in the 10.10.35.0/24 network. Thereafter, we fixed the DNS. However, the hosts continue to show up as only their IP numbers (10.10.35.31,10.10.35.33,10.10.35.34) above. [root@lb1 ~]# host 10.10.35.31 31.35.10.10.in-addr.arpa domain name pointer dis1.domain.com. OS is CentOS 6.3. Ipvsadm is ipvsadm-1.25-10.el6.x86_64. kernel is kernel-2.6.32-71.el6.x86_64. Keepalived is keepalived-1.2.7-1.el6.x86_64. How can we get ipvsadm -L to list all realservers by their proper hostnames?

    Read the article

  • mysql master-master setup as a way to simply master-slave promotion

    - by Chris Go
    I'm trying to see if the following plan is viable. Goal here is to be able to do HA (uptime) and not necessarily for load -- writes are fine on one MySQL 5.5 server (with innodb) but not really possible when the database is down. Currently, I have a master-slave replication setup which works fine except it doesn't have automatic promotion (obviously). what I am planning on doing is setup master-master replication to possibly do this "automatic promotion" using Amazon Route 53 DNS Failover (Health checks). What I am trying to avoid is to NOT have to do the auto-increment trick because the "business folks" got used to the auto-incrementing PK as consecutive numbers (yeah, I know this is bad but data is from 2004). So, setup the master-master replication WITHOUT the auto-increment collision prevention bit. The primary master is db1.domain.com and secondary master is db2.domain.com In Amazon Route 53, setup DNS Failover record for db.domain.com - primary failover is db1.domain.com - with a TCP healthcheck on IP address port 3306 - secondary failover is db2.domain.com - with a TCP healthcheck on IP address port 3306 Most of the time (99%), unless tcp://db1.domain.com:3306 is dead, db1.domain.com will be served up on DNS hits to db.domain.com. In fact, hopefully this is 100%. The possible downsides of this is the loss of a primary key (collision) and I think I am OK with losing one order. We are a low data volume B2B business and can just call our client up if this occurs (like an order disappearing). Does this sound like a good plan? Then I will also run another slave replication on db1.domain.com as "master" to a slave-db1.domain.com -- not sure why, maybe for heavy SELECTs?

    Read the article

  • Multiple contacts with shared information

    - by Keith Thompson
    Background: I currently have several hundred contacts, synchronized between a Microsoft Exchange server and several mobile devices. I also save exported copies of the contacts in .vcf format. Is there a good way (application, file format, whatever) to maintain contacts with shared information? A very common scenario is that I have contacts for two or more people who live in the same house, for example:
      John Doe
      123 Main Street, Anytown USA
      Home: 555-555-1111
      Work: 555-555-2222
      Mobile: 555-555-3333
      E-mail: [email protected]

      Jane Doe
      123 Main Street, Anytown USA
      Home: 555-555-1111
      Work: 555-555-4444
      Mobile: 555-555-5555
      E-mail: [email protected]
    As you can see, both contacts have the same home address and phone number, but distinct names and work and mobile phone numbers. (Other information might also be either shared or distinct.) The applications and file formats I'm familiar with don't seem to have a good way to deal with this. If I use a single "John & Jane Doe" contact for both, it's difficult to distinguish the distinct information (if I want to call Jane's mobile phone rather than John's). If I use a separate contact for each, I have to remember to update both of them (or all of them for N > 2) when they move or change their home phone number. An ideal solution would let me create a record containing information for their household, and have each of their contact records contain a reference to the household record, so that when I view John's contact record I see both shared and distinct information. Is there anything out there that has good support for this kind of thing? (I would think there would be, since it's a very common scenario.) (I suppose I could roll my own system that generates merged .vcf files from some extended format, but that wouldn't play well with synchronizing across multiple devices.)
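    As a sketch of the "roll my own extended format" idea mentioned above (purely illustrative; the field names and household structure are invented), one could keep the shared household data in one place and generate a merged .vcf per person:
      # Generate per-person vCards from a shared household record (illustrative format).
      household = {
          "address": "123 Main Street, Anytown USA",
          "home_phone": "555-555-1111",
          "members": [
              {"name": "John Doe", "work": "555-555-2222", "mobile": "555-555-3333"},
              {"name": "Jane Doe", "work": "555-555-4444", "mobile": "555-555-5555"},
          ],
      }

      def to_vcard(person, shared):
          first, last = person["name"].split()[0], person["name"].split()[-1]
          return "\r\n".join([
              "BEGIN:VCARD",
              "VERSION:3.0",
              f"N:{last};{first};;;",
              f"FN:{person['name']}",
              f"ADR;TYPE=HOME:;;{shared['address']};;;;",
              f"TEL;TYPE=HOME,VOICE:{shared['home_phone']}",
              f"TEL;TYPE=WORK,VOICE:{person['work']}",
              f"TEL;TYPE=CELL,VOICE:{person['mobile']}",
              "END:VCARD",
              "",
          ])

      for member in household["members"]:
          filename = member["name"].replace(" ", "_").lower() + ".vcf"
          with open(filename, "w", newline="") as f:
              f.write(to_vcard(member, household))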

    Read the article
