Search Results

Search found 33788 results on 1352 pages for 'right'.


  • Best way to throw exception and avoid code duplication

    - by JF Dion
    I am currently writing code and want to make sure all the params that get passed to a function/method are valid. Since I am writing in PHP, I don't have access to the facilities of languages like C, C++ or Java for checking parameter values and types:

    public function inscriptionExists($sectionId, $userId) // PHP
    vs.
    public boolean inscriptionExists(int sectionId, int userId) // Java

    So I have to rely on exceptions if I want to make sure that both of my params are integers. Since I have a lot of places where I need to check param validity, what would be the best way to create a validation/exception mechanism and avoid code duplication? I was thinking of a static factory (since I don't want to pass it to all of my classes) with a signature like:

    public static function factory($value, $valueType, $exceptionType = 'InvalidArgumentException');

    which would then call the right sub-process to validate based on the type. Am I on the right track, or am I going completely off the road and overthinking my problem?
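
    For what it's worth, here is a minimal sketch of the kind of static validation factory the question describes. It is purely illustrative: the Validate class name, the supported type strings and the error messages are assumptions, not taken from the question.

        <?php
        class Validate
        {
            // Validates $value against $valueType and throws $exceptionType on failure.
            public static function factory($value, $valueType, $exceptionType = 'InvalidArgumentException')
            {
                switch ($valueType) {
                    case 'int':
                        // Accept real integers and integer-looking strings such as "42".
                        $valid = is_int($value) || ctype_digit((string) $value);
                        break;
                    case 'string':
                        $valid = is_string($value);
                        break;
                    default:
                        throw new LogicException("Unknown validation type: $valueType");
                }

                if (!$valid) {
                    throw new $exceptionType("Expected $valueType, got " . gettype($value));
                }
            }
        }

        // Usage inside a method such as inscriptionExists($sectionId, $userId):
        // Validate::factory($sectionId, 'int');
        // Validate::factory($userId, 'int');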

    Read the article

  • Use Drive Mirroring for Instant Backup in Windows 7

    - by Trevor Bekolay
    Even with the best backup solution, a hard drive crash means you’ll lose a few hours of work. By enabling drive mirroring in Windows 7, you’ll always have an up-to-date copy of your data. Windows 7’s mirroring – which is only available in Professional, Enterprise, and Ultimate editions – is a software implementation of RAID 1, which means that two or more disks are holding the exact same data. The files are constantly kept in sync, so that if one of the disks fails, you won’t lose any data. Note that mirroring is not technically a backup solution, because if you accidentally delete a file, it’s gone from both hard disks (though you may be able to recover the file). As an additional caveat, having mirrored disks requires changing them to “dynamic disks,” which can only be read within modern versions of Windows (you may have problems working with a dynamic disk in other operating systems or in older versions of Windows). See this Wikipedia page for more information. You will need at least one empty disk to set up disk mirroring. We’ll show you how to mirror an existing disk (of equal or lesser size) without losing any data on the mirrored drive, and how to set up two empty disks as mirrored copies from the get-go.

    Mirroring an Existing Drive

    Click on the start button and type partitions in the search box. Click on the Create and format hard disk partitions entry that shows up. Alternatively, if you’ve disabled the search box, press Win+R to open the Run window and type in: diskmgmt.msc The Disk Management window will appear. We’ve got a small disk, labeled OldData, that we want to mirror in a second disk of the same size. Note: The disk that you will use to mirror the existing disk must be unallocated. If it is not, then right-click on it and select Delete Volume… to mark it as unallocated. This will destroy any data on that drive. Right-click on the existing disk that you want to mirror. Select Add Mirror…. Select the disk that you want to use to mirror the existing disk’s data and press Add Mirror. You will be warned that this process will change the existing disk from basic to dynamic. Note that this process will not delete any data on the disk! The new disk will be marked as a mirror, and it will start copying data from the existing drive to the new one. Eventually the drives will be synced up (it can take a while), and any data added to the E: drive will exist on both physical hard drives.

    Setting Up Two New Drives as Mirrored

    If you have two new equal-sized drives, you can format them to be mirrored copies of each other from the get-go. Open the Disk Management window as described above. Make sure that the drives are unallocated. If they’re not, and you don’t need the data on either of them, right-click and select Delete volume…. Right-click on one of the unallocated drives and select New Mirrored Volume…. A wizard will pop up. Click Next. Click on the drives you want to hold the mirrored data and click Add. Note that you can add any number of drives. Click Next. Assign it a drive letter that makes sense, and then click Next. You’re limited to using the NTFS file system for mirrored drives, so enter a volume label, enable compression if you want, and then click Next. Click Finish to start formatting the drives. You will be warned that the new drives will be converted to dynamic disks. And that’s it! You now have two mirrored drives. Any files added to E: will reside on both physical disks, in case something happens to one of them.
    Conclusion

    While the switch from basic to dynamic disks can be a problem for people who dual-boot into another operating system, setting up drive mirroring is an easy way to make sure that your data can be recovered in case of a hard drive crash. Of course, even with drive mirroring, we advocate regular backups to external drives or online backup services.

    Read the article

  • Steps to deploying on Windows Azure

    - by Vincent Grondin
    Alright, these steps might be a little detailed and a few might not be necessary, but it's still a pretty accurate road map to deploying on Azure...

    1) Open your solution.
    2) Rebuild ALL.
    3) Right-click on your Azure project and click "Publish".
    4) It should open a Windows Explorer window with your package to be uploaded (.cspkg) and its associated configuration (.cscfg) to be uploaded too. Keep it open, you'll need that path later on...
    5) It should also open a browser asking you to log in to your passport account, please do so.
    6) After this you will be redirected to the Azure Portal where you will see your Azure project name below the « Project Name » section. Click on it.
    7) Then you should be redirected to a detailed view of your account on Azure where you will create a new service by clicking the hyperlink in the top right corner.
    8) Choose the right service type for you, most likely the "Hosted Service" type.
    9) Choose a « Label » name and click « Next ».
    10) Choose a name for your service and validate that the name is available in the cloud by clicking the "Check Availability" button.
    11) At the bottom of this same page, you can choose to create a group for your service, use no group, or join an existing group. Creating a group means that all applications that belong to the same group see no cost for exchanging data with other applications of the same group. Most of the time when you create a single application, creating a group is not necessary. You should choose a region that's close to your own region.
    12) On the next window, you should see a "Production" environment and a "Staging" environment. Beware, because "Staging" and "Production" are two different environments in the cloud, and applications in "Staging", even when not running, do continue to rack up charges... Choose an environment and click "Deploy".
    13) In the following window, browse to the path where your .cspkg resides and then do the same thing with your .cscfg file. Choose a name for your Label, and click "Deploy"...
    14) From now on, the clock is ticking, and unless you have free Azure hours, your credit card is being billed…
    15) Click on the « Run » button to start your application.
    16) Be patient... be very patient…
    17) Once your application has finished starting, you should see a GREEN circle on the left side of the screen indicating that your application is READY. Click the URL to test your application, and remember that if your application is a service, you have to hit the "svc" class behind the link you see there. Something along the lines of http://testvince2.cloudapp.net/service1.svc (this is a fictional link)
    18) Hopefully your application will show up, or in the case of a service, you will see your service's WSDL, meaning that everything is working fine.

    Happy cloud computing all!

    Read the article

  • css: zooming out inside the browser moves rightmost floated div below other divs

    - by John Sonderson
    I am seeing something strange in both firefox and chrome when I increase the zoom level inside these browsers, although I see nothing wrong with my CSS... I am hoping someone on this group will be able to help. Here is the whole story: I have a right-floated top-level div containing three right-floated right. The three inner divs have all box-model measurements in pixels which add up to the width of the enclosing container. Everything looks fine when the browser size is 100%, but when I start making the browser smaller with CTRL+scrollwheel or CTRL+minus the rightmost margin shrinks down too fast and eventually becomes zero, forcing my rightmost floated inner div to fall down below the other two! I can't make sense out of this, almost seems like some integer division is being performed incorrectly in the browser code, but alas firefox and chrome both display the same result. Here is the example (just zoom out with CTRL-minus to see what I mean): Click Here to View What I Mean on Example Site Just to narrow things down a bit, the tags of interest are the following: div#mainContent div#contentLeft div#contentCenter div#contentRight I've searched stackoverflow for an answer and found the following posts which seem related to my question but was not able to apply them to the problem I am experiencing: http:// stackoverflow.com/questions/6955313/div-moves-incorrectly-on-browser-resize http:// stackoverflow.com/questions/18246882/divs-move-when-resizing-page http:// stackoverflow.com/questions/17637231/moving-an-image-when-browser-resizes http:// stackoverflow.com/questions/5316380/how-to-stop-divs-moving-when-the-browser-is-resized I've duplicated the html and css code below for your convenience: Here is the HTML: <!doctype html> <html> <head> <meta charset="utf-8"> <title>Pinco</title> <link href="css/style.css" rel="stylesheet" type="text/css"> </head> <body> <div id="wrapper"> <header> <div class="logo"> <a href="http://pinco.com"> <img class="logo" src="images/PincoLogo5.png" alt="Pinco" /> </a> </div> <div class="titolo"> <h1>Benvenuti!</h1> <h2>Siete arrivati al sito pinco.</h2> </div> <nav> <ul class="menu"> <li><a href="#">Menù Qui</a></li> <li><a href="#">Menù Quo</a></li> <li><a href="#">Menù Qua</a></li> </ul> </nav> </header> <div id="mainContent"> <div id="contentLeft"> <section> <article> <p> Lorem ipsum dolor sit amet, consectetur adipiscing elit. Quisque tempor turpis est, nec varius est pharetra scelerisque. Sed eu pellentesque purus, at cursus nisi. In bibendum tristique nunc eu mattis. Nulla pretium tincidunt ipsum, non imperdiet metus tincidunt ac. In et lobortis elit, nec lobortis purus. Cras ac viverra risus. Proin dapibus tortor justo, a vulputate ipsum lacinia sed. In hac habitasse platea dictumst. Phasellus sit amet malesuada velit. Fusce diam neque, cursus id dui ac, blandit vehicula tortor. Phasellus interdum ipsum eu leo condimentum, in dignissim erat tincidunt. Ut fermentum consectetur tellus, dignissim volutpat orci suscipit ac. Praesent scelerisque urna metus. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia Curae; Duis pulvinar, sem a sodales eleifend, odio elit blandit risus, a dapibus ligula orci non augue. Nullam vitae cursus tortor, eget malesuada lectus. Nulla facilisi. Cras pharetra nisi sit amet orci dignissim, a eleifend odio hendrerit. </p> </article> </section> </div> <div id="contentCenter"> <section> <article> <p> Maecenas vitae purus at orci euismod pretium. Nam gravida gravida bibendum. 
Donec nec dolor vel magna consequat laoreet in a urna. Phasellus cursus ultrices lorem ut sagittis. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Vivamus purus felis, ornare quis ante vel, commodo scelerisque tortor. Integer vel facilisis mauris. </p> <img src="images/auto1.jpg" width="272" height="272" /> <p> In urna purus, fringilla a urna a, ultrices convallis orci. Duis mattis sit amet leo sed luctus. Donec nec sem non nunc mattis semper quis vitae enim. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Suspendisse dictum porta quam, vel lobortis enim bibendum et. Donec iaculis tortor id metus interdum, hendrerit tincidunt orci tempor. Sed dignissim cursus mattis. </p> </article> </section> </div> <div id="contentRight"> <section> <article> <img src="images/auto2.jpg" width="272" height="272" /> <img src="images/auto3.jpg" width="272" height="272" /> <p> Cras eu quam lobortis, sodales felis ultricies, rhoncus neque. Aenean nisi eros, blandit ac lacus sit amet, vulputate sodales mi. Nunc eget purus ultricies, aliquam quam sit amet, porttitor velit. In imperdiet justo in quam tristique, eget semper nisi pellentesque. Cras fringilla eros enim, in euismod nisl imperdiet ac. Fusce tempor justo vitae faucibus luctus. </p> </article> </section> </div> </div> <footer> <div class="footerText"> <p> Copyright &copy; Pinco <br />Lorem ipsum dolor sit amet, consectetur adipiscing elit. <br />Fusce ornare turpis orci, nec egestas leo feugiat ac. <br />Morbi eget sem facilisis, laoreet erat ut, tristique odio. Proin sollicitudin quis nisi id consequat. </p> </div> <div class="footerLogo"> <img class="footerLogo" src="images/auto4.jpg" width="80" height="80" /> </div> </footer> </div> </body> </html> and here is the CSS: /* CSS Document */ * { margin: 0; border: 0; padding: 0; } body { background: #8B0000; /* darkred */; } body { margin: 0; border: 0; padding: 0; } div#wrapper { margin: 0 auto; width: 960px; height: 100%; background: #FFC0CB /* pink */; } header { position: relative; background: #005b97; height: 140px; } header div.logo { float: left; width: 360px; height: 140px; } header div.logo img.logo { width: 360px; height: 140px; } header div.titolo { float: left; padding: 12px 0 0 35px; color: black; } header div.titolo h1 { font-size: 36px; font-weight: bold; } header div.titolo h2 { font-size: 24px; font-style: italic; font-weight: bold; color: white;} header nav { position: absolute; right: 0; bottom: 0; } header ul.menu { background: black; } header ul.menu li { display: inline-block; padding: 3px 15px; font-weight: bold; } div#mainContent { float: left; width: 100%; /* width: 960px; *//* height: 860px; */ padding: 30px 0; text-align: justify; } div#mainContent img { margin: 12px 0; } div#contentLeft { height: 900px; float: left; margin-left: 12px; border: 1px solid black; padding: 15px; width: 272px; background: #ccc; } div#contentCenter { height: 900px; float: left; margin-left: 12px; border: 1px solid transparent; padding: 15px; width: 272px; background: #E00; } div#contentRight { height: 900px; float: left; margin-left: 12px; border: 1px solid black; padding: 15px; width: 272px; background: #ccc; } footer { clear: both; padding: 12px; background: #306; color: white; height: 80px; font-style: italic; font-weight: bold; } footer div.footerText { float: left; } footer div.footerLogo { float: right; } a { color: white; text-decoration: none; } Thanks.

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris. This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead. The Long Road To Stubs This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled: 4631488 lib/Makefile is too patient: .WAITs should be reduced This CR encapsulates a number of cronic issues with Solaris builds: We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware. Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everying in parallel. To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifing. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, its hard to think calmly, and there is no time for deep fixes. 
As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but but there are complications that limit the usefulness of that approach: To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available. The analysis will take time, and remember that we're constantly trying to make builds faster, not slower. By definition, such an approach will always be approximate, and therefore only incremantally more accurate than the hand written rules described above. The hand written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand written approach. Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot. In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that lead us to a game changing series of realizations: The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime. If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object. In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of its publically available functions and data. Could we build a stub object using the mapfile for the real object? It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel. When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust. 
We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following: Present the same set of global symbols, with the same ELF versioning, as the real object. Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment. Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object. If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object. We imagined the stub library feature working as follows: A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any object or shared libraries on the command line are are ignored. The extra information needed (function or data, size, and bss details) would be added to the mapfile. When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match. In exploring these ideas, I immediately run headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extentions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like: # DATA(i386) __iob 0x3c0 # DATA(amd64,sparcv9) __iob 0xa00 # DATA(sparc) __iob 0x140 iob; A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan: A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code. Another perl script used after both objects have been built, to compare the real and stub objects, using data from elfdump, and validate that they present the same linking interface. By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level. 
Ultimately though, the result was unsatisfactory as a basis for real product. There were so many issues: The use of stylized comments were fine for a prototype, but not close to professional enough for shipping product. The idea of having to document and support it was a large concern. The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code. A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work. A stub object should be identifyable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one. At that point, we needed to apply this prototype to building Solaris. As you might imagine, the task of modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years. Semi-coincidentally, one of the projects I tacked after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had them in mind as I moved forward. The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in in February 2010: 6916788 ld version 2 mapfile syntax PSARC/2009/688 Human readable and extensible ld mapfile syntax In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010: 6916796 OSnet mapfiles should use version 2 link-editor syntax That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax. 
I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can can be built in a fraction of a second: % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \ -ztext -zdefs -Bdirect ... real 0.019708910 user 0.010101680 sys 0.008528431 In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished. Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects however was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens. And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... ...and so, I backed away, put it down for a few months and did other work... ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it. Without stubs, the following gives a simplified high level view of how Solaris is built: An initially empty directory known as the proto, and referenced via the ROOT makefile macro is established to receive the files that make up the Solaris distribution. A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area. Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built. Subsequent passes run lint, and do packaging. Given this structure, the additions to use stub objects are: A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away. 
A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them. A new target is added to the Makefiles called stubinstall which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of existing plumbing used by the existing install rule. The setup rule runs stubinstall over the entire lib subtree as part of its initialization. All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization. There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them. After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would made merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one. 
And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefile, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypothesis were proposed, and shot down: Could we have disabled dmakes parallel feature? No, a quick check showed things being build in parallel. It was suggested that we might be I/O bound, and so, the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timing between the stub and non-stub cases were just too suspiciously identical. Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up. Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so, wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust. And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with 6993877 ld should produce stub objects PSARC/2010/397 ELF Stub Objects followed by the work to convert the ON consolidation in snv_161 (February 2011) with 7009826 OSnet should use stub objects 4631488 lib/Makefile is too patient: .WAITs should be reduced This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big, a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then. Conclusions and Looking Forward Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • Can I get sensible labels for lm-sensors output for "applesmc-isa-0300"?

    - by TK Kocheran
    2011 8,3 Macbook Pro running 64bit 11.10. When I run sensors from the lm-sensors package, I get a lot of information, but no way to understand it: coretemp-isa-0000 Adapter: ISA adapter Physical id 0: +53.0°C (high = +86.0°C, crit = +100.0°C) Core 0: +53.0°C (high = +86.0°C, crit = +100.0°C) Core 1: +52.0°C (high = +86.0°C, crit = +100.0°C) Core 2: +50.0°C (high = +86.0°C, crit = +100.0°C) Core 3: +49.0°C (high = +86.0°C, crit = +100.0°C) applesmc-isa-0300 Adapter: ISA adapter Left side : 2001 RPM (min = 2000 RPM) Right side : 2001 RPM (min = 2000 RPM) TB0T: +33.2°C TB1T: +33.2°C TB2T: +29.0°C TC0C: +52.8°C TC0D: +47.2°C TC0E: +51.8°C TC0F: +53.0°C TC0J: +1.0°C TC0P: +44.5°C TC1C: +52.0°C TC2C: +52.0°C TC3C: +52.0°C TC4C: +52.0°C TCFC: +0.2°C TCGC: +51.0°C TCSA: +52.0°C TCTD: +0.0°C TG0D: +44.5°C TG0P: +43.2°C THSP: +37.5°C TM0S: +57.5°C TMBS: +0.0°C TP0P: +50.0°C TPCD: +55.0°C The core temp info is really useful and I'm pretty sure that Left/Right Side refers to the two fans within, but otherwise, I have no idea what this information means. Is there something I can use to normalize this information?

    Read the article

  • How Microsoft Lost the API War - by Joel Spolsky

    - by TechTwaddle
    Came across another gem of an article by Joel Spolsky. It's a pretty old article written in June of 2004, has lot of tidbits and I really enjoyed reading it, so much in fact that I read it twice! So hit the link below and give it a read if you haven't already, How Microsoft Lost the API War - Joel Spolsky excerpt, "I first heard about this from one of the developers of the hit game SimCity, who told me that there was a critical bug in his application: it used memory right after freeing it, a major no-no that happened to work OK on DOS but would not work under Windows where memory that is freed is likely to be snatched up by another running application right away. The testers on the Windows team were going through various popular applications, testing them to make sure they worked OK, but SimCity kept crashing. They reported this to the Windows developers, who disassembled SimCity, stepped through it in a debugger, found the bug, and added special code that checked if SimCity was running, and if it did, ran the memory allocator in a special mode in which you could still use memory after freeing it."

    Read the article

  • How could RDBMSes be considered a fad?

    - by StuperUser
    Having completed my Computing A-level in 2003, got a degree in Computing in 2007, and learned my trade in a company with a lot of SQL usage, I was brought up on the idea of relational databases being used for storage. So, despite being relatively new to development, I was taken aback to read a comment (on the question "Is LinqPad site quote 'Tired of querying in antiquated SQL?' accurate?") that said:

    [Some devs] despise [SQL] and think that it and RDBMS are a fad

    Obviously, a competent dev will use the right tool for the right job and won't create a relational database when e.g. a flat file or another storage solution is appropriate, but RDBMSes are useful in a massive number of circumstances, so how could they be considered a fad?

    Read the article

  • How to produce assets effectively on large Flash game projects?

    - by Antoine Lassauzay
    I have been working on Flash games professionally for two years now and, somehow, having our artists produce assets the right way is one of our biggest challenges. More precisely, it is very hard to have them follow any kind of structure and/or standards, or take performance into consideration. I would also say that most of our issues concern UI and related animations. Our current workflow (on a Facebook hidden object game) is:

    - Artists produce PSDs and animate prototypes in Flash
    - Artists re-organize their FLA files to be a bit more "programmer friendly"
    - Programmers retouch assets until they have the right structure, and export classes inside a SWC from Flash
    - Programmers try to improve performance, sometimes degrading the quality of the game graphics

    Our main idea is to hire somebody dedicated to preparing assets for programmers, but I am really looking forward to improving the pipeline. I was wondering if you guys have tips of any kind to improve this workflow, whether it be team organization, training, tools or tips with Flash. Any explanation of your asset pipeline is well appreciated too.

    Read the article

  • Disabling assistive technologies during login

    - by Ivan
    I have a laptop with Ubuntu 10.04. My daughter was playing with the keyboard on the login screen, and it seems she activated some assistive technologies because now the screen is split vertically and the right side shows a magnified version of the left side. Plus, there's a screen keyboard. The way the screen is split makes it impossible for me to disable the assistive stuff from the toolbar at the bottom, since I can only see part of it. I don't know if it's a bug or what, because I'd guess I could see the entire bar on the right (magnified) side just by moving the mouse there, but I can't. I can't even type on the login screen, nor use the on-screen keyboard... Good thing I have auto-login activated, so I can still use the computer, but I can't switch users. So, does anyone know how to get the normal login screen back?

    Read the article

  • SQL SERVER – CSVExpress and Quick Data Load

    - by pinaldave
    One of the newest ETL tools is CSVexpress.com. This is a program that can quickly load any CSV file into ODBC compliant databases using data integration. For those of you familiar with databases and how they operate, the question that comes to mind might be what use this program will have in your life. I have written an earlier article on this subject over here: SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress.

    You might know that RDBMS have automatic support for loading CSV files into tables – but it is not quite as easy as one click of a button. First of all, most databases have a command line interface and you need the file and configuration script in order to load up. You also need to know enough to write the script – which for novices can be extremely daunting. On top of all this, if you work with more than one type of RDBMS, you need to know the ins and outs of uploading and writing scripts for more than one program. So you might begin to see how useful CSVexpress.com might be! There are many other tools that enable uploading files to a database. They can be very fancy – some can generate configuration files automatically, others load the data directly. Again, novices will be able to tell you why these aren’t the most useful programs in the world. You see, these programs were created with SQL in mind, not for uploading data. If you don’t have large amounts of data to upload, getting the configurations right can be a long process and you will have to check the generated code yourself. Not exactly “easy to use” for novices.

    That makes CSVexpress.com one of the best new tools available for everyone – but especially people who don’t want to learn a lot of new material all at once. CSVexpress has an easy to navigate graphical user interface and no scripting or coding is required. There are built-in constraints and data validations, and you can configure transforms and reject records right there on the screen. But the best thing of all – it’s free! That’s right, you can download CSVexpress for free from www.csvexpress.com and start easily uploading and configuring files almost immediately. If you’re currently happy with your method of data configuration, keep up the good work. For the rest of us, there’s CSVexpress.com. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology

    Read the article

  • Resource Graphs in top panel?

    - by Jeff Welling
    I'm running Gnome Classic in Ubuntu 11.10. In previous versions of Ubuntu it was fairly easy to get resource graphs to appear in the top menu, but now the regular way of getting said graphs into the top menu bar doesn't work (right-clicking on the top menu produces no result unless you click on an icon, e.g. the sound, wifi, or battery indicators). Is it not possible to get resource graphs in the top menu bar in Gnome Classic on Ubuntu 11.10? If not in Gnome Classic, is it possible in KDE? I've tried googling, but the only results I'm getting are related to adding the panel, which I can't do because I can't right-click on the top menu. Thanks in advance.

    Read the article

  • SQL SERVER – Selecting Domain from Email Address

    - by pinaldave
    Recently I came across a quick need where I needed to retrieve the domain of an email address. The email addresses are in a database table. I quickly wrote the following script, which extracts the domain and also counts how many email addresses there are with the same domain:

    SELECT RIGHT(Email, LEN(Email) - CHARINDEX('@', email)) Domain,
           COUNT(Email) EmailCount
    FROM   dbo.email
    WHERE  LEN(Email) > 0
    GROUP BY RIGHT(Email, LEN(Email) - CHARINDEX('@', email))
    ORDER BY EmailCount DESC

    The above script selects the domain after the @ character. Please note, if there is more than one @ character in the email, this script will not work, as that email address is already invalid. Do you have any similar script which can do the same thing efficiently? Please post it as a comment. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
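
    Along the same lines, here is one possible (untested) variation of the same RIGHT/CHARINDEX idea; it just computes the expression once in a CROSS APPLY so it does not have to be repeated in the GROUP BY, assuming the same dbo.email table:

        -- Untested sketch: same idea, domain computed once per row.
        SELECT d.Domain,
               COUNT(*) AS EmailCount
        FROM dbo.email e
        CROSS APPLY (SELECT RIGHT(e.Email, LEN(e.Email) - CHARINDEX('@', e.Email)) AS Domain) d
        WHERE LEN(e.Email) > 0
        GROUP BY d.Domain
        ORDER BY EmailCount DESC;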

    Read the article

  • Recursion VS memory allocation

    - by Vladimir Kishlaly
    Which approach is more popular in real-world examples: recursion or iteration? For example, simple tree preorder traversal with recursion:

    void preorderTraversal( Node root ){
        if( root == null ) return;
        root.printValue();
        preorderTraversal( root.getLeft() );
        preorderTraversal( root.getRight() );
    }

    and with iteration (using a stack):

    - Push the root node on the stack
    - While the stack is not empty:
      - Pop a node
      - Print its value
      - If the right child exists, push the node's right child
      - If the left child exists, push the node's left child

    In the first example we have recursive method calls; in the second, a new ancillary data structure. Complexity is similar in both cases: O(n). So is the main question the memory footprint requirement?
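
    For reference, a runnable sketch of the iterative version in Java, mirroring the pseudocode above; it assumes the same hypothetical Node API as the recursive example (getLeft(), getRight(), printValue()):

    void preorderTraversalIterative( Node root ){
        if( root == null ) return;
        // Explicit stack replaces the call stack used by the recursive version.
        java.util.Deque<Node> stack = new java.util.ArrayDeque<>();
        stack.push( root );
        while( !stack.isEmpty() ){
            Node node = stack.pop();
            node.printValue();
            // Push the right child first so the left child is processed first (LIFO order).
            if( node.getRight() != null ) stack.push( node.getRight() );
            if( node.getLeft() != null ) stack.push( node.getLeft() );
        }
    }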

    Read the article

  • Kill a tree, save your website? Content strategy in action, part III

    - by Roger Hart
    A lot has been written about how driving content strategy from within an organisation is hard. And that's true. Red Gate is pretty receptive to new ideas, so although I've not had a total walk in the park, it's been a hike with charming scenery. But I'm one of the lucky ones. Lots of people are involved in content, and depending on your organisation some of those people might be the kind who'll gleefully call themselves "stakeholders". People holding a stake generally want to stick it through something's heart and bury it at a crossroads. Winning them over is not always easy. (Richard Ingram has made a nice visual summary of how this can feel - Content strategy Snakes & ladders - pdf ) So yes, a lot of content strategy advocates are having a hard time. And sure, we've got a nice opportunity to get together and have a hug and a cry, but in the interim we could use a hand. What to do? My preferred approach is, I'll confess, brutal. I'd like nothing so much as to take a scorched earth approach to our website. Burn it, salt the ground, and build the new one right: focusing on clearly delineated business and user content goals, and instrumented so we can tell if we're doing it right. I'm never getting buy-in for that, but a boy can dream. So how about just getting buy-in for some small, tenable improvements? Easier, but still non-trivial. I sat down for a chat with our marketing and design guys. It seemed like a good place to start, even if they weren't up for my "Ctrl-A + Delete"  solution. We talked through some of this stuff, and we pretty much agreed that our content is a bit more broken than we'd ideally like. But to get everybody on board, the problems needed visibility. Doing a visual content inventory Print out the internet. Make a Wall Of Content. Seriously. If you've already done a content inventory, you know your architecture, and you know the scale of the problem. But it's quite likely that very few other people do. So make it big and visual. I'm going to carbon hell, but it seems to be working. This morning, I printed out a tiny, tiny part of our website: the non-support content pertaining to SQL Compare I made big, visual, A3 blowups of each page, and covered a wall with them. A page per web page, spread over something like 6M x 2M, with metrics, right in front of people. Even if nobody reads it (and they are doing) the sheer scale is shocking. 53 pages, all told. Some are redundant, some outdated, some trivial, a few fantastic, and frighteningly many that are great ideas delivered not-quite-right. You have to stand quite far away to get it all in your field of vision. For a lot of today, a whole bunch of folks have been gawping in amazement, talking each other through it, peering at the details, and generally getting excited about content. Developers, sales guys, our CEO, the marketing folks - they're engaged. Will it last? I make no promises. But this sort of wave of interest is vital to getting a content strategy project kicked off. While the content strategist is a saucer-eyed orphan in the cupboard under the stairs, they're not getting a whole lot done. Of course, just printing the site won't necessarily cut it. You have to know your content, and be able to talk about it. Ideally, you'll also have page view and time-on-page metrics. One of the most powerful things you can do is, when people are staring at your wall of content, ask them what they think half of it is for. Pretty soon, you've made a case for content strategy. 
We're also going to get folks to mark it up - cover it with notes and post-its, let us know how they feel about our content. I'll be blogging about how that goes, but it's exciting. Different business functions have different needs from content, so the more exposure the content gets, and the more feedback, the more you know about those needs. Fingers crossed for awesome.

    Read the article

  • Disable Add-Ons to Speed Up Browsing in Internet Explorer 9

    - by Lori Kaufman
    We’ve shown you how to enhance Internet Explorer with add-ons, similar to Firefox and Chrome. However, too many add-ons can slow down Internet Explorer and even cause it to crash. However, you can easily disable some or all add-ons. To begin, activate the Command bar, if it’s not already available. Right-click on an empty area of the tab bar and select Command bar from the popup menu. Click the Tools button on the Command bar and select Toolbars | Disable add-ons from the popup menu.

    Read the article

  • Gnome panel not found

    - by emilbochnik
    Hi, I installed Ubuntu 10.10 on my laptop. First-time Ubuntu user ever. After a successful installation there is only a panel on top, with a small Ubuntu logo on the left and the system icons (connections, time, keyboard, volume) on the right. There is no menu and I am not able to create a menu. Right-clicking on the panel gives no options. I tried everything, but it could be the most basic thing, as I have no experience with Ubuntu. Please can you help me to resolve this issue? Thank you, bochnik

    Read the article

  • Worst code I've written in a while

    - by merrillaldrich
    Here's a nice, compact bit of WTF-ery I had to write for a prod issue today:

    Again:
    UPDATE TOP ( 1 ) dbo.someTable
    SET field3 = 'NEW'
    WHERE field2 = 'NEW' AND field3 = ''
    IF @@ROWCOUNT > 0 GOTO Again

    Can you guess from the code what awesomesauce issues I was working around? This was a reminder for me that sometimes there is time to do it right, but sometimes you just have to do it now. I need that lesson sometimes, as I tend to be a perfectionist. If you are trying to do it right, please don't... (read more)

    Read the article

  • Getting from a user-story to code while using TDD (scrum)

    - by Ittai
    I'm getting into scrum and TDD and I think I have some confusion I'd like your feedback on. Let's assume I have a user story in my backlog; in order for me to start developing it as part of TDD I need to have requirements, right so far? Is it true to say that the product manager and QA should be responsible for taking the user story and breaking it down into acceptance tests? I think that's true, since the acceptance tests need to be formal, so they can be used as tests, but also human-readable, so that the product owner can approve them as the requirements, right? Is it also true that I later take these acceptance tests and use them as my requirements, i.e. they are a set of use cases which I implement (through TDD)? I hope I'm not making too much of a mess, but that's the current flow I have in mind right now.

    Update: I think my initial intentions were unclear, so I'll try to rephrase. I want to know more about the scrum flow of turning a user story into code while using TDD. The starting point is obvious: a user (or the user's representative, the product owner) surfaces a need as a short one-to-two-line description in the usual format, and that is added to the product backlog. At the sprint planning meeting, user stories are taken from the backlog and assigned to developers. In order for a developer to write code they need requirements (especially in TDD, since the requirements are what the tests are derived from). When, by whom, and in which format are those requirements compiled? What I had in mind was that the product owner and QA define the requirements via acceptance tests (I'm thinking of automating them with FitNesse or the like, but that's not the core question), which serve two purposes at the same time: they define "Done" properly, and they give the developer something to derive tests from. I wasn't sure when they should be written (before the sprint in which the story is picked, which might be wasted effort since more information will arrive or the story might not be picked at all, or during the iteration, in which case the developer might get stuck waiting for them...)
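    To make that second purpose concrete, here is a minimal sketch of how one acceptance criterion might be captured as an executable test a developer can then drive TDD from. It assumes JUnit 4, and the ShoppingCart class, its methods, and the story itself are invented for illustration; none of them come from the question above.

        import org.junit.Test;
        import static org.junit.Assert.assertEquals;

        public class CartAcceptanceTest {

            // Trivial in-memory implementation, included only so the sketch
            // compiles; in a real project this is the class the developer
            // builds test-first.
            static class ShoppingCart {
                private double total;

                void addItem(String sku, int quantity, double unitPrice) {
                    total += quantity * unitPrice;
                }

                double getTotal() {
                    return total;
                }
            }

            // Hypothetical story: "As a shopper, I want items I add to my
            // cart to update the total, so I always know how much I'll pay."
            @Test
            public void addingAnItemUpdatesTheCartTotal() {
                ShoppingCart cart = new ShoppingCart();

                cart.addItem("SKU-123", 2, 9.99); // two items at 9.99 each

                // This assertion is both the definition of "done" and the
                // first failing test the developer writes code against.
                assertEquals(19.98, cart.getTotal(), 0.001);
            }
        }

    Whether such a test lives in JUnit, FitNesse, or plain prose matters less than that it is agreed before implementation starts and is checkable once the story is done.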

    Read the article

  • How to report a bug to developers? A programmer's quest to educate on bug reporting.

    - by Ryan Detzel
    I'm hoping to get some tips and advice on how to educate the rest of the company on how to submit proper bug reports. Currently we get tickets like:

      • When I click this link I get a 404. (They include the page that 404s and not the page that caused it.)
      • Sometimes the right column flows into the button column. (No screenshot or additional information.)
      • Changes to xxx does seem to be working right. (EOM)

    Does anyone have a bug submission process/form that guides users into submitting as much information as possible?

    Read the article

  • CSS Style Element if it does not contain another specific type of Element [migrated]

    - by Chris S
    My CSS includes the following:

        #mainbody a[href ^='http'] {
            background: transparent url('/images/icons/external.svg') no-repeat top right;
            padding-right: 12px;
        }

    This places an "external" icon next to links that start with "http" (all internal site links are relative). It works perfectly, except that if I link an image, the image also gets the icon. For example, <a href='http://example.com'><img src='whatever.jpg'/></a> would also get the "external" icon next to the image. I can live with this if necessary, but would like to eliminate it. This must be implemented in CSS (no JS) and must not require any special IDs, classes, or styling in the HTML for the image or the anchor around it. Is this possible?

    Read the article

  • Why All The Hype Around Live Help?

    - by ruth.donohue
    I am pleased to introduce guest blogger, Damien Acheson today. Based in Cambridge, MA, Damien is the Product Marketing Manager for ATG’s Live Help products. Welcome, Damien!!

    BY DAMIEN ACHESON

    Why all the hype around live help? An eCommerce professional recently asked me: “Why all the hype around live chat and click to call? I already have a customer service phone number that’s available to my online visitors. Why would I want to add live help? If anything, I want my website to reduce the number of calls to my contact center, not increase it!”

    The effect of adding live help to a website is counter-intuitive. Done right, live help doesn’t increase your call volume; it optimizes it by replacing traditional telephone calls with smarter, more productive, live voice and live chat interactions. This generates instant cost savings, and a measurable lift in sales and customer retention. A live help interaction differs from a traditional telephone call in six radical ways:

      • Targeting. With live help you can target specific visitors at just the right time with a live call or live chat invitation based on hundreds of different parameters. For example, visitors who appear to hesitate before making a large purchase may receive a live help invitation, while others may not.
      • Productivity. By reserving live voice for visitors with complex questions, and offering self-service and live chat for simpler interactions, agents with the right domain expertise can handle simultaneous queries and achieve substantial productivity gains.
      • Routing. Live help interactions take into account visitors’ web context to intelligently route queries to the best available agent, thereby lifting first contact resolution.
      • Context. Traditional telephone numbers force online customers to “change channels” and “start over” with a phone agent. With live help, agents get the context of the web session and can instantly access the customer’s transaction details and account information, substantially reducing handle times.
      • Interaction. Agents can solve a customer’s problem more effectively by co-browsing and collaborating with the visitor in real time to complete online forms and transactions.
      • Analytics. Unlike traditional telephone numbers, live help allows you to tie web analytics to customer satisfaction and agent performance indicators.

    To better understand these differences and advantages over traditional customer service, watch this demo on optimizing customer interactions with Live Help.

    Read the article

  • How do I move the camera sideways in Libgdx?

    - by Bubblewrap
    I want to move the camera sideways (strafe). I had the following in mind, but it doesn't look like there are standard methods to achieve this in Libgdx. If I want to move the camera sideways by x, I think I need to do the following:

      1. Create a Matrix4 mat
      2. Determine the orthogonal vector v between camera.direction and camera.up
      3. Translate mat by v*x
      4. Multiply camera.position by mat

    Will this approach do what I think it does, and is it a good way to do it? And how can I do this in Libgdx? I get "stuck" at step 2, as I have not found any standard method in Libgdx to calculate an orthogonal vector.

    EDIT: I think I can use camera.direction.crs(camera.up) to find v. I'll try this approach tonight and see if it works.

    EDIT2: I got it working and didn't need the matrix after all:

        Vector3 right = camera.direction.cpy().crs(camera.up).nor();
        camera.position.add(right.mul(x));
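    For context, here is a small self-contained sketch of the matrix-free approach from EDIT2. The strafe helper and its distance parameter are illustrative, not from the original post, and it assumes a libgdx version where Vector3.scl is the scalar multiply (older releases used mul, as in the snippet above).

        import com.badlogic.gdx.graphics.PerspectiveCamera;
        import com.badlogic.gdx.math.Vector3;

        public class CameraStrafe {

            // Move the camera sideways by 'distance' world units: the right
            // vector is the (normalized) cross product of direction and up.
            public static void strafe(PerspectiveCamera camera, float distance) {
                Vector3 right = camera.direction.cpy().crs(camera.up).nor();
                camera.position.add(right.scl(distance));
                camera.update(); // recompute the view matrix after moving
            }
        }

    Calling strafe(camera, 0.1f) each frame while a key is held gives a steady sideways slide; a negative distance strafes the other way.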

    Read the article

  • Make one monitor act like two, split in half

    - by Nathan J. Brauer
    Context: Ubuntu 11.10, Unity. Let's say I have a screen at resolution 1000x500. What I'd like to do is split the screen down the middle so [Unity or X or ?] acts as if there are two displays (each of 500x500). Examples:

      • Unity will display a different toolbar (the top one) on each side of the display.
      • If I maximize a window on the left side of the screen, it will fill the left side only. If I maximize on the right, it will fill the right.
      • If I hit "fullscreen" in YouTube (Flash), Chrome, or Movie Player, it will only fill the side of the display that it's on.

    If it really is impossible to do this with Unity, will it work with Gnome 3, and how? A million thanks!

    Read the article

  • Confused about ASP.NET Ajax, jQuery and JavaScript

    - by Mr.Y
    Yesterday, I read a couple of chapters on ASP.NET Ajax and jQuery from my ASP.NET 4 book, and I found those frameworks pretty interesting, so I decided to learn more about them. Today, I borrowed some books from the library on Ajax and JavaScript. It seems ASP.NET Ajax is different from Ajax, and jQuery seems like the "new" JavaScript. Does that mean I can skip JavaScript and learn jQuery directly? On the other hand, the non-ASP.NET Ajax book I borrowed seems to apply to client-side web programming only and looks quite different from what I learned from ASP.NET Ajax. If I'm an ASP.NET developer, I guess I should stick with ASP.NET Ajax instead of client-side Ajax, right? What about PHP? Is there a "PHP Ajax" similar to ASP.NET Ajax? It's not that I'm too lazy to learn other tools; I just want to focus on the right ones.

    Read the article

< Previous Page | 100 101 102 103 104 105 106 107 108 109 110 111  | Next Page >