Search Results

Search found 23576 results on 944 pages for 'case study'.


  • g++ cannot find include files (qt3)

    - by Allan
        allan@allan-VirtualBox:~/blackjack_for_the_hopelessly_luckless$ make
        g++ -c -pipe -g -Wall -W -O2 -D_REENTRANT -DQT_NO_DEBUG -DQT_THREAD_SUPPORT -DQT_SHARED -DQT_TABLET_SUPPORT -I/usr/share/qt3/mkspecs/default -I. -I. -I/usr/include/qt3 -o advicewindow.o advicewindow.cpp
        advicewindow.cpp:32:19: fatal error: QWidget: No such file or directory
        compilation terminated.
        make: *** [advicewindow.o] Error 1

    qt3 was installed using apt-get, and the header files are located in /usr/include/qt3/. Is there a g++ config file or something I need to update? I'm new to compiling from source and not sure what to do. The Makefile was created by running qmake on the project file. The files in the include directory are all lower case; should I change the include in advicewindow.cpp to qwidget.h? Any help appreciated. Thanks.
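
    A minimal sketch of the likely fix, assuming line 32 of advicewindow.cpp uses the Qt4-style class-name include; the surrounding class is made up for illustration. Qt3 only installs the lowercase .h headers, so the include has to name the file rather than the class:

        // Qt4-style form that Qt3 cannot resolve:
        // #include <QWidget>

        // Qt3 ships /usr/include/qt3/qwidget.h, so reference the file name instead:
        #include <qwidget.h>

        // Hypothetical widget class, just to show the header in use (Qt3 signatures).
        class AdviceWindow : public QWidget
        {
        public:
            AdviceWindow(QWidget *parent = 0, const char *name = 0)
                : QWidget(parent, name) {}
        };

    The alternative would be building against Qt4, which introduced the <QWidget>-style headers, but for a Qt3 project the lowercase include is the expected form.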

    Read the article

  • Unity Desktop Displays strange lines

    - by Alex Holsgrove
    Didn't quite know what title to give this problem, but hopefully the screenshot will explain more. I am running a Samsung R60+ laptop on Ubuntu 13.10 with a Radeon X1250 GPU. After I log in and the Unity desktop appears, I can see strange lines at the top of the screen. I presumed it was a driver issue and found this article to see if I could resolve it: https://help.ubuntu.com/community/RadeonDriver. I cannot get on with Unity at all (where have all of the menus gone?), so perhaps reverting to GNOME may be a solution in my case? I'd welcome any ideas, please.

    Read the article

  • front USB ports won't mount hard drives, internal USB ports do

    - by Thesgsuser
    I have noticed something odd with my new build. I am using the newest version of Ubuntu desktop, and my motherboard is the ASUS F1A75-M Pro R2.0. With the USB ports on the back, all my NTFS hard disks and USB sticks work fine, but when I put them in the front USB ports of my chassis (SilverStone Milo ML-03) they won't mount. I have two USB 3.0 ports on the front of the case connected to an internal USB 3.0 header. I verified that the USB 3.0 ports on the back do mount the hard disk, so I don't think it has anything to do with USB 3.0. The strange thing is, my mouse works fine on the front USB ports. Every USB device seems to work except the ones with storage inside :( What seems to be the problem?
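
    A hedged troubleshooting sketch; these are standard Ubuntu commands, and the idea that the front-panel header or its power delivery is the culprit is only an assumption to test, not a confirmed diagnosis:

        # Plug a drive into a front port, then look at the last kernel messages.
        # A storage device that enumerates and then resets, or reports
        # "unable to enumerate USB device", points at the header/cabling or
        # power rather than at the filesystem.
        dmesg | tail -n 30

        # Check whether the drive is visible on the bus at all.
        lsusb
        lsusb -t

        # If a /dev/sdX node appears but nothing is auto-mounted, try mounting
        # by hand to separate a hardware problem from a desktop/automount one.
        # (Device name is illustrative; udisksctl comes with udisks2.)
        udisksctl mount -b /dev/sdb1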

    Read the article

  • The practical cost of swapping effects

    - by sebf
    I use XNA for my projects, and on those forums I sometimes see references to the fact that swapping an effect for a mesh has a relatively high cost, which surprises me, as I thought swapping an effect was simply a case of copying the replacement shader program to the GPU along with the appropriate parameters. Could someone explain exactly what is costly about this process, and if possible put 'relatively' into context? For example, say I wanted to use a short shader to help with picking. I would:

        Change the effect on every object, calculating a unique color to identify it and providing it to the shader.
        Draw all the objects to a render target in memory.
        Get the color from the target and use it to look up the selected object.

    What portion of the total time taken to complete that process would be spent swapping the shaders? My instinct says that rendering the scene again, no matter how simple the shader, would be an order of magnitude slower than any other part of the process, so why all the concern over effects?
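
    For reference, a minimal sketch of the picking pass described above, written against XNA 4.0. The pickEffect, its ObjectColor parameter, and the SceneObject type are assumptions for illustration, not part of the original question; the world/view/projection matrices are assumed to be set on pickEffect elsewhere.

        // Render every object once with a trivial flat-color effect into an
        // off-screen target, then read back the pixel under the cursor.
        RenderTarget2D pickTarget = new RenderTarget2D(GraphicsDevice, width, height);

        GraphicsDevice.SetRenderTarget(pickTarget);
        GraphicsDevice.Clear(Color.Black);

        int id = 1;
        foreach (SceneObject obj in scene)                  // SceneObject is hypothetical
        {
            // Encode the object id into a color (enough for ~16M objects).
            Color pickColor = new Color(id & 0xFF, (id >> 8) & 0xFF, (id >> 16) & 0xFF);
            pickEffect.Parameters["ObjectColor"].SetValue(pickColor.ToVector4());

            foreach (ModelMesh mesh in obj.Model.Meshes)
            {
                foreach (ModelMeshPart part in mesh.MeshParts)
                    part.Effect = pickEffect;               // the effect swap in question
                mesh.Draw();
            }
            id++;
        }

        GraphicsDevice.SetRenderTarget(null);

        // Read the single pixel under the mouse and decode the id.
        Color[] pixel = new Color[1];
        pickTarget.GetData(0, new Rectangle(mouseX, mouseY, 1, 1), pixel, 0, 1);
        int pickedId = pixel[0].R | (pixel[0].G << 8) | (pixel[0].B << 16);

    Whatever the per-swap cost turns out to be, it is paid once per object here, which is why the question about its relative size matters.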

    Read the article

  • Using multiple A-records for my domain - do web browsers ever try more than one?

    - by Jonas
    If I add multiple A records for my domain, they are returned in round-robin order by DNS servers. E.g.:

        1.1.1.1 A example.com
        1.1.1.2 A example.com
        1.1.1.3 A example.com

    But how do web browsers react if the first host (1.1.1.1) is down (unreachable)? Do they try the second host (1.1.1.2), or do they return an error message to the user? Are there any differences between the most popular browsers? If I implement my own application, I can make it use the second host when the first is down, so it's certainly possible, and this would be very helpful for creating a fault-tolerant website.
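
    As an aside, a minimal sketch of the application-level fallback mentioned above (this illustrates what a custom client could do, not what any particular browser does); the host and port are placeholders:

        import socket

        def connect_with_fallback(host, port, timeout=3.0):
            """Try every address the DNS round robin returns until one answers."""
            last_error = None
            for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
                    host, port, type=socket.SOCK_STREAM):
                sock = socket.socket(family, socktype, proto)
                sock.settimeout(timeout)
                try:
                    sock.connect(sockaddr)      # first reachable A record wins
                    return sock
                except OSError as exc:
                    last_error = exc
                    sock.close()
            raise last_error or OSError("no usable addresses for %s" % host)

        # conn = connect_with_fallback("example.com", 80)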

    Read the article

  • Getting started with ClojureScript and Google Closure

    - by Andrea
    I would like to investigate whether ClojureScript, with the associated Google Closure library, is a reasonable tool for building modern, in-browser JavaScript applications. My current JavaScript stack consists of jQuery, Backbone and RequireJS, with the possible addition of some widget libraries like jQueryUI or KendoUI. So it will be quite a big leap (I already know how to work in Clojure, although I have little experience). What is a good roadmap for doing so? Should I learn the Google Closure library first, or can I grasp it together with ClojureScript? One thing I am concerned about is the overall application structure. Backbone is rather opinionated about how to organize your application. I am not sure whether Google Closure also includes components that help with the design of the application. And if it does, I do not know how to tell whether that structure will port to ClojureScript, or whether a ClojureScript application will require a different organization anyway and only use, say, the widgets and DOM manipulation features of Closure.
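
    To give a flavour of the interop in question, a small hedged sketch of calling Google Closure's DOM and event utilities directly from ClojureScript (the namespace and element id are made up; this says nothing about overall application structure):

        (ns myapp.core
          (:require [goog.dom :as gdom]
                    [goog.events :as events]))

        ;; Closure namespaces are required like ordinary ClojureScript namespaces.
        (defn init []
          (let [button (gdom/createDom "button" nil "Click me")]
            (gdom/appendChild (gdom/getElement "app") button)   ; assumes <div id="app"> exists
            (events/listen button "click"
                           (fn [_] (js/alert "hello from goog.events")))))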

    Read the article

  • Difference between spring setter and interface injection?

    - by Satish Pandey
    I know how constructor and setter injection work in Spring. Normally I use interfaces instead of classes to inject beans via setters, and I consider that interface injection, but in the constructor case we also use interfaces (I am confused). In the following example I use the JobProcessor interface instead of the JobProcessorImpl class:

        public class JobScheduler {
            // JobProcessor interface
            private JobProcessor jobProcessor;

            // Dependency injection
            public void setJobProcessor(JobProcessor jobProcessor) {
                this.jobProcessor = jobProcessor;
            }
        }

    I tried to find an answer by googling, but different writers hold different opinions; some people even say in their blogs that Spring doesn't support interface injection. Can someone help me with an example?
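
    For contrast, a hedged sketch of the same dependency handed over through the constructor instead of the setter; the package and bean names are illustrative. In both variants the field is typed to the JobProcessor interface, so the difference is where Spring injects the dependency, not whether an interface is involved. ("Interface injection" in the classic sense, where the class implements a framework-defined injector interface, is a separate pattern that Spring's core container does not use.)

        public class JobScheduler {
            private final JobProcessor jobProcessor;

            // Constructor injection: the container supplies the dependency at creation time.
            public JobScheduler(JobProcessor jobProcessor) {
                this.jobProcessor = jobProcessor;
            }
        }

    and illustrative XML wiring for both styles (each assumes the matching constructor or setter exists on the class):

        <bean id="jobProcessor" class="com.example.JobProcessorImpl"/>

        <!-- setter injection, matching the class in the question -->
        <bean id="jobSchedulerViaSetter" class="com.example.JobScheduler">
            <property name="jobProcessor" ref="jobProcessor"/>
        </bean>

        <!-- constructor injection -->
        <bean id="jobSchedulerViaCtor" class="com.example.JobScheduler">
            <constructor-arg ref="jobProcessor"/>
        </bean>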

    Read the article

  • host and share files in my hosting

    - by user1314836
    I currently have a domain and hosting with unlimited space for our website. On the other hand, I use Dropbox to share our organizational files and photos between about 10 users. The thing is that sharing photos uses too much space for what a free Dropbox account offers. So I am thinking of taking advantage of my hosting space, but using FTP doesn't seem ideal for users who are not too skilled with computers. In addition, it doesn't handle versions in case some user makes a mess of things. And using a public FTP to upload, giving them only download permission, doesn't seem like a good idea, as I am only the CTO. So what I want, basically, is to implement a local Dropbox for a few users, but I'd prefer something that is not too complex to install and maintain. Thank you a lot.

    Read the article

  • Do first impressions really count?

    - by Matt
    I am currently writing something up for a college class. The problem is that everything is hypothetical; I need some proof. I believe a good first impression of a website is imperative so that people actually use it and, in my case, buy your product or services as well. Basically, I'm wondering: have there been any studies showing that better web design increases revenue for any kind of service? I don't just mean selling products like T-shirts, but labor services as well. If someone wanted their computer fixed and searched for companies that can do so, would a good first impression of the website help them decide to use your company? Are there any studies like this? White papers, maybe? Thanks!

    Read the article

  • "has no motion" warnings

    - by Adam R. Grey
    When I reimport my project's Library, I get lots of warnings such as "State combat.Ghoul Attack has no motion", but I have no idea why. In this specific case, I looked up Ghoul Attack. Here's the state in which it appears, in the only animator controller that includes anything called Ghoul Attack:

        State:
          m_ObjectHideFlags: 3
          m_PrefabParentObject: {fileID: 0}
          m_PrefabInternal: {fileID: 0}
          m_Name: Ghoul Attack
          m_Speed: 1
          m_CycleOffset: 0
          m_Motions:
          - {fileID: 7400000, guid: 0db269712a91fd641b6dd5e0e4c6d507, type: 3}
          - {fileID: 0}
          m_ParentStateMachine: {fileID: 110708233}
          m_Position: {x: 492, y: 132, z: 0}
          m_IKOnFeet: 1
          m_Mirror: 0
          m_Tag:

    I thought perhaps that second entry, - {fileID: 0}, was throwing up the warning incorrectly, so I removed it. There was no effect; I still get warnings about Ghoul Attack. So given that the only state I know of with that name does in fact have a motion, what is this warning actually trying to tell me?

    Read the article

  • Dealing with "I-am-cool-and-you-are-dumb" manager [closed]

    - by Software Guy
    I have been working with a software company for about 6 months now. I like the projects I work on there and I really like all the people there, except for one guy. That guy is technically smart, and he is a co-founder of the company. He is an okay guy in person (the kind you wouldn't want to care about much), but things get tricky when he is your manager. In general I am okay, but there are times when I feel I am not being treated fairly:

    He doesn't give much thought to his own mistakes, but when I do something similar he is super critical. Recently he went as far as to say "I am not sure if I can trust you with this feature". The details of this specific case are these: I was working on the feature, I was already a couple of hours over my normal working hours, and I decided to stop and continue tomorrow. We use git, and I like to commit changes locally and only push when I feel they are ready. This manager insists that I push all the changes to the central repo (in case my hard drive crashes). So I pushed the change, and the ticket was marked as "to be tested". The next day I come in, he sits next to me, starts complaining, and says what I quoted above. I really didn't know what to say; I tried to explain to him that the ticket was still being worked on, but he didn't seem to listen.

    He interrupts me while I am coding, which I do not mind, but when I do the same, his face turns like this :| and he reacts as if his work were super important and I am just wasting his time. He asks me to accumulate all my questions and then ask him all at once, which is not always possible, as you need a clarification before you can continue with a feature implementation. And while I am coding, he talks on the phone with his customers next to me (when he could go to the meeting room with his laptop) and doesn't care.

    He made me switch to a whole new IDE (from NetBeans to a commercial IDE costing a lot of money) for a really tiny feature (which I later found out was in NetBeans as well!). I didn't make a big deal out of it, as I am equally comfortable working with the new IDE, but I couldn't see the science behind his obsession. He said this feature makes sure that if any method is updated by a programmer, the IDE will turn the method name red in the places where it is used. I told him that I do not have a problem, since I always search for method usage in the project and make sure it's updated; IDEs even have refactoring features for exactly that, but...

    I recently implemented a feature for a project, and I was happy about it, and considering him a senior, I asked him for his comments on the implementation quality. He thought long and hard, made a few funny faces, and when he couldn't find anything, he said "ummm, your program will crash if JS is disabled". He was wrong, since I had made sure it would work fine with default values even if JS was disabled. I told him that, and then he said "oh okay". BUT, the funny thing is, a few days back he implemented something, I objected with "But that would not run if JS is disabled", and his response was "We don't have to care about people who disable JS" :-/

    Once he asked me to investigate whether there was a way to modify a CMS-generated menu programmatically by extending the CMS. I did my research and told him that the only way is to inject a menu item using JavaScript / jQuery, and his reaction was "ah, that's ugly, and hacky, not acceptable". Two days later, I saw that feature implemented in the same way I had suggested.

    The point is, his reaction was not respectful at all. Even if what I proposed was hacky, he should trust that I know what's hacky, and that if I am suggesting something hacky, there must be a reason for it. There are plenty of other reasons / examples where I feel I am not being treated fairly. I want your advice as to what it is that I am doing wrong and how to deal with such a situation. The other guys on the team are actually very good people, and I do not want to leave the job either (although I could, if I wanted to). All I want is respect and equal treatment. I have thought about talking to this guy in a face-to-face meeting, but I worry that his attitude might get worse and make things more difficult for me (since he doesn't seem to be the kind of guy who thinks he can be wrong too). I am also considering talking to the other co-founder, but I am not sure how he will take it (as both founders have been friends forever). Thanks for reading the long message; I really appreciate your help.

    Read the article

  • Do you count a Masters in CS as a negative?

    - by Pete Hodgson
    In my experience interviewing developers, I feel like candidates who've earned a Master's in Comp Sci tend to be worse programmers on average than those who don't have a Master's. Is that just me, or have others noticed this phenomenon? If so, why would that be the case?

    UPDATE: I appreciate the thoughtful comments. I think I should have been clearer in the comparison I'm making. Given two candidates who graduated from college around the same time, someone who went on to gain a Master's seems on average to be a worse programmer than someone who spent all their time in industry.

    Read the article

  • Isolated Unit Tests and Fine Grained Failures

    - by Winston Ewert
    One of the reasons often given to write unit tests which mock out all dependencies, and are thus completely isolated, is to ensure that when a bug exists, only the unit tests for that bug will fail. (Obviously, an integration test may fail as well.) That way you can readily determine where the bug is. But I don't understand why this is a useful property. If my code were undergoing spontaneous failures, I could see why it's useful to readily identify the failure point. But if I have a failing test, it's either because I just wrote the test or because I just modified the code under test. In either case, I already know which unit contains the bug. What is the use of ensuring that a test only fails due to bugs in the unit under test? I don't see how it gives me any more precision in identifying the bug than I already had.

    Read the article

  • How to install Percona Xtrabackup to Ubuntu 12.04LTS?

    - by coding crow
    I am trying to install Percona Xtrabackup on my Ubuntu 12.04 LTS box running on Amazon EC2, following the instructions on the Xtrabackup installation page here. The instructions say:

    Add this to /etc/apt/sources.list, replacing squeeze with the name of your distribution:

        deb http://repo.percona.com/apt squeeze main
        deb-src http://repo.percona.com/apt squeeze main

    In my case I will replace squeeze with precise, but when I open /etc/apt/sources.list for editing, the file suggests three alternatives to editing it directly, listed as a), b) and c).

    My question: what should I do to install Percona Xtrabackup on my box?
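
    A hedged sketch of one way to follow those instructions without editing /etc/apt/sources.list itself, by using a drop-in file instead; the repository lines come from the quoted instructions with squeeze replaced by precise, while the key ID and exact package name should be double-checked against Percona's installation page:

        # Put the Percona repository in its own file rather than editing sources.list
        echo "deb http://repo.percona.com/apt precise main" | sudo tee /etc/apt/sources.list.d/percona.list
        echo "deb-src http://repo.percona.com/apt precise main" | sudo tee -a /etc/apt/sources.list.d/percona.list

        # Import Percona's repository signing key (take the key ID from their docs)
        sudo apt-key adv --keyserver keys.gnupg.net --recv-keys <PERCONA_KEY_ID>

        # Refresh the package index and install
        sudo apt-get update
        sudo apt-get install xtrabackup    # or percona-xtrabackup, depending on the release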

    Read the article

  • the "additional drivers" shows nothing

    - by Yasser al-Zainy
    I started using Ubuntu 32-bit last week. I love it, but I noticed a problem with the cooling system: the fan never stops and is slightly loud all the time (that wasn't the case while running Windows 7). I told a friend, who claimed it should be a driver problem. My machine is a Dell Inspiron N5110, and the official site recommends Win 7 64-bit only; there's no support for Linux (the page showing the machine's drivers and system recommendations). I tried to fix the problem using "Additional Drivers"; it opens, but it shows nothing (no drivers to activate, just the Help and Close buttons). Is there a way to fix this?
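
    A hedged first-diagnosis sketch; the commands are standard Ubuntu tools, and the note about hybrid graphics is an assumption about this laptop model, not something the question confirms:

        # Which graphics hardware is present and which kernel driver is bound to it.
        # Many Inspiron N5110 units pair Intel graphics with an optional NVIDIA GPU;
        # if an unused discrete GPU stays powered, the fan tends to run constantly.
        lspci -nnk | grep -iA3 'vga\|3d'

        # Temperatures and fan readings, to confirm the machine really is running hot.
        sudo apt-get install lm-sensors
        sudo sensors-detect
        sensors

        # If "Additional Drivers" offers nothing, check which modules are loaded anyway.
        lsmod | grep -i 'nouveau\|nvidia'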

    Read the article

  • Oracle Retail Mobile Point-of-Service

    - by David Dorf
    When most people discuss mobile in retail, they immediately go to shopping applications. While I agree the consumer side of mobile is huge, I believe it's also important to arm store associates with mobile tools. There are around a dozen major roll-outs of mobile POS to chain retailers, and all have been successful. This does not, however, signal the demise of traditional registers. Retailers will adopt mobile POS slowly and reduce the number of fixed registers over time, but there's likely to be a combination of both for the foreseeable future. Even Apple retains at least one fixed register in every store; you just have to know where to look.

    The business benefits for mobile POS are pretty straightforward:

    1. Faster checkout. Walmart's CFO recently reported that for every second they shave off the average transaction time, they can potentially save $12M a year in labor. I think it's more likely that labor will be redeployed to enhance the customer experience.

    2. Smarter associates. The sales associates on the floor need the same access to information that consumers have, if not more. They need ready access to product details, reviews, inventory, etc. to meet consumer expectations. In a recent study, 40% of consumers said a savvy store associate can impact their final product selection more than a website.

    3. Lower costs. Mobile POS hardware (iPod touch + sled) costs about a fifth of fixed registers, not to mention the reclaimed space that can be used for product displays.

    But almost all mobile POS solutions can claim those benefits equally. Where there's differentiation is on the technical side. Oracle recently announced availability of the Oracle Retail Mobile Point-of-Service, and it has three big technology advantages in the market:

    1. Portable. We used a popular open-source component called PhoneGap that abstracts the app from the underlying OS and hardware so that iOS, Android, and other platforms could be supported. Further, we used Web technologies such as HTML5 and JavaScript, which are commonly known by many programmers, as opposed to Objective-C, which is more difficult to find. The screen can adjust to different form factors and sizes, just like you see with browsers. In the future when a new, zippy device gets released, retailers will have the option to move to that device more easily than if they used a native app.

    2. Flexible. Our Mobile POS is free with the Oracle Retail Point-of-Service product. Retailers can use any combination of fixed and mobile registers, and those ratios can change as required. Perhaps start with 1 mobile and 4 fixed per store, then transition over time to 4 mobile and 1 fixed without any additional software licenses. Our scalable solution supports lots of combinations.

    3. Consistent. Because our Mobile POS is fully integrated with our traditional POS, the same business logic is reused. Third-party mobile POS solutions often handle pricing, promotions, and tax calculations separately, leading to possible inconsistencies within the store. That won't happen with Oracle's solution.

    For many retailers, mobile POS can lower costs, increase customer service, and generally enhance a consumer's in-store experience. Apple led the way, but lots of other retailers are discovering the many benefits of adding mobile capabilities in their stores. Just be sure to examine both the business and technology benefits so you get the most value from your solution for the longest period of time.

    Read the article

  • How to write a product definition?

    - by Skarab
    I would like to learn how to write a software product definition, so I am looking for online materials or books that would help me learn more about this topic. In particular, I would like to learn:

        what must be in it
        what must not be in it
        how to use a product definition to sell the product internally
        how to find the balance between use case descriptions (the why) and feature descriptions (the how)
        ...

    I am aware that this is not something one can learn in 15 minutes, but I think such a discussion could help me get off to a good start.

    Read the article

  • SOA, Java EE and data organization

    - by jolasveinn
    At the company I work for, we're currently splitting up our monolith solution into a number of small services (SOA). Many of the services are small, so we'd like to deploy a number of these services on the same application server, JBoss 7.1 in this case. As per the SOA philosophy, the independence of each service and of the teams working on them is very important. What would be the best way to organize the data?

        Use one schema per service
            Would you use one datasource per schema in the application server?
            Or use one datasource, prefixing all DB object names with the schema name in some transparent manner?
        Use a shared schema, but avoid naming collisions by requiring each service to use a distinct prefix for all DB objects
        Other options?

    Am I maybe thinking about this completely wrong here? :)
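
    As a point of reference for the one-datasource-per-schema option, a hedged sketch of what the definitions might look like in JBoss 7.1's standalone.xml; the JNDI names, URLs, users and driver are all made up, and each service would look up only its own datasource. One common way to get the schema separation is to give each service its own database user whose default schema is that service's schema, so the SQL itself stays free of schema prefixes.

        <subsystem xmlns="urn:jboss:domain:datasources:1.1">
            <datasources>
                <!-- one datasource per service schema; everything below is illustrative -->
                <datasource jndi-name="java:jboss/datasources/OrdersDS" pool-name="OrdersDS" enabled="true">
                    <connection-url>jdbc:postgresql://dbhost:5432/appdb</connection-url>
                    <driver>postgresql</driver>
                    <security>
                        <user-name>orders_service</user-name>
                        <password>secret</password>
                    </security>
                </datasource>
                <datasource jndi-name="java:jboss/datasources/BillingDS" pool-name="BillingDS" enabled="true">
                    <connection-url>jdbc:postgresql://dbhost:5432/appdb</connection-url>
                    <driver>postgresql</driver>
                    <security>
                        <user-name>billing_service</user-name>
                        <password>secret</password>
                    </security>
                </datasource>
            </datasources>
        </subsystem>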

    Read the article

  • Defaulting the HLSL Vertex and Pixel Shader Levels to Feature Level 9_1 in VS 2012

    - by Michael B. McLaughlin
    I love Visual Studio 2012. But this is not a post about that. This is a post about tweaking one particular parameter that I've found a bit annoying.

    Disclaimer: You will be modifying important MSBuild files. If you screw up you will break your build tools. And maybe your computer will catch fire. I'm not responsible. No warranties or guarantees of any sort. This info is provided "as is".

    By default, if you add a new vertex shader or pixel shader item to a project, it will be set to build with shader profile 4.0_level_9_3. If you need 9_3 functionality, this is all well and good. But (especially for Windows Store apps) you really want to target the lowest shader profile possible so that your game will run on as many computers as possible. So it's a good idea to default to 9_1.

    To do this you could add in new HLSL files via "Add->New Item->Visual C++->HLSL->______ Shader File (.hlsl)" and then edit the shader files' properties to set them manually to use 9_1 via "Properties->HLSL Compiler->General->Shader Model". This is fine unless you forget to do this once and then submit your game with 9_3 shaders instead of 9_1 shaders to the Windows Store or to some other game store. Then you'd wind up with either rejection or angry "this doesn't work on my computer! ripoff!" messages.

    There's another option though. In "Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\ItemTemplates\VC\HLSL\1033\VertexShader" (note the path might vary slightly for you if you are using a 32-bit system or have a non-ENU version of Visual Studio 2012) you will find a "VertexShader.vstemplate" file. If you open this file in a text editor (e.g. Notepad++), then inside the CustomParameters tag within the TemplateContent tag you should see a CustomParameter tag for the ShaderType, i.e.:

        <CustomParameter Name="$ShaderType$" Value="Vertex"/>

    On a new line, we are going to add another CustomParameter tag to the CustomParameters tag. It will look like this:

        <CustomParameter Name="$ShaderModel$" Value="4.0_level_9_1"/>

    such that we now have:

        <CustomParameters>
          <CustomParameter Name="$ShaderType$" Value="Vertex"/>
          <CustomParameter Name="$ShaderModel$" Value="4.0_level_9_1"/>
        </CustomParameters>

    You can then save the file (you will need to be an Administrator or have Administrator access). Back in the 1033 directory (or whatever the number is for your language), go into the "PixelShader" directory. Edit the "PixelShader.vstemplate" file and make the same change (note that this time $ShaderType$ is "Pixel" not "Vertex"; you shouldn't be changing that line anyway, but if you were to just copy and replace the above four lines then you would wind up creating pixel shaders that the HLSL compiler would try to compile as vertex shaders, with all sorts of weird errors as a result). Once you've added the $ShaderModel$ line to "PixelShader.vstemplate" and have saved it, everything should be done. Since Feature Level 9_1 and 9_3 don't support any of the other shader types, those are set to default to their appropriate minimums already (Compute and Geometry are set to "4.0" and Domain and Hull are set to "5.0", which are their respective minimums; though not all 4.0 cards support Compute shaders: they were an optional feature added with DirectX 10.1 and only became required for DirectX 11 hardware).
In case you are wondering where these magic values come from, you can find them all in the “fxc.xml” file in the “\Program Files (x86)\MSBuild\Microsoft.CPP\v4.0\V110\1033” directory (or whatever your language number is; 1033 is ENU and various other product languages have their own respective numbers (see: http://msdn.microsoft.com/en-us/goglobal/bb964664.aspx ) such that Japanese is 1041 (for example), though for all I know MSBuild tasks might be 1033 for everyone). If, like me, you installed VS 2012 to a drive other than the C:\ drive, you will find the vstemplate files in the drive to which you installed VS 2012 (D:\ in my case) but you will find the fxc.xml file on the C:\ drive. You should not edit fxc.xml. You will almost definitely break things by doing that; it’s just something you can look through to see all the other options that the FXC task takes such that you could, if needed, add further CustomParameter tags if you wanted to default to other supported options. I haven’t tried any others though so I don’t have any advice on how to set them.

    Read the article

  • Why fork a library for your own application?

    - by Mr. Shickadance
    Why should a programmer ever fork a library for inclusion in a widely used application? I ask this question because I was reading an article about why Chromium isn't packaged for many Linux distros like Fedora. Apparently it's largely due to the fact that Google has forked a number of libraries, modified them, and included them in Chromium. This has driven up the complexity of packaging releases. There are a number of reasons why this can be a bad thing, but how strong a case can you actually make for doing so in a large, widely used application such as Chromium? The original article: http://ostatic.com/blog/making-projects-easier-to-package-why-chromium-isnt-in-fedora Isn't it usually worth the effort to make slight modifications to your own program in order to use a popular and well-developed library?

    Read the article

  • Studies on code documentation productivity gains/losses

    - by J T
    Hi everyone, After much searching, I have failed to answer a basic question pertaining to an assumed known in the software development world:

    WHAT IS KNOWN: Enforcing a strict policy on adequate code documentation (be it Doxygen tags, Javadoc, or simply an abundance of comments) adds overhead to the time required to develop code.

    BUT: Having thorough documentation (or even an API) brings with it productivity gains (one assumes) for new and seasoned developers when they are adding features or fixing bugs down the road.

    THE QUESTION: Is the added development time required to guarantee such documentation offset by the gains in productivity down the road (in a strictly economic sense)? I am looking for case studies, or answers that can bring with them objective evidence supporting the conclusions that are drawn. Thanks in advance!

    Read the article

  • yield – Just yet another sexy c# keyword?

    - by George Mamaladze
    yield (see the MSDN C# reference) came, I guess, with .NET 2.0, and my feeling is that it's not as widely used as it could (or should) be.

    I am not going to talk here about the necessity and advantages of using the iterator pattern when accessing custom sequences (just google it). Let's look at it from the clean code point of view, and see if it really helps us to keep our code understandable, reusable and testable.

    Let's say we want to iterate a tree and do something with its nodes, for instance calculate a sum of their values. The most elegant way would seem to be a recursive method performing a classic depth traversal and returning the sum:

        private int CalculateTreeSum(Node top)
        {
            int sumOfChildNodes = 0;
            foreach (Node childNode in top.ChildNodes)
            {
                sumOfChildNodes += CalculateTreeSum(childNode);
            }
            return top.Value + sumOfChildNodes;
        }

    "Do One Thing"

    Nevertheless, it violates one of the most important rules: "Do One Thing". Our method CalculateTreeSum does two things at the same time: it travels inside the tree and it performs some computation, in this case calculating a sum. Doing two things in one method is definitely a bad thing, for several reasons:

        Understandability: readability / refactoring.
        Reusability: when overriding, there is no chance to override the computation without copying the iteration code, and vice versa.
        Testability: you are not able to test the computation without constructing the tree, and you are not able to test the correctness of the tree iteration.

    I want to spend some more words on this last issue. How do you test the method CalculateTreeSum when it contains two things in one: computation and iteration? The only chance is to construct a test tree and assert the result of the method call, in our case the sum, against our expectation. And if the test fails, you do not know whether the computation algorithm was wrong or the iteration. To top it all off: according to Murphy's Law, the iteration will have a bug as well as the calculation. Both bugs in combination will cause the sum to be accidentally exactly the same as you expect, and the test will PASS. :)

    Ok, let's use yield!

    That's why it is generally a very good idea not to mix but to isolate "things":

        private int CalculateTreeSumClean(Node top)
        {
            IEnumerable<Node> treeNodes = GetTreeNodes(top);
            return CalculateSum(treeNodes);
        }

        private int CalculateSum(IEnumerable<Node> nodes)
        {
            int sumOfNodes = 0;
            foreach (Node node in nodes)
            {
                sumOfNodes += node.Value;
            }
            return sumOfNodes;
        }

        private IEnumerable<Node> GetTreeNodes(Node top)
        {
            yield return top;
            foreach (Node childNode in top.ChildNodes)
            {
                foreach (Node currentNode in GetTreeNodes(childNode))
                {
                    yield return currentNode;
                }
            }
        }

    The two methods do not know anything about each other. One contains only the calculation logic, the other just the iteration logic. You can replace the tree iteration algorithm, switching from depth traversal to breadth traversal, or use a stack or the visitor pattern instead of recursion; this will not influence your calculation logic. And vice versa, you can replace the sum with a product or do whatever you want with the node values; the calculation algorithm is not aware of working on some tree or graph.

    How about not using yield?

    Now let's ask the question: what if we did not have the yield operator? A brief look at the generated code gives us an answer. The compiler generates a class roughly 150 lines long to implement the iteration logic:

        [CompilerGenerated]
        private sealed class <GetTreeNodes>d__0 : IEnumerable<Node>, IEnumerable, IEnumerator<Node>, IEnumerator, IDisposable
        {
            ...
            150 lines of generated code
            ...
        }

    Often we compromise code readability, cleanness, testability, etc. to reduce the number of classes, code lines, keystrokes and mouse clicks. This is human nature: we are lazy. Knowing and using such a sexy construct as yield allows us to be lazy, write very few lines of code, and at the same time stay clean and do one thing in a method. That's why I generally welcome using stuff like that.

    Note: The recursive depth traversal algorithm used above is possibly the most compact one, but not the best from the performance and memory utilization point of view. It was chosen to emphasize the other, primary aspects of this post.

    Read the article

  • Oracle Fusion Supply Chain Management (SCM) Designs May Improve End User Productivity

    - by Applications User Experience
    By Applications User Experience on March 10, 2011
    Michele Molnar, Senior Usability Engineer, Applications User Experience

    The Challenge: The SCM User Experience team, in close collaboration with product management and strategy, completely redesigned the user experience for Oracle Fusion applications. One of the goals of this redesign was to increase end user productivity by applying design patterns and guidelines and incorporating findings from extensive usability research. But a question remained: How do we know that the Oracle Fusion designs will actually increase end user productivity?

    The Test: To answer this question, the SCM Usability Engineers compared Oracle Fusion designs to their corresponding existing Oracle applications using the workflow time analysis method. The workflow time analysis method breaks tasks into a sequence of operators. By applying standard time estimates for all of the operators in the task, an estimate of the overall task time can be calculated. The workflow time analysis method has been recently adopted by the Applications User Experience group for use in predicting end user productivity. Using this method, a design can be tested and refined as needed to improve productivity even before the design is coded. For the study, we selected some of our recent designs for Oracle Fusion Product Information Management (PIM). The designs encompassed tasks performed by Product Managers to create, manage, and define products for their organization. (See Figure 1 for an example.) In applying this method, the SCM Usability Engineers collaborated with Product Management to compare the new Oracle Fusion Applications designs against Oracle's existing applications. Together, we performed the following activities:

        Identified the five most frequently performed tasks
        Created detailed task scenarios that provided the context for each task
        Conducted task walkthroughs
        Analyzed and documented the steps and flow required to complete each task
        Applied standard time estimates to the operators in each task to estimate the overall task completion time

    Figure 1. The interactions on each Oracle Fusion Product Information Management screen were documented, as indicated by the red highlighting. The task scenario and script provided the context for each task.

    The Results: The workflow time analysis method predicted that the Oracle Fusion Applications designs would result in productivity gains in each task, ranging from 8% to 62%, with an overall productivity gain of 43%. All other factors being equal, the new designs should enable these tasks to be completed in about half the time it takes with existing Oracle Applications. Further analysis revealed that these performance gains would be achieved by reducing the number of clicks and screens needed to complete the tasks.

    Conclusions: Using the workflow time analysis method, we can expect the Oracle Fusion Applications redesign to succeed in improving end user productivity. The workflow time analysis method appears to be an effective and efficient tool for testing, refining, and retesting designs to optimize productivity. The workflow time analysis method does not replace usability testing with end users, but it can be used as an early predictor of design productivity even before designs are coded. We are planning to conduct usability tests later in the development cycle to compare actual end user data with the workflow time analysis results. Such results can potentially be used to validate the productivity improvement predictions.
Used together, the workflow time analysis method and usability testing will enable us to continue creating, evaluating, and delivering Oracle Fusion designs that exceed the expectations of our end users, both in the quality of the user experience and in productivity. (For more information about studying productivity, refer to the Measuring User Productivity blog.)

    Read the article

  • Error installing Windows 7 64-bit on VirtualBox

    - by MetaDark
    I am trying to set up Windows in VirtualBox so I don't need to reboot on the rare occasion that I actually need it. The problem is, VirtualBox doesn't produce any errors when I insert the 32-bit installation CD, but it complains when I try to use the 64-bit installation. What!? I am already using the installation disc! I've checked my BIOS to see if I have SVM (AMD's version of VT) disabled, and all I see is "Enabled". I have:

        a K9N6PGM2-V2 motherboard
        a triple-core AMD Athlon II
        an NVIDIA nForce 430 integrated graphics card
        4 GB of RAM
        an 80 GB IDE drive
        a 1 TB SATA drive

    I don't think the last three specifications matter, but just in case. XP I am pretty sure the CD isn't broken (I am going to make sure in just a moment); what could be the cause of this problem?

    Edit: The 64-bit installation CD is not broken, but I found out when trying to install from the 32-bit version that it's trying to upgrade, not perform a fresh install - odd.

    Read the article

  • Decrease filesize when resizing with mogrify

    - by plua
    I love the command-line options of ImageMagick. mogrify is great for resizing images and changing quality, which is what I use most often. However, I have noticed that the file size is often larger than it should be, especially with small images. For instance, I have a regular 640px-wide photo, which I change to quality 80 and a width of 80px:

        mogrify -quality 80 -resize 80 file.jpg

    This works well: my image gets resized and the quality is changed to 80. However, the file size is around 40 KB. For such a tiny image, that is huge! When I use mtPaint and open the file and save it (not changing anything, just CTRL+O, CTRL+S), the file size decreases by more than 95%, to less than 2 KB! I have seen this is often the case. What goes wrong?
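
    A hedged guess at the first thing to check: by default ImageMagick carries over the original's metadata (EXIF data, embedded thumbnails, colour profiles), which can dominate the file size of an 80px thumbnail; the -strip option removes it. The actual savings will vary per image.

        # Same resize, but drop EXIF data, embedded thumbnails and profiles
        mogrify -strip -quality 80 -resize 80 file.jpg

        # Inspect what is actually stored inside the file
        identify -verbose file.jpg | grep -i 'profile\|exif' | head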

    Read the article
