Search Results

Search found 69140 results on 2766 pages for 'design time'.

Page 388/2766 | < Previous Page | 384 385 386 387 388 389 390 391 392 393 394 395  | Next Page >

  • How to determine where on a path my object will be at a given point in time?

    - by Dave
    I have a map and an object that is meant to move from start to end in X amount of time. The movements are all straight lines, as curves are beyond my ability at the moment. So I am trying to get the object to move between these points, with waypoints along the way that keep it on a given path. The speed of the object is determined by how long it will take to get from start to end (based on X). This is what I have so far:

        // get_now() returns seconds since epoch
        var timepassed = get_now() - myObj[id].start; // seconds since epoch at departure
        var timeleft = myObj[id].end - get_now();     // seconds since epoch at arrival
        var journey_time = 60; // total journey time, in minutes
        var array = [[650,250]]; // waypoints along the straight paths
        if (step == 0 || step < array.length) {
            var destinationx = array[step][0];
            var destinationy = array[step][1];
        } else if (step == array.length) {
            var destinationx = 250;
            var destinationy = 100;
        } else {
            var destinationx = myObj[id].startx;
            var destinationy = myObj[id].starty;
        }
        step++;

    When the user logs in at any given time, the object needs to be drawn in the correct place on the path, almost as if it had been travelling along the path while the user was away from the PC, using only the information above. How do I do this? Note: the camera angle in the game is a bird's eye view, so it's a straightforward X:Y rather than isometric angles.
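
    For the question itself (where should the object be drawn right now?), one common approach is to precompute the cumulative length of each straight segment, turn the elapsed time into a fraction of the total journey, and interpolate within the segment that fraction falls in. A minimal sketch, not a drop-in fix; the function and parameter names are invented for illustration:

        // Sketch: given departure time and total journey length in seconds,
        // compute where on the waypoint path the object should be "now".
        // Assumes straight segments and constant speed over the whole journey.
        function positionAt(start, waypoints, end, departEpoch, journeySecs, now) {
            var points = [start].concat(waypoints, [end]); // e.g. [[x, y], ...]
            // Cumulative distance along the path
            var dists = [0];
            for (var i = 1; i < points.length; i++) {
                var dx = points[i][0] - points[i - 1][0];
                var dy = points[i][1] - points[i - 1][1];
                dists.push(dists[i - 1] + Math.sqrt(dx * dx + dy * dy));
            }
            var total = dists[dists.length - 1];
            // Fraction of the journey completed, clamped to [0, 1]
            var t = Math.min(Math.max((now - departEpoch) / journeySecs, 0), 1);
            var target = t * total;
            // Find the segment containing that distance and interpolate within it
            for (var j = 1; j < points.length; j++) {
                if (target <= dists[j]) {
                    var f = (target - dists[j - 1]) / (dists[j] - dists[j - 1] || 1);
                    return [
                        points[j - 1][0] + f * (points[j][0] - points[j - 1][0]),
                        points[j - 1][1] + f * (points[j][1] - points[j - 1][1])
                    ];
                }
            }
            return points[points.length - 1]; // journey finished
        }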

    Read the article

  • Software development process for a part-time university project for one developer?

    - by Pricey
    I will be doing a part-time university project soon. The time frame is around 8 months, with approximately 10-15 hours a week spent working on it and a review by a tutor each quarter. My question is: what software development process would you recommend when the course requires you to work on your own, managing both yourself and the project? I wanted to use a weekly or bi-weekly iterative approach, but most processes seem tailored to teams of people. I am looking at XP (Extreme Programming) or Scrum as something outside the norm for university work, though I don't know a lot about Scrum yet. One question I have: can you say you are doing XP without pair programming? My tutor seems to think I have to stick to all the practices or I can't claim to be doing it (never mind that I am working alone). We can have external user input as well, but given the small timescales of part-time work it may be more practical for me to act as the user too, which is not what I would prefer, considering how easily I can get lost in the design.

    Read the article

  • Should you remove all warnings in your Verilog or VHDL design? Why or why not?

    - by Brian Carlton
    In (regular) software development I have worked at companies where the gcc option -Wall is used to show all warnings, which then need to be dealt with. With non-trivial FPGA/ASIC designs in Verilog or VHDL there are often many, many warnings. Should I worry about all of them? Do you have any specific techniques to suggest? My flow mainly targets FPGAs (Altera and Xilinx in particular), but I assume the same rules would apply to ASIC design, possibly more so given the inability to change the design after it is built.

    Read the article

  • How do I cleanly design a central render/animation loop?

    - by mtoast
    I'm learning some graphics programming, and am in the midst of my first such project of any substance. But I am really struggling at the moment with how to architect it cleanly. Let me explain. To display complicated graphics in my current language of choice (JavaScript -- have you heard of it?), you have to draw graphical content onto a <canvas> element. And to do animation, you must clear the <canvas> after every frame (unless you want previous graphics to remain). Thus, most canvas-related JavaScript demos I've seen have a function like this:

        function render() {
            clearCanvas();
            // draw stuff here
            requestAnimationFrame(render);
        }

    render, as you may surmise, encapsulates the drawing of a single frame. What a single frame contains at a specific point in time is determined by the program state. So, in order for my program to do its thing, I just need to look at the state and decide what to render. Right? Right. But that is more complicated than it seems. My program is called "Critter Clicker". In it, you see several cute critters bouncing around the screen. Clicking on one of them agitates it, making it bounce around even more. There is also a start screen, which says "Click to start!" before the critters are displayed. Here are a few of the objects I'm working with:

        StartScreenView // represents the start screen
        CritterTubView  // represents the area in which the critters live
        CritterList     // a collection of all the critters
        Critter         // a single critter model
        CritterView     // view of a single critter

    Nothing too egregious with this, I think. Yet when I set out to flesh out my render function, I get stuck, because everything I write seems utterly ugly and reminiscent of a certain popular Italian dish. Here are a couple of approaches I've attempted, with my internal thought process included, and unrelated bits excluded for clarity.

    Approach 1: "It's conditions all the way down"

        // "I'll just write the program as I think it, one frame at a time."
        if (assetsLoaded) {
            if (userClickedToStart) {
                if (critterTubDisplayed) {
                    if (crittersDisplayed) {
                        forEach(crittersList, function(c) {
                            if (c.wasClickedRecently) {
                                c.getAgitated();
                            }
                        });
                    } else {
                        displayCritters();
                    }
                } else {
                    displayCritterTub();
                }
            } else {
                displayStartScreen();
            }
        }

    That's a very much simplified example, yet even with only a fraction of all the rendering conditions visible, render is already starting to get out of hand. So I dispense with that and try another idea:

    Approach 2: Under the Rug

        // "Each view object shall be responsible for its own rendering.
        // I'll pass each object the program state, and each can render itself."
        startScreen.render(state);
        critterTub.render(state);
        critterList.render(state);

    In this setup, I've essentially just pushed those crazy nested conditions to a deeper level in the code, hiding them from view. In other words, startScreen.render would check state to see if it actually needed to be drawn or not, and take the correct action. But this seems to solve only a code-aesthetic problem. The third and final approach I'm considering that I'll share is the idea that I could invent my own "wheel" to take care of this. I'm envisioning a function that takes a data structure defining what should happen at any given point in the render call -- revealing the conditions and dependencies as a kind of tree.

    Approach 3: Mad Scientist

        renderTree({
            phases: ['startScreen', 'critterTub', 'endCredits'],
            dependencies: {
                startScreen: ['assetsLoaded'],
                critterTub: ['startScreenClicked'],
                critterList: ['critterTubDisplayed']
                // etc.
            },
            exclusions: {
                startScreen: ['startScreenClicked']
                // etc.
            }
        });

    That seems kind of cool. I'm not exactly sure how it would actually work, but I can see it being a rather nifty way to express things, especially if I flex some of JavaScript's events. In any case, I'm a little bit stumped because I don't see an obvious way to do this. If you couldn't tell, I'm coming to this from the web development world, and finding that doing animation is a bit more exotic than arranging an MVC application for handling simple request-response cycles. What is the clean, established solution to this common-I-would-think problem?
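
    One established answer to this is a small scene (or state) machine: each screen owns its own update and render logic, and render only delegates to the active scene. Below is a minimal sketch under that pattern; the scene API and names are invented for illustration, not an existing library:

        // Sketch: a minimal scene/state machine. Only the active scene updates
        // and renders each frame; transitions replace the nested conditions.
        var scenes = {
            start: {
                update: function (state) { if (state.clicked) return 'tub'; },
                render: function (state) { /* draw "Click to start!" */ }
            },
            tub: {
                update: function (state) { /* move critters, agitate clicked ones */ },
                render: function (state) { /* draw tub, then each critter view */ }
            }
        };
        var current = 'start';

        function render(state) {
            clearCanvas();
            var next = scenes[current].update(state);
            if (next) current = next; // a returned name triggers a transition
            scenes[current].render(state);
            requestAnimationFrame(function () { render(state); });
        }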

    Read the article

  • Information about how much time is spent in a function, based on the input of this function

    - by olchauvin
    Is there a (quantitative) tool to measure the performance of functions based on their input? So far, the tools I have used to measure the performance of my code (like JetBrains dotTrace for .Net) tell me how much time I spent in each function, but I'd like more information about the parameters passed to the function, in order to know which parameters impact performance the most. Let's say I have a function like this:

        int myFunction(int myParam1, int myParam2)
        {
            // Do and return something based on the value of myParam1 and myParam2.
            // The code is likely to use if, for, while, switch, etc....
        }

    I would like a tool that could tell me how much time is spent in myFunction based on the values of myParam1 and myParam2. For example, the tool would give me a result for "myFunction" looking like this:

        value of  | value of  | Number of | Average
        myParam1  | myParam2  | calls     | time
        ----------|-----------|-----------|---------
        1         | 5         | 500       | 301 ms
        2         | 5         | 250       | 1253 ms
        3         | 7         | 1268      | 538 ms
        ...

    That would mean myFunction has been called 500 times with myParam1=1 and myParam2=5, and that with those parameters it took 301 ms on average to return a value. The idea behind this is to do some statistical optimization, by organizing my code so that the blocks of code most likely to be executed are tested before the ones less likely to be executed. To put it bluntly, if I know which values are used the most, I can reorganize the if/while/for etc. structure of the function (and the whole program) to optimize it. I'd like to find such a tool for C++, Java or .Net. Note: I am not looking for technical tips to optimize the code (like passing parameters as const, inlining functions, initializing the capacity of vectors, and the like).
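
    I don't know of a profiler that reports this table out of the box, but its shape can be approximated with hand-rolled instrumentation: wrap the function, bucket timings by parameter values, and dump the buckets afterwards. A sketch in JavaScript (the wrapper idea carries over to C++, Java or .Net; the names are illustrative):

        // Sketch: wrap a function and bucket call counts and total time by the
        // argument values. A real C++/Java/.Net version would use a
        // high-resolution timer and aggregate off the hot path.
        var stats = {};
        function profiled(fn) {
            return function (p1, p2) {
                var key = p1 + '|' + p2;
                var t0 = Date.now();
                var result = fn(p1, p2);
                var s = stats[key] || (stats[key] = { calls: 0, totalMs: 0 });
                s.calls += 1;
                s.totalMs += Date.now() - t0;
                return result;
            };
        }
        // myFunction = profiled(myFunction);
        // Afterwards: average for a bucket = stats[key].totalMs / stats[key].calls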

    Read the article

  • Protecting design ideas from being copied by other websites?

    - by mickburkejnr
    Hi everyone, I'm planning one project at the moment while building a completely different one at the same time. Both projects are quite innovative in the way they work or the way they are presented. One of them hasn't been done before; the other has competition, but I feel the competitors' websites are light years behind what I'm doing. Is there a way for me to prevent the way my sites work or are presented from being stolen? I've thought of patenting parts of them, but that requires £10,000 and I don't have that amount of money. Also, would putting a Copyright notice on the site, or an All Rights Reserved tag, give me any muscle when going after websites that I feel have stolen my ideas (if they have)? Cheers!

    Read the article

  • How can I run and jump at the same time?

    - by Jan
    I'm having some trouble with the game I started: http://testing.fyrastudio.com/lab/tweetOlympics/v0.002/ The thing is that I have an athlete who must run and jump at the same time -- a race with obstacles. I have him running (by pressing the letter Q repeatedly). I also have him jumping (with the letter P). But when he runs and jumps at the same time, he seems to jump in place instead of moving forward with the jump... any ideas how I can fix this? This is the code I'm using for running and jumping in a continuous loop:

        // If accelerating, and the last acceleration was less than X seconds ago,
        // he's running and accelerating.
        if (athlete.accelerating && timeCurrent - athlete.last_acceleration > athlete.delay_acceleration) {
            athlete.accelerating = false;
            athlete.last_acceleration = timeCurrent;
            athlete.running = true;
        }
        if (!athlete.accelerating && timeCurrent - athlete.last_acceleration > athlete.delay_acceleration) {
            athlete.decelerating = true;
        }
        if (athlete.decelerating && timeCurrent - athlete.last_deceleration > athlete.delay_deceleration) {
            if (athlete.speed >= 1) {
                // athlete starts to decelerate
                athlete.last_deceleration = timeCurrent;
                athlete.decelerate();
            } else {
                athlete.running = false;
            }
        }
        if (athlete.running) {
            athlete.position += athlete.speed;
        }
        if (athlete.jumping) {
            if (athlete.jump_height < 1) {
                athlete.jump_height = 1;
            } else {
                if (athlete.jump_height >= athlete.jump_max_height) {
                    athlete.jump_height = athlete.jump_max_height;
                    athlete.jumping = false;
                } else {
                    athlete.jump_height = athlete.jump_height * athlete.jump_speed;
                }
            }
        }
        if (!athlete.jumping) {
            if (athlete.jump_height > 1) {
                athlete.jump_height = athlete.jump_height * 0.9;
            } else {
                athlete.jump_height = 1;
            }
        }
        athlete.scaleX = athlete.scaleY = athlete.jump_height;
        athlete.x = athlete.position;

    Thanks!
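
    A hedged guess at the fix, sketched against the field names above: horizontal movement stops during a jump because position only advances while running is true, so advance it whenever the athlete is moving, and let the jump drive only the vertical scale:

        // Sketch of the suggested change: position advances whenever the athlete
        // is moving, not only while running, so a jump carries him forward.
        if (athlete.running || athlete.jumping) {
            athlete.position += athlete.speed; // horizontal motion never pauses mid-jump
        }
        // ... keep the existing jump_height logic unchanged ...
        athlete.scaleX = athlete.scaleY = athlete.jump_height;
        athlete.x = athlete.position;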

    Read the article

  • Immutable design with an ORM: How are sessions managed?

    - by Programmin Tool
    If I were to build a site in a mutable language like C# and use NHibernate, I would normally approach sessions with the idea of creating them only when needed and disposing of them at the end of the request. This has helped with keeping one session across multiple transactions by a user, while keeping it from staying open so long that its state might become corrupted. In an immutable system, like F#, I would think I shouldn't do this, because it supposes that a single session could be updated constantly by any number of inserts/updates/deletes/etc... I'm not against the "using" solution, since I would think connection pooling will help cut down on the cost of connecting every time, but I don't know whether all database systems do connection pooling. It just seems like there should be a better way that doesn't compromise the immutability goal. Should I just do a simple "using" block per transaction, or is there a better pattern for this?
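
    For reference, "session per transaction" can be expressed as a plain function wrapper, which fits an immutable style in that no long-lived session is ever shared or mutated between callers. A minimal sketch in JavaScript for illustration; the ORM calls below (openSession, beginTransaction, and so on) are hypothetical stand-ins for NHibernate's API, not real bindings:

        // Sketch: one short-lived session scoped to one unit of work.
        function withSession(orm, work) {
            var session = orm.openSession();
            var tx = session.beginTransaction();
            try {
                var result = work(session); // all mutation is confined in here
                tx.commit();
                return result;
            } catch (e) {
                tx.rollback();
                throw e;
            } finally {
                session.close(); // the pooled connection is returned here
            }
        }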

    Read the article

  • How should an undergraduate programmer organize his time to learn as much as possible?

    - by nischayn22
    I started programming recently (I'm in the pre-final year of a CS degree) and now feel like there's a sea of undiscovered treasure out there for me. So I decided to cover as much as possible before I look for a job after graduation. I started to read books (The C++ Programming Language, Introduction to Algorithms, Cracking the Coding Interview, Programming Pearls, etc.), participate in StackExchange sites, solve problems (InterviewStreet and ProjectEuler), code for open source, chat with fellow programmers/mentors, and try to learn more and more. Good; then what's the problem? The problem is that I am trying to do many things, but I doubt I am utilizing my time properly. I am reading many books, and sometimes I just leave a book halfway through (jumping from one book to another); sometimes I spend way too much time chatting, or getting lost somewhere in the huge world of the internet; and lastly there is the wasteful burden of attending classes (I don't think my teachers know the material well enough, or perhaps I just prefer learning on my own). Maybe some of you have been in a similar situation. How did you organize your time? Or what do you think is the best way for an undergraduate to organize it? Also, what mistakes am I making that you can warn me about?

    Read the article

  • Computer Science graduate. Master or full-time job? [closed]

    - by Alex
    Possible Duplicate: Is a Master's worth it?

    I have just received my Bachelor's degree in Computer Science and I have to make a choice: whether to continue with the full-time job I just got, or to put the job slightly in the background and concentrate on getting a Master's degree. I am currently working as an embedded C developer in a small company. The cool thing is that, because the team is quite small, my engineering ideas really play a part in the final product. Not to mention that I get to work on very different areas of embedded programming: device drivers and the development of a real-time OS. I am very enthusiastic about my job and what I do. On the other hand, in my country there isn't really a Master's degree that focuses on embedded development, so my gain from the degree would mainly be general computer science knowledge. That being said, is it worth giving up all my spare time, which I now use to study different areas of embedded devices, and working mainly to get a degree, rather than pure knowledge and experience in the field I want to work in?

    Read the article

  • How are certain analytics metrics (time on site, etc.) usually distributed?

    - by a barking spider
    I'm not sure if I've come to the right place to ask this question, but I'm gathering information for a research project. We're trying to design an experiment that will rely heavily on web analytics, and I'm trying to figure out sensible values of the mean +/- standard deviation for the following visitor-level metrics (i.e., visitor 1 spends 2 minutes on the site, visitor 2 spends 1 minute -- mean 1.5 +/- 0.71...): time spent on site, and page views. If time allowed, we would put up the sites and gather the information ourselves, but we have a grant deadline coming up. I realize that the distributions of these quantities are probably heavily skewed towards zero, but we need some reasonable figures, or estimates of them, in order to do sample size calculations, etc. Anyway, I'm not sure where else to turn, and I have certainly had a difficult time finding these values in the prior literature. If someone could direct me to a paper with the right information, or if you have these figures on hand (perhaps taken directly from your logs!), that would be amazing, and I'd love to hear from you. Thanks in advance, and even though I'm not allowed to reveal too much, rest assured that this info will be applied towards a good cause :)
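
    For context, the standard sample-size formula these figures would feed, when estimating a mean within margin E at a given confidence level, is n = (z·σ/E)². A small sketch with invented inputs (and note that a heavily skewed distribution makes this only a rough guide):

        // Sketch: minimum sample size to estimate a mean within margin E.
        function sampleSize(sigma, marginE, z) {
            z = z || 1.96; // 95% confidence
            return Math.ceil(Math.pow(z * sigma / marginE, 2));
        }
        // e.g. time-on-site with sigma 0.71 min and a 0.1 min margin:
        // sampleSize(0.71, 0.1) -> 194 visitors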

    Read the article

  • Any advice about how to approach the design of an application?

    - by VansFannel
    Hello. I want to design an application and I don't know where to start. I know I can use UML to design it, but I don't know the steps I should follow. I've started with the UML class diagram, but I suspect I've actually been modelling the database rather than the application's classes. If I haven't explained this well, tell me. Is there any tutorial about how to design an application? Thank you.

    Read the article

  • How to design an interface, where things need to be called in a specific sequence?

    - by Vorac
    The task is to configure a piece of hardware within the device, according to some input specification. This should be achieved as follows:

    1) Collect the configuration information. This can happen at different times and places. For example, module A and module B can both request (at different times) some resources from my module. Those 'resources' are actually what the configuration is.
    2) After it is clear that no more requests are going to be made, a startup command, giving a summary of the requested resources, needs to be sent to the hardware.
    3) Only after that can (and must) detailed configuration of said resources be done.
    4) Also, only after 2) can (and must) routing of the selected resources to the declared callers be done.

    A common cause of bugs, even for me, who wrote the thing, is getting this order wrong. What naming conventions, designs or mechanisms can I employ to make the interface usable by someone who sees the code for the first time?
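
    One mechanism that makes the order hard to get wrong is a phased interface: each step returns an object exposing only the operations that are legal next. A minimal sketch with invented names (sendStartupCommand and the rest are placeholders for the real calls):

        // Sketch: a phased builder. Calling things out of order fails
        // immediately instead of silently corrupting the hardware state.
        function beginCollecting() {
            var requests = [];
            return {
                request: function (resource) {       // phase 1: collect
                    requests.push(resource);
                    return this;
                },
                startup: function () {               // phase 2: summary command
                    sendStartupCommand(requests);    // placeholder for the real call
                    return {
                        configure: function (resource, opts) { /* phase 3 */ return this; },
                        route: function (resource, caller) { /* phase 4 */ return this; }
                    };                               // no request() here: collection is closed
                }
            };
        }
        // Usage: beginCollecting().request('dma0').request('irq5')
        //            .startup().configure('dma0', {}).route('dma0', moduleA);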

    Read the article

  • How do you quantify competency in terms of time (years)?

    - by o.k.w
    While looking for a job via agencies some time ago, I kept getting questions from the recruitment agents, or in the application forms, like: How many years of experience do you have in Oracle, ASP.NET, J2EE, etc.? At first I answered faithfully... 5 yrs, 7 yrs, 2 yrs, none, a few months, etc. Then I thought: I could be doing something shallow for 7 years and not be competent at it, simply because I am just doing minor support for a legacy system running SQL 2000, which has required 10 days of my time over the past 7 years. Eventually I declined to answer such questions, and I wonder why they are still asked. Anyone who just graduated with a computer science degree can claim 3 to 4 years of experience in anything they 'touched' in the curriculum, which to me can be equivalent to zero or 10 years depending on how you look at it. It might have held true decades ago, when programmers and IT skills were of a very different nature. I might be wrong, but I really doubt 'time' or 'years' is a good gauge of competency or experience anymore. Any opinions/rebuttals are welcome!

    Read the article

  • Why do most people change from being a contractor to full time at my companies, but not the other way around?

    - by ????
    I have seen most people change from being a contractor to being a full-time employee, but not the other way around. And that happened in startups that had maybe a 20% chance of an IPO or being acquired, and in another that had maybe a 50% chance. As far as I know, the rate (even for a graphics designer or programmer with 3 years of experience) can be $75 to $80 an hour, while a programmer with 15 years of experience may get $120,000 per year. So the programmer with 15 years of programming experience earns $10,000 per month, while the programmer or graphics designer with 3 years of experience gets about $14,000 per month ($80 x 22 days x 8 hours). I know I would have to buy my own insurance, but I can't imagine that costing $4,000 each month... maybe $200, $300 at most. I would probably need to pay Social Security (FICA) both ways (as myself and as the self-employer = $210 x 2), but still, each month there would be an extra $3,000 or so of income. Is the above calculation correct? Yet most often I see contractors wanting to become, or becoming, full-time employees, but not full-time employees becoming contractors. Does somebody know what the reason is?
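
    For what it's worth, the arithmetic in the question can be laid out as a quick check. A sketch using only the figures quoted above (all of them are the question's own assumptions, not advice):

        // Sketch: the question's own comparison, spelled out.
        var contractorMonthly = 80 * 8 * 22;   // $80/hr x 8 hrs x 22 days = $14,080
        var employeeMonthly = 120000 / 12;     // $120,000/yr = $10,000/month
        var extraFica = 210;                   // employer-side FICA, per the question
        var insurance = 300;                   // the question's high-end guess
        var net = contractorMonthly - employeeMonthly - extraFica - insurance;
        // net = 3,570 -> roughly the "extra $3,000" the question arrives at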

    Read the article

  • Build-time dependency resolving coming to Entity Framework. Now, how about those BI tools too?

    - by jamiet
    Three months ago I wrote a blog post entitled Some thoughts on Visual Studio database references and how they should be used for SQL Server BI, where I shared some thoughts on a feature available to database developers in Visual Studio 2010 that I would love to see added to SQL Server Integration Services (SSIS), Analysis Services (SSAS) and Reporting Services (SSRS). In there I said:

    Over the past few weeks I have been making heavy use of the Database tools in Visual Studio 2010 and one of the features that has most impressed me has been database references. Database references allow you to have stored procedures in your database project that refer to objects (tables, views, stored procedures etc…) that exist in other database projects, and hence when you build your database project it is able to resolve those references. It occurred to me that similar functionality would be incredibly useful for SQL Server Integration Services (SSIS), Analysis Services (SSAS) & Reporting Services (SSRS) projects. After all, reports, packages and data source views are rife with references to database objects – why shouldn't we be able to have design-time dependency checking in our BI projects the same way that database and .Net developers do?

    In that blog post I shared links to three Connect submissions where I requested this feature be added to SSIS, SSAS & SSRS. In addition I also submitted a request that the feature be extended to .Net projects, so that any reference to a database object in a .Net assembly can be resolved at build time. That Connect submission is at [Entity FX] Use database references to constrain the EDM, and overnight it received this comment from Microsoft: "We have been working on this feature for a while and and will be available soon". This is really good news - it improves the Microsoft developer ecosystem by ensuring that invalid references to database objects get caught at build time (ideally as part of a Continuous Integration build) rather than at run time. [Hopefully it might nip this code-first nonsense in the bud too (Ooo... way to incite flame comments :) )]. If you want to see this feature in action then check out a video from TechEd Europe last month entitled SQL Server Developer Tools Code-named "Juneau", where it is demo'd by Lance Delano and Tim Laverty.

    The point of this blog post, though, is not just to draw attention to this forthcoming feature for .Net developers; it is to ask you to petition Microsoft to get this feature added to SSIS/SSAS/SSRS too. After all, we already know (from the video above) that the feature is coming to the new code-named Juneau development environment, plus we also know that Juneau will be the development environment for SSIS/SSAS/SSRS as well - is it really much of a stretch to expect the BI tools to have access to this great feature too? I don't think so, and if you agree with me then I urge you to vote and add a comment to the Connect submissions that are requesting this feature. They are at:

    [SSAS] Declare Object Dependancies
    [SSRS] Declare Object Dependancies
    [SSIS] Declare Object Dependancies (Update: apparently someone at Microsoft has deemed it necessary to set this to private and I am not able to change it back even though I submitted it. You can still vote on the other two though.)

    Let's close that SQL Developer Gap!

    @Jamiet

    Read the article

  • How to seek to a specific time in an RTP stream?

    - by Cipi
    I am streaming a prerecorded H264 video that has the following structure:

        [I] [x] [x] [x] [I] [x] [x] [x] [I]...

    In between the IDRs (the [I]s in my structure) I have 32 other frames (only 3 are shown here) -- all the other stuff that is not an IDR, like SEI, SPS, PPS... the [x]s. Now, let's assume the timing of my frames is as follows:

        TIME:  1   2   3   4   5   6   7   8   9
        FRAME: [I] [x] [x] [x] [I] [x] [x] [x] [I]...

    Now I want to seek to time 4. If I seek to that frame and send it, the picture gets messed up, because the decoder needs an IDR to decode it properly. So I resorted to finding the appropriate IDR (in this case the one at time 1) and sending it as the frame with time 4. Now the picture is decoded properly, all is well... but... If my GOV is 32, and I need to send the non-IDR frame at index 31, and the time span between it and the corresponding IDR is 3 seconds, I actually end up 3 seconds earlier than the time I want. So this is not precise, because I cannot seek into the middle of the GOV time span. Also, I can't set a smaller GOV, so I want other ideas... My other idea was to send the last known IDR, and then send all the other non-IDR frames that come before the one I want, setting the RTP time of all of them to be the same as the corresponding IDR's. In this case the picture gets decoded perfectly, but in the case above, the 3 seconds that follow the non-IDR frame with the wanted time get fast-paced in the decoder/player (there is no instantaneous seek)... Any ideas? Or can I only seek to IDRs, and not to the frames in between?
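
    A sketch of the second idea in code form: pick the latest IDR at or before the target from a frame index, and mark everything before the target as decode-only, so the player neither corrupts the picture nor fast-plays the catch-up frames. The index structure here is an assumption for illustration, not part of RTP itself:

        // Sketch: choose frames for a seek from a frame index. Each entry is
        // assumed to look like { time: seconds, isIdr: boolean }; the index
        // itself is something built while recording the stream.
        function framesForSeek(index, targetTime) {
            var idr = 0;
            for (var i = 0; i < index.length && index[i].time <= targetTime; i++) {
                if (index[i].isIdr) idr = i; // latest IDR at or before the target
            }
            return index.slice(idr)
                .filter(function (f) { return f.time <= targetTime; })
                .map(function (f) {
                    // decode-only until the target, so playback starts exactly there
                    return { frame: f, render: f.time >= targetTime };
                });
        }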

    Read the article

  • Delight and Excite

    - by Applications User Experience
    Mick McGee, CEO & President, EchoUser

    Editor's Note: EchoUser is a User Experience design firm in San Francisco and a member of the Oracle Usability Advisory Board. Mick and his staff regularly consult on Oracle Applications UX projects.

    Being part of a user experience design firm, we have the luxury of working with a lot of great people across many great companies. We get to help people solve their problems. At least we used to. The basic design challenge is still the same; however, the goal is not necessarily to solve "problems" anymore; it is, "I want our products to delight and excite!" The question for us as UX professionals is how to design to those goals, and then how to assess them from a usability perspective. I'm not sure where I first heard "delight and excite" (a book? a blog post? a Facebook status? a Steve Jobs quote?), but now I hear these listed as user experience goals all the time. In particular, somewhat paradoxically, I routinely hear them in enterprise software conversations. And when asking these same enterprise companies what will make the project successful, we very often hear, "Make it like Apple." In past days it was "make it like Yahoo" (or Amazon or Google), but now Apple is the common benchmark. Steve Jobs and Apple were not secrets, but with Jobs' passing and Apple becoming the world's most valuable company in the last year, the impact of great design and experience is suddenly very widespread. In particular, users' expectations have gone way up. Being an enterprise company is no shield against the general expectations that users now have, for all products.

    Designing a "Minimum Viable Product"

    The user experience challenge has historically been, to echo the words of Eric Ries (author of Lean Startup), to create a "minimum viable product": the proverbial "make it good enough". But, in our profession, the "minimum viable" part of that phrase has oftentimes, unfortunately, referred to the design and user experience. Technology typically dominated the focus of the biggest, most successful companies. Few have had the laser focus of Apple to also create and sell design and user experience alongside great technology. But now that Apple is the most valuable company in the world, copying their success is a common undertaking. Great design is now a premium offering that everyone wants, from the one-person startup to the largest companies, consumer and enterprise. This emerging business paradigm will have significant impact across the user experience design process and profession. One area that particularly interests me is: how are we going to evaluate these new emerging "delight and excite" experiences, which are further customized to each particular domain?

    How to Measure "Delight and Excite"

    Traditional usability measures of task completion rate, assists, time, and errors are still extremely useful in many situations; however, they are too blunt to offer much insight into emerging experiences. "Satisfaction" is usually assessed in user testing, with roughly the same importance as the objective metrics above. Various surveys and scales have provided ways to measure satisfying UX, with whatever questions they include. However, to meet the demands of new business goals and keep users at the center of design and development processes, we have to explore new methods to better capture custom-experience goals and emotion-driven user responses.
    We have had success assessing custom experiences, including "delight and excite", by employing a variety of user testing methods that tend to combine formative and summative techniques (formative being focused more on identifying usability issues and ways to improve design, and summative focused more on metrics). Our most successful tool has been one we've been using for a long time: the Magnitude Estimation Technique (MET). But it's not necessarily about MET as a measure, rather about how it is created.

    Caption: For one client, EchoUser did two rounds of testing. Each test was a mix of performing representative tasks and gathering qualitative impressions. Each user participated in an in-person moderated 1-on-1 session for 1 hour, using a testing set-up where they held the phone. The primary goal was to identify usability issues and recommend design improvements.

    MET is based on a definition of the desired experience, which users then use to rate items of interest (usually tasks in a usability test). In other words, a custom experience definition needs to be created. This can then be used to measure satisfaction in accomplishing tasks; "delight and excite"; or anything else from strategic goals, user demands, or elsewhere. For reference, our standard MET definition in usability testing is: "User experience is your perception of how easy to use, well designed and productive an interface is to complete tasks."

    Articulating the User Experience

    We've helped construct experience definitions for several clients to better match their business goals. One example is a modification of the above that was needed for a company that makes medical-related products: "User experience is your perception of how easy to use, well-designed, productive and safe an interface is for conducting tasks. 'Safe' is how free an environment (including devices, software, facilities, people, etc.) is from danger, risk, and injury." Another example is from a company that is pushing hard to incorporate "delight" into their enterprise business line: "User experience is your perception of a product's ease of use and learning, satisfaction and delight in design, and ability to accomplish objectives." I find the last one particularly compelling in that there is little that identifies the experience as being for a highly technical enterprise application. That definition could easily be applied to any number of consumer products.

    We have gone further than the above, including "sexy" and "cool" where decision-makers insisted they were part of the desired experience. We have also applied it to completely different experiences where the "interface" was, for example, riding public transit, the "tasks" were train rides, and we followed the participants through the train-riding journey and rated various aspects accordingly: "A good public transportation experience is a cost-effective way of reliably, conveniently, and safely getting me to my intended destination on time."

    To construct these definitions, we've employed both bottom-up and top-down approaches, depending on circumstances. For bottom-up, user inputs help dictate the terms that best fit the desired experience (usually by way of cluster and factor analysis). Top-down depends on strategic, visionary goals expressed by upper management that we then attempt to integrate into product development (e.g., "delight and excite"). We like a combination of both approaches to push the innovation envelope, but still be mindful of current user concerns.
    Hopefully the idea of crafting your own custom experience, and a way to measure it, has given you some ideas about how to adapt your user experience needs to whatever company you are in. Whether product-development or service-oriented, nearly every company is ultimately providing a user experience.

    The Bottom Line

    Creating great experiences may have been popularized by Steve Jobs and Apple, but I'll be honest: it's a good feeling to be moving from "good enough" to "delight and excite," despite the challenge that entails. In fact, it's because of that challenge that we will expand what we do as UX professionals to help deliver and assess those experiences. I'm excited to see how we, Oracle, and the rest of the industry will live up to that challenge.

    Read the article

  • Is there a Windows equivalent of Unix 'CPU steal time'?

    - by Steffen Opel
    In order to assess performance monitoring accuracy on virtualization platforms, CPU steal time has become an increasingly relevant metric - see EC2 monitoring: the case of stolen CPU for an instructive summary in the context of Amazon EC2, and IBM's paper on CPU time accounting for a more in-depth technical explanation (including illustrations) of the concept:

        Steal time is the percentage of time a virtual CPU waits for a real CPU
        while the hypervisor is servicing another virtual processor.

    Accordingly, it is exposed in most related Unix/Linux monitoring tools nowadays - see e.g. the %steal or st columns in sar or top:

        st -- Steal Time
        The amount of CPU 'stolen' from this virtual machine by the hypervisor
        for other tasks (such as running another virtual machine).

    I've been unable to figure out how to capture the same metric on Windows, though. Is this possible already? (Ideally for the Windows Server 2008 R2 AMIs on EC2, and via a respective Windows performance counter, of course.)

    Read the article

  • Schedule a task to run every day within a time range?

    - by barlop
    How do I schedule a task to run once, at any time within a given range, and also just once a day, without specifying an exact time? Can Windows Task Scheduler do that? Specifically, if my computer is off, on standby, or hibernating at the scheduled time, I want the task to run once the machine is on again, provided it hasn't already run that day and the time window has passed. I see an option to wake the machine to run the task, but could I then put it back to sleep? And, as mentioned, I'd like the task to be able to run when the computer comes back on.

    Read the article

  • Which way should we choose to shorten backup time?

    - by facebook-100005613813158
    A company performs a full backup of its data on a daily basis for disaster recovery purposes. However, its backup process cannot be completed within the assigned backup window. What would you recommend to this company about how to restructure its backup environment in order to minimize the backup time? We have 4 candidates:

    1. Perform LAN-based backup
    2. Weekly full backup and daily incremental
    3. Weekly full backup and daily cumulative
    4. Add more ISLs to increase bandwidth

    Comparing incremental backup with cumulative backup, the incremental backup time is surely shorter than the cumulative backup time. But I don't know whether adding more ISLs is allowed in an existing storage system, or whether that operation can really shorten the backup time.
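
    As a rough illustration of why the incremental scheme gives the shortest daily window, a sketch with invented sizes (real change rates vary):

        // Sketch: daily backup volume under each scheme, with made-up numbers.
        var fullSize = 1000;   // GB backed up in the weekly full
        var dailyChange = 50;  // GB of data that changes per day (assumed)
        // Incremental: each day captures only changes since the previous backup.
        var incrementalDay = dailyChange;              // ~50 GB every day
        // Cumulative: each day captures all changes since the last full backup.
        function cumulativeDay(daysSinceFull) {
            return dailyChange * daysSinceFull;        // grows through the week
        }
        // By day 6: cumulative = 300 GB vs incremental = 50 GB. The trade-off is
        // restore time: incrementals need the full plus every increment since it.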

    Read the article

  • File property information (last write time and file size) in Explorer out of date by hours over network

    - by David L Morris
    An application is running on a Windows XP Professional machine, picking up a file from a network share on another Windows machine. It detects that the file has been updated (by date and time, or optionally by file size) and reads it for any new data. Most of the time the last write time and file size seem to be up to date. Occasionally, though, this information stops being updated, even though the file is growing (intermittently during the day) with appended content, so the last write time and file size remain fixed at some arbitrary moment. This is visible in Explorer, which shows a frozen last write time on the reading machine. Just opening the file in Notepad immediately refreshes the file properties, and the application picks up where it left off. The file location can't be changed, nor can the location of the relevant applications. Any solutions to this problem?

    Read the article

  • Are changes to the date and time logged in Windows Server?

    - by user17605
    We've recently gone into British Summer Time in the UK. One of our techs, anticipating the move, decided to change the time on one of our servers. Bad move. This server happens to have a number of time-based incidents logged to it, and as a result of this change, the times are unreliable. I'm trying to build a concrete timeline of when the clock was changed, so I can apply corrective action to our time-based records. My question is: does Windows record date and time changes anywhere, so I can get hard, actual data? Thanks

    Read the article

  • What happens when router has been set to incorrect time?

    - by iamrohitbanga
    I have a D-Link router for my home Wi-Fi network. At least once a day the internet suddenly goes down and I am simply not able to connect to the Wi-Fi network; if I just restart the router, it starts working again. To debug the issue I logged into the admin panel and noticed the time was set to something in 2002. I have set it to the correct time and will wait to see if that fixes the problem. In the meanwhile, I want to know what can go wrong when a router is set to an incorrect time. What kinds of problems should I expect? My Wi-Fi was working just fine most of the time, but sometimes it lost the connection. Could this be linked to the incorrect time setting?

    Read the article

  • Can I rent exclusive time on a powerful server running Linux? [closed]

    - by Mark Borgerding
    My company is involved in a proposal that requires speed estimates of our software on a server with the latest and greatest processors. This is not the first time we've been in this situation. The servers themselves are too expensive to buy a new one every time, so we end up extrapolating from what we have. There are so many variables (processor generation and speed, memory speed, memory channels, cache configurations) that extrapolation is difficult and error-prone. Is there a business that rents time on the newest servers? For at least part of the time we'd need exclusive access to an otherwise quiescent system, either via ssh shell access or unattended batch jobs. I am not looking for general cloud computing services. I don't need much time on the server, but it needs to be exclusive, and the server needs to be pretty cutting edge to give a solid basis for estimates.

    Read the article
