Search Results

Search found 9296 results on 372 pages for 'scheduled task'.


  • Oracle Fusion Procurement Designed for User Productivity

    - by Applications User Experience
    Sean Rice, Manager, Applications User Experience

    Oracle Fusion Procurement Design Goals

    In Oracle Fusion Procurement, we set out to create a streamlined user experience based on the way users do their jobs. Oracle has spent hundreds of hours with customers to get to the heart of what users need to do their jobs. By designing a procurement application around user needs, Oracle has crafted a user experience that puts the tools people need at their fingertips. The user experience is designed to provide information that drives navigation rather than requiring the user to hunt for that information. One of our design goals for Oracle Fusion Procurement was to reduce the number of screens and clicks a user must go through to complete frequently performed tasks. The requisition process in Oracle Fusion Procurement (Figure 1) illustrates how we have streamlined workflows. Oracle Fusion Self-Service Procurement brings together billing metrics, descriptions of the order, justification for the order, a breakdown of the components of the order, and the amount, all in one place. Previous generations of procurement software required the user to navigate to several different pages to gather all of this information. With Oracle Fusion, everything is presented on one page, so users can complete their tasks in less time. The focus is on completing the work, not finding the work.

    Figure 1. Creating a requisition in Oracle Fusion Self-Service Procurement is a consumer-like shopping experience.

    Will Oracle Fusion Procurement Increase Productivity?

    To answer this question, Oracle modeled how two experts working head to head, one in an existing enterprise application and one in Oracle Fusion Procurement, would perform the same task. We compared Oracle Fusion designs to corresponding existing applications using the keystroke-level modeling (KLM) method, which is based on years of research at universities such as Carnegie Mellon and research labs such as Xerox Palo Alto Research Center. KLM breaks a task into a sequence of operators and uses standardized models to evaluate all of the physical and cognitive actions a person must take to complete the task: what a user has to click, how long each click takes (not only the physical click or keystroke, but also the time spent thinking about the page before acting), and the user interface changes that result from the click. By applying standard time estimates to all of the operators in the task, an estimate of the overall task time is calculated, and these task times let researchers predict end-user productivity. For the study, we focused on modeling procurement business process task flows considered business or mission critical: high-frequency tasks and high-value tasks. The designs evaluated covered tasks currently performed by employees, professional buyers, suppliers, and sourcing professionals in advanced procurement applications. For each of these flows, we created detailed task scenarios that provided the context for the task, conducted task walk-throughs in both the Oracle Fusion design and the existing application, analyzed and documented the steps and actions required to complete the task, and applied standard time estimates to the operators in each task to estimate overall task completion times.

    The Results

    The KLM method predicted that the Oracle Fusion Procurement designs would yield productivity gains in every task, ranging from 13 percent to 38 percent, with an overall productivity gain of 22.5 percent. These gains can be attributed to a reduction in the number of clicks and screens needed to complete the tasks. For example, creating a requisition in Oracle Fusion Procurement takes a user through only two screens, while ordering the same item in a previous version requires six screens. Modeling user productivity has led to advances not only in Oracle Fusion applications but also in other products. We leveraged lessons learned from the KLM studies in products like Oracle E-Business Suite (EBS): new user experience features in EBS 12.1.3, such as navigational improvements to the main menu, a Google-style search with auto-suggest, embedded analytics, and an in-context list-of-values tool, help to reduce clicks and improve efficiency. For more information about KLM, refer to the Measuring User Productivity blog.
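
    As a rough illustration of how a KLM estimate is assembled, here is a minimal C# sketch that sums standardized operator times for a hypothetical flow. The operator values are the commonly cited KLM approximations and the operator sequence is invented for the example, not taken from Oracle's study.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Minimal keystroke-level-model (KLM) estimator.
        // Operator times are commonly cited approximations (seconds);
        // the task sequence below is a made-up example.
        class KlmEstimate
        {
            static readonly Dictionary<char, double> OperatorSeconds = new Dictionary<char, double>
            {
                { 'K', 0.20 },  // keystroke or button press
                { 'P', 1.10 },  // point with the mouse to a target
                { 'H', 0.40 },  // home hands between keyboard and mouse
                { 'M', 1.35 },  // mental preparation
            };

            static double Estimate(string operators) =>
                operators.Sum(op => OperatorSeconds[op]);

            static void Main()
            {
                // Hypothetical flow: think, point, click, home to keyboard,
                // type 6 characters, think, point, click.
                string addItemToCart = "MPK" + "H" + new string('K', 6) + "MPK";
                Console.WriteLine($"Estimated task time: {Estimate(addItemToCart):F2} s");
            }
        }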

    Read the article

  • How to explain to a non-technical person why the task will take much longer than they think?

    - by Mag20
    Almost every developer has to answer questions from the business side like: why is it going to take 2 days to add this simple contact form? When a developer estimates this task, they may divide it into steps:

    - make some changes to the database
    - optimize the database changes for speed
    - add the front-end HTML
    - write the server-side code
    - add validation
    - add client-side JavaScript
    - use unit tests
    - make sure the SEO setup is working
    - implement email confirmation
    - refactor and optimize the code for speed
    - ...

    These may be hard to explain to a non-technical person, who basically sees the whole task as just putting together some HTML and creating a table to store the data. To them it could be 2 hours max. So is there a better way to explain why the estimate is high to a non-developer?

    Read the article

  • Is there a term for "Use procedures that execute a single task"?

    - by Tom
    I'm having a discussion with a fellow developer, and I'm trying to capture this in something like a short "term". SoC (Separation of Concerns) is a pretty straightforward design practice, but it goes deeper. If we want to dig into its corners, we can Google it, plenty of articles pop up, and after a glimpse we know a lot more and can find some examples. But what about "Use procedures that execute a single task"? That's also a great design principle to use when writing applications, and it becomes more and more rewarding the larger the application gets. Is there a term for "Use procedures that execute a single task"?
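
    For illustration only (not part of the original question): the idea is often discussed under names like the Single Responsibility Principle or "do one thing". A minimal C# sketch of splitting a multi-purpose procedure into single-task procedures might look like this; all names are invented for the example.

        // Hypothetical example of "procedures that execute a single task".
        class ReportService
        {
            // Before: one procedure that loads, formats, and saves -- three tasks.
            public void LoadFormatAndSaveReport(string path)
            {
                string raw = System.IO.File.ReadAllText(path);
                string formatted = raw.Trim().ToUpperInvariant();
                System.IO.File.WriteAllText(path + ".out", formatted);
            }

            // After: each procedure executes a single task and composes cleanly.
            public string LoadReport(string path) => System.IO.File.ReadAllText(path);

            public string FormatReport(string raw) => raw.Trim().ToUpperInvariant();

            public void SaveReport(string path, string formatted) =>
                System.IO.File.WriteAllText(path, formatted);
        }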

    Read the article

  • When is a domain computer account scheduled to change the password?

    - by Jason Stangroome
    I understand domain-joined computers have machine accounts in AD and these accounts have passwords that expire (apparently every 30 days by default) and those passwords are automatically changed without user intervention. Given that this is known to cause issues when restoring snapshots of domain-joined virtual machines, is it possible to query the domain-joined computer or AD to determine when the machine account password is next scheduled to be changed?
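
    A sketch of one way this could be approached (an assumption on my part, not from the question): the computer object's pwdLastSet attribute records when the machine account password was last changed, so adding the password age interval (30 days by default; the actual value is set on the client by the "Domain member: Maximum machine account password age" policy) gives an approximate next-change time. Minimal C# using System.DirectoryServices, with the computer name as a placeholder:

        using System;
        using System.DirectoryServices; // reference System.DirectoryServices.dll

        class MachinePasswordAge
        {
            static void Main()
            {
                // Assumed sample name; replace with the real computer account (note the trailing $).
                string samAccountName = "MYSERVER$";

                using (var root = new DirectoryEntry("LDAP://RootDSE"))
                using (var domain = new DirectoryEntry("LDAP://" + root.Properties["defaultNamingContext"].Value))
                using (var searcher = new DirectorySearcher(domain))
                {
                    searcher.Filter = "(&(objectClass=computer)(sAMAccountName=" + samAccountName + "))";
                    searcher.PropertiesToLoad.Add("pwdLastSet");

                    SearchResult result = searcher.FindOne();
                    if (result == null)
                    {
                        Console.WriteLine("Computer account not found.");
                        return;
                    }

                    long pwdLastSetFileTime = (long)result.Properties["pwdLastSet"][0];
                    DateTime lastSet = DateTime.FromFileTimeUtc(pwdLastSetFileTime);

                    // 30 days is only the default; treat this as an estimate unless the
                    // client-side policy has been checked.
                    DateTime approxNextChange = lastSet.AddDays(30);

                    Console.WriteLine("Password last set (UTC):   " + lastSet);
                    Console.WriteLine("Approx. next change (UTC): " + approxNextChange);
                }
            }
        }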

    Read the article

  • Upstart: sense of "stop on..." stanza when job is a task

    - by Binarus
    Hi, an Upstart question (I think I have read all relevant man pages but could not find the answer there): What is the sense of using a "stop on ..." stanza in the definition of a job which is a task? The manuals tell us that such a job, after being started, just waits until its script (or exec stanza) is executed completely, and then stops automatically. Given that, what is the point in using "stop on ..." stanzas in such job definitions? For example, this is the job definition for Upstart's (very important) rc job in Natty 11.04 (leaving out comments and empty lines):

        start on runlevel [0123456]
        stop on runlevel [!$RUNLEVEL]
        export RUNLEVEL
        export PREVLEVEL
        console output
        env INIT_VERBOSE
        task
        exec /etc/init.d/rc $RUNLEVEL

    IMHO, the job, after being started by a runlevel event, will be stopped automatically as soon as /etc/init.d/rc $RUNLEVEL has finished. Thank you very much for any explanation!

    Read the article

  • Sense of "stop on..." stanza when job is a task

    - by Binarus
    Hi, an upstart question (I think I have read all relevant man pages but could not find the answer there): What is the sense of using a "stop on ..." stanza in the definition of a job which is a task? The manuals tell us that such a job, after being started, just waits until its script (or exec stanza) is executed completely, and then stops automatically. Given that, what is the point in using "stop on ..." stanzas in such job definitions? For example, this is the job definition for Upstart's (very important) rc job in Natty 11.04 (leaving out comments and empty lines): start on runlevel [0123456] stop on runlevel [!$RUNLEVEL] export RUNLEVEL export PREVLEVEL console output env INIT_VERBOSE task exec /etc/init.d/rc $RUNLEVEL IMHO, the job, after being started by a runlevel event, will be stopped automatically as soon as /etc/init.d/rc $RUNLEVEL has finished. Thank you very much for any explanation!

    Read the article

  • Due Date set via EWS is wrong in reminder popup

    - by Paul McLean
    I'm having some trouble using EWS with tasks and reminders, specifically the due date. When I run my code, shown below, the task is added to my Exchange account and I can see it fine in Outlook. All the data in it looks fine too. However, if I specify a reminder for the task, the due date it shows is very wrong. It's usually 17 hours in the future, but the screenshot I've provided shows it being 19 hours in the future. I'm finding it very strange that if I open the task, the due date looks fine, but the reminder says it is due well into the future. Any ideas? Screenshot: http://s970.photobucket.com/albums/ae187/paulehn/?action=view&current=ewstask.jpg

        ExchangeVersion exchVersion = new ExchangeVersion();
        exchVersion = ExchangeVersion.Exchange2007_SP1;
        ExchangeService service = new ExchangeService(exchVersion);
        service.UseDefaultCredentials = true;
        service.Url = new Uri("https://mail.domain.com.au/ews/exchange.asmx");
        Task task = new Task(service);
        task.Subject = "Subject";
        task.Body = new MessageBody(BodyType.HTML, "Body");
        task.StartDate = DateTime.Today;
        task.DueDate = DateTime.Now.AddHours(2);
        task.ReminderDueBy = DateTime.Now;
        task.ReminderMinutesBeforeStart = 15;
        task.IsReminderSet = true;
        task.Save();

    Read the article

  • Defining multiple values in DefineConstants in MsBuild element?

    - by Sardaukar
    I'm currently integrating my WiX projects into MSBuild. I need to pass multiple values to the WiX project. One value works (ProductVersion in the sample below):

        <Target Name="BuildWixSetups">
          <MSBuild Condition="'%(WixSetups.Identity)'!=''"
                   Projects="%(WixSetups.Identity)"
                   Targets="Rebuild"
                   Properties="Configuration=Release;OutputPath=$(OutDir);DefineConstants=ProductVersion=%(WixSetups.ISVersion)"
                   ContinueOnError="true"/>
        </Target>

    However, how do I pass multiple values to the DefineConstants key? I've tried all the 'logical' separators (space, comma, semicolon, pipe symbol), but none of them work. Has someone else come across this problem? Solutions that don't work: adding a DefineConstants element, because DefineConstants needs to be expressed within the Properties attribute.

    Read the article

  • How to invoke the same MSBuild target twice with different parameters from within an MSBuild project file

    - by mark
    Dear ladies and sirs. I have the following piece of MSBuild code:

        <PropertyGroup>
          <DirA>C:\DirA\</DirA>
          <DirB>C:\DirB\</DirB>
        </PropertyGroup>
        <Target Name="CopyToDirA"
                Condition="Exists('$(DirA)') AND '@(FilesToCopy)' != ''"
                Inputs="@(FilesToCopy)"
                Outputs="@(FilesToCopy -> '$(DirA)%(Filename)%(Extension)')">
          <Copy SourceFiles="@(FilesToCopy)" DestinationFolder="$(DirA)" />
        </Target>
        <Target Name="CopyToDirB"
                Condition="Exists('$(DirB)') AND '@(FilesToCopy)' != ''"
                Inputs="@(FilesToCopy)"
                Outputs="@(FilesToCopy -> '$(DirB)%(Filename)%(Extension)')">
          <Copy SourceFiles="@(FilesToCopy)" DestinationFolder="$(DirB)" />
        </Target>
        <Target Name="CopyFiles" DependsOnTargets="CopyToDirA;CopyToDirB"/>

    So invoking the target CopyFiles copies the relevant files to $(DirA) and $(DirB), provided they are not already there and up to date. But the targets CopyToDirA and CopyToDirB look identical except that one copies to $(DirA) and the other to $(DirB). Is it possible to unify them into one target, first invoked with $(DirA) and then with $(DirB)? Thanks.

    Read the article

  • Handle existing instance of root activity when launching root activity again from intent filter

    - by Robert
    Hi, I'm having difficulties handling multiple instances of my root (main) activity. My app has an intent filter in place to launch the application when opening an email attachment from the "Email" app. My problem is that if I launch my application first through the Android applications screen and then launch it again by opening the email attachment, it creates two instances of my root activity. Steps:

    1. Launch root activity A, press home.
    2. Open the email attachment; the intent filter triggers and launches root activity A again.

    Is it possible, when opening the email attachment, for the OS to detect that an instance of my application is already running and either reuse it or remove/clear that instance?

    Read the article

  • Can I override task :environment in test_helper.rb to test rake tasks?

    - by Michael Barton
    I have a series of rake tasks in a Rakefile which I'd like to test as part of my specs etc. Each task is defined in the form:

        task :do_something => :environment do
          # Do something with the database here
        end

    where the :environment task sets up an ActiveRecord/DataMapper database connection and classes. I'm not using this as part of Rails, but I have a series of tests which I like to run as part of BDD. This snippet illustrates how I'm trying to test the rake tasks:

        def setup
          @rake = Rake::Application.new
          Rake.application = @rake
          load File.dirname(__FILE__) + '/../../tasks/do_something.rake'
        end

        should "import data" do
          @rake["do_something"].invoke
          assert something_in_the_database
        end

    So my request for help: is it possible to override the :environment task in my test_helper.rb file so that my rake testing interacts with my test database rather than production? I've tried redefining the task in the helper file, but this doesn't work. Any help would be great, as I've been stuck on this for the past week.

    Read the article

  • Can XmlMassUpdate be used to delete an attribute?

    - by tlianza
    For example, I have this line:

        <forms loginUrl="/redirecttosignin.aspx" name="NAME_HERE" requireSSL="false" timeout="60" domain=".blah.com" />

    and I want to delete the "name" attribute altogether. I know I can do this to blank it:

        <forms xmu:key="loginUrl" loginUrl="/redirecttosignin.aspx" name="" />

    But I literally want to get the name attribute out and leave the other attributes intact. I couldn't find any examples of that. Thanks!

    Read the article

  • Programmatically find TFS changes since last good build

    - by abigblackman
    I have several branches in TFS (dev, test, stage), and when I merge changes into the test branch I want the automated build and deploy script to find all the updated SQL files and deploy them to the test database. I thought I could do this by finding all the changesets associated with the build since the last good build, finding all the SQL files in those changesets, and deploying them. However, the changesets don't seem to be associated with the build for some reason, so my question is twofold:

    1) How do I ensure that a changeset is associated with a particular build?
    2) How can I get a list of files that have changed in the branch since the last good build?

    I have the last successful build, but I'm unsure how to get the files without checking the changesets (which, as mentioned above, are not associated with the build!).

    Read the article

  • MSBuild Build Sequence

    - by pm_2
    I got this from here:

        <ItemGroup>
          <SolutionToBuild Include="$(SolutionRoot)\path\MySolution.sln" />
          <SolutionToBuild Include="$(SolutionRoot)\Scribble\scribble.sln" />
          <SolutionToBuild Include="$(SolutionRoot)\HelloWorld\HelloWorld.sln" />
          <SolutionToBuild Include="$(SolutionRoot)\TestProject1\TestProject1.sln" />
        </ItemGroup>

    It says that the build sequence is determined by the order above, so, for example, MySolution would be built before scribble. However, is this safe if scribble is dependent on MySolution? If MySolution and scribble are changed simultaneously, will the build wait for MySolution to be completely compiled before moving to the next project?

    Read the article

  • MSBuild Starter Kits...

    - by vdh_ant
    Hi guys, just wondering if anyone knows of any MSBuild starter kits out there. What I mean by starter kits is that, from the looks of it, most builds do kinda the same sort of steps with minor changes here and there (i.e. most builds would run tests, coverage, zip up the results, produce a report, deploy, etc.). Also, what most people in general want from a CI build, test build, or release build is mostly the same with minor changes here and there. Now don't get me wrong, I think most scripts end up fairly different, but I can't help thinking that most start out life fairly similar. Hence, does anyone know of any "starter kits" that have a dev/CI/test/release build with the common tasks that most people would want, which you can just start changing and modifying? Cheers, Anthony

    Read the article

  • bat file using winrar taking too long to run

    - by Jessie
    Hi guys, I have this script, which takes all my folders and files from my c:\projects location, puts them in a WinRAR archive, and transfers them to c:\backup\project:

        for /f "delims==" %%D in ('DIR C:\projects /A /B /S') do (
          "C:\Program Files\WinRAR\WinRAR.EXE" m -r "c:\backup\projects.rar" "%%D"
        )

    I have also tried the script below, which uses the same source c:\projects but puts each folder in its own separate WinRAR archive, as in the source, then transfers the archives into my c:\backup:

        FOR /F "DELIMS==" %%D in ('DIR C:\projects /AD /B') DO (
          "C:\Program Files\WinRAR\WinRAR.EXE" m -r "C:\Backup\%%D.rar" "%%D"
        )

    My question is: the second script only takes two hours to run, while the first script takes over 24 hours. Is there any way to make the first script faster? If anything, shouldn't the first script be faster?

    Read the article

  • base for setQueueXmlPath

    - by antony.trupe
    I can't figure out how to point unit tests at the queue config file.

    Unit test snippet:

        // TaskQueue setup
        LocalTaskQueueTestConfig tqConfig = new LocalTaskQueueTestConfig();
        tqConfig.setQueueXmlPath("/war/WEB_INF/queue.xml");

    Stack trace:

        java.lang.IllegalStateException: The specified queue is unknown : zip-fetch
            at com.google.appengine.api.labs.taskqueue.QueueApiHelper.translateError(QueueApiHelper.java:56)
            at com.google.appengine.api.labs.taskqueue.QueueApiHelper.translateError(QueueApiHelper.java:111)
            at com.google.appengine.api.labs.taskqueue.QueueApiHelper.makeSyncCall(QueueApiHelper.java:32)
            at com.google.appengine.api.labs.taskqueue.QueueImpl.add(QueueImpl.java:310)
            at com.google.appengine.api.labs.taskqueue.QueueImpl.add(QueueImpl.java:282)
            at com.google.appengine.api.labs.taskqueue.QueueImpl.add(QueueImpl.java:267)
            at ...

    Read the article

  • Windows 7 DWM weirdness

    - by Max
    I'm looking to write a FOSS "Alt+Tab" replacement (window switcher) for Windows, since there are a few features I feel it's (still) lacking; but I'm noticing two quirks I can't seem to fix: #1. (Somewhat Unrelated) In the default Windows 7 window switcher, one computer allows left clicking on a thumbnail to focus the window; however on another, similarly specced computer, I have to use a right click. The only difference between these two fresh installs is the theme. Any ideas? #2. (Directly Related) In both the default Windows 7 window switcher and the DWM API output, minimized windows often have no thumbnail, and instead show only the taskbar. This has been a long running problem with the Windows API, and in the past I've seen the popular recommendation being "restore (un-minimize) the window, take a screenshot, then re-minimize" - but this is sloppy and causes flickering, etc. Has anyone done this successfully using the newer DWM API? If sharing code, I'd prefer C# syntax, but VB.NET will do as well. Thanks!
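
    Not from the question, but for context on the DWM thumbnail API being discussed: a minimal C# P/Invoke sketch that registers a live thumbnail of another window inside your own. Window handles and the destination size are placeholders, error handling is omitted, and this only shows basic registration, not a fix for the minimized-window problem.

        using System;
        using System.Runtime.InteropServices;

        static class DwmThumbnailDemo
        {
            [StructLayout(LayoutKind.Sequential)]
            struct RECT { public int Left, Top, Right, Bottom; }

            [StructLayout(LayoutKind.Sequential)]
            struct DWM_THUMBNAIL_PROPERTIES
            {
                public uint dwFlags;
                public RECT rcDestination;
                public RECT rcSource;
                public byte opacity;
                public bool fVisible;
                public bool fSourceClientAreaOnly;
            }

            const uint DWM_TNP_RECTDESTINATION = 0x1;
            const uint DWM_TNP_VISIBLE = 0x8;

            [DllImport("dwmapi.dll")]
            static extern int DwmRegisterThumbnail(IntPtr dest, IntPtr src, out IntPtr thumb);

            [DllImport("dwmapi.dll")]
            static extern int DwmUpdateThumbnailProperties(IntPtr thumb, ref DWM_THUMBNAIL_PROPERTIES props);

            [DllImport("dwmapi.dll")]
            static extern int DwmUnregisterThumbnail(IntPtr thumb);

            // destWindow: a window you own; srcWindow: the window to preview.
            public static IntPtr ShowThumbnail(IntPtr destWindow, IntPtr srcWindow)
            {
                IntPtr thumb;
                DwmRegisterThumbnail(destWindow, srcWindow, out thumb);

                var props = new DWM_THUMBNAIL_PROPERTIES
                {
                    dwFlags = DWM_TNP_RECTDESTINATION | DWM_TNP_VISIBLE,
                    rcDestination = new RECT { Left = 0, Top = 0, Right = 200, Bottom = 150 },
                    fVisible = true
                };
                DwmUpdateThumbnailProperties(thumb, ref props);
                return thumb; // call DwmUnregisterThumbnail(thumb) when done
            }
        }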

    Read the article

  • Does Parallel.ForEach require AsParallel()

    - by dkackman
    ParallelEnumerable has a static member AsParallel. If I have an IEnumerable<T> and want to use Parallel.ForEach, does that imply that I should always be using AsParallel? E.g., are both of these correct (everything else being equal)?

    Without AsParallel:

        List<string> list = new List<string>();
        Parallel.ForEach<string>(GetFileList().Where(file => reader.Match(file)), f => list.Add(f));

    Or with AsParallel?

        List<string> list = new List<string>();
        Parallel.ForEach<string>(GetFileList().Where(file => reader.Match(file)).AsParallel(), f => list.Add(f));
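
    As an aside (not part of the original question): as far as I understand it, Parallel.ForEach does its own partitioning of whatever IEnumerable<T> it is given, so AsParallel() is not what makes the loop parallel; AsParallel() parallelizes PLINQ query operators instead. A small sketch of the two styles, using a thread-safe ConcurrentBag because List<T>.Add is not safe to call from multiple threads; the sample data stands in for GetFileList().

        using System;
        using System.Collections.Concurrent;
        using System.Linq;
        using System.Threading.Tasks;

        class ParallelSketch
        {
            static void Main()
            {
                // Invented sample data standing in for GetFileList().
                string[] files = { "a.txt", "b.log", "c.txt", "d.log" };

                // Parallel.ForEach partitions the IEnumerable source itself;
                // a thread-safe collection is used because many threads add concurrently.
                var matches = new ConcurrentBag<string>();
                Parallel.ForEach(files.Where(f => f.EndsWith(".txt")), f => matches.Add(f));

                // PLINQ alternative: AsParallel parallelizes the query operators themselves.
                var plinqMatches = files.AsParallel().Where(f => f.EndsWith(".txt")).ToList();

                Console.WriteLine(string.Join(", ", matches));
                Console.WriteLine(string.Join(", ", plinqMatches));
            }
        }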

    Read the article

  • "Work stealing" vs. "Work shrugging"?

    - by John
    Why is it that I can find lots of information on "work stealing" and nothing on "work shrugging" as a dynamic load-balancing strategy? By "work-shrugging" I mean busy processors pushing excessive work towards less loaded neighbours rather than idle processors pulling work from busy neighbours ("work-stealing"). I think the general scalability should be the same for both strategies. However I believe that it is much more efficient for busy processors to wake idle processors if and when there is definitely work for them to do than having idle processors spinning or waking periodically to speculatively poll all neighbours for possible work. Anyway a quick google didn't show up anything under the heading of "Work Shrugging" or similar so any pointers to prior-art and the jargon for this strategy would be welcome. Clarification/Confession In more detail:- By "Work Shrugging" I actually envisage the work submitting processor (which may or may not be the target processor) being responsible for looking around the immediate locality of the preferred target processor (based on data/code locality) to decide if a near neighbour should be given the new work instead because they don't have as much work to do. I am talking about an atomic read of the immediate (typically 2 to 4) neighbours' estimated q length here. I do not think this is any more coupling than implied by the thieves polling & stealing from their neighbours - just much less often - or rather - only when it makes economic sense to do so. (I am assuming "lock-free, almost wait-free" queue structures in both strategies). Thanks.
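
    Not from the question, just a toy illustration of the push idea being described: the submitting worker peeks at its immediate neighbours' approximate queue lengths and hands new work to the least loaded of them instead of keeping it. All names and the neighbourhood size are invented; a real scheduler would need lock-free queues and careful wake-up logic as the question notes.

        using System;
        using System.Collections.Concurrent;
        using System.Linq;

        // Toy "work shrugging": push new work to the least-loaded nearby queue
        // instead of waiting for idle workers to steal it.
        class ShruggingScheduler
        {
            private readonly ConcurrentQueue<Action>[] queues;

            public ShruggingScheduler(int workers)
            {
                queues = Enumerable.Range(0, workers)
                                   .Select(_ => new ConcurrentQueue<Action>())
                                   .ToArray();
            }

            // Submit from worker 'self': look only at the immediate neighbours
            // (cheap, approximate Count reads) and shrug the work onto whichever
            // of the three queues has the least to do.
            public void Submit(int self, Action work)
            {
                int left = (self - 1 + queues.Length) % queues.Length;
                int right = (self + 1) % queues.Length;

                int target = new[] { self, left, right }
                    .OrderBy(i => queues[i].Count)   // approximate load estimate
                    .First();

                queues[target].Enqueue(work);
                // A real implementation would also wake the target worker here,
                // if and only if it was idle.
            }
        }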

    Read the article

  • VS2008 Pre-Build Event Command BuildAction=None

    - by Frederick
    Hi guys, I am trying to add a pre-build event command line which essentially sets Build Action = None for a list of files before the solution is packaged up for release. How would I go about adding this, and what command would I use to exclude a number of files in the web solution? For example:

        \script\some-script.js   [Set Build Action = None]
        \script\some-script2.js  [Set Build Action = None]

    etc.?

    Read the article

  • Does CodeIgniter have to load view in the final step?

    - by Peter
    I have a function:

        function do_something()
        {
            // process
            $this->load->view('some_view', $data);
            exec('mv /path/to/folder1/*.mp3 /path/to/folder2/');
        }

    My intention is to move the files after outputting the view, but apparently it is done before rendering the view. My question is: does $this->load->view(); have to be the final step in a function? I did a little research, and it seems like my question is similar to this topic. Correct?

    Read the article

  • Looking for "tech call" tracking software.

    - by jacook11
    The company I work for is looking for the best way to track "tech calls". We would most likely develop in house using vb.net, but possibly could look at using some open source vb.net software already out there. We will probably want to track just the basic info like client, datetime, length of call & a notes section about the call. One idea that has floated around is recording everyone's calls, watching a directory for new files and popping up a form so the user can enter the info when the call is over. We really don't want to spend a lot of time tracking/logging these calls, something quick & simple. Anybody have a good idea or solution that they have used before?

    Read the article
