Search Results

Search found 5018 results on 201 pages for 'sharepoint workflow'.

Page 134/201 | < Previous Page | 130 131 132 133 134 135 136 137 138 139 140 141  | Next Page >

  • Did you download the Office RTM before May 1st?

    - by simonsabin
    If so, then you may find the product key you have doesn’t enable all the functionality. Looking at the list of what isn’t enabled, it’s the collaboration stuff like workflow and publishing to libraries. You can read more in the KB article http://support.microsoft.com/default.aspx/kb/983473?p=1...(read more)

    Read the article

  • New RUP Patch for iSupplier Portal, Sourcing and Supplier Lifecycle Management (SLM)

    - by LuciaC
    Just released - the 12.1.3 Rollup (RUP) Patch 17525552:R12.PRC_PF.B for iSupplier Portal, Sourcing and Supplier Lifecycle Management (SLM). Who should apply this patch? Anyone who is on Release 12.1.3 and is using iSupplier Portal, Sourcing or Supplier Lifecycle Management (SLM) functionality. The following areas have had major fixes:

    - Prospective Supplier Guided Navigation: Train-style navigation has been introduced for prospective supplier registration, so that prospective suppliers can see all the steps needed to register successfully.
    - Supplier Registration Workflow Enhancement: This release provides Approval Management Engine (AME) action-required notifications for supplier approval, so that all workflow-related features can be enabled: vacation rules can be set, approvals can be forwarded, and more information can be requested through the notification itself. AME parallel approval support for supplier registration approvals has also been added.
    - Reinstate Supplier Request: Buyers can reopen/reinstate a rejected supplier; the supplier can access their previously rejected registration again, make changes, and resubmit the request.
    - Contact Address Association: Prospective suppliers can associate addresses with contacts (including the primary contact) during the registration process.
    - Primary Contact Enhancement: A prospective supplier can be registered without creating a user account for the primary contact.
    - Mandatory Attributes: On the negotiation requirement creation page, the lookup meaning of 'Internal' has been changed to 'Internal Optional', and a new lookup value with the meaning 'Internal Required' has been added. The values now available in the 'Type' dropdown are Display Only, Internal Optional, Internal Required, Supplier Optional and Supplier Required. During supplier evaluations, an internal user response can now be made mandatory by using the Internal Required type during requirement creation.
    - Notifications to Supplier: When the supplier saves and submits their registration request, a notification with a link to a registration status page is sent for further access. When the buyer approves, rejects or returns the request, the supplier is notified of the current status by email.

    There are also 10 major enhancements included in this RUP. For information about this RUP - including the fixes and enhancements, how to access and apply the patch, how to perform an impact analysis on your system, and testing recommendations - see Doc ID 1591198.1. Don’t delay; apply the patch today!

    Read the article

  • Using WF4 WorkflowInvoker

    This article describes the design, implementation and usage of a custom service operation invoker for invoking a XAML workflow. It is based on the upcoming Microsoft .NET 4 technology.

    Read the article

  • Possible applications of algorithm devised for differentiating between structured vs random text

    - by rooznom
    I have written a program that can rapidly (within 5 seconds on a desktop with 2 GB RAM and a 2.33 GHz CPU) differentiate between structured text (e.g. English text) and random alphanumeric strings. It can also provide a probability score for the prediction. Are there any practical applications/uses for such a program? Note that the program is based on entropy models and does not have any dictionary comparisons in its workflow. Thanks in advance for your responses.
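    The question doesn't include the code, but a minimal sketch of the general idea - character-level entropy plus a bigram model trained on a reference text, no dictionary - might look like the following Python. The training corpus, floor probability and score comparison are illustrative assumptions, not the poster's actual model.

      import math
      from collections import Counter

      def shannon_entropy(text):
          # Bits per character of the text's character distribution.
          counts = Counter(text.lower())
          n = len(text)
          return -sum((c / n) * math.log2(c / n) for c in counts.values())

      def train_bigrams(corpus):
          # Relative bigram frequencies from a sample of structured text.
          corpus = corpus.lower()
          counts = Counter(corpus[i:i + 2] for i in range(len(corpus) - 1))
          total = sum(counts.values())
          return {pair: c / total for pair, c in counts.items()}

      def bigram_score(text, model, floor=1e-6):
          # Average log-probability of the text's bigrams under the reference
          # model; unseen bigrams get a small floor probability.
          text = text.lower()
          pairs = [text[i:i + 2] for i in range(len(text) - 1)]
          if not pairs:
              return float("-inf")
          return sum(math.log2(model.get(p, floor)) for p in pairs) / len(pairs)

      # Structured text scores noticeably higher (less negative) under the
      # bigram model than a random alphanumeric string of similar length,
      # while its character entropy is typically lower.
      model = train_bigrams("the quick brown fox jumps over the lazy dog " * 50)
      print(bigram_score("this is ordinary english text", model))
      print(bigram_score("x9qzv1kplm2rr7t4", model))
      print(shannon_entropy("this is ordinary english text"),
            shannon_entropy("x9qzv1kplm2rr7t4"))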

    Read the article

  • Help with deleted components registry keys (2 replies)

    Hello, I made a big mistake: I deleted the paths of these files in the Windows XP registry: System.Workflow.Activities.dll, PresentationFramework.Luna.dll, RedistList\FrameworkList.xml. The keys that should contain the paths are: [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Installer\UserData\S-1-5-18\Components\300DC0511590697408C9B53F71E7AB4A] "0DC1503A46F231838AD88BCDDC8E8F7C" "" [H...
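    If the original value data can be recovered (from a registry backup, System Restore, or an identical XP machine), recreating a deleted value is mechanical; here is an illustrative Python sketch using the standard library winreg module. VALUE_DATA is a hypothetical placeholder - the real data must come from a known-good source, not a guess.

      # Recreates one Installer component value under HKLM. The key and value
      # names below come from the question; VALUE_DATA is hypothetical - take
      # the real data from a registry backup or another machine, don't guess.
      import winreg

      KEY_PATH = (r"SOFTWARE\Microsoft\Windows\CurrentVersion\Installer"
                  r"\UserData\S-1-5-18\Components"
                  r"\300DC0511590697408C9B53F71E7AB4A")
      VALUE_NAME = "0DC1503A46F231838AD88BCDDC8E8F7C"
      VALUE_DATA = r"C:\placeholder\path\from\backup"  # hypothetical

      with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                              winreg.KEY_SET_VALUE) as key:
          winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_SZ, VALUE_DATA)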

    Read the article

  • SQL Performance Analyzer

    Any activity that may impact a statement's execution plan is a candidate for using SPA to investigate the possible consequences - both good and bad. Steve Callan discusses the workflow and provides a working example.
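    To make the workflow concrete, here is a rough sketch driving SPA's DBMS_SQLPA package from Python with cx_Oracle. The connection string and the tuning set name STS_DEMO are placeholders, and the package calls are recalled from the Oracle 11g documentation rather than taken from the article, so verify them against the docs before relying on this.

      import cx_Oracle  # assumes the cx_Oracle driver and an 11g+ database

      conn = cx_Oracle.connect("dba_user/password@dbhost/orcl")  # placeholder
      cur = conn.cursor()

      def execute_task(execution_type, execution_name=None):
          # Runs one SPA execution step against the analysis task.
          cur.execute("""
              BEGIN
                  DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(
                      task_name      => 'spa_demo_task',
                      execution_type => :etype,
                      execution_name => :ename);
              END;""", etype=execution_type, ename=execution_name)

      # 1. Create an SPA task over an existing SQL tuning set (STS_DEMO is
      #    a placeholder for a tuning set you have already captured).
      cur.execute("""
          DECLARE
              t VARCHAR2(64);
          BEGIN
              t := DBMS_SQLPA.CREATE_ANALYSIS_TASK(
                       sqlset_name => 'STS_DEMO',
                       task_name   => 'spa_demo_task');
          END;""")

      # 2. Capture baseline performance.
      execute_task('TEST EXECUTE', 'before_change')

      # ... make the change under test here: new index, optimizer setting, etc.

      # 3. Capture performance after the change, then compare the two runs.
      execute_task('TEST EXECUTE', 'after_change')
      execute_task('COMPARE PERFORMANCE')

      # 4. Fetch the comparison report.
      report = cur.callfunc("DBMS_SQLPA.REPORT_ANALYSIS_TASK", cx_Oracle.CLOB,
                            keywordParameters={"task_name": "spa_demo_task"})
      print(report.read())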

    Read the article

  • What's New in the latest release of Oracle User Productivity Kit 11.0

    Enterprises are always looking to reduce overall project timelines, optimize business processes, and increase acceptance of their enterprise applications to ensure maximum ROI. The latest release of Oracle User Productivity Kit helps customers streamline the workflow process for the creation of content and offers conceptual-based assessment options to increase user adoption. Discover what is great and innovative about the latest release of Oracle UPK and UPK Professional. Learn about the integration of the UPK Developer and the Knowledge Center, which provides developers with a centralized, web-based platform for content deployment, tracking, and reporting.

    Read the article

  • git commit -m “CodePlex now supports Git!”

    Finally, yes, CodePlex now supports Git! Git has been one of the top-rated requests from the CodePlex community for some time. Admittedly, when we launched CodePlex, we never expected that at some point we would be running a source control system originally invented by Linus Torvalds for the Linux kernel. Though I would also say that nobody would have thought the open source ecosystem would become as important to Microsoft as it is now. Giving CodePlex users what they ask for and supporting their open source efforts has always been important to us, and we have a long list of improvements planned, so stay tuned as we have more up our sleeves!

    Why Git?

    So why Git? CodePlex already has Mercurial for distributed version control and TFS (which also supports Subversion clients) for centralized version control. The short answer is that the CodePlex community voted, loud and clear, that Git support was critical. Additionally, we just like it: we use Git on our team every day, and making DVCS workflows more available to the CodePlex community is just the right thing to do.

    Forks and Pull Requests

    One of the capabilities that distributed version control systems such as Mercurial and Git enable is the fork and pull request workflow. Just like with Mercurial, projects configured to use Git enable forking the source and submitting contributions back via pull requests. The fork/pull request workflow is a key accelerator for many open source projects, and you will see improvements in our support coming later this year.

    More Choice

    With the addition of Git, CodePlex now has three options when it comes to open source project hosting: projects can select between TFS, Mercurial, and Git. Each developer has their own preferences; for some, centralized version control makes more sense, and for others, DVCS is the only way to go. We’re equally committed to supporting both of these technologies for our users. You can get started today by creating a new project, or contribute to an existing project by creating a fork. For help on getting started with Git on CodePlex, see our help documentation here. If you would like to switch your project to use Git, please contact us at CodePlex Support with your project information, and we will be happy to help you out.

    We're Listening

    CodePlex is your community, and we want to deliver the experiences you need to have a successful open source project. We want your ideas and feedback to make CodePlex a great development community. The issue tracker on CodePlex is publicly available: add suggestions or vote up existing ones. And you can always find us on Twitter (I’m @mgroves84); follow us to keep up to date with our latest releases: @codeplex

    Read the article

  • How to properly document functionality in an agile project?

    - by RoboShop
    So recently we've just finished the first phase of our project. We used agile with fortnightly sprints, and while the application turned out well, we're now turning our eyes to some of the maintenance tasks. One issue is that all of our documentation exists in the form of specs. These specs describe one or more stories and are generally a body of work that a few devs could knock over in a week. For development, that works really well: every two weeks the devs get handed a spec, and it's a nice discrete chunk of work that they can just do.

    From a documentation point of view, this has become a mess. The problem with writing specs focused on delivering just-in-time requirements to developers is that we haven't placed much emphasis on the big picture. Specs come from all different angles - one could be describing a standard function, another could be describing parts of a workflow, another could be describing a particular screen... And now we have business rules about our application scattered across 120 documents. Looking for the document covering a particular business rule or function is hard because you don't know which document has that information, and making a change request is equally hard because, once again, we are unsure which spec to change.

    We have maybe a couple of weeks of lull before it's back to speccing out functionality for the next phase, and in this time I'd like to revisit our processes. The way we have worked so far, delivering fortnightly specs, works well, but we also need a way to manage our documentation so that the business rules for a given function or workflow are easy to locate and change. I have two ideas.

    One is to compile all of our specs into a series of master specs, broken up by a few broad functional areas: the specs describe the sprint, the master specs describe the system. The problems I can see are: 1) our existing 120 specs are not all neatly divided into broad functional areas - some will require breaking up, merging, etc., which will take a lot of time; and 2) we'll be writing specs and updating master specs in each new sprint, which seems like double the work - and then do the devs look at the spec or the master spec?

    My other suggestion is to concede that our documentation is too big a mess and to manage that mess going forward: we go through each spec, assign keywords to it, and then when we want to find a function, we search by keyword. The problem I can see: the business rules are still scattered everywhere; keywords just make them easier to find.

    Anyway, if anyone has any decent ideas or experience to share about how best to manage documentation, I would really appreciate it.

    Read the article

  • APress Deal of the Day 20/Dec/2010 - Beginning SQL Server Modeling: Model-Driven Application Development in SQL Server 2008

    - by TATWORTH
    Today's $10 bargain PDF from Apress is Beginning SQL Server Modeling: Model-Driven Application Development in SQL Server 2008. Get ready for model-driven application development with SQL Server Modeling! This book covers Microsoft's SQL Server Modeling (formerly known under the code name "Oslo") in detail and contains the information you need to be successful with designing and implementing workflow modeling. $49.99 | Published Jul 2010 |

    Read the article

  • Structural and Sampling (JavaScript) Profiling in Google Chrome

    Slow JavaScript code on your pages? Chrome provides both a sampling and a structural profiler to help you track down, isolate, and fix the underlying problem. Tune in to learn how to use both profilers and how to improve your own workflow to build better, faster browser applications! We'll talk about chrome://tracing, the built-in JS profiler in DevTools, and much more. From: GoogleDevelopers | Time: 01:00:00

    Read the article

  • Free APress e-book on GIT!

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2013/07/24/free-apress-e-book-on-git.aspx

    A free e-book in PDF, mobi and ePub formats is available at http://git-scm.com/book: "Programmers or project leaders will learn to use Git, the version control system developed by Linus Torvalds for Linux kernel development. You'll discover the world of distributed version control and learn how to build a Git development workflow, with expert guidance from Scott Chacon."

    Read the article

  • Has anyone tried the Lenovo USB 3.0 Dock on Ubuntu?

    - by user88360
    I'm thinking of buying the Lenovo USB 3.0 Dock to use with Ubuntu and Unity, but I haven't found information on whether Ubuntu already has built-in drivers for it. The link to the product is this one: http://shop.lenovo.com/SEUILibrary/controller/e/web/LenovoPortal/en_US/catalog.workflow:item.detail?GroupID=460&Code=0A33970#overview So I'd just like to know whether it's a good idea to get it, or whether I'd better not because I might have a difficult time trying to set it up. Thanks.

    Read the article

  • Your own OEM configuration: YaST Firstboot

    openSUSE Lizards: "The YaST firstboot utility is a special kind of configuration workflow that can be run after the basic system is installed. It is started on the first boot of the system and guides a user through a series of steps that allow for easier configuration of their desktops."

    Read the article

  • Join the Cloud - Just Like Lending Club

    - by Di Seghposs
    See why Lending Club, the leading platform for investing in and obtaining personal loans, selected Oracle Fusion Financials to help improve decision-making and workflow, implement robust reporting, and take advantage of the scalability and cost savings provided by the cloud. Watch the Lending Club video. Additional resources:

    - Oracle ERP Cloud Service Video
    - Oracle ERP Cloud Service Executive Strategy Brief
    - Oracle Fusion Financials
    - Quick Tour of Oracle Fusion Financials

    Read the article

  • WorkflowProbe

    WorkflowProbe is a custom WF activity for capturing business behavior within a type- or XOML-activated workflow and publishing it to a WCF channel.

    Read the article

  • SQL Server 2008 R2 Quiet Installation Failure

    - by pk
    I've downloaded the SQL Server 2008 R2 software from Microsoft and am working on scripting a silent installation. I'm getting the following errors (and the duplicated paste job is not an accident; that's how it shows up for me):

      The following error occurred: Exception has been thrown by the target of an invocation. Error result: 1152035024 Result facility code: 1194 Result error code: 43216 Please review the summary.txt log for further details
      The following error occurred: Exception has been thrown by the target of an invocation. Error result: 1152035024 Result facility code: 1194 Result error code: 43216 Please review the summary.txt log for further details
      Microsoft (R) SQL Server 2008 R2 Setup 10.50.1600.01

    This is what shows up in the detailed SQL install log:

      2011-02-23 09:53:13 Slp: Running Action: ExecuteInitWorkflow
      2011-02-23 09:53:13 Slp: Workflow to execute: 'INITIALIZATION'
      2011-02-23 09:53:13 Slp: Error: Action "Microsoft.SqlServer.Configuration.BootstrapExtension.ExecuteWorkflowAction" threw an exception during execution.
      2011-02-23 09:53:13 Slp: Microsoft.SqlServer.Setup.Chainer.Workflow.ActionExecutionException: Exception has been thrown by the target of an invocation. ---> System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.ArgumentNullException: Value cannot be null.
      2011-02-23 09:53:13 Slp: Parameter name: InstallMediaPath

    Hopefully someone can help me work through this. Here is a simple version of my PowerShell code:

      $arguments = @()
      $arguments += "/q"
      $arguments += "/ACTION=Install"
      $arguments += "/FEATURES=SQL,Tools"
      $arguments += "/INSTANCENAME=MSSQLSERVER"
      $arguments += "/SQLSVCACCOUNT=`"$NetBIOSDomainName\$SQLServerServiceAccount`""
      $arguments += "/SQLSVCPASSWORD=`"$SQLServerServiceAccountPassword`""
      $arguments += "/SQLSYSADMINACCOUNTS=`"$NetBIOSDomainName\$SQLSysAdminAccount`""
      $arguments += "/AGTSVCACCOUNT=`"$NetBIOSDomainName\$SQLServerAgentAccount`""
      $arguments += "/IACCEPTSQLSERVERLICENSETERMS"
      Start-Process "$SQLServerSetupLocation\setup.exe" -Wait -ArgumentList $arguments -RedirectStandardOutput error.txt

    Read the article

  • Configuration management in support of scientific computing

    - by Sharpie
    For the past few years I have been involved with developing and maintaining a system for forecasting near-shore waves. Our team has just received a significant grant for further development, and as a result we are taking the opportunity to refactor many components of the old system. We will also be receiving a new server to run the model, so I am taking this opportunity to reconsider how we set up the system. Basically, the steps that need to happen are:

    1. Standard packages and libraries, such as compilers and databases, need to be downloaded and installed.
    2. Custom scientific models need to be downloaded and compiled from source, as they are not commonly provided as packages.
    3. New users need to be created to manage the databases and run the models.
    4. A suite of scripts that manage model-database interaction needs to be checked out from source control and installed.
    5. Crontabs need to be set up to run the scripts at regular intervals in order to generate forecasts.

    I have been pondering applying tools such as Puppet, Capistrano or Fabric to automate the above steps. It seems perfectly possible to implement most of the above functionality, except there are a couple of use cases I am wondering about. First, during my preliminary research I have found few examples and little discussion of how to use these systems to abstract and automate the process of building custom components from source. Second, we may have to deploy on machines that are isolated from the Internet - i.e. all configuration and setup files will have to come in on a USB key that can be inserted into a terminal that can connect to the server that will run the models. I see this as an opportunity to learn a new tool that will help me automate my workflow, but I am unsure which tool I should start with. If any member of the community could suggest a tool that would support the above workflow and the issues specific to scientific computing, I would be very grateful. Our production server will be running Linux, but support for OS X would be a bonus, as it would allow the development team to set up test installations outside of VirtualBox.
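    As a side note on tooling fit: of the tools mentioned, Fabric maps most directly onto a step list like this, since each step is essentially a shell command run over SSH. Below is a minimal sketch of a fabfile (Fabric 1.x, current at the time of writing); every package name, user, path and URL in it is an illustrative placeholder, not a detail from the question.

      # fabfile.py - run with: fab -H modelserver deploy
      # Fabric 1.x sketch; all package names, users, paths and URLs are
      # illustrative placeholders.
      from fabric.api import cd, run, sudo

      def install_packages():
          # Step 1: standard packages and libraries from the system package manager.
          sudo("apt-get update && apt-get -y install build-essential gfortran postgresql")

      def build_models():
          # Step 2: custom scientific models compiled from source.
          run("wget http://example.org/wavemodel-1.2.tar.gz && tar xzf wavemodel-1.2.tar.gz")
          with cd("wavemodel-1.2"):
              run("./configure --prefix=$HOME/opt && make && make install")

      def create_users():
          # Step 3: a service account to own the databases and model runs.
          sudo("useradd -m -s /bin/bash forecast")

      def deploy_scripts():
          # Step 4: check the glue scripts out of source control.
          run("svn checkout http://example.org/svn/forecast-scripts $HOME/forecast-scripts")

      def install_crontab():
          # Step 5: schedule the forecast runs every six hours.
          run('echo "0 */6 * * * $HOME/forecast-scripts/run_forecast.sh" | crontab -')

      def deploy():
          install_packages()
          build_models()
          create_users()
          deploy_scripts()
          install_crontab()

    For the air-gapped machines, the same tasks can be pointed at files copied from the USB key (pushed to the server with Fabric's put()) instead of downloading from the Internet; the build-from-source step is just another run() call either way.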

    Read the article

  • emacs, writing custom commands which use term-mode

    - by valya
    Hello, I'm using Emacs and M-x term for a terminal. My typical workflow looks like this:

    1. Edit some code.
    2. C-x C-o to the terminal buffer (or C-x b term[Enter] or something).
    3. Press the Up key to recall the last command.
    4. Press Enter to run it.
    5. C-x C-o to go back.

    I want to bind all of these (except the first step... maybe) to one command; I believe Emacs is awesome enough to do that :-) So the command must: go to the buffer with the terminal (maybe it shouldn't change any windows at all; maybe it should split the window vertically, if it isn't split already, and use the right side); run the last command that was run there; and go back to the last buffer/part of the screen. Thank you! I'm not really used to the Emacs scripting system, and I hope someone will help me and someone else will be able to use the answer to improve their workflow, since I believe this is a pretty common one.

    Examples of commands:

      python manage.py test
      python manage.py test stats
      python solve.py  # for project-euler puzzles :-)

    The first and the second sometimes run over ssh (in a terminal) - I like developing with Vagrant. I understand that it's easy to bind the first and the third ones, but the second changes too often; I'd just like to "run the last command".

    Read the article

  • Export files to remote server using TortoiseSVN

    - by Matt
    Hi, I'm using TortoiseSVN to keep revisions of my code. When I commit changes, I take note of which files have changed and upload them to my server using FTP. Here's my workflow:

    1. Edit files on the local computer (e.g. files in C:\Users\Me\web).
    2. Commit changes to the local repository using right-click > TortoiseSVN > SVN Commit.
    3. Take the files, open FileZilla (FTP client) and upload the files to a remote server.

    I was wondering if there is a way to omit step 3 from my workflow. Basically, I would like the changed files to be automatically uploaded to the remote server when I commit a version to the repository. Information about my computer environment: Windows 7 Ultimate x64 with TortoiseSVN x64; Notepad++ text editor; files edited are PHP, CSS, JS, HTML, etc. The server is running Linux with PHP 5.2 and MySQL. FileZilla is used to upload files. I can connect to the server via SSH if that is needed. Thank you in advance.
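    One route worth exploring: TortoiseSVN can run a client-side post-commit hook script (Settings > Hook Scripts), which could do step 3 automatically. Below is a rough Python sketch using only the standard library; the FTP host, credentials, remote path, and the assumption that a command-line svn client is on PATH are all illustrative, not details from the question.

      # post_commit_upload.py - sketch of a TortoiseSVN client-side post-commit
      # hook. Assumes a command-line svn client on PATH; host, credentials and
      # paths are placeholders. Remote directories are assumed to exist already.
      import os
      import subprocess
      import sys
      from ftplib import FTP

      WORKING_COPY = r"C:\Users\Me\web"
      REMOTE_ROOT = "/public_html"

      def changed_files(revision):
          # Ask svn which files the just-committed revision touched.
          out = subprocess.check_output(
              ["svn", "diff", "--summarize", "-c", revision, WORKING_COPY])
          for line in out.decode("utf-8", "replace").splitlines():
              status, path = line.split(None, 1)
              if status[0] in "AM" and os.path.isfile(path):  # added/modified
                  yield path

      def upload(paths):
          ftp = FTP("ftp.example.com")       # placeholder host
          ftp.login("user", "password")      # placeholder credentials
          for local in paths:
              rel = os.path.relpath(local, WORKING_COPY).replace(os.sep, "/")
              with open(local, "rb") as fh:
                  ftp.storbinary("STOR %s/%s" % (REMOTE_ROOT, rel), fh)
          ftp.quit()

      if __name__ == "__main__":
          # TortoiseSVN post-commit hooks receive the committed revision as an
          # argument; fall back to HEAD if the argument layout differs in your
          # version.
          rev = sys.argv[3] if len(sys.argv) > 3 else "HEAD"
          upload(list(changed_files(rev)))

    Since SSH access is available, swapping the ftplib calls for SFTP via a library such as paramiko would avoid sending credentials in the clear; the hook structure stays the same.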

    Read the article

  • Solutions on how to use an OS X calendar as a more perfect time tracking solution for 5-10 users in a small agency?

    - by jnthnclrk
    I really like OS X's iCal. Entering events is easy with the mouse, and it also gives you a very real visual sense of how long tasks take to complete. We often work remotely in our organisation, so we use a few shared calendars between key individuals to give us an overview of hours worked, availability and schedule conflicts without too much disruption to our various hectic workflows. It really is a neat solution, especially on shared tasks. How many times have you tasked a remote colleague and then lost the thread on whether that task was completed or not? With shared calendars you get a much clearer idea of what your people are working on without having to pick up the phone or compose a chat. However, there are a few areas where this approach fails:

    - iCloud syncing often needs to be re-jiggered.
    - The "view only" option on shared calendars does not seem to work, which makes all shared calendars editable by others.
    - There is no decent reporting with this workflow.
    - There is no task categorisation or tagging.
    - Things get very busy in iCal when working with more than 2 shared calendars.

    I've looked at a few task management apps like Basecamp and Harvest, but nothing appears to let me edit my calendar natively and then sync with a third party. I'm interested in solutions to improve the above workflow and enable us to elegantly increase the number of users.

    Read the article
