Search Results

Search found 4474 results on 179 pages for 'git workflow'.

Page 93/179 | < Previous Page | 89 90 91 92 93 94 95 96 97 98 99 100  | Next Page >

  • Disabling Bluetooth crashes 3G: can I consider this a bug?

    - by workflow
    I'm on an Acer TravelMate 8372G running Ubuntu 11.10 64-bit and I get the following weird behaviour: as soon as I turn off Bluetooth, the built-in 3G connection disconnects and Network Manager is unable to reconnect, leaving the following trace in syslog:

        NetworkManager[1205]: <warn> GSM modem enable failed: (32) Unknown error

    The only way I'm able to reconnect 3G is by going all the way through:

        rfkill unblock bluetooth
        rfkill block wwan
        rfkill unblock wwan
        sudo service network-manager restart

    My question: should I file a bug, or is there some logical connection I'm missing that makes this a feature? Thanks! workflow

    Read the article

  • Fluent NHibernate mapping of subclassed structure

    - by Codezy
    I have a workflow class that has a collection of phases, and each phase has a collection of tasks. You can design a workflow that will be used by many engagements. When used in an engagement I want to be able to add properties to each class (workflow, phase, and task). For example, a task in the designer does not have people assigned, but a task in an engagement would need extra properties like who is assigned to it. I have tried many different approaches using subclasses or interfaces, but I just can't get it to map the way I want. Currently I have the engagement-level versions as subclasses, but I can't get engagement phases to map to engagement workflows.

        Public Class WorkflowMapping
            Inherits ClassMap(Of Workflow)

            Sub New()
                Id(Function(x As Workflow) x.Id).Column("Workflow_Id").GeneratedBy.Identity()
                Map(Function(x As Workflow) x.Description)
                Map(Function(x As Workflow) x.Generation)
                Map(Function(x As Workflow) x.IsActive)
                HasMany(Function(x As Workflow) x.Phases).Cascade.All()
            End Sub
        End Class

        Public Class EngagementWorkflowMapping
            Inherits SubclassMap(Of EngagementWorkflow)

            Sub New()
                Map(Function(x As EngagementWorkflow) x.ClientNo)
                Map(Function(x As EngagementWorkflow) x.EngagementNo)
            End Sub
        End Class

    How would you approach mapping this in Fluent (or hbm) so that you could load just the workflow base class when designing the flow, or the engagement subclass versions of each when being used by an engagement?

    Read the article

  • Running bundle install fails trying to remote fetch from rubygems.org/quick/Marshal...

    - by dreeves
    I'm getting a strange error when doing bundle install:

        $ bundle install
        Fetching source index for http://rubygems.org/
        rvm/rubies/ree-1.8.7-2010.02/lib/ruby/site_ruby/1.8/rubygems/remote_fetcher.rb:304
        :in `open_uri_or_path': bad response Not Found 404
        (http://rubygems.org/quick/Marshal.4.8/resque-scheduler-1.09.7.gemspec.rz)
        (Gem::RemoteFetcher::FetchError)

    I've tried bundle update, gem source -c, gem update --system, gem cleanup, etc. Nothing seems to solve this. I notice that the URL beginning with http://rubygems.org/quick does seem to be a 404. I don't think that's any problem with my network, though if that URL is reachable for anyone else then that would be a simple explanation for my problem. More hints: if I just gem install resque-scheduler it works fine:

        $ gem install resque-scheduler
        Successfully installed resque-scheduler-1.9.7
        1 gem installed
        Installing ri documentation for resque-scheduler-1.9.7...
        Installing RDoc documentation for resque-scheduler-1.9.7...

    And here's my Gemfile:

        source 'http://rubygems.org'
        gem 'json'
        gem 'rails', '>=3.0.0'
        gem 'mongo'
        gem 'mongo_mapper', :git => 'git://github.com/jnunemaker/mongomapper', :branch => 'rails3'
        gem 'bson_ext', '1.1'
        gem 'bson', '1.1'
        gem 'mm-multi-parameter-attributes', :git => 'git://github.com/rlivsey/mm-multi-parameter-attributes.git'
        gem 'devise', '~>1.1.3'
        gem 'devise_invitable', '~> 0.3.4'
        gem 'devise-mongo_mapper', :git => 'git://github.com/collectiveidea/devise-mongo_mapper'
        gem 'carrierwave', :git => 'git://github.com/rsofaer/carrierwave.git', :branch => 'master'
        gem 'mini_magick'
        gem 'jquery-rails', '>= 0.2.6'
        gem 'resque'
        gem 'resque-scheduler'
        gem 'SystemTimer'
        gem 'capistrano'
        gem 'will_paginate', '3.0.pre2'
        gem 'twitter', '~> 1.0.0'
        gem 'oauth', '~> 0.4.4'
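
    One way to check whether that gemspec URL is missing for everyone, rather than blocked from this particular network, is to request its headers directly; a quick diagnostic sketch using the URL from the error above:

        curl -I http://rubygems.org/quick/Marshal.4.8/resque-scheduler-1.09.7.gemspec.rz
        # A 404 here from any machine would point at a stale index entry on the server side.
        # Note the error asks for 1.09.7 while "gem install" resolves 1.9.7, which may be the real clue.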

    Read the article

  • Xcode / Interface Builder - better workflow from designer to coder?

    - by tbarbe
    We're dealing with some pretty custom UI elements while building our OS X / Cocoa and iPhone / iPad apps. I was wondering if anyone has good recommendations or tricks for getting a better workflow between UI designers and coders while using Xcode / Interface Builder? It seems that many things require programmatic settings alongside UI editing in Cocoa... if you stray from the pre-built UI elements then you can't really drag-and-drop build a UI. Instead we end up handing off a design doc (Photoshop/Illustrator) and then the poor developer has to recreate this masterpiece in code or by using Interface Builder, usually a combination of both. This workflow is leading us to not-so-great results and we have to iterate repeatedly around the UI elements to get them to work better. We love the CSS and/or Flash designer-to-developer workflow, where the UI could look exactly as it should and the hand-off to the developer was more seamless. Is there anyone out there who has some tricks, or insights into getting a better workflow, when using tools like Xcode / Interface Builder and doing Cocoa apps?

    Read the article

  • Bash: Quotes getting stripped when a command is passed as argument to a function

    - by Shoaibi
    I am trying to implement a dry-run kind of mechanism for my script and am facing the issue of quotes getting stripped off when a command is passed as an argument to a function, resulting in unexpected behavior.

        dry_run () {
            echo "$@"
            #printf '%q ' "$@"
            if [ "$DRY_RUN" ]; then
                return 0
            fi
            "$@"
        }

        email_admin() {
            echo " Emailing admin"
            dry_run su - $target_username -c "cd $GIT_WORK_TREE && git log -1 -p|mail -s '$mail_subject' $admin_email"
            echo " Emailed"
        }

    Output is:

        su - webuser1 -c cd /home/webuser1/public_html && git log -1 -p|mail -s 'Git deployment on webuser1' [email protected]

    Expected:

        su - webuser1 -c "cd /home/webuser1/public_html && git log -1 -p|mail -s 'Git deployment on webuser1' [email protected]"

    With printf enabled instead of echo:

        su - webuser1 -c cd\ /home/webuser1/public_html\ \&\&\ git\ log\ -1\ -p\|mail\ -s\ \'Git\ deployment\ on\ webuser1\'\ [email protected]

    Result:

        su: invalid option -- 1

    That shouldn't be the case if the quotes remained where they were inserted. I have also tried using eval, without much difference. If I remove the dry_run call in email_admin and then run the script, it works great.
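
    For what it's worth, the quotes are not literally stripped inside the function: the shell removes them while parsing the dry_run line in email_admin, and echo then prints the resulting words without re-quoting, so the command only looks unquoted. The printf '%q ' output above actually shows the whole cd ... && git log ... string arriving as a single escaped word. Below is a minimal sketch of the usual pattern, keeping the same function and variable names and quoting the variable expansions so they cannot be word-split; why su still complains about -1 in the real run is not visible from the snippet alone, so treat this as a sketch rather than a guaranteed fix:

        dry_run () {
            # Print the command with shell quoting re-applied, so multi-word
            # arguments (like the string passed to su -c) stay visibly one word.
            printf '%q ' "$@"
            printf '\n'
            if [ -n "$DRY_RUN" ]; then
                return 0
            fi
            "$@"
        }

        email_admin () {
            echo " Emailing admin"
            dry_run su - "$target_username" -c \
                "cd $GIT_WORK_TREE && git log -1 -p | mail -s '$mail_subject' $admin_email"
            echo " Emailed"
        }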

    Read the article

  • Android Source code download error

    - by user351850
    Hi all, I have followed the instructions on the Android website on how to download the latest Android source code, but I get errors when I run this command:

        repo init -u git://android2.git.kernel.org/platform/manifest.git

    It gives the following error:

        Getting repo ...
        from git://android.git.kernel.org/tools/repo.git
        android.git.kernel.org[0: 199.6.1.176]: errno=Connection refused
        android.git.kernel.org[0: 130.239.17.12]: errno=Connection refused
        fatal: unable to connect a socket (Connection refused)

    On checking forums for a resolution, I was told that port 9418 was being blocked. I use Ubuntu 10.04 and ensured that the firewall wasn't blocking the port, and I also enabled the port and the above IP addresses. I also spoke to the networking people, who ensured that no traffic from the internet is being blocked. I would be glad if I could get directions on how to proceed next. Many thanks as you respond. Saheed.
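
    As a quick diagnostic (not a fix), you can check from the same machine whether anything answers on the git protocol port at all, and whether git itself can reach the repository that repo is trying to bootstrap from; a sketch:

        # Probe the git daemon port directly
        nc -zv android.git.kernel.org 9418

        # Ask git to list the remote refs of the repo tool's repository
        git ls-remote git://android.git.kernel.org/tools/repo.git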

    Read the article

  • SSH automatic logon works for one user but not the other

    - by tinmaru
    I want to enable automatic SSH login using the .ssh/config file for my git user. Here is my .ssh/config file:

        Host test
            HostName myserver.net
            User test
            IdentityFile ~/.ssh/id_rsa

        Host git
            HostName myserver.net
            User git
            IdentityFile ~/.ssh/id_rsa

    It works for my test user but not for my git user, so my global SSH configuration is correct. The configurations are exactly the same as far as I can tell. It used to work with the git user, but I'm unable to tell what change has broken the automatic logon. When I type:

        ssh -v git

    I get the following log:

        ...
        debug1: Authentications that can continue: publickey,password
        debug1: Next authentication method: publickey
        Offering RSA public key: /Users/mylocalusername/.ssh/id_rsa
        debug1: Authentications that can continue: publickey,password
        debug1: Next authentication method: password
        [email protected]'s password: _

    Does anyone know what could be a possible difference?
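
    Since the two Host stanzas are identical on the client, one common cause (an assumption here, as the server side isn't shown) is that sshd on myserver.net rejects the git account's key and falls back to password authentication, for example because the key is missing from that account's authorized_keys, or because the home directory, .ssh directory or authorized_keys file are group- or world-writable. A quick check to run on the server:

        ls -ld ~git ~git/.ssh ~git/.ssh/authorized_keys   # permissions should be no looser than 755 / 700 / 600
        grep -c 'ssh-rsa' ~git/.ssh/authorized_keys       # is the client's public key actually listed?
        tail -f /var/log/auth.log                         # watch while retrying; sshd usually logs the reason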

    Read the article

  • Install NPM Packages Automatically for Node.js on Windows Azure Web Site

    - by Shaun
    In one of my previous posts I described and demonstrated how to use NPM packages in Node.js on Windows Azure Web Site (WAWS). In that post I used the NPM command to install packages, and then used Git for Windows to commit my changes and sync them to the WAWS git repository. WAWS then triggers a new deployment to host my Node.js application. Someone may notice that an NPM package can contain many files and be fairly large. For example, the "azure" package, which is the Windows Azure SDK for Node.js, is about 6 MB. Another popular package, "express", which is a rich MVC framework for Node.js, is about 1 MB. When I first push my code to Windows Azure, all of it must be uploaded to the cloud. Is it possible to let Windows Azure download and install these packages for us? In this post I will introduce how to make WAWS install all required packages for us when deploying.

    Let's Start with a Demo

    A demo is the most straightforward way to show this. Let's create a new WAWS site and clone it to my local disk. Drag the folder into Git for Windows so that it can help us commit and push. Please refer to this post if you are not familiar with how to use Windows Azure Web Site, Git deployment, git clone and Git for Windows. Then open a command window and install a package in our code folder. Let's say I want to install "express". Then create a new Node.js file named "server.js" and paste in the code below:

        var express = require("express");
        var app = express();

        app.get("/", function(req, res) {
            res.send("Hello Node.js and Express.");
        });

        console.log("Web application opened.");
        app.listen(process.env.PORT);

    If we switch to Git for Windows right now we will find that it detected the changes we made, which include "server.js" and all files under the "node_modules" folder. What we need to upload should only be our source code, but the huge package files would have to be uploaded as well. Now I will show you how to exclude them and let Windows Azure install the packages on the cloud.

    First we need to add a special file named ".gitignore". It seems this cannot be done directly from the file explorer, since this file name consists only of an extension. So we do it from the command line. Navigate to the local repository folder and execute the command below to create an empty file named ".gitignore" (if the command window asks for input, just press Enter):

        echo > .gitignore

    Now open this file, copy in the content below and save:

        node_modules

    Now if we switch to Git for Windows we will find that the packages under "node_modules" are no longer in the change list. So if we commit and push, the "express" package will not be uploaded to Windows Azure. Second, let's tell Windows Azure which packages it needs to install when deploying. Create another file named "package.json", copy the content below into that file and save:

        {
            "name": "npmdemo",
            "version": "1.0.0",
            "dependencies": {
                "express": "*"
            }
        }

    Now back in Git for Windows, commit our changes and push them to WAWS. Then let's open the WAWS in the developer portal; we will see that there's a new deployment finished. Click the arrow on the right side of this deployment and we can see how WAWS handled it. In particular we can see that WAWS executed NPM. If we open the log we can review what command WAWS executed to install the packages and the installation output messages. As you can see, WAWS installed "express" for me on the cloud side, so I don't need to upload the whole package to Azure.
    Open this website and we can see the result, which proves that "express" was installed successfully.

    What's Happened Under the Hood

    Now let's explain a bit about what ".gitignore" and "package.json" mean. ".gitignore" is an ignore configuration file for a git repository. All files and folders listed in ".gitignore" will be skipped from git push. In the example above I copied "node_modules" into this file in my local repository. This means: do not track and upload any files under the "node_modules" folder. So by using ".gitignore" I excluded all packages from being uploaded to Windows Azure. ".gitignore" can contain files and folders. It can also list files and folders that we do NOT want to ignore. In the next section we will see how to use the un-ignore syntax to keep the SQL package included.

    The "package.json" file is the package definition file for a Node.js application. We can define the application name, version, description, author, etc. in it in JSON format. We can also list the dependent packages, to indicate which packages this Node.js application needs. In WAWS, name and version are required. When a deployment happens, WAWS will look into this file, find the dependent packages, and execute the NPM command to install them one by one. So in the demo above I copied "express" into this file so that WAWS would install it for me automatically.

    I updated the dependencies section of the "package.json" file manually, but this can be done partially automatically. If we have a valid "package.json" in our local repository, then when we are going to install a package we can specify the "--save" parameter in the "npm install" command, so that NPM will update the dependencies section for us. For example, when I wanted to install the "azure" package I would execute the command below. Note that I added "--save" to the command.

        npm install azure --save

    Once it finishes, my "package.json" will be updated automatically. Each dependent package is listed here; the JSON key is the package name while the value is the version range. Below is a brief list of the version range formats. For more information about "package.json" please refer here.

        version     Must match the version exactly.                                  "azure": "0.6.7"
        >=version   Must be equal to or greater than the version.                    "azure": ">0.6.0"
        1.2.x       The version number must start with the supplied digits, but
                    any digit may be used in place of the x.                         "azure": "0.6.x"
        ~version    The version must be at least as high as the range, and it must
                    be less than the next major revision above the range.            "azure": "~0.6.7"
        *           Matches any version.                                             "azure": "*"

    WAWS will install the proper version of each package based on what you define here. The overall flow of WAWS git deployment and NPM installation is as described above.

    But Some Packages…

    As we know, when we specify the dependencies in "package.json" WAWS will download and install them on the cloud. For most packages this works very well, but some special packages may not. That is, if the package installation needs a special environment, it may fail. For example, the SQL Server Driver for Node.js package needs "node-gyp", Python and C++ 2010 installed on the target machine during the NPM installation. If we just put "msnodesql" in the "package.json" file and push it to WAWS, the deployment will fail since there's no "node-gyp", Python or C++ 2010 in the WAWS virtual machine. For example, the "server.js" file:
        var express = require("express");
        var app = express();

        app.get("/", function(req, res) {
            res.send("Hello Node.js and Express.");
        });

        var sql = require("msnodesql");
        var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:tqy4c0isfr.database.windows.net,1433;Database=msteched2012;Uid=shaunxu@tqy4c0isfr;Pwd=P@ssw0rd123;Encrypt=yes;Connection Timeout=30;";
        app.get("/sql", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        console.log("Web application opened.");
        app.listen(process.env.PORT);

    The "package.json" file:

        {
            "name": "npmdemo",
            "version": "1.0.0",
            "dependencies": {
                "express": "*",
                "msnodesql": "*"
            }
        }

    And it fails to deploy to WAWS. From the NPM log we can see it's because "msnodesql" cannot be installed on WAWS. The solution is that in the ".gitignore" file we should ignore all packages except "msnodesql", and upload that package ourselves. This can be done with the content below. We first un-ignore the "node_modules" folder, then ignore all sub-folders while still letting git look inside each sub-folder, and then un-ignore the one sub-folder named "msnodesql", which is the SQL Server Node.js Driver:

        !node_modules/

        node_modules/*
        !node_modules/msnodesql

    For more information about the syntax of ".gitignore" please refer to this thread. Now if we go to Git for Windows we will find that "msnodesql" is included in the uncommitted set while "express" is not. I also need to remove the dependency on "msnodesql" from "package.json". Commit and push to WAWS. Now we can see the deployment completes successfully, and we can use Windows Azure SQL Database from our Node.js application through the "msnodesql" package we uploaded.

    Summary

    In this post I demonstrated how to leverage the deployment process of Windows Azure Web Site to install NPM packages during the publish action. With the ".gitignore" and "package.json" files we can exclude the dependent packages from our Node.js payload and let Windows Azure Web Site download and install them on deployment. For some special packages that cannot be installed by Windows Azure Web Site, such as "msnodesql", we can put them into the publish payload as well. The combination of Windows Azure Web Site, Node.js and NPM makes it even easier and quicker for us to develop and deploy our Node.js applications to the cloud.

    Hope this helps, Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
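
    Putting the local side of the workflow above together, the whole round trip is only a handful of commands; a sketch, in which the repository URL, remote name and file names are placeholders for your own site:

        git clone https://yoursite.scm.azurewebsites.net/yoursite.git
        cd yoursite
        npm install express --save      # installs locally and records the dependency in package.json
        echo node_modules > .gitignore  # keep the installed packages out of the push
        git add .
        git commit -m "Let WAWS restore NPM packages on deploy"
        git push origin master          # the deployment step on WAWS then runs npm install for package.json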

    Read the article

  • Can Foswiki be used as a distributed Redmine replacement? [closed]

    - by Tobias Kienzler
    I am quite familiar with, and love using, git, among other reasons due to its distributed nature. Now I'd like to set up some similarly distributed (FOSS) project management software with features similar to what Redmine offers, such as:

        - issue and time tracking, milestones
        - Gantt charts, calendar
        - git integration, maybe some automatic linking of commits and issues
        - wiki (preferably with MathJax support)
        - forum, news, notifications
        - multiple projects

    However, I am looking for a solution that does not require a permanently accessible server, i.e. as with git, each user should have their own copy, which can be easily synchronized with the others. It should also be possible not to have a copy of every project on every machine. Since trac uses multiple instances for multiple projects anyway, I was considering using that, but I neither know how well it adapts to simply putting its database under git (which would be the easiest way to handle the distribution, given that git is used anyway), nor does it include all of Redmine's features. After checking http://www.wikimatrix.org for wikis with an integrated tracking system and RCS support, and filtering out seemingly stale projects, the choices basically boil down to Foswiki, TWiki and Ikiwiki. The latter doesn't seem to offer as many usability features, and in the TWiki vs Foswiki issue I tend towards the latter. Finally, there is Fossil, which starts from the other end by attempting to replace git entirely and tracking itself. I am however not too comfortable with the thought of replacing git, and Fossil's non-SCM features don't seem to be as well developed. Now, before I invest too much time when someone else might already have tried this, I basically have two questions:

        1. Are there crucial features of project management software like Redmine that Foswiki does not provide, even with all the extensions available?
        2. How do I set Foswiki up to use git instead of the Perl RcsLite?

    Read the article

  • The PHP project migrates to Git; tags for future releases will be signed by the development team itself

    The Git version control system has just gained a new "member": the PHP project has completely migrated its source code to this platform, as announced on their website. It is therefore now possible to clone or "fork" the PHP sources from its GitHub mirror. Likewise, pull requests made via GitHub are now supported. The source code is also available via the following link, and all instructions for cloning the PHP source tree can be found at the following link.

    Read the article

  • Is it possible to publish component revisions within workflow?

    - by Ianthe
    Our current setup is that we have two targets, Staging and Live. Collaborators may update the affected component while it is still within the workflow. A final activity is set to publish the related pages to Live. Is it possible to publish component updates (revisions, e.g. 2.2, 2.5) to Staging from within the workflow? The TOM API documentation for the Page.Publish() method does not seem to have an input parameter to fulfil such a purpose.

    Read the article

  • ssh-agent is broken after running Meerkat - can connect to git in terminal but not in Tower - no keychain access

    - by marblegravy
    My Mac running Snow Leopard 10.6.8 is having trouble handling its SSH keys. I could previously access all my git repos via Tower without an issue. The other day I ran Meerkat to see what it was about, and it looks like it has broken the way SSH works. Terminal doesn't seem to have a problem and can still connect to git, but it can't access the keychain. Tower doesn't seem to be able to access anything. The Tower support crew have been super helpful, but I wanted to float this here and see if anyone has any ideas on how to fix my problem. The only hints I have are:

        $ which ssh
        /usr/bin/ssh

        $ echo $SSH_AUTH_SOCK
        /tmp/ssh-nBhRYVEg8t/agent.199

    (The last one seems to be wrong, as I think it's supposed to point to a listener, but I have no idea how to fix it.) Additional: Keychain First Aid finds no problems. The problem seems to be that ssh-agent is not being run properly... but that's just a guess.
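
    A few generic checks that may narrow this down (a sketch, nothing Meerkat- or Tower-specific): see whether the socket in SSH_AUTH_SOCK still exists and answers, and if not, start a fresh agent and load the key back into the keychain; ssh-add -K is the option in Apple's OpenSSH that stores the passphrase in Keychain.

        ls -l "$SSH_AUTH_SOCK"     # does the agent socket still exist?
        ssh-add -l                 # does an agent answer and list your identity?

        eval "$(ssh-agent -s)"     # start a fresh agent for this shell if it doesn't
        ssh-add -K ~/.ssh/id_rsa   # load the key; -K stores the passphrase in the OS X keychain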

    Read the article

  • Use msysgit/"Git for Windows" to navigate Windows shortcuts?

    - by Darthfett
    I use msysgit on Windows to use git, but I often want to navigate through a Windows-style *.lnk shortcut. I typically manage my file structure through Windows Explorer, so using a different type of shortcut (such as creating a hard or soft link in git) isn't feasible. How would I navigate through this type of shortcut? For example:

        PCUser@PCName ~
        $ cd Desktop

        PCUser@PCName ~/Desktop
        $ ls
        Scripts.lnk

        PCUser@PCName ~/Desktop
        $ cd Scripts.lnk
        sh.exe": cd: Scripts.lnk: Not a directory

    Is it possible to change this behavior, so that instead of getting an error, it just goes to the location of the directory? Alternatively, is there a command to get the path stored in a *.lnk file?

    Read the article

  • Oracle Fusion Supply Chain Management (SCM) Designs May Improve End User Productivity

    - by Applications User Experience
    By Applications User Experience on March 10, 2011
    Michele Molnar, Senior Usability Engineer, Applications User Experience

    The Challenge: The SCM User Experience team, in close collaboration with product management and strategy, completely redesigned the user experience for Oracle Fusion applications. One of the goals of this redesign was to increase end user productivity by applying design patterns and guidelines and incorporating findings from extensive usability research. But a question remained: How do we know that the Oracle Fusion designs will actually increase end user productivity?

    The Test: To answer this question, the SCM Usability Engineers compared Oracle Fusion designs to their corresponding existing Oracle applications using the workflow time analysis method. The workflow time analysis method breaks tasks into a sequence of operators. By applying standard time estimates for all of the operators in the task, an estimate of the overall task time can be calculated. The workflow time analysis method has recently been adopted by the Applications User Experience group for use in predicting end user productivity. Using this method, a design can be tested and refined as needed to improve productivity even before the design is coded. For the study, we selected some of our recent designs for Oracle Fusion Product Information Management (PIM). The designs encompassed tasks performed by Product Managers to create, manage, and define products for their organization. (See Figure 1 for an example.) In applying this method, the SCM Usability Engineers collaborated with Product Management to compare the new Oracle Fusion Applications designs against Oracle's existing applications. Together, we performed the following activities:

        - Identified the five most frequently performed tasks
        - Created detailed task scenarios that provided the context for each task
        - Conducted task walkthroughs
        - Analyzed and documented the steps and flow required to complete each task
        - Applied standard time estimates to the operators in each task to estimate the overall task completion time

    Figure 1. The interactions on each Oracle Fusion Product Information Management screen were documented, as indicated by the red highlighting. The task scenario and script provided the context for each task.

    The Results: The workflow time analysis method predicted that the Oracle Fusion Applications designs would result in productivity gains in each task, ranging from 8% to 62%, with an overall productivity gain of 43%. All other factors being equal, the new designs should enable these tasks to be completed in about half the time it takes with existing Oracle Applications. Further analysis revealed that these performance gains would be achieved by reducing the number of clicks and screens needed to complete the tasks.

    Conclusions: Using the workflow time analysis method, we can expect the Oracle Fusion Applications redesign to succeed in improving end user productivity. The workflow time analysis method appears to be an effective and efficient tool for testing, refining, and retesting designs to optimize productivity. The workflow time analysis method does not replace usability testing with end users, but it can be used as an early predictor of design productivity even before designs are coded. We are planning to conduct usability tests later in the development cycle to compare actual end user data with the workflow time analysis results. Such results can potentially be used to validate the productivity improvement predictions.
Used together, the workflow time analysis method and usability testing will enable us to continue creating, evaluating, and delivering Oracle Fusion designs that exceed the expectations of our end users, both in the quality of the user experience and in productivity. (For more information about studying productivity, refer to the Measuring User Productivity blog.)
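
    To make the mechanics concrete, here is a purely illustrative keystroke-level-style calculation (the operator counts and times are invented for this example, not taken from the study): if a task in the existing application needs 14 clicks at roughly 1.2 seconds each plus 4 screen loads at 3 seconds each, the estimate is 14 x 1.2 + 4 x 3 = 28.8 seconds; if the redesigned flow needs 8 clicks and 2 screen loads, the estimate drops to 8 x 1.2 + 2 x 3 = 15.6 seconds, a predicted gain of about 46 percent.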

    Read the article

  • zsh auto-complete event designator

    - by simont
    (See my previous question for additional context.) I'm migrating to zsh from bash, and using oh-my-zsh. When my zsh history looks something like the following:

        git status
        git add -A
        git commit

    I want to be able to re-run git add -A. To do that, I could use !?git add, which should:

        !?str[?]
        Refer to the most recent command containing str. The trailing '?' is necessary if this
        reference is to be followed by a modifier or followed by any text that is not to be
        considered part of str.

    The link for zsh event designators is here. Unfortunately, I can't do this: as I type !?git add, when I hit the space, it auto-completes the command to the most recent command matching git (i.e., it auto-completes with git commit). I can't use the event designator properly because of this auto-completion as I hit the space. I assume this is an oh-my-zsh feature. I have no idea where to look, though; grepping for 'complet' in the oh-my-zsh source doesn't get me anywhere. My question: how do I turn off this feature? Or, if that's not something that's known, where should I be looking? If I were going to implement this auto-complete-on-whitespace behaviour, where would be a logical place to do so in the oh-my-zsh framework?
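
    One thing worth checking (an assumption, since I haven't traced it to a specific oh-my-zsh line): zsh ships a magic-space widget that performs history expansion the moment you press space, and some setups bind the space key to it. You can inspect and, if needed, undo that binding from your ~/.zshrc:

        bindkey | grep '^" "'      # show what the space key is bound to (look for magic-space)
        bindkey ' ' self-insert    # make space insert a plain space again, disabling expand-on-space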

    Read the article

  • haml with rails3 (git master) and devise: form_for syntax change breaks haml -- suggestions?

    - by z3cko
    I am trying to get Haml working with a Rails 3 project; since I am quite far along in the modeling, I wanted to move to the Haml views now. It seems that the current Haml (git master) does not work together with the current Rails 3 git master because of some syntax changes to form_for in Rails 3. Does anyone have more information on the syntax changes? Is there a temporary workaround to use Haml with Rails 3? (I am on a deadline.) :( See also: http://j.mp/9EYraQ Thanks!

    Read the article

  • How to get path to the installed GIT in Python?

    - by Vladimir Prudnikov
    I need to get the path to git on Mac OS X 10.6, using Python 2.6.1, into script variables. I use this code for that:

        import shlex
        import subprocess

        r = subprocess.Popen(shlex.split("which git"), stdout=subprocess.PIPE)
        print r.stdout.read()

    but the problem is that the output is empty (I tried stderr too). It works fine with other commands such as pwd or ls. Can anyone help me with that?

    Read the article

  • Does Git support file names longer than 32 characters on Windows?

    - by dhanasekar79
    We have a problem cloning a repository created on Unix into a Windows box. Git fails while checking out a lengthy file whose name has more than 32 characters on Windows. The file name is given below:

        BaseFCS_x0020_OnLine_x0020_Identicheck_x0020_verification_x0020_serviceConsumer.java*

    Is there a way to fix this issue in Git?

    Read the article

  • The new workflow management of Oracle's Hyperion Planning: Define more details with Planning Unit Hierarchies and Promotional Paths

    - by Alexandra Georgescu
    After having been almost unchanged for several years, starting with the 11.1.2 release of Oracle's Hyperion Planning the Process Management has not only got a new name: "Approvals" now offers the possibility to further split Planning Units (comprised of a unique Scenario-Version-Entity combination) into more detailed combinations along additional secondary dimensions, a so-called Planning Unit Hierarchy, and also to pre-define a path of planners, reviewers and approvers, called a Promotional Path. I'd like to introduce you to the changes and enhancements in this new process management and arouse your curiosity for checking out more details on it.

    One reason for using the former process management in Planning was to limit data entry rights to one person at a time based on the assignment of a planning unit. So the lowest level of granularity for this assignment was, for a given Scenario-Version combination, the individual entity. Even if in many cases one person wasn't responsible for all data being entered into that entity, but for only part of it, it was not possible to split the ownership along another additional dimension, for example by assigning ownership to different accounts at the same time. By defining a so-called Planning Unit Hierarchy (PUH) in Approvals this gap is now closed. Complementing new Shared Services roles for Planning have been created in order to manage the set-up and use of Approvals. The Approvals Administrator consists of the following roles:

        - Approvals Ownership Assigner, who assigns owners and reviewers to planning units for which Write access is assigned (including Planner responsibilities).
        - Approvals Supervisor, who stops and starts planning units and takes any action on planning units for which Write access is assigned.
        - Approvals Process Designer, who can modify planning unit hierarchy secondary dimensions and entity members for which Write access is assigned, can also modify scenarios and versions that are assigned to planning unit hierarchies, and can edit validation rules on data forms for which access is assigned (this includes Planner and Ownership Assigner responsibilities as well).

    Set-up of a Planning Unit Hierarchy is done under the Administration menu, by selecting Approvals, then Planning Unit Hierarchy. Here you create new PUHs or edit existing ones. The following window displays. After providing a name and an optional description, a pre-selection of entities can be made for which the PUH will be defined. Available options are:

        - All, which pre-selects all entities to be included for the definitions on the subsequent tabs.
        - None: manual entity selections will be made subsequently.
        - Custom, which offers the selection of an ancestor and the relative generations that should be included for further definitions.

    Finally a pattern needs to be selected, which will determine the general flow of ownership:

        - Free-form uses the flow/assignment of ownerships according to Planning releases prior to 11.1.2.
        - In Bottom-up, data input is done at the leaf member level. Ownership follows the hierarchy of approval along the entity dimension, including refinements using a secondary dimension in the PUH, amended by defined additional reviewers in the promotional path.
        - Distributed uses data input at the leaf level, while ownership starts at the top level and then is distributed down the organizational hierarchy (entities). After ownership reaches the lower levels, budgets are submitted back to the top through the approval process.
    Proceeding to the next step, a secondary dimension and the respective members from that dimension can now be selected, in order to create more detailed combinations underneath each entity. After selecting the Dimension and a Parent Member, the definition of a Relative Generation below this member assists in populating the field for Selected Members, while the Count column shows the number of selected members. For refining this list, you can click on the icon right beside the selected member field and use the check-boxes in the appearing list to deselect members.

    TIP: In order to reduce maintenance of the PUH due to changes in the dimensions included (members added, moved or removed), you should consider dynamically linking those dimensions in the PUH with the dimension hierarchies in the planning application. For secondary dimensions this is done using the check-boxes in the Auto Include column. For the primary dimension, the respective selection criteria are applied by right-clicking the name of an entity activated as a planning unit, then selecting an item from the shown list of include or exclude options (children, descendants, etc.). In any case, in order to apply dimension changes impacting the PUH, a synchronization must be run. Whether this is really necessary is shown on the first screen after selecting from the menu Administration, then Approvals, then Planning Unit Hierarchy: under Synchronized you find the statuses Yes, No or Locked, where the last one indicates that another user is currently changing or synchronizing the PUH. Select one of the unsynchronized PUHs (status No) and click the Synchronize option in order to execute it.

    In the next step owners and reviewers are assigned to the PUH. Using the icons with the magnifying glass to the right of the Owner and Reviewer columns, the respective assignments can be made in the order that you want them to review the planning unit. While it is possible to assign only one owner per entity or per combination of entity and secondary-dimension member, the selection of reviewers may consist of more than one person. The complete Promotional Path, including the defined owners and reviewers for the entity parents, can be shown by clicking the corresponding icon. In addition, optional users might be defined to be notified about promotions for a planning unit.

    TIP: Reviewers cannot change data; they can only review data according to their data access permissions and reject or promote planning units.

    In order to complete your PUH definitions click Finish; this saves the PUH and closes the window. As a final step, before starting the approvals process, you need to assign the PUH to the Scenario-Version combination for which it should be used. From the Administration menu select Approvals, then Scenario and Version Assignment. Expand the PUH in order to see already existing assignments. Under Actions click the add icon and select scenarios and versions to be assigned. If needed, click the remove icon in order to delete entries. After these steps, set-up is completed for starting the approvals process.
    Start, stop and control of the approvals process is now done under the Tools menu, then Manage Approvals. The new PUH feature is complemented by various additional settings and features; at least some of them should be mentioned here:

        - Export/Import of PUHs
        - Out of Office agent
        - Validation Rules changing the promotional/approval path if violated (including the use of User-Defined Attributes (UDAs))
        - Various new and helpful reviewer actions with corresponding approval states

    About the Author: Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007, where he is a Principal Education Consultant. Based on these many years of working with Hyperion products, he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.

    Read the article

  • Stumbling Through: Making a case for the K2 Case Management Framework

    I have recently attended a three-day training session on K2's Case Management Framework (CMF), a free framework built on top of K2's blackpearl workflow product, and I have come away with several different impressions of the different aspects of the framework. Before we get into the details, what is the Case Management Framework? It is essentially a suite of tools that, when used together, solve many common workflow scenarios. The tool has been developed over time by K2 consultants who have realized they tend to solve the same problems over and over for various clients, so they attempted to package all of those common solutions into one framework. Most of these common problems involve workflow processes that aren't necessarily direct and would tend to be difficult to model. Such solutions could be achieved in blackpearl alone, but the workflows would be complex and difficult to follow and maintain over time. CMF attempts to simplify such scenarios not so much by black-boxing the workflow processes, but by providing different points of entry to the processes, allowing them to be simpler and moving the complexity to a middle layer. It is not a solution in and of itself; development is still required to tie the pieces together.

    CMF is under continuous development, both a plus and a minus in that bugs are fixed quickly and features added regularly, but it may be difficult to know which versions are the most stable. CMF is not an officially supported K2 product, which means you will not get technical support, but you will get access to the source code.

    The example given of a business process that would fit well into CMF is that of a file cabinet, where each folder in said file cabinet is a case that contains all of the data associated with one complaint/customer/incident/etc., and various users can access that case at any time and take one of a set of pre-determined actions on it. When I was given that example, my first thought was that any workflow I have ever developed in the past could be made to fit this model; there must be more than just this model to help decide if CMF is the right solution. As the training went on, we learned that one of the key features of CMF is SharePoint integration: each case gets a SharePoint site created for it, and there are a number of excellent web parts that can be used to design a portal for users to get at all the information on their cases. While CMF does not require SharePoint, without it you will be missing out on a huge portion of the functionality that CMF offers. My opinion is that without SharePoint integration, you may as well write your workflows and other components the old-fashioned way.

    When I heard that each case gets its own SharePoint site created for it, warning bells immediately went off in my head, as I felt that depending on the data load, a CMF-enabled solution could quickly overwhelm SharePoint with thousands of sites. So we have yet another deciding factor for CMF: just how many cases will your solution be creating? While it is not necessary to use the site-per-case model, it is one of the more useful parts of the framework. Without it, you are losing a big chunk of what CMF has to offer.

    When it comes to developing on top of the Case Management Framework, it becomes a matter of configuring what makes up a case, what can be done to a case, where each action on a case should take the user, and then tying actions to case statuses.
    This last step is one that I immediately warmed up to, as just about every workflow I've designed in the past needed some sort of mapping table to set the status of a work item based on the action being taken; it is definitely one of those common solutions that it is good to see rolled up into a re-usable entity (and it gets a nice configuration UI to boot!). This concept is a little different from traditional workflow design, in that you don't have to think of an end-to-end process around passing a case along a path; rather, you must envision the case as a central object with workflow threads branching off of it and doing their own thing with the case data. Certainly some workflow threads can get rather complex, but the idea is that they RELATE to the case, they don't BECOME the case (though it is still possible, with action-to-status mappings, to prevent certain actions in certain cases, so it isn't always a wide-open free-for-all of actions on a case).

    I realize that this description of the Case Management Framework merely scratches the surface of what the product actually can do, and I don't think I've conclusively defined for what sort of business scenario you can make a case for the Case Management Framework. What I do hope to have accomplished with this post is to raise awareness of CMF: there is a (free!) product out there that could potentially simplify a tangled workflow process and give (for free!) a very useful set of SharePoint web parts and a nice set of (free!) reports. The best way to see if it will truly fit your needs is to give it a try. Did I mention it is FREE? Er, OK, so it is free, but at this time only obtainable for K2 partners.

    Read the article

  • Ubuntu Studio/Xubuntu 12.04 instead of Ubuntu 12.04/12.10 for CAD/architectural workflow. Worth it?

    - by gabriel
    I am currently using Ubuntu 12.10, and as described in the title I am planning to install Ubuntu Studio. The programs I use are Blender, Maya 2013, NukeX, Bricscad and SketchUp (with Wine), and I am also planning to install Revit Architecture through VirtualBox. I am using a quad-core CPU and I want all the power of my system available for rendering/modelling, so I decided to try a more lightweight desktop than Unity. What also made me decide this is that when I tried to install Bricscad v12, the program did not work. So I thought that if I want something more professional for my work, I should stick to LTS versions of a lightweight Ubuntu. My two questions are: 1) Is it worth it? 2) Can I have a global menu (close/minimize/maximize buttons, menu) like Ubuntu/Unity? Thanks

    Read the article

  • Fork dead SVN based project on GitHub

    - by Quinn Bailey
    I previously asked this at Stack Overflow, but it was closed, I believe because 'Programmers' is a more appropriate venue for this question. I have done some work on the SVN Importer project (Apache license), which appears to be effectively dead (no published changes in 5 years). I have a login to their svn server but do not have commit rights. At any rate, I'd like to convert this project to Git and push my own changes to GitHub. The GitHub site suggests the svn2git tool for converting svn projects to Git, so I was planning to convert the SVN repository to Git, add my changes, and then push this Git repository to GitHub. I'm wondering: what are the legal requirements and common conventions of this process? Is it acceptable to clone the entire history of the project and move it to GitHub? Also, even though this is essentially a dead project, once I've translated the repository to Git, should I put all of my commits onto a non-master branch, or is it acceptable to use master in this case?
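
    Mechanically, the conversion described above usually looks something like the sketch below (the svn URL, authors file and GitHub repository name are placeholders; svn2git keeps the full history, mapping svn trunk, branches and tags onto git):

        # Optional: map svn usernames to git author identities so the history stays readable
        mkdir -p ~/.svn2git
        echo "jdoe = John Doe <jdoe@example.com>" > ~/.svn2git/authors

        svn2git http://svn.example.org/svnimporter --authors ~/.svn2git/authors

        # Publish the converted history (and tags) to GitHub
        git remote add origin git@github.com:youruser/svn-importer.git
        git push -u origin --all
        git push origin --tags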

    Read the article

  • What Would You Consider Best Practice Workflow Tools For Web Application (PHP) Development?

    - by Zenph
    I'm really hoping somebody with more experience can edit the question as per my examples of answers:

        • using version control
        • test-driven development
        • debugging code (Xdebug for PHP)
        • use of UML diagrams
        • use of OOP for maintainable, reusable code
        • use of frameworks (like Zend Framework for PHP) for rapid application development

    Anything else, or an elaboration of what I mentioned above? Basically, I'm in the middle of forming a team of developers (I'm a developer myself) and I'd like some advice on how professional programmers/designers etc. should work together and what standards/paradigms they should use. Also, if anybody has any books or links on the subject I'd welcome that! I did find this, which I guess satisfies what I'm looking for, or at least part thereof: http://www.ibm.com/developerworks/websphere/library/techarticles/0306_perks/perks2.html

    Read the article
