Search Results

Search found 3166 results on 127 pages for 'git p4'.

Page 121/127 | < Previous Page | 117 118 119 120 121 122 123 124 125 126 127  | Next Page >

  • Sell me Distributed revision control

    - by ring bearer
    I know there are thousands of similar topics floating around. I read at least 5 threads here on SO, but why am I still not convinced about DVCS? I have only the following questions (note that I am selfishly worried only about Java projects). What is the advantage or value of committing locally? What, really? All modern IDEs allow you to keep track of your changes, and if required you can restore a particular change. Also, they have a feature to label your changes/versions at the IDE level! What if I crash my hard drive? Where did my local repository go? (So how is it cool compared to checking in to a central repo?) Working offline or on an airplane - what is the big deal? In order for me to build a release with my changes, I must eventually connect to the central repository. Till then it does not matter how I track my changes locally. OK, Linus Torvalds gives his life to Git and hates everything else. Is that enough to blindly sing praises? Linus lives in a different world compared to the offshore developers on my mid-sized project. Pitch me!
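
    For reference, a minimal sketch of the offline workflow the question is about (repository names, file names and the central remote are made up for illustration):

        # commits work with no server connection at all
        git init demo && cd demo
        echo "v1" > Feature.java
        git add Feature.java && git commit -m "first cut"
        echo "v2" > Feature.java
        git commit -am "riskier rewrite"
        # the rewrite turns out bad: restore the file from the previous commit
        git checkout HEAD~1 -- Feature.java
        # back online, publish the whole local history in one go
        git remote add origin ssh://central/demo.git   # hypothetical central repo
        git push origin master

    Unlike IDE local history, this history travels with the repository and can be pushed anywhere once a connection is available.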

    Read the article

  • [WEB] Local/Dev/Live deployment - best workflow

    - by Adam Kiss
    Hello. Situation: we are a little company with 3 people, each has a localhost webserver, and most projects (previous and current) are on one network-shared disk. We have a virtual server, which hosts some of our clients' sites and our own site. Our standard workflow is: Coder PC -> Programmer localhost -> dev domain (client.company.com) -> live version (client.com). It often happens that there are two or three guys working on the same project at the same time - one is on the dev version, two are on localhost. When finished, we try to synchronize the files on the dev version and ideally not to mess up (thanks ILMV:]) any files, which *knock knock* doesn't happen often. And then one of us deploys the dev version to the live webserver. Question: we are looking for a way to simplify this workflow while updating websites - ideally some sort of diff uploader or, more probably, a VCS (Git/SVN/...), but we are not completely sure where to begin or which way would be ideal. Therefore I ask you, fellow stackoverflowers, for your experience with website/application deployment and recommended workflows. We will probably also need to use a Mac in the process, so if that won't be a problem, even better. Thank you.
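
    A hedged sketch of one common Git-based variant of such a workflow (server paths, hostnames and the web root are invented):

        # on the shared server: a bare repository plus a deploy hook
        git init --bare /srv/git/client.git
        printf '%s\n' '#!/bin/sh' \
          'GIT_WORK_TREE=/var/www/client.company.com git checkout -f' \
          > /srv/git/client.git/hooks/post-receive
        chmod +x /srv/git/client.git/hooks/post-receive

        # on each coder's machine (PC or Mac)
        git clone ssh://user@server/srv/git/client.git client
        cd client
        # ...edit and commit locally, then publish; the hook checks the push
        # out into the dev web root
        git push origin master

    Promoting dev to live can then be another push to a second bare repository (or a tag that gets deployed), rather than a manual file sync.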

    Read the article

  • JOGL: cannot run java on compiled class file

    - by John Goche
    I am running Ubuntu Linux 11.10. I have followed the instructions on the site http://jogamp.org/jogl/www/ and downloaded everything with git to $HOME/jogamp and built everything with ant (although I would have preferred the files installed somewhere under /usr/local). I am trying to run a simple application with JOGL found on the Wikipedia site: http://en.wikipedia.org/wiki/Java_OpenGL In order to compile the code I had to add: export CLASSPATH=.:$HOME/jogamp/jogl/build/jogl/classes When I run java I get the following: $ java JOGLQuad Exception in thread "main" java.lang.NoClassDefFoundError: javax/media/nativewindow/WindowClosingProtocol at java.lang.ClassLoader.defineClass1(Native Method) at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631) at java.lang.ClassLoader.defineClass(ClassLoader.java:615) at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141) at java.net.URLClassLoader.defineClass(URLClassLoader.java:283) at java.net.URLClassLoader.access$000(URLClassLoader.java:58) at java.net.URLClassLoader$1.run(URLClassLoader.java:197) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:190) at java.lang.ClassLoader.loadClass(ClassLoader.java:306) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301) at java.lang.ClassLoader.loadClass(ClassLoader.java:247) Caused by: java.lang.ClassNotFoundException: javax.media.nativewindow.WindowClosingProtocol at java.net.URLClassLoader$1.run(URLClassLoader.java:202) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:190) at java.lang.ClassLoader.loadClass(ClassLoader.java:306) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301) at java.lang.ClassLoader.loadClass(ClassLoader.java:247) ... 12 more Could not find the main class: JOGLQuad. Program will exit. How do I fix this and where can I find more JOGL code samples that I can compile and run? Thanks, John Goche
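
    For what it's worth, a NoClassDefFoundError on javax.media.nativewindow.* usually means the runtime classpath is missing the nativewindow/gluegen class files or jars, not just the jogl classes directory. A hedged sketch of a fuller invocation (the extra paths are assumptions about a default jogamp ant build and may need adjusting):

        JOGAMP=$HOME/jogamp
        CP=.:$JOGAMP/jogl/build/jogl/classes
        CP=$CP:$JOGAMP/jogl/build/nativewindow/classes   # assumed output dir
        CP=$CP:$JOGAMP/gluegen/build/gluegen-rt.jar      # assumed jar location
        java -cp "$CP" \
             -Djava.library.path="$JOGAMP/jogl/build/lib:$JOGAMP/gluegen/build/obj" \
             JOGLQuad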

    Read the article

  • Which source control paradigm and solution to embed in a custom editor application?

    - by Greg Harman
    I am building an application that manages a number of custom objects, which may be edited concurrently by multiple users (using different instances of the application). These objects have an underlying serialized representation, and my plan is to persist them (through my application UI) in an external source control system. Of course this implies that my application needs to check the current version of an object for updates, provide a merging interface for each object, etc. My question is what source control paradigm(s) and specific solution(s) to support and why. The way I (perhaps naively) see the source control world is three general paradigms: single-repository, locked access (MS SourceSafe); single-repository, concurrent access (CVS/SVN); and distributed (Mercurial, Git). I haven't heard of anyone using #1 for quite a number of years, so I am planning to disregard this case altogether (unless I get a compelling argument otherwise). However, I'm at a loss as to whether to support #2 or #3, and which specific implementations. I'm concerned that the usage paradigms are subtly different enough that I can't adequately capture basic operations in a single UI. The last bit of information I should convey is that this application is intended to be deployed in a commercial setting, where a source control system may already be in use. I would prefer not to support more than one solution unless it's really a deal-breaker, so wide adoption in a corporate setting is a plus.

    Read the article

  • TFS How does merging work?

    - by Johannes Rudolph
    I have a release branch (RB, starting at C5) and a changeset on trunk (C10) that I now want to merge onto RB. The file has changes at C3 (common to both), at C7 on RB, and at C9 and C10 on trunk. So the history for my changed file looks like this: RB: C5 -> C7; Trunk: C3 -> C9 -> C10. When I merge C10 from trunk to RB, I'd expect to see a merge window showing me C10 | C3 | C7, since C3 is the common ancestor revision and C10 and C7 are the tips of my two branches respectively. However, my merge tool shows me C10 | C9 | C7. My merge tool is configured to show %1(OriginalFile)|%3(BaseFile)|%2(Modified File), so this tells me TFS chose C9 as the base revision. This is totally unexpected and completely contrary to the way I'm used to merges working in Mercurial or Git. Did I get something wrong, or is TFS trying to drive me nuts with merging? Is this the default TFS merge behavior? If so, can you provide insight into why they chose to implement it this way? I'm using TFS 2008 with VS2010 as a client.
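
    For comparison, a small hedged git sketch of the two base choices under discussion (branch names are invented and <sha-of-C10> is a placeholder): merging a whole branch diffs against the common ancestor, while picking a single changeset naturally diffs it against that changeset's own parent.

        git merge-base trunk RB        # prints the common ancestor (the C3-like commit)
        git checkout RB
        git merge trunk                # 3-way merge using that common ancestor as the base
        # taking only the single C10-like commit instead compares it to its parent (C9)
        git cherry-pick <sha-of-C10>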

    Read the article

  • Trouble using termextraction gem

    - by mathee
    I'm trying to install this gem: http://github.com/alexrabarts/term_extraction. It requires nokogiri, which I tried installing. I'm getting this as the output: >gem install nokogiri Successfully installed nokogiri-1.4.2.1-x86-mswin32 1 gem installed Installing ri documentation for nokogiri-1.4.2.1-x86-mswin32... No definition for parse_memory No definition for parse_file No definition for parse_with No definition for get_options No definition for set_options Installing RDoc documentation for nokogiri-1.4.2.1-x86-mswin32... No definition for parse_memory No definition for parse_file No definition for parse_with No definition for get_options No definition for set_options I was able to install the termextraction gem (per the README on the git repo): >gem install alexrabarts-term_extraction -s http://gems.github.com Successfully installed alexrabarts-term_extraction-0.1.4 1 gem installed Installing ri documentation for alexrabarts-term_extraction-0.1.4... Installing RDoc documentation for alexrabarts-term_extraction-0.1.4... The issue is that I'm trying to test it out, but I'm getting an "uninitialized constant" error when I use it: ActionView::TemplateError (uninitialized constant ApplicationHelper::TermExtraction) on line #3 of app/views/questions/new.haml: 1: %h1 New question 2: -msg = "testing this context thing let's see what it gives me" 3: -getTerms(msg) 4: #new-question-form 5: .box-background 6: -form_for(@question) do |f| app/helpers/application_helper.rb:42:in `getTerms' app/views/questions/new.haml:3:in `_run_haml_app47views47questions47new46haml' haml (2.2.23) lib/haml/helpers/action_view_mods.rb:13:in `render' haml (2.2.23) lib/haml/helpers/action_view_mods.rb:13:in `render' app/controllers/questions_controller.rb:91:in `new' haml (2.2.23) lib/sass/plugin/rails.rb:20:in `process' Here is application_helper.rb: module ApplicationHelper def getTerms(context) yahoo = TermExtraction::Yahoo.new(:api_key => 'myAPIkey', :context => context) end end I'm not sure what the issue is. Any insight would be greatly appreciated!
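
    Not an authoritative fix, but an uninitialized constant here often just means the gem is installed yet never required by the Rails app. A sketch of how to check, assuming the gem's entry file is named term_extraction (an assumption):

        # outside Rails: confirm the gem loads and defines the constant
        ruby -rubygems -e "require 'term_extraction'; puts TermExtraction::Yahoo"
        # if that prints the class, make the app load it too, e.g. via an initializer
        echo "require 'term_extraction'" > config/initializers/term_extraction.rb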

    Read the article

  • node.js beginner tutorials?

    - by TreyK
    Hey all, I'm working on creating my first real node.js http server, and I'm sort of drowning in it. As a good teacher of mine always said, "I'll just shove you in the water for now, and then I'll show you how to swim." Fortunately, she wasn't a swimming instructor, but it's a good analogy nonetheless. I feel like I've jumped into node.js and I've only found a ping pong ball to help, that is to say, most of the tutorials I've read stop shortly after the "Hello World" example and I've mostly been trying to make sense of copied and pasted code (or they assume I have knowledge of lower level HTTP and webserver concepts that have been done for me as an Apache/PHP developer). I have experience in both client-side Javascript and PHP, but node seems to be a beast all of its own. I don't quite have the low-level knowledge that seems necessary for creating a node server, and connect, which seems to be a nice module for simplifying things, seems quite sparsely explained, even in the docs on its Git. Where could I find some tutorials to help me in this situation? TL;DR - Are there any tutorials for node.js that go beyond "Hello World" but don't require much low-level knowledge? Or any tutorials that explain lower-level HTTP and webserver concepts that I would need to effectively create a node HTTP server? Thanks for any help. -Trey

    Read the article

  • Storyboard Bug - iOS 5.1 - Xcode 4.3.3

    - by user1505431
    In my iOS project (using Xcode only), I continue to run into a problem where the layout presented to me in the storyboard editor becomes automatically modified after some change (which I have not been able to specifically determine). The problem is as follows: The TabBarController has for whatever reason started displaying in landscape orientation. Some of the NavigationControllers have also done the same thing. I can no longer see or edit the navigation bar on my nested views. I can no longer see or edit the tab bar on the views of the respective tab bar items. Everything works properly when I run the app in my simulator. If I had set something up prior to this change in the default display settings, it still works just fine. Here is a screenshot of the problem: My storyboard has consistently presented me with this bug throughout the course of my project. I have fixed it once by resetting via git and another time by rebuilding the entire storyboard. Both solutions worked for an extended period of time, but I would rather have a permanent solution. Any input would be helpful.

    Read the article

  • Just 2 free months to learn or improve my skills

    - by microspino
    On the 30th of June I will leave my everyday job to start as a freelance developer. I'd like to set apart a period of 2 months to improve my dev skills. At work I code in C#, and during my spare time I've enjoyed building Ruby on Rails web applications and creating some Arduino prototypes. I'm something more than a junior but I don't really feel like a senior developer, because I never had a big corporate project built and designed by me with the help of other juniors (although I don't think this is really a good definition of a "senior", it helps describe my feelings). Using a scale from 0 (ignorant) to 10 (proficient like a "samurai"), the list below describes the skills that I would like to improve in just 2 months. I've already bought some nice and updated books on all the subjects hereunder. The order doesn't matter: C = 1, C# & .Net = 6, Arduino & Processing = 2, Ruby = 5, Rails = 5, HTML/XHTML/CSS = 9, Javascript = 6, Objective-C/iPhone dev = 2, Python = 4, Django = 4, Design Patterns = 3, Algorithms = 3, Git = 5. I haven't included SQL or databases in general, nor networking, because I spent 10 years working with them in the past and I feel pretty solid for now. As an aside, I've picked up some interest in Redis, Node.js and HTML5 from reading about them on the web. After two months, since I have to pay my bills, I could go searching for a new job. If the learning and developing went really well, maybe I could also invest in something I gave birth to during them. Can you give me some advice on what you think is better to improve, or a learning project to develop (something like a "summer of code" thing)? The whole point is to see my weaknesses and work on them.

    Read the article

  • What Test Environment Setup do Committers Use in the Ruby Community?

    - by viatropos
    Today I am going to get as far as I can setting up my testing environment and workflow. I'm looking for practical advice on how to set up the test environment from you guys who are very passionate and versed in Ruby testing. By the end of the day (6am PST?) I would like to be able to: Type a single command to run test suites for ANY project I find on Github. Run autotest for ANY Github project so I can fork and make TESTABLE contributions. Build gems from the ground up with Autotest and Shoulda. For one reason or another, I hardly ever run tests for projects I clone from Github. The major reason is that unless they're using RSpec and have a Rake task to run the tests, I don't see the common pattern behind it all. I have built 3 or 4 gems writing tests with RSpec, and while I find the DSL fun, it's less than ideal because it just adds another layer/language of methods I have to learn and remember. So I'm going with Shoulda. But this isn't a question about which testing framework to choose. So the questions are: What is your, the SO reader and Github project committer, test environment setup using autotest so that whenever you git clone a gem, you can run the tests and autotest-develop them if desired? What are the guys who are writing the Paperclip Tests and Authlogic Tests doing? What is their setup? Thanks for the insight. Looking for answers that will make me a more effective tester.
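
    For what it's worth, a rough lowest-common-denominator sketch for a freshly cloned gem of that era (project name and gems are assumptions; check each project's README/Rakefile):

        git clone git://github.com/someuser/somegem.git && cd somegem
        gem install rake shoulda ZenTest   # ZenTest provides the autotest command
        rake -T                            # list the tasks the project defines
        rake test                          # many gems wire test (or the default task) to the suite
        autotest                           # rerun tests on file changes, if the project supports it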

    Read the article

  • app can not run on Windows 2003

    - by Carlos_Liu
    I have created an application based on .NET Framework 2.0 on a Windows XP machine, then I copied the app to another Windows 2003 server machine which has .NET Framework 3.5 installed, but the app can't be launched, and through the Event Viewer I got the following error: Event Type: Error Event Source: .NET Runtime 2.0 Error Reporting Event Category: None Event ID: 5000 Date: 5/15/2010 Time: 2:19:39 PM User: N/A Computer: AVCNDAECLIU4 Description: EventType clr20r3, P1 ftacsearchpopup.exe, P2 1.0.0.0, P3 4bee3c42, P4 ftacsearchpopup, P5 1.0.0.0, P6 4bee3c42, P7 11, P8 e, P9 system.io.fileloadexception, P10 NIL. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp. Data: 0000: 63 00 6c 00 72 00 32 00 c.l.r.2. 0008: 30 00 72 00 33 00 2c 00 0.r.3.,. 0010: 20 00 66 00 74 00 61 00 .f.t.a. 0018: 63 00 73 00 65 00 61 00 c.s.e.a. 0020: 72 00 63 00 68 00 70 00 r.c.h.p. 0028: 6f 00 70 00 75 00 70 00 o.p.u.p. 0030: 2e 00 65 00 78 00 65 00 ..e.x.e. 0038: 2c 00 20 00 31 00 2e 00 ,. .1... 0040: 30 00 2e 00 30 00 2e 00 0...0... 0048: 30 00 2c 00 20 00 34 00 0.,. .4. 0050: 62 00 65 00 65 00 33 00 b.e.e.3. 0058: 63 00 34 00 32 00 2c 00 c.4.2.,. 0060: 20 00 66 00 74 00 61 00 .f.t.a. 0068: 63 00 73 00 65 00 61 00 c.s.e.a. 0070: 72 00 63 00 68 00 70 00 r.c.h.p. 0078: 6f 00 70 00 75 00 70 00 o.p.u.p. 0080: 2c 00 20 00 31 00 2e 00 ,. .1... 0088: 30 00 2e 00 30 00 2e 00 0...0... 0090: 30 00 2c 00 20 00 34 00 0.,. .4. 0098: 62 00 65 00 65 00 33 00 b.e.e.3. 00a0: 63 00 34 00 32 00 2c 00 c.4.2.,. 00a8: 20 00 31 00 31 00 2c 00 .1.1.,. 00b0: 20 00 65 00 2c 00 20 00 .e.,. . 00b8: 73 00 79 00 73 00 74 00 s.y.s.t. 00c0: 65 00 6d 00 2e 00 69 00 e.m...i. 00c8: 6f 00 2e 00 66 00 69 00 o...f.i. 00d0: 6c 00 65 00 6c 00 6f 00 l.e.l.o. 00d8: 61 00 64 00 65 00 78 00 a.d.e.x. 00e0: 63 00 65 00 70 00 74 00 c.e.p.t. 00e8: 69 00 6f 00 6e 00 20 00 i.o.n. . 00f0: 4e 00 49 00 4c 00 0d 00 N.I.L... 00f8: 0a 00 ..

    Read the article

  • What Test Environment Setup do Top Project Committers Use in the Ruby Community?

    - by viatropos
    Today I am going to get as far as I can setting up my testing environment and workflow. I'm looking for practical advice on how to set up the test environment from you guys who are very passionate and versed in Ruby testing. By the end of the day (6am PST?) I would like to be able to: Type a single command to run test suites for ANY project I find on Github. Run autotest for ANY Github project so I can fork and make TESTABLE contributions. Build gems from the ground up with Autotest and Shoulda. For one reason or another, I hardly ever run tests for projects I clone from Github. The major reason is that unless they're using RSpec and have a Rake task to run the tests, I don't see the common pattern behind it all. I have built 3 or 4 gems writing tests with RSpec, and while I find the DSL fun, it's less than ideal because it just adds another layer/language of methods I have to learn and remember. So I'm going with Shoulda. But this isn't a question about which testing framework to choose. So the questions are: What is your, the SO reader and Github project committer, test environment setup using autotest so that whenever you git clone a gem, you can run the tests and autotest-develop them if desired? What are the guys who are writing the Paperclip Tests and Authlogic Tests doing? What is their setup? Thanks for the insight. Looking for answers that will make me a more effective tester.

    Read the article

  • How can I abstract out the core functionality of several Rails applications?

    - by hornairs
    I'd like to develop a number of non-trivial Rails applications which all implement a core set of functionality but each have certain particular customizations, extensions, and aesthetic differences. How can I pull the core functionality (models, controllers, helpers, support classes, tests) common to all these systems out in such a way that updating the core will benefit every application based upon it? I've seen Rails Engines, but they seem to be too detached, almost too abstracted to be built upon. I can see them being useful for adding one component to an existing app, for example bolting on a blog engine to your existing e-commerce site. Since engines seem to be mostly self-contained, it seems difficult and inconvenient to override their functionality and views while keeping DRY. I've also considered abstracting the code into a gem, but this seems a little odd. Do I make the gem depend on the Rails gems, and then define models & controllers inside it, and then subclass them in my various applications? Or do I define many modules inside the gem that I include in the different spots inside my various applications? How do I test the gem and then test the set of customizations and overridden functionality on top of it? I'm also concerned with how I'll develop the gem and the Rails apps in tandem; can I vendor a git repository of the gem into the app and push from that so I don't have to build a new gem every iteration? Also, are there private gem hosts/can I set my own gem source up? Also, any general suggestions for this kind of undertaking? Abstraction paradigms to adhere to? Required reading? Comments from the wise who have done this before? Thanks!

    Read the article

  • Just 2 free months to learn or improve my skills

    - by microspino
    On the 30th of June I will leave my everyday job to start as a freelance developer. I'd like to set apart a period of 2 months to improve my dev skills. At work I code in C#, and during my spare time I've enjoyed building Ruby on Rails web applications and creating some Arduino prototypes. I'm something more than a junior but I don't really feel like a senior developer, because I never had a big corporate project built and designed by me with the help of other juniors (although I don't think this is really a good definition of a "senior", it helps describe my feelings). Using a scale from 0 (ignorant) to 10 (proficient like a "samurai"), the list below describes the skills that I would like to improve in just 2 months. I've already bought some nice and updated books on all the subjects hereunder. The order doesn't matter: C = 1, C# & .Net = 6, Arduino & Processing = 2, Ruby = 5, Rails = 5, HTML/XHTML/CSS = 9, Javascript = 6, Objective-C/iPhone dev = 2, Python = 4, Django = 4, Design Patterns = 3, Algorithms = 3, Git = 5. I haven't included SQL or databases in general, nor networking, because I spent 10 years working with them in the past and I feel pretty solid for now. As an aside, I've picked up some interest in Redis, Node.js and HTML5 from reading about them on the web. After two months, since I have to pay my bills, I could go searching for a new job. If the learning and developing went really well, maybe I could also invest in something I gave birth to during them. Can you give me some advice on what you think is better to improve, or a learning project to develop (something like a "summer of code" thing)? The whole point is to see my weaknesses and work on them.

    Read the article

  • Visual SourceSafe (VSS): "Access to file (filename) denied" error

    - by tk-421
    Hi, can anybody help with the above SourceSafe error? I've spent hours trying to find a fix. I've also Googled the heck out of it but couldn't find a scenario matching mine, because in my case only a few files (not all) are affected. Here's what I found: only a few files in my project generate this error other files in the same directory (for example, App_Code has one of the problem files) work fine I've tried checking out from both the VSS client and Visual Studio another developer can check out the main problem file without any problems This sounds like a permission issue for my user, right? However: I found the location of one of the problem files in VSS's data directory (using VSS's naming format, as in 'fddaaaaa.a') and checked its permissions; everything looks fine and its permissions match those of other files I can check out successfully I can see no differences in the file properties between working and non-working files What else can I check? Has anyone encountered this problem before and found a solution? Thanks. P.S.: SourceGear, svn or git are not options, unfortunately. P.P.S.: Tried unsuccessfully to add tag "sourcesafe." EDIT: Hey Paddy, I tried to click 'add comment' to respond to your comment, but I'm getting a javascript error when loading this page in IE8 ("jquery undefined," etc.) so this isn't working. This is when checking out files, and yes, I've obliterated my local copy more times than I can remember. ;) EDIT 2: Thanks for the responses, guys (again I can't 'add comment' due to jQuery not loading, maybe blocked as discussed in Meta). If the problem was caused by antivirus or a bad disk, would other users still be able to check out the file(s)? That's the case here, which makes me think it's a permission issue specific to my account. However I've looked at the permissions and they match both other users' settings and settings on other files which I can check out.

    Read the article

  • [PHP] Local/Dev/Live deployment - best workflow

    - by Adam Kiss
    Hello. Situation: we are a little company with 3 people, each has a localhost webserver, and most projects (previous and current) are on one network-shared disk. We have a virtual server, which hosts some of our clients' sites and our own site. Our standard workflow is: Coder PC -> Programmer localhost -> dev domain (client.company.com) -> live version (client.com). It often happens that there are two or three guys working on the same project at the same time - one is on the dev version, two are on localhost. When finished, we try to synchronize the files on the dev version and ideally not to mess up any files, which *knock knock* doesn't happen often. And then one of us deploys the dev version to the live webserver. Question: we are looking for a way to simplify this workflow while updating websites - ideally some sort of diff uploader or, more probably, a VCS (Git/SVN/...), but we are not completely sure where to begin or which way would be ideal. Therefore I ask you, fellow stackoverflowers, for your experience with website/application deployment and recommended workflows. We will probably also need to use a Mac in the process, so if that won't be a problem, even better. Thank you.

    Read the article

  • Using "from __future__ import division" in my program, but it isn't loaded with my program

    - by Sara Fauzia
    I wrote the following program in Python 2 to do Newton's method computations for my math problem set, and while it works perfectly, for reasons unbeknownst to me, when I initially load it in ipython with %run -i NewtonsMethodMultivariate.py, the Python 3 division is not imported. I know this because after I load my Python program, entering x**(3/4) gives "1". After manually importing the new division, then x**(3/4) remains x**(3/4), as expected. Why is this? # coding: utf-8 from __future__ import division from sympy import symbols, Matrix, zeros x, y = symbols('x y') X = Matrix([[x],[y]]) tol = 1e-3 def roots(h,a): def F(s): return h.subs({x: s[0,0], y: s[1,0]}) def D(s): return h.jacobian(X).subs({x: s[0,0], y: s[1,0]}) if F(a) == zeros(2)[:,0]: return a else: while (F(a)).norm() > tol: a = a - ((D(a))**(-1))*F(a) print a.evalf(10) I would use Python 3 to avoid this issue, but my Linux distribution only ships SymPy for Python 2. Thanks to the help anyone can provide. Also, in case anyone was wondering, I haven't yet generalized this script for nxn Jacobians, and only had to deal with 2x2 in my problem set. Additionally, I'm slicing the 2x2 zero matrix instead of using the command zeros(2,1) because SymPy 0.7.1, installed on my machine, complains that "zeros() takes exactly one argument", though the wiki suggests otherwise. Maybe this command is only for the git version.
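
    A likely explanation, sketched below: a __future__ import only affects the compilation unit that contains it, so it applies to the script executed by %run but not to code typed afterwards at the interactive prompt, which is compiled separately. A quick illustration outside IPython (Python 2):

        python -c "from __future__ import division; print 3/4"   # 0.75
        python -c "print 3/4"                                     # 0, classic integer division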

    Read the article

  • Bug Triage

    In this blog post brain dump, I'll attempt to describe the process my team tries to follow when dealing with new bug reports (specifically, code defect reports). This is not official Microsoft policy, just the way we do things… if you do things differently and want to share, you can do so at the bottom in the comments (or on your blog).

    Feature Triage Team
    A subset of the feature crew, the triage team (which has representations from the PM, Dev and QA disciplines), looks at all unassigned bugs at regular intervals. This can be weekly or daily (or other frequency) dependent on which part of the product cycle we are in and what the untriaged bug load looks like. They discuss each bug considering the evidence and make a decision of whether the bug goes from Not Yet Assigned to Assigned (plus the name of the DEV to fix this) or whether it goes from Active to Resolved (which means it gets assigned back to the requestor for closure or further debate if they were not present at the triage meeting). Close to critical milestones, the feature triage team needs to further justify bugs they take to additional higher-level triage teams.

    Bug Opened = Not Yet Assigned
    Someone (typically an SDET from the QA team) creates the bug item (e.g. in TFS), ensuring they populate all the relevant fields including: Title, Description, Repro Steps (including the Actual Result at the end of the steps), attachments of code and/or screenshots, Build number that they observed the issue in, regression details if applicable, how it was found, if a test case exists or needs to be created etc. They also indicate their opinion on the Priority and Severity. The bug status is left as Not Yet Assigned.

    "Issue" versus "Fix for issue"
    The solution to some bugs is easy to determine, e.g. "bug: the column name is misspelled". Obviously the fix is to correct the spelling – still, the triage team should be explicit and enter the correct spelling in the bug's Description. Note that a bad bug name here would be "bug: fix the spelling of the column" (it describes the solution, rather than the problem). Other solutions are trickier to establish, e.g. "bug: the column header is not accessible (can only be clicked on with the mouse, not reached via keyboard)". What is the correct solution here? The last thing to do is leave this undetermined and just assign it to a developer. The solution has to be entered in the description. Behind this type of a bug usually hides a spec defect or a new feature request. The person opening the bug should focus on describing the issue, rather than the solution. The person indicates what the fix is in their opinion by stating the Expected Result (immediately after stating the Actual Result). If they have a complex suggested solution, that should be split out in a separate part, but the triage team has the final say before assigning it. If the solution is lengthy/complicated to describe, the bug can be assigned to the PM. Note: the strict interpretation suggests that any bug with no clear, obvious solution is always a hole in the spec and should always go to the PM. This also ensures the spec gets updated.

    Not Yet Assigned - Not Yet Assigned (on someone else's plate)
    If the bug is observed in our feature, but the cause is actually another team, we change the Area Path (which is the way we identify teams in TFS) and leave it as Not Yet Assigned. The triage team may add more comments as appropriate including potentially changing the repro steps. In some cases, we may even resolve the bug in our area path and open a new bug in the area path of the other team. Even though there is no action on a dev on the team, the bug still needs to be tracked. One way of doing this is to implement some notification system that informs the team when the tracked bug changed status; another way is to occasionally run a global query (against all area paths) for bugs that have been opened by a member of the team and follow up with the current owners for stale bugs.

    Not Yet Assigned - Resolved
    This state transition can only be made by the Feature Triage Team.
    0. Sometimes the bug description is not clear and in that case it gets Resolved as More Information Needed, so the original requestor can provide it.
    After understanding what the bug item is about, the first decision is to determine whether it needs to go to a dev.
    1. If it is a known bug, it gets resolved as "Duplicate" and linked to the existing bug.
    2. If it is "By Design" it gets resolved as such, indicating that the triage team does not think this is a bug.
    3. If the bug does not repro on latest bits, it is resolved as "No Repro".
    4. The most painful: If it is decided that we cannot fix it for this release it gets resolved as "Postponed" or "Won't Fix". The former is typically due to resources and time constraints, while the latter is due to deciding that it is not important enough to consume our resources in any release (yes, not all bugs must be fixed!). For both cases, there are other factors that contribute to the decision such as: existence of a reasonable workaround, frequency we expect users to encounter the issue, dependencies on other team to offer a solution, whether it breaks a core scenario, whether it prohibits customer feedback on a major feature, is it a regression from a previous release, impact of the fix on other partner teams (e.g. User Education, User Experience, Localization/Globalization), whether this is the right fix, does the fix impact performance goals, and last but not least, severity of bug (e.g. loss of customer data, security threat, crash, hang).
    The bar for fixing a bug goes up as the release date approaches. The triage team becomes hardnosed about which bugs to take, while the developers are busy resolving assigned bugs thus everyone drives for Zero Bug Bounce (ZBB). ZBB is when you have 0 active bugs older than 48 hours.

    Not Yet Assigned - Assigned
    If the bug is something we decide to fix in this release and the solution is known, then it is assigned to a DEV. This is either the developer that will do the work, or a Lead that can further assign it to one of his developer team based on a load balancing algorithm of their choosing. Sometimes, the triage team needs the dev to do some investigation work before deciding whether to take the fix; similarly, the checkin for the fix may be gated on code review by the triage team. In these cases, these instructions are provided in the comments section of the bug and when the developer is done they notify the triage team for final decision. Additionally, a Priority and Severity (from 0 to 4) has to be entered, e.g. a P0 means "drop anything you are doing and fix this now" whereas a P4 is something you get to after all P0,1,2,3 bugs are fixed. From a testing perspective, if the bug was found through ad-hoc testing or an external team, the decision is made whether test cases should be added to avoid future regressions. This is communicated to the QA team.

    Assigned - Resolved
    When the developer receives the bug (they should be checking daily for new bugs on their plate looking at bugs in order of priority and from older to newer) they can send it back to triage if the information is not clear. Otherwise, they investigate the bug, setting the Sub Status to "Investigating"; if they cannot make progress, they set the Sub Status to "Blocked" and discuss this with triage or whoever else can help them get unblocked. Once they are unblocked, they set the Sub Status to "Working on Solution"; once they are code complete they send a code review request, setting the Sub Status to "Fix Available". After the iterative code review process is over and everyone is happy with the fix, the developer checks it in and changes the state of the bug from Active (and Assigned to them) to Resolved (and Assigned to someone else). The developer needs to ensure that when the status is changed to Resolved that it is assigned to a QA person. For example, maybe the PM opened the bug, but it should be a QA person that will verify the fix - the developer needs to manually change the assignee in that case. Typically the QA person will send an email to the original requestor notifying them that the fix is verified.

    Resolved - ??
    In all cases above, note that the final state was Resolved. What happens after that? The final step should be Closed. The bug is closed once the QA person verifying the fix is happy with it. If the person is not happy, then they change the state from Resolved to Active, thus sending it back to the developer. If the developer and QA person cannot reach agreement, then triage can be brought into it. An easy way to do that is change the status back to Not Yet Assigned with appropriate comments so the triage team can re-review. It is important to note that only QA can close a bug. That means that if the opener of the bug was a PM, when the bug gets resolved by the dev it may land on the PM's plate and after a quick review, the PM would re-assign to an SDET, which is the only role that can close bugs. One exception to this is if the person that filed the bug is external: in that case, we leave it Resolved and assigned to them and also send them a notification that they need to verify the fix. Another exception is if specialized developer knowledge is needed for verifying the bug fix (e.g. it was a refactoring suggestion bug typically not observable by the user) in which case it is fine to have a developer verify the fix, and ideally a different developer to the one that opened the bug.

    Other links on bug triage
    A quick search reveals that others have talked about this subject, e.g. here, here, here, here and here.

    Your take?
    If you have other best practices your team uses to deal with incoming bug reports, feel free to share in the comments below or on your blog. Comments about this post welcome at the original blog.

    Read the article

  • Subversion vision and roadmap

    - by gbjbaanb
    Recently C. Michael Pilato of the core Subversion team posted a mail to the Subversion dev mailing list suggesting a vision and roadmap for the future of Subversion. Naturally, he wanted as much feedback and response as possible, which is why I'm posting this here - to elicit some suggestions and contributions from you, the administrators of Subversion. Any comments are welcome, and I shall feed back a synopsis with a link to this question to the dev mailing list. Similarly, I've created a post on StackOverflow to get feedback from the programmer/user side of things too. So, without further ado: Vision. The first thing in his "vision statement" is: Subversion has no future as a DVCS tool. Let's just get that out there. At least two very successful such tools exist already, and to squeeze another horse into that race would be a poor investment of energy and talent. There's no need to suggest distributed features for Subversion. If you want a DVCS, there should be no ill-feeling if you migrate to Git, Mercurial or Bazaar. As he says, it's pointless trying to make SVN like them when they already exist, especially when there are different usage patterns that SVN should be targeting. The vision for Subversion is: Subversion exists to be universally recognized and adopted as an open-source, centralized version control system characterized by its reliability as a safe haven for valuable data; the simplicity of its model and usage; and its ability to support the needs of a wide variety of users and projects, from individuals to large-scale enterprise operations. Roadmap. Several ideas were suggested as being "very nice to have" and are offered as the starting point of a future roadmap. These are: Obliterate; Shelve/Checkpoint; Repository-dictated Configuration; Rename Tracking; Improved Merging; Improved Tree Conflict Handling; Enterprise Authentication Mechanisms; Forward History Searching; Log Message Templates; Repository-dictated Configuration. If anyone has suggestions to add, or comments on these, the Subversion community would welcome all of them. Community. And lastly, there was a call for more people to become involved with Subversion development. As with most OSS projects it can be daunting to join, but there is now a push for more to be done to help. If you feel like you can contribute, please do so.

    Read the article

  • Error installing scipy on Mountain Lion with Xcode 4.5.1

    - by Xster
    Environment: Mountain Lion 10.8.2, Xcode 4.5.1 command line tools, Python 2.7.3, virtualenv 1.8.2 and numpy 1.6.2 When installing scipy with pip install -e "git+https://github.com/scipy/scipy#egg=scipy-dev" on a fresh virtualenv. llvm-gcc: scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c In file included from /System/Library/Frameworks/vecLib.framework/Headers/vecLib.h:43, from /System/Library/Frameworks/Accelerate.framework/Headers/Accelerate.h:20, from scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:2: /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:51:23: error: immintrin.h: No such file or directory In file included from /System/Library/Frameworks/vecLib.framework/Headers/vecLib.h:43, from /System/Library/Frameworks/Accelerate.framework/Headers/Accelerate.h:20, from scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:2: /System/Library/Frameworks/vecLib.framework/Headers/vfp.h: In function ‘vceilf’: /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:53: error: incompatible types in return /System/Library/Frameworks/vecLib.framework/Headers/vfp.h: In function ‘vfloorf’: /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:54: error: incompatible types in return /System/Library/Frameworks/vecLib.framework/Headers/vfp.h: In function ‘vintf’: /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:55: error: ‘_MM_FROUND_TRUNC’ undeclared (first use in this function) /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:55: error: (Each undeclared identifier is reported only once /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:55: error: for each function it appears in.) /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:55: error: incompatible types in return /System/Library/Frameworks/vecLib.framework/Headers/vfp.h: In function ‘vnintf’: /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:56: error: ‘_MM_FROUND_NINT’ undeclared (first use in this function) /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:56: error: incompatible types in return In file included from /System/Library/Frameworks/vecLib.framework/Headers/vecLib.h:43, from /System/Library/Frameworks/Accelerate.framework/Headers/Accelerate.h:20, from scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:2: /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:51:23: error: immintrin.h: No such file or directory In file included from /System/Library/Frameworks/vecLib.framework/Headers/vecLib.h:43, from /System/Library/Frameworks/Accelerate.framework/Headers/Accelerate.h:20, from scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:2: /System/Library/Frameworks/vecLib.framework/Headers/vfp.h: In function ‘vceilf’: /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:53: error: incompatible types in return /System/Library/Frameworks/vecLib.framework/Headers/vfp.h: In function ‘vfloorf’: /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:54: error: incompatible types in return /System/Library/Frameworks/vecLib.framework/Headers/vfp.h: In function ‘vintf’: /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:55: error: ‘_MM_FROUND_TRUNC’ undeclared (first use in this function) /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:55: error: (Each undeclared identifier is reported only once /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:55: error: for each function it appears in.) 
/System/Library/Frameworks/vecLib.framework/Headers/vfp.h:55: error: incompatible types in return /System/Library/Frameworks/vecLib.framework/Headers/vfp.h: In function ‘vnintf’: /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:56: error: ‘_MM_FROUND_NINT’ undeclared (first use in this function) /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:56: error: incompatible types in return error: Command "/usr/bin/llvm-gcc -fno-strict-aliasing -Os -w -pipe -march=core2 -msse4 -fwrapv -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -Iscipy/sparse/linalg/eigen/arpack/ARPACK/SRC -I/Users/xiao/.virtualenv/lib/python2.7/site-packages/numpy/core/include -c scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c -o build/temp.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.o" failed with exit status 1 Is it supposed to be looking for headers from my system frameworks? Is the development version of scipy no longer good for the latest version of Mountain Lion/Xcode?
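
    A commonly reported workaround for this particular vecLib/immintrin.h failure on Mountain Lion was to build with clang instead of llvm-gcc; treat this as a hedged guess rather than a guaranteed fix:

        export CC=clang
        export CXX=clang
        pip install -e "git+https://github.com/scipy/scipy#egg=scipy-dev"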

    Read the article

  • Autoloading Development or Production configs (best practices)

    - by Xeoncross
    When programming sites you usually have one set of config files for the development environment and another set for the production server (or one file with both settings). I am assuming all projects should be handled by version control like git or svn; manual file transfers (like FTP) are wrong on so many levels. How you enable/disable the correct settings (so that your system knows which ones to use) is a problem for me. Each system I work on just kind of jimmy-rigs a solution. Below are the 3 methods I know of, and I am hoping that someone can submit a more elegant solution. 1) File Based: The system loads a folder structure based on the URL requested. /site.com /site.fakeTLD /lib index.php For example, if the url is http://site.com then the system loads the production config files located in the site.com folder. However, if I'm working on the site locally I visit http://site.fakeTLD to work on the local copy of the site. To set this up I edit my hosts file and add site.fakeTLD to point to my own computer (127.0.0.1/localhost) and then create a vhost in apache. So now I can work on the codebase locally and then push to the server without any trouble. The problem is that this is susceptible to a "host" injection attack. So someone loading site.com could set the host to site.fakeTLD and then the system would load my development config files instead of production. 2) Config Based: The config files contain one section for development and one for production. The problem is that each time you go to push your changes to the repo you have to edit the file to specify which set of config options should be used. $use = 'production'; //'development'; This leaves the repo open to human error should one of the developers forget to enable the right setting. 3) File System Check Based: All the development machines have an extra empty file called "development.txt" or something. Each time the system loads it checks for this file - if found then it knows it is in development mode - if missing then it knows it is in production mode. Since the file is NEVER ADDED to the repo then it will never be pushed (and checked out) on the production machine. However, this just doesn't feel right and causes a slight slowdown since all filesystem checks are slow.
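
    To make method 3 concrete, a hypothetical sketch (file and variable names are invented); the marker file is listed in the VCS ignore file so it can never reach production:

        # on development machines only
        touch development.txt
        echo "development.txt" >> .gitignore
        # at bootstrap time, pick the environment from the marker's presence
        if [ -f development.txt ]; then APP_ENV=development; else APP_ENV=production; fi
        export APP_ENV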

    Read the article

  • configure squid3 to set up a web proxy in Ubuntu 12.04

    - by Gnijuohz
    I am in a LAN and have to use a proxy given to access the web in a very limited way. I can't even use google, github.com or SE sites. However I can use ssh to log into a server, which I have root access so basically I can do anything I want with it. So I was thinking that maybe I could use that server as a proxy so I can visit sites through it. I tested it using ssh -vT [email protected] which gave a proper response. And In my computer I can't do this. Also I tried downloading something from the gun.org using wget, which can't be done in my computer too. And it succeeded on that server. I don't know if that's enough to say that this server have full access to the Internet. But I assumed so and I installed squid3 on it. After trying some while, I failed to get it working. I got this after I run squid3 -k parse 2012/07/06 21:45:18| Processing Configuration File: /etc/squid3/squid.conf (depth 0) 2012/07/06 21:45:18| Processing: acl manager proto cache_object 2012/07/06 21:45:18| Processing: acl localhost src 127.0.0.1/32 ::1 2012/07/06 21:45:18| Processing: acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1 2012/07/06 21:45:18| Processing: acl localnet src 10.1.0.0/16 # RFC1918 possible internal network 2012/07/06 21:45:18| Processing: acl SSL_ports port 443 2012/07/06 21:45:18| Processing: acl Safe_ports port 80 # http 2012/07/06 21:45:18| Processing: acl Safe_ports port 21 # ftp 2012/07/06 21:45:18| Processing: acl Safe_ports port 443 # https 2012/07/06 21:45:18| Processing: acl Safe_ports port 70 # gopher 2012/07/06 21:45:18| Processing: acl Safe_ports port 210 # wais 2012/07/06 21:45:18| Processing: acl Safe_ports port 1025-65535 # unregistered ports 2012/07/06 21:45:18| Processing: acl Safe_ports port 280 # http-mgmt 2012/07/06 21:45:18| Processing: acl Safe_ports port 488 # gss-http 2012/07/06 21:45:18| Processing: acl Safe_ports port 591 # filemaker 2012/07/06 21:45:18| Processing: acl Safe_ports port 777 # multiling http 2012/07/06 21:45:18| Processing: acl CONNECT method CONNECT 2012/07/06 21:45:18| Processing: http_port 3128 transparent vhost vport 2012/07/06 21:45:18| Starting Authentication on port [::]:3128 2012/07/06 21:45:18| Disabling Authentication on port [::]:3128 (interception enabled) 2012/07/06 21:45:18| Disabling IPv6 on port [::]:3128 (interception enabled) 2012/07/06 21:45:18| Processing: cache_mem 1000 MB 2012/07/06 21:45:18| Processing: cache_swap_low 90 2012/07/06 21:45:18| Processing: coredump_dir /var/spool/squid3 2012/07/06 21:45:18| Processing: refresh_pattern ^ftp: 1440 20% 10080 2012/07/06 21:45:18| Processing: refresh_pattern ^gopher: 1440 0% 1440 2012/07/06 21:45:18| Processing: refresh_pattern -i (/cgi-bin/|?) 0 0% 0 2012/07/06 21:45:18| Processing: refresh_pattern (Release|Packages(.gz)*)$ 0 20% 2880 2012/07/06 21:45:18| Processing: refresh_pattern . 0 20% 4320 2012/07/06 21:45:18| Processing: ipcache_high 95 2012/07/06 21:45:18| Processing: http_access allow all I deleted some allow and deny rules and added http_access allow all so that all the request would be allowed. After configuring my computer, I got this error: Access control configuration prevents your request from being allowed at this time. Please contact your service provider if you feel this is incorrect. And the log in the server showed that my TCP requests had all been denied. So, first of all, is what I am trying to do achievable? If so, how to configure the squid in the server so that I use it as a proxy to surf the Internet? My computer and the server both run Ubuntu11.04. 
Thanks for any help~
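
    Two hedged checks that usually narrow this kind of problem down (the server address 10.1.2.3 is made up; the log path is the Ubuntu squid3 default):

        # from the LAN machine: ask squid explicitly and watch the verbose exchange
        curl -v -x http://10.1.2.3:3128 http://www.gnu.org/
        # on the server: watch what squid decides as the requests arrive
        sudo tail -f /var/log/squid3/access.log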

    Read the article

  • Mongodb: why is my mongo server using two PID's?

    - by Lucas
    I started my mongo with the following command: [lucas@ecoinstance]~/node/nodetest2$ sudo mongod --dbpath /home/lucas/node/nodetest2/data 2014-06-07T08:46:30.507+0000 [initandlisten] MongoDB starting : pid=6409 port=27017 dbpat h=/home/lucas/node/nodetest2/data 64-bit host=ecoinstance 2014-06-07T08:46:30.508+0000 [initandlisten] db version v2.6.1 2014-06-07T08:46:30.508+0000 [initandlisten] git version: 4b95b086d2374bdcfcdf2249272fb55 2c9c726e8 2014-06-07T08:46:30.508+0000 [initandlisten] build info: Linux build14.nj1.10gen.cc 2.6.3 2-431.3.1.el6.x86_64 #1 SMP Fri Jan 3 21:39:27 UTC 2014 x86_64 BOOST_LIB_VERSION=1_49 2014-06-07T08:46:30.509+0000 [initandlisten] allocator: tcmalloc 2014-06-07T08:46:30.509+0000 [initandlisten] options: { storage: { dbPath: "/home/lucas/n ode/nodetest2/data" } } 2014-06-07T08:46:30.520+0000 [initandlisten] journal dir=/home/lucas/node/nodetest2/data/ journal 2014-06-07T08:46:30.520+0000 [initandlisten] recover : no journal files present, no recov ery needed 2014-06-07T08:46:30.527+0000 [initandlisten] waiting for connections on port 27017 It appears to be working, as I can execute mongo and access the server. However, here are the process running mongo: [lucas@ecoinstance]~/node/testSite$ ps aux | grep mongo root 6540 0.0 0.2 33424 1664 pts/3 S+ 08:52 0:00 sudo mongod --dbpath /ho me/lucas/node/nodetest2/data root 6541 0.6 8.6 522140 52512 pts/3 Sl+ 08:52 0:00 mongod --dbpath /home/lu cas/node/nodetest2/data lucas 6554 0.0 0.1 7836 876 pts/4 S+ 08:52 0:00 grep mongo As you can see, there are two PID's for mongo. Before I ran sudo mongod --dbpath /home/lucas/node/nodetest2/data, there were none (besides the grep of course). How did my command spawn two PID's, and should I be concerned? Any suggestions or tips would be great. Additional Info In addition, I may have other issues that might suggest a cause. I tried running mongo with --fork --logpath /home/lucas..., but it did not work. More information below: [lucas@ecoinstance]~/node/nodetest2$ sudo mongod --dbpath /home/lucas/node/nodetest2/data --fork --logpath /home/lucas/node/nodetest2/data/ about to fork child process, waiting until server is ready for connections. forked process: 6578 ERROR: child process failed, exited with error number 1 [lucas@ecoinstance]~/node/nodetest2$ ls -l data/ total 163852 drwxr-xr-x 2 mongodb nogroup 4096 Jun 7 08:54 journal -rw------- 1 mongodb nogroup 67108864 Jun 7 08:52 local.0 -rw------- 1 mongodb nogroup 16777216 Jun 7 08:52 local.ns -rwxr-xr-x 1 mongodb nogroup 0 Jun 7 08:54 mongod.lock -rw------- 1 mongodb nogroup 67108864 Jun 7 02:08 nodetest1.0 -rw------- 1 mongodb nogroup 16777216 Jun 7 02:08 nodetest1.ns Also, my db path folder is not the original location. It was originally created under the default /var/lib/mongodb/ and moved to my local data folder. This was done after shutting down the server via /etc/init.d/mongod stop. I have a Debian Wheezy server, if it matters.
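
    For what it's worth, the ps listing above shows one entry for the sudo wrapper (6540) and one for what is presumably the mongod it spawned (6541); a quick way to confirm the parent/child relationship:

        ps -o pid,ppid,user,cmd -p 6540,6541   # the ppid of the mongod line should be the sudo pid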

    Read the article

  • Puppet Directory and File ownership ignored

    - by Phil Sturgeon
    Puppet seems to be lying to me, which is not very nice. I am trying to set some files and directories included in /vagrant/src to be 666 and 777, and set the ownership group to the correct Apache user (using the PuppetLabs Apache module). Output from Puppet says yes. [default] Running provisioner: Vagrant::Provisioners::Puppet... [default] Running Puppet with /tmp/vagrant-puppet/manifests/default.pp... stdin: is not a tty No LSB modules are available. warning: require is a metaparam; this value will inherit to all contained resources warning: notify is a metaparam; this value will inherit to all contained resources notice: /Stage[main]//File[/vagrant/src/addons/]/owner: owner changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/addons/]/group: group changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/addons/]/mode: mode changed '0755' to '0777' notice: /Stage[main]//Package[curl]/ensure: ensure changed 'purged' to 'present' notice: /Stage[main]//File[/vagrant/src/system/cms/config/]/owner: owner changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/system/cms/config/]/group: group changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/system/cms/config/]/mode: mode changed '0755' to '0777' notice: /Stage[main]//File[/vagrant/src/system/cms/config/config.php]/owner: owner changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/system/cms/config/config.php]/group: group changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/system/cms/cache/]/owner: owner changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/system/cms/cache/]/group: group changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/system/cms/cache/]/mode: mode changed '0755' to '0777' notice: /Stage[main]//File[/vagrant/src/uploads/]/owner: owner changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/uploads/]/group: group changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/uploads/]/mode: mode changed '0755' to '0777' notice: /Stage[main]/Apache/Service[httpd]/ensure: ensure changed 'stopped' to 'running' notice: /Stage[main]//File[/vagrant/src/assets/cache/]/owner: owner changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/assets/cache/]/group: group changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/assets/cache/]/mode: mode changed '0755' to '0777' notice: Finished catalog run in 2.29 seconds Output from ls -lah says no: $ ls -lah /vagrant/src/ total 36K drwxr-xr-x 1 vagrant vagrant 510 2012-07-03 00:11 . drwxr-xr-x 1 vagrant vagrant 340 2012-07-03 08:08 .. drwxr-xr-x 1 vagrant vagrant 136 2012-07-03 00:11 addons drwxr-xr-x 1 vagrant vagrant 102 2012-07-03 00:11 assets drwxr-xr-x 1 vagrant vagrant 510 2012-07-03 07:45 .git -rw-r--r-- 1 vagrant vagrant 1.3K 2012-07-03 00:11 .gitignore -rwxr-xr-x 1 vagrant vagrant 1.4K 2012-07-03 00:11 .htaccess -rwxr-xr-x 1 vagrant vagrant 8.8K 2012-07-03 00:11 index.php drwxr-xr-x 1 vagrant vagrant 442 2012-07-03 00:11 installer -rwxr-xr-x 1 vagrant vagrant 2.8K 2012-07-03 00:11 LICENSE -rw-r--r-- 1 vagrant vagrant 1.1K 2012-07-03 00:11 phpdoc.dist.xml -rw-r--r-- 1 vagrant vagrant 3.3K 2012-07-03 00:11 README.md drwxr-xr-x 1 vagrant vagrant 204 2012-07-03 00:11 system -rw-r--r-- 1 vagrant vagrant 42 2012-07-03 00:11 .travis.yml drwxr-xr-x 1 vagrant vagrant 102 2012-07-03 00:11 uploads Whats up with that? My entire config can be found here.
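
    One thing worth ruling out before blaming Puppet (a guess, not a diagnosis): on a stock VirtualBox setup /vagrant is a vboxsf shared folder, and chown/chmod issued inside the guest generally do not stick there; ownership and modes come from the mount options instead.

        mount | grep vagrant    # if this says "vboxsf", guest-side chown/chmod are effectively ignored
        # hypothetical mount options that would give the effect being attempted:
        #   uid=<www-data uid>,gid=<www-data gid>,dmode=777,fmode=666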

    Read the article

  • Suspend only works once after full power cycle with ASUS P7P55D-E Pro

    - by John Chadwick
    This one is strange. I can't seem to get suspend working more than once per power cycle. When I say "power cycle," I mean the only way to get one proper suspend is to cut power from the power supply and boot back up cold. After the proper suspend, I get a failed suspend, and after all reboots or cold boots until power is cut, suspends fail. I'm using an ASUS P7P55D-E Pro with a Sandy Bridge Core i7, running on Ubuntu Precise repositories and UEFI. I'm running Nouveau from repository (And Gallium3d compiled from git, but that does not come into this since I can avoid OpenGL and it still happens the same way) with a GTX 285 (nv50.) I had to build a custom kernel (3.3) in order for ACPI 5.0 to be supported and make suspend work at all. I compiled it using the latest Ubuntu kernel's config file with the additional entries set to the default options. All packages are up to date. I know these are relatively exotic settings, but I'm hoping maybe I can get some help anyways. The behavior when suspend fails is strange. Upon a proper suspend, all fans turn off and the only led left on, the power led, is blinking. Upon a failed suspend, 1. USB power remains. 2. The power led stays on solid. 3. All fans seem to still be on. 4. I can hear what I believe is the primary harddrive shutting off. 5. Despite USB power remaining, the USB powered keyboard does not respond to anything, and the indicator leds on it shut off. Pressing the power button does nothing, and of course I have not to date found a way to wake it up. When trouble shooting the first round of issues I got with suspend not too long ago, I ended up building a list of modules to disable upon sleeping. Here's my config file for them: In /etc/pm/config.d/01modules: SUSPEND_MODULES="uhci_hd ehci_hd button" All of my other pm configuration files are stock. In case it's any help, here are my relevant BIOS settings. Thanks.
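
    Some hedged data-gathering steps after a failed suspend (paths are the Ubuntu/pm-utils defaults and may differ on a custom kernel setup):

        cat /var/log/pm-suspend.log              # which pm-utils hooks ran and which one failed
        dmesg | grep -iE "suspend|acpi|pm:" | tail -n 50
        cat /sys/power/state                     # sleep states the kernel reports as supported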

    Read the article
