Search Results

Search found 15906 results on 637 pages for 'scott and the dev team'.


  • How to start working in a team on a project? [closed]

    - by Adio
    OK, for the first time in my life I am going to work in a team. The problem is that all the team members, including me, are junior developers, and for all of us this is the first time working in a team. We do not know how to start the project. We know a little about Scrum and agile development, we have our user stories in place and our project created - which does not contain even one piece of code - and now we are stuck and don't know what to do next. All the Scrum books talk about Scrum development but lack details like how to start. How will my team know which objects I am using? How will we push the first piece of code, and what should it be? Any help would be great: books, articles, advice, or anything.
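    On the "first push" question specifically, a minimal sketch of one common starting point, assuming Git (the post doesn't name a version control system; the server path and names below are illustrative): one team member publishes a shared bare repository and pushes a "walking skeleton" - the project layout and build script, with no features yet - so everyone has something concrete to clone and build on.

        # On the shared machine: create an empty repository everyone can push to
        git init --bare /srv/git/project.git

        # Each team member then clones it and works against it
        git clone ssh://user@server/srv/git/project.git
        cd project

        # The first push can simply be the project skeleton
        mkdir -p src test
        echo "project skeleton" > README
        git add .
        git commit -m "Walking skeleton: layout and build script, no features yet"
        git push origin master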

    Read the article

  • Should developers do their own software releases (if there is a prod support team in place)?

    - by leora
    I know there are always going to be differences depending on size, staff, etc., but I wanted to get feedback in general: in an environment where you have a production support team doing first-line support and release management, is it better to simply have developers manage their own releases instead? In this case it's internal software at an insurance company, but I think the question is valid at any company of any size. Currently our production team does releases, but there is an argument that this is inefficient, and that if you allowed developers to do it themselves, they would focus on making the process simple and efficient instead of just passing scripts to another team to run. The counter-argument is that without a check and balance, you could get a software team (or an individual) that does a very hacky job of getting their software out there (making on-the-fly changes, not documenting the process, etc.), and that forcing the prod support team to do the actual release enforces consistency and proper checks and balances. I know this is not a black-or-white issue, but I wanted to see what folks think: how do you keep the discipline and consistency without ending up with an inefficient process?

    Read the article

  • Issues with duplicating TFS 2005 to a virtual server

    - by Hitchhiker
    We have a problem with our current TFS installation. For some reason, which I won't get into, the SharePoint DBs (sts_content, sts_config) were upgraded to WSS 3 (MOSS). So now none of our team-project sites work, we have no access to our documents, and we can't create new team projects. We can still work with version control, though. We wanted to "play" with the server and try to fix it without affecting the users, so we used P2V for VMware to duplicate the server to a virtual one. We then changed all of the relevant configuration to point to the new server, as explained in the MSDN article. The step of rebuilding the warehouse (with setupwarehouse) failed. Also, we can't access the VersionControl web service (ourserver:8080/VersionControl/v1.0/repository.asmx). We are seeing errors in the event log:

    TF53002: Unable to obtain registration data for application Build.
    TF53005: Unable to retrieve the Team Foundation Server installed UI culture.
    TF53002: Unable to obtain registration data for application VersionControl.
    TF30040: The database is not correctly configured. Contact your Team Foundation Server administrator.

    The solutions suggested in this blog post did not help, so now we're kind of stuck. Any assistance will be appreciated.

    Read the article

  • Do I need to be worried about these SMART drive temperatures?

    - by Steve Lorimer
    I have 5 hard drives in a machine sitting in a cupboard. /dev/sda is a 500GB Seagate drive and is the boot disk; /dev/sd{b,c,d,e} are 2TB drives in a RAID-6 configuration. smartctl is showing significantly higher temperatures (~140 degrees Celsius) on the RAID drives than on the boot drive. Do I need to be worried? /dev/sdb and /dev/sde are new (one week old) Western Digital Black drives; /dev/sdc and /dev/sdd are 5-year-old Hitachi drives.

        /dev/sda [SAT], Temperature_Celsius changed from 40 to 39
        /dev/sdc [SAT], Temperature_Celsius changed from 142 to 146
        /dev/sdc [SAT], Temperature_Celsius changed from 146 to 142
        /dev/sdd [SAT], Temperature_Celsius changed from 142 to 146
        /dev/sda [SAT], Airflow_Temperature_Cel changed from 61 to 62
        /dev/sda [SAT], Temperature_Celsius changed from 39 to 38
        /dev/sde [SAT], Temperature_Celsius changed from 107 to 108
        /dev/sdb [SAT], Temperature_Celsius changed from 108 to 109
        /dev/sdc [SAT], Temperature_Celsius changed from 146 to 150
        /dev/sdc [SAT], Temperature_Celsius changed from 146 to 150
        /dev/sda [SAT], Airflow_Temperature_Cel changed from 62 to 61
        /dev/sda [SAT], Temperature_Celsius changed from 38 to 39

    Update: adding detailed drive information as requested:

        /dev/sda ===========================
        smartctl 6.0 2012-10-10 r3643 [x86_64-linux-3.9.10-100.fc17.x86_64] (local build)
        Copyright (C) 2002-12, Bruce Allen, Christian Franke, www.smartmontools.org
        === START OF INFORMATION SECTION ===
        Model Family:     Seagate Pipeline HD 5900.2
        Device Model:     ST3500312CS
        Serial Number:    5VV47HXA
        LU WWN Device Id: 5 000c50 02aad5ad6
        Firmware Version: SC13
        User Capacity:    500,107,862,016 bytes [500 GB]
        Sector Size:      512 bytes logical/physical
        Rotation Rate:    5900 rpm
        Device is:        In smartctl database [for details use: -P show]
        ATA Version is:   ATA8-ACS T13/1699-D revision 4
        SATA Version is:  SATA 2.6, 1.5 Gb/s (current: 1.5 Gb/s)
        Local Time is:    Tue Jun 3 10:54:11 2014 EST
        SMART support is: Available - device has SMART capability.
        SMART support is: Enabled

        /dev/sdb ===========================
        smartctl 6.0 2012-10-10 r3643 [x86_64-linux-3.9.10-100.fc17.x86_64] (local build)
        Copyright (C) 2002-12, Bruce Allen, Christian Franke, www.smartmontools.org
        === START OF INFORMATION SECTION ===
        Device Model:     WDC WD2003FZEX-00Z4SA0
        Serial Number:    WD-WMC1F1398726
        LU WWN Device Id: 5 0014ee 003b8bd25
        Firmware Version: 01.01A01
        User Capacity:    2,000,398,934,016 bytes [2.00 TB]
        Sector Sizes:     512 bytes logical, 4096 bytes physical
        Rotation Rate:    7200 rpm
        Device is:        Not in smartctl database [for details use: -P showall]
        ATA Version is:   ACS-2 (minor revision not indicated)
        SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s)
        Local Time is:    Tue Jun 3 10:54:11 2014 EST
        SMART support is: Available - device has SMART capability.
        SMART support is: Enabled

        /dev/sdc ===========================
        smartctl 6.0 2012-10-10 r3643 [x86_64-linux-3.9.10-100.fc17.x86_64] (local build)
        Copyright (C) 2002-12, Bruce Allen, Christian Franke, www.smartmontools.org
        === START OF INFORMATION SECTION ===
        Model Family:     Hitachi Deskstar 7K3000
        Device Model:     Hitachi HDS723020BLA642
        Serial Number:    MN1220F30WSTUD
        LU WWN Device Id: 5 000cca 369cc9f5d
        Firmware Version: MN6OA580
        User Capacity:    2,000,398,934,016 bytes [2.00 TB]
        Sector Size:      512 bytes logical/physical
        Rotation Rate:    7200 rpm
        Device is:        In smartctl database [for details use: -P show]
        ATA Version is:   ATA8-ACS T13/1699-D revision 4
        SATA Version is:  SATA 2.6, 6.0 Gb/s (current: 3.0 Gb/s)
        Local Time is:    Tue Jun 3 10:54:11 2014 EST
        SMART support is: Available - device has SMART capability.
        SMART support is: Enabled

        /dev/sdd ===========================
        smartctl 6.0 2012-10-10 r3643 [x86_64-linux-3.9.10-100.fc17.x86_64] (local build)
        Copyright (C) 2002-12, Bruce Allen, Christian Franke, www.smartmontools.org
        === START OF INFORMATION SECTION ===
        Model Family:     Hitachi Deskstar 7K3000
        Device Model:     Hitachi HDS723020BLA642
        Serial Number:    MN1220F30WST4D
        LU WWN Device Id: 5 000cca 369cc9f48
        Firmware Version: MN6OA580
        User Capacity:    2,000,398,934,016 bytes [2.00 TB]
        Sector Size:      512 bytes logical/physical
        Rotation Rate:    7200 rpm
        Device is:        In smartctl database [for details use: -P show]
        ATA Version is:   ATA8-ACS T13/1699-D revision 4
        SATA Version is:  SATA 2.6, 6.0 Gb/s (current: 1.5 Gb/s)
        Local Time is:    Tue Jun 3 10:54:11 2014 EST
        SMART support is: Available - device has SMART capability.
        SMART support is: Enabled

        /dev/sde ===========================
        smartctl 6.0 2012-10-10 r3643 [x86_64-linux-3.9.10-100.fc17.x86_64] (local build)
        Copyright (C) 2002-12, Bruce Allen, Christian Franke, www.smartmontools.org
        === START OF INFORMATION SECTION ===
        Device Model:     WDC WD2003FZEX-00Z4SA0
        Serial Number:    WD-WMC1F1483782
        LU WWN Device Id: 5 0014ee 3002d235c
        Firmware Version: 01.01A01
        User Capacity:    2,000,398,934,016 bytes [2.00 TB]
        Sector Sizes:     512 bytes logical, 4096 bytes physical
        Rotation Rate:    7200 rpm
        Device is:        Not in smartctl database [for details use: -P showall]
        ATA Version is:   ACS-2 (minor revision not indicated)
        SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 1.5 Gb/s)
        Local Time is:    Tue Jun 3 10:54:11 2014 EST
        SMART support is: Available - device has SMART capability.
        SMART support is: Enabled
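    A hedged note on reading these logs (an editorial addition, not from the post): smartd reports changes in SMART's normalized attribute values, which on some drive families are not plain degrees Celsius; the raw value column is usually the actual temperature. Assuming smartmontools is installed, one way to cross-check (device name as in the post):

        # Show the temperature attributes with both normalized and raw values
        smartctl -A /dev/sdc | grep -i -e temperature -e airflow

        # The extended output prints the same data with more decoding
        smartctl -x /dev/sdc | grep -i temperature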

    Read the article

  • JDK bug migration: components and subcomponents

    - by darcy
    One subtask of the JDK migration from the legacy bug tracking system to JIRA was reclassifying bugs from a three-level taxonomy in the legacy system (product, category, subcategory) to a fundamentally two-level scheme in our customized JIRA instance (component, subcomponent). In the JDK JIRA system there is technically a third, project-level classification, but by design a large majority of JDK-related bugs were migrated into a single "JDK" project. In the end, over 450 legacy subcategories were simplified into about 120 subcomponents in JIRA, distributed among 17 components. A rule of thumb was that a subcategory had to have at least 50 bugs in it to be retained. Below is a listing of the component / subcomponent classification of the JDK JIRA project, along with some notes and guidance on which OpenJDK email addresses cover different areas. Eventually, a separate incidents project to host new issues filed at bugs.sun.com will use a slightly simplified version of this scheme.

    The preponderance of bugs and subcomponents for the JDK are in library-related areas, with components named foo-libs and subcomponents primarily named after packages. While there was an overall condensation of subcomponents in the migration, in some cases long-standing informal divisions in core libraries based on naming conventions in the description were promoted to formal subcomponents. For example, hundreds of bugs in the java.util subcomponent whose descriptions started with "(coll)" were moved into java.util:collections. Likewise, java.lang bugs starting with "(reflect)" and "(proxy)" were moved into java.lang:reflect.

    client-libs (Predominantly discussed on 2d-dev, awt-dev, and swing-dev.)
        2d
        demo
        java.awt
        java.awt:i18n
        java.beans (See beans-dev.)
        javax.accessibility
        javax.imageio
        javax.sound (See sound-dev.)
        javax.swing

    core-libs (See core-libs-dev.)
        java.io
        java.io:serialization
        java.lang
        java.lang.invoke
        java.lang:class_loading
        java.lang:reflect
        java.math
        java.net
        java.nio (Discussed on nio-dev.)
        java.nio.charsets
        java.rmi
        java.sql
        java.sql:bridge
        java.text
        java.util
        java.util.concurrent
        java.util.jar
        java.util.logging
        java.util.regex
        java.util:collections
        java.util:i18n
        javax.annotation.processing
        javax.lang.model
        javax.naming (JNDI)
        javax.script
        javax.script:javascript
        javax.sql
        org.openjdk.jigsaw (See jigsaw-dev.)

    security-libs (See security-dev.)
        java.security
        javax.crypto (JCE: includes SunJCE/MSCAPI/UCRYPTO/ECC)
        javax.crypto:pkcs11 (JCE: PKCS11 only)
        javax.net.ssl (JSSE, includes javax.security.cert)
        javax.security
        javax.smartcardio
        javax.xml.crypto
        org.ietf.jgss
        org.ietf.jgss:krb5

    other-libs
        corba
        corba:idl
        corba:orb
        corba:rmi-iiop
        javadb
        other (When no other subcomponent is more appropriate; use judiciously.)

    Most of the subcomponents in the xml component are related to JAXP.

    xml
        jax-ws
        jaxb
        javax.xml.parsers (JAXP)
        javax.xml.stream (JAXP)
        javax.xml.transform (JAXP)
        javax.xml.validation (JAXP)
        javax.xml.xpath (JAXP)
        jaxp (JAXP)
        org.w3c.dom (JAXP)
        org.xml.sax (JAXP)

    For OpenJDK, most JVM-related bugs are connected to the HotSpot Java virtual machine.

    hotspot (See hotspot-dev.)
        build
        compiler (See hotspot-compiler-dev.)
        gc (garbage collection, see hotspot-gc-dev.)
        jfr (Java Flight Recorder)
        jni (Java Native Interface)
        jvmti (JVM Tool Interface)
        mvm (Multi-Tasking Virtual Machine)
        runtime (See hotspot-runtime-dev.)
        svc (Serviceability)
        test

    core-svc (See serviceability-dev.)
        debugger
        java.lang.instrument
        java.lang.management
        javax.management
        tools

    The full JDK bug database contains entries related to legacy virtual machines that predate HotSpot, as well as retired APIs.

    vm-legacy
        jit (Sun Exact VM)
        jit_symantec (Symantec VM, before Exact VM)
        jvmdi (JVM Debug Interface)
        jvmpi (JVM Profiler Interface)
        runtime (Exact VM Runtime)

    Notable command-line tools in the $JDK/bin directory have corresponding subcomponents.

    tools
        appletviewer
        apt (See compiler-dev.)
        hprof
        jar
        javac (See compiler-dev.)
        javadoc(tool) (See compiler-dev.)
        javah (See compiler-dev.)
        javap (See compiler-dev.)
        jconsole
        launcher
        updaters (Timezone updaters, etc.)
        visualvm

    Some aspects of JDK infrastructure directly affect JDK Hg repositories, but others do not.

    infrastructure
        build (See build-dev and build-infra-dev.)
        licensing (Covers updates to the third-party readme, licenses, and similar files.)
        release_eng (Release engineering)
        staging (Staging of web pages related to JDK releases.)

    The specification component encompasses the formal language and virtual machine specifications.

    specification
        language (The Java Language Specification)
        vm (The Java Virtual Machine Specification)

    The code for the deploy and install areas is not currently included in OpenJDK.

    deploy
        deployment_toolkit
        plugin
        webstart

    install
        auto_update
        install
        servicetags

    In the JDK there are a number of cross-cutting concerns whose organization is essentially orthogonal to other areas. Since these areas generally have dedicated teams working on them, it is easier to find bugs of interest if these bugs are grouped first by their cross-cutting component rather than by the affected technology.

    docs
        doclet
        guides
        hotspot
        release_notes
        tools
        tutorial

    embedded
        build
        hotspot
        libraries

    globalization
        locale-data
        translation

    performance
        hotspot
        libraries

    The list of subcomponents will no doubt grow over time, but my inclination is to resist that growth, since the addition of each subcomponent makes the system as a whole more complicated and harder to use. When the system gets closer to being externalized, I plan to post more blog entries describing recommended use of various custom fields in the JDK project.

    Read the article

  • Gallio MbUnit and Team City problem

    - by Bernard Larouche
    I asked a question this morning about an integration problem between Gallio and TeamCity. I changed the MSBuild file to use the proper syntax for the latest Gallio build script API (thank you for that, Jeff Brown), but now when I try to build the application on TeamCity I get the following error:

        An unexpected error occurred during execution of the Gallio task.
        [16:19:49]: [Project "CoderForTraders.msbuild.teamcity.patch.tcprojx" (RebuildSolution;RunTests target(s)):] C:\TeamCity\buildAgent\work\fa1d38b0af329d65\CoderForTraders.msbuild(9, 9): FilterParseException: Colon expected

    Here's line 9:

        <Gallio IgnoreFailures="true" Filter="Type=SomeFixture" Files="@(TestFile)">

    and here is the whole file:

        <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
          <!-- This is needed by MSBuild to locate the Gallio task -->
          <UsingTask AssemblyFile="C:\Gallio\bin\Gallio.MSBuildTasks.dll" TaskName="Gallio" />
          <!-- Specify the test files and assemblies -->
          <ItemGroup>
            <TestFile Include="C:\_CBL\CBL\CoderForTraders\Source\trunk\UnitTest\DomainModel.Tests\bin\Debug\CBL.CoderForTraders.DomainModel.Tests.dll" />
          </ItemGroup>
          <Target Name="RunTests">
            <Gallio IgnoreFailures="true" Filter="Type=SomeFixture" Files="@(TestFile)">
              <!-- This tells MSBuild to store the output value of the task's ExitCode property into the project's ExitCode property -->
              <Output TaskParameter="ExitCode" PropertyName="ExitCode"/>
            </Gallio>
            <Error Text="Tests execution failed" Condition="'$(ExitCode)' != 0" />
          </Target>
          <Target Name="RebuildSolution">
            <Message Text="Starting to Build"/>
            <MSBuild Projects="CoderForTraders.sln" Properties="Configuration=Debug" Targets="Rebuild" />
          </Target>
        </Project>

    Do you have an idea about the possible problem?
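    A hedged reading of that error, inferred from the message rather than a confirmed fix: Gallio's filter grammar expects a colon between key and value, so "Colon expected" at position (9, 9) suggests trying Filter="Type: SomeFixture" instead of Filter="Type=SomeFixture" on line 9.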

    Read the article

  • Ember.js on Symfony2 dev environment doesn't work properly

    - by rkmax
    I've built an app with Symfony2; the app exposes a REST API. Now I'm building a simple client to consume it.

    app.coffee - app.js:

        App = Em.Application.create
          ready: ->
            @.entradas.load()
          Entrada: Em.Object.extend()
          entradas: Em.ArrayController.create
            content: []
            load: ->
              url = 'http://localhost/api/1/entrada'
              me = @
              $.ajax(
                url: url,
                method: 'GET',
                success: (data) ->
                  me.set('content', [])
                  for entrada in data.data.objects
                    me.pushObject DBPlus.Entrada.create(entrada)
              )

    MyBundle:Home:index.html.twig:

        <script type="text/x-handlebars" src="{{ asset('js/templates/entradas.hbs') }}"></script>
        <script src="{{ asset('js/libs/jquery-1.7.2.min.js') }}"></script>
        <script src="{{ asset('js/libs/handlebars-1.0.0.beta.6.js') }}"></script>
        <script src="{{ asset('js/libs/ember-1.0.pre.min.js') }}"></script>
        <script src="{{ asset('js/app.js') }}"></script>

    The problem is that when I run in the dev environment and link the template like <script type="text/x-handlebars" src="{{...}}">, the app doesn't work: nothing shows. It works fine in the prod environment. The only way it works in the dev environment is with an inline template:

    MyBundle:Home:index.html.twig:

        <script type="text/x-handlebars">
        {% raw %}
          <ul class="entradas">
          {{#each App.entradas}}
            <li class="entrada">{{nombre}}</li>
          {{/each}}
          </ul>
        {% endraw %}
        </script>

    Can anyone explain this behavior? Note: I disabled the debug profiler toolbar, and nothing changed.

    Read the article

  • Looking for a good dev environment for OSGi bundles

    - by Riduidel
    Hi, I'm currently investigating dev environments for OSGi bundles. My goal is to find a way to develop, test, and debug the bundles I'll be coding with ease. Besides, I have some "cultural" requirements. I want to be able to use Java continuous integration servers (typically Hudson). As a consequence of that first requirement, I want a repeatable, one-click build process; my typical tool for that is Maven. And finally, being a long-term Eclipse user, with m2eclipse at hand to merge my Eclipse environment with my Maven one, I obviously want to be able to test and debug with that IDE. So far, here is what I know I can use (and have already tested): maven-bundle-plugin and maven-ipojo-plugin, which both offer clean packaging facilities. I have tested Maven Pax (and Eclipse Pax) and am not really satisfied with either: Maven Pax generates a very heavy project, where adding dependencies is very error-prone (the maven pax:import-bundle command line, with all its arguments, is a hell per se). I have taken a look at Karaf, which seems to have some nice direct Maven provisioning, but I don't know how to integrate it with my Eclipse besides using the traditional JPDA bridge. However, it seems to be more production-oriented than dev-oriented, and as such may require heavy configuration to fit my needs (although reading its user manual doesn't reveal that). Have you got any ideas? Some Maven/Eclipse plugins?

    Read the article

  • Git from localhost to remotehost with a team of three

    - by Mark McDonnell
    Hi, I'm completely new to Git. I've only just worked out how to use GitHub in a basic way (e.g. pushing my local file changes to GitHub; I've not done any pulling of content down from GitHub and merging it into my localhost version, or anything like that). I had a look at this existing question - Git: localhost remote development remote production - but I think it may be a bit advanced for me at this stage, as I didn't quite understand the terminology most of the people were using.

    What I would like to achieve is a local server that my team of developers can all push to and pull from, and which then uploads any updated files automatically to our web server so we can see the updates live in the browser. I'm happy to set up a server in the office running Mac OS X Server, install Git on it, and have the devs write a shell script to push to the remote server, but only if it is fairly easy for the devs' local Git to push to this new local server. I'm not a network engineer, so I don't know what would need to be set up for that to work. I know we could make the server accessible via a local IP address like 192.168.0.xxx, but I'm not sure how that works with pushing to a Git repository on that server. Would that literally be something like doing this on my local machine:

        git remote add MyGitFile git://192.168.0.xxx/MyGitFile.git

    Any ideas or advice you can give to a total Git newbie trying to help his team get a better workflow would be appreciated. Kind regards, Mark
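    A sketch of the simplest setup matching this description (an editorial assumption, not from the post; the IP, user names, and paths are illustrative): host a bare repository on the office server over SSH, have everyone push to it, and let a post-receive hook copy the latest tree to the web server.

        # One-time, on the office server (192.168.0.50 is a made-up address)
        ssh user@192.168.0.50 'git init --bare /srv/git/project.git'

        # On each developer's machine
        git remote add office ssh://user@192.168.0.50/srv/git/project.git
        git push office master

        # On the office server: a post-receive hook that deploys on every push
        # (assumes /srv/www/staging already exists as a work tree)
        cat > /srv/git/project.git/hooks/post-receive <<'EOF'
        #!/bin/sh
        # Check out the pushed tree and copy it to the web server
        GIT_WORK_TREE=/srv/www/staging git checkout -f master
        rsync -az --delete /srv/www/staging/ deploy@webserver:/var/www/site/
        EOF
        chmod +x /srv/git/project.git/hooks/post-receive

    Using SSH avoids running a separate git daemon for the git:// protocol, and the hook keeps the deployment step out of each developer's hands.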

    Read the article

  • Debugging Post Request with Chrome Dev Tools

    - by benek
    I am trying to use Chrome dev tools to debug the following Angular POST request:

        $http.post("http://picjboss.puma.lan:8880/fluxpousse/api/flow/createOrUpdateHeader", flowHeader)

    After running the statement with right-click / Evaluate, I can see the POST in the network panel in a pending state. How can I get the result, or "commit" the request, and easily leave this "pending" state from the dev console? I am not yet very familiar with JS callbacks; some code would be appreciated. Thanks.

    EDIT: I have tried to run this from the console:

        $scope.$apply(function(){$http.post("http://picjboss.puma.lan:8880/fluxpousse/api/flow/createOrUpdateHeader", flowHeader).success(function(data){console.log("success "+data)}).error(function(data){console.log("error "+data)})})

    It returns: undefined

    EDIT: The POST I am trying to debug generates an HTTP 400. Here is the result:

        Request URL: http://picjboss.puma.lan:8880/fluxpousse/api/flow/createOrUpdateHeader
        Request Method: POST
        Status Code: 400 Mauvaise Requête

        Request Headers:
        Accept: application/json, text/plain, */*
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: fr-FR,fr;q=0.8,en-US;q=0.6,en;q=0.4
        Connection: keep-alive
        Content-Length: 5354
        Content-Type: application/json;charset=UTF-8
        Cookie: JSESSIONID=285AF523EA18C0D7F9D581CDB2286C56
        Host: picjboss.puma.lan:8880
        Origin: http://picjboss.puma.lan:8880
        Referer: http://picjboss.puma.lan:8880/fluxpousse/
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/30.0.1599.101 Safari/537.36
        X-Requested-With: XMLHttpRequest

        Request Payload:
        {refHeader: IDSFP, idEntrepot: 619, codeEntreprise: null, codeBanniere: null, codeArticle: 7, ...}
          cessionPrice: 78
          codeArticle: "7"
          codeBanniere: null
          codeDateAppro: null
          codeDateDelivery: null
          codeDatePrepa: null
          codeEntreprise: null
          codeFournisseur: null
          codeUtilisateur: null
          codeUtilisateurLastUpdate: null
          createDate: null
          dateAppro: null
          dateDelivery: null
          datePrepa: null
          hasAssortControl: null
          hasCadenceForce: null
          idEntrepot: 619
          isFreeCost: null
          labelArticle: "Mayonnaise de DIJON"
          labelFournisseur: null
          listDetail: [, ...]
          pcbArticle: 12
          pvc: 78
          qte: 78
          refCommande: "ref"
          refHeader: "IDSFP"
          state: "CREATED"
          stockArticle: 1200
          updateDate: null

        Response Headers:
        Connection: close
        Content-Length: 996
        Content-Type: text/html;charset=utf-8
        Date: Fri, 08 Nov 2013 15:19:30 GMT
        Server: Apache-Coyote/1.1
        X-Powered-By: Servlet 2.5; JBoss-5.0/JBossWeb-2.1

    Read the article

  • Is it a wise decision to go from dev to third line/Tier 3 dev support?

    - by dotnetdev
    Hi, I am an experienced mid-level developer. I recently spotted a job at a company which is small but puts a lot of emphasis on training (beyond basic technical training: mentoring, leadership training, etc.). The role is third line, so still very technical. It's in app support, so it's post-implementation development rather than pure out-and-out development like I do now (or don't, as the senior devs do all of the interesting work). However - and this is the question - is this sort of career move common? Also, wouldn't a tech support role be a big shock to the system, given that I've never dealt with customers? Should I therefore consider it a bad move? Working in dev, I am used to the lack of customer contact; it is all filtered through my manager. In tech support, contact with customers, including rude ones, could be scary. I don't mind fixing other people's mistakes (better than making my own!) or doing post-implementation dev for production systems (it will give me a lot of discipline), and I do get bored sitting in the same place looking at and talking to the same people in suits (I work in a corporate environment). The company puts a lot of emphasis on training and prospects, which I don't get at the current (big) company I work for. Any advice on how to handle tech support is appreciated! Thanks

    Read the article

  • Git on Windows 7 expecting Linux? /dev/null not found error

    - by Klikini
    I have installed Git (not GitHub) on Windows 7 x64 Home Premium, and I cannot get it to work. Opening Git Bash outputs the following:

        Welcome to Git (version 1.9.4-preview20140815)

        Run 'git help git' to display the help index.
        Run 'git help <command>' to display help for specific commands.

        sh.exe": /dev/null: No such file or directory
        sh.exe": /dev/null: No such file or directory
        sh.exe": /dev/null: No such file or directory
        sh.exe": /dev/null: No such file or directory
        sh.exe": /dev/null: No such file or directory
        sh.exe": /dev/null: No such file or directory

        Andy@ANDY-DELL ~
        $

    If I open the Git GUI, I get this message box:

        Title: git-gui: fatal error
        Content: fatal: open /dev/null or dup failed: No such file or directory
        Git Gui requires Git 1.5.0 or later.

    I also tried GitHub for Windows, but I got an internet connection error when attempting to clone a repo, even though my connection is fine. Is this possibly related? I have learned so far that /dev/null is the Linux equivalent of the Windows NUL, but why is Git looking for it on Windows? Thanks in advance.

    Read the article

  • Problem restoring from tar backup: why are there /dev/disk/by-id/ symlinks and how can I avoid them?

    - by SK.
    Hello, I'm trying to make a bare-bones backup system with the most basic tools available on openSUSE 11.3 (in this case: bash, fdisk, tar & GRUB legacy). Here's the workflow for my scripts:

    backup.sh (run from an external system, e.g. a LiveCD):
    1. make an fdisk script ($fscript) from fdisk -l's output [works]
    2. mount the partitions from the system's fstab [works]
    3. tar the crucial stuff into file.tgz [works]

    restore.sh (run from an external system, e.g. a LiveCD):
    1. run fdisk $dest < $fscript to restore partitioning [works]
    2. format and mount partitions from the system's fstab [fails]
    3. extract from file.tgz [works when mounting manually]
    4. restore GRUB [fails]

    I have recently noticed that openSUSE (though I'm sure it has nothing to do with the distro) uses a different notation in /etc/fstab and /boot/grub/menu.lst; more precisely, the partition name is for example "/dev/disk/by-id/numbers-brandname-morenumbers-part2" instead of "/dev/sda2" - but it is basically a simple symlink. My questions about this:

    1. What is the point of such symlinks, especially if we're restoring on a different disk?
    2. Is there a way to cleanly prevent the creation of those symlinks and use the "true" /dev/sdx everywhere instead?
    3. If the answer to the previous question is no, do you know a way to replace those symlinks on the fly in a text file? I tried the script below, but it only works if the line starts with the symlink (the case for fstab, not menu.lst):

        ### search and replace /dev/disk/by-id/... to /dev/sdx
        while read oldVolume rest; do # get first element, ignore rest of line
            if [[ "$oldVolume" =~ ^/dev/disk/by-id/.*(-part[0-9]*$)? ]]; then
                newVolume=$(readlink $oldVolume) # replace pointer by pointee, returns "../../sdx"
                echo /dev/${newVolume##*/} $rest >> TMP # format to "/dev/sdx", write line
            else
                echo $oldVolume $rest >> TMP # nothing to do
            fi
        done < $file
        mv -f TMP $file # save changes

    I've had trouble finding a solution to this on Google, so I was hoping some of the members here could help me. Thank you.
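    A hedged alternative for question 3 (an editorial sketch, not from the post): rather than parsing each line into fields, rewrite every by-id path wherever it occurs in the line, deriving the substitutions from the symlinks themselves. This handles menu.lst-style lines where the path is not the first word.

        # $file is the text file to rewrite, as in the original script
        for link in /dev/disk/by-id/*; do
            target=$(readlink -f "$link")        # resolves to e.g. /dev/sda2
            sed -i "s|$link|$target|g" "$file"   # '|' delimiter avoids escaping the slashes
        done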

    Read the article

  • dd clone hard drive: Input/Output Error though "chkdsk" says OK

    - by unknown (google)
    Hi, I've used dd and a live CD to clone hard drives before, but have run into a problem.

    The issue: dd fails with an "Input/Output Error" on /dev/sda3, even though Windows check disk (chkdsk) says the partition is OK.

    Context:
    - Trying to replace my laptop hard drive with a faster one of the same size
    - The laptop has NTFS on a 320GB hard drive
    - Booting into Knoppix
    - Knoppix recognizes the original drive (/dev/sda)
    - I am using a USB connection for the new drive (irrelevant, but just an FYI)
    - Knoppix recognizes the USB drive as /dev/sdb

    I am using dd as follows:

        dd if=/dev/sda of=/dev/sdb

    dd gives the I/O error above at 82GB (out of 320GB). I then tried checking each partition as follows and found that it failed on /dev/sda3:

        dd if=/dev/sda1 of=/dev/null
        dd if=/dev/sda2 of=/dev/null
        dd if=/dev/sda3 of=/dev/null

    I have run Windows XP chkdsk on the offending drive in both "find only" and "find and fix" modes, and it reports no errors.

    Question: how can I find and fix the error on my original hard drive partition (i.e. /dev/sda3) so that dd reads it successfully?
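    A common workaround when dd hits unreadable sectors (an editorial assumption, not from the post): tell dd to keep going and pad the bad blocks with zeros, or use GNU ddrescue, which retries and keeps a log of the bad areas.

        # Continue past read errors, padding unreadable blocks with zeros
        dd if=/dev/sda3 of=/dev/sdb3 bs=4096 conv=noerror,sync

        # Or let ddrescue handle retries and record bad spots in a map file
        ddrescue /dev/sda /dev/sdb rescue.log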

    Read the article

  • Various problems with software raid1 array built with Samsung 840 Pro SSDs

    - by Andy B
    I am bringing to Server Fault a problem that has been tormenting me for 6+ months. I have a CentOS 6 (64-bit) server with an md software RAID-1 array of 2 x Samsung 840 Pro SSDs (512GB).

    Problems: serious write speed problems:

        root [~]# time dd if=arch.tar.gz of=test4 bs=2M oflag=sync
        146+1 records in
        146+1 records out
        307191761 bytes (307 MB) copied, 23.6788 s, 13.0 MB/s
        real 0m23.680s
        user 0m0.000s
        sys 0m0.932s

    When doing the above (or any other larger copy), the load spikes to unbelievable values (even over 100), up from ~1. When doing the above I've also noticed very weird iostat results:

        Device: rrqm/s  wrqm/s   r/s     w/s    rsec/s   wsec/s  avgrq-sz avgqu-sz  await  svctm  %util
        sda      0.00  1589.50   0.00   54.00     0.00  13148.00   243.48     0.60  11.17   0.46   2.50
        sdb      0.00  1627.50   0.00   16.50     0.00   9524.00   577.21   144.25 1439.33  60.61 100.00
        md1      0.00     0.00   0.00    0.00     0.00      0.00     0.00     0.00   0.00   0.00   0.00
        md2      0.00     0.00   0.00 1602.00     0.00  12816.00     8.00     0.00   0.00   0.00   0.00
        md0      0.00     0.00   0.00    0.00     0.00      0.00     0.00     0.00   0.00   0.00   0.00

    And it keeps this up until it actually writes the file to the device (out of swap/cache/memory). The problem is that the second SSD in the array has svctm and await roughly 100 times larger than the first.

    For some reason the wear is different between the 2 members of the array:

        root [~]# smartctl --attributes /dev/sda | grep -i wear
        177 Wear_Leveling_Count 0x0013 094% 094 000 Pre-fail Always - 180
        root [~]# smartctl --attributes /dev/sdb | grep -i wear
        177 Wear_Leveling_Count 0x0013 070% 070 000 Pre-fail Always - 1005

    The first SSD has a wear of 6% while the second SSD has a wear of 30%!! It's like the second SSD in the array works at least 5 times as hard as the first one, as shown by the first iteration of iostat (the averages since reboot):

        Device: rrqm/s wrqm/s     r/s     w/s   rsec/s  wsec/s avgrq-sz avgqu-sz await svctm %util
        sda      10.44  51.06  790.39  125.41  8803.98 1633.11    11.40     0.33  0.37  0.06  5.64
        sdb       9.53  58.35  322.37  118.11  4835.59 1633.11    14.69     0.33  0.76  0.29 12.97
        md1       0.00   0.00    1.88    1.33    15.07   10.68     8.00     0.00  0.00  0.00  0.00
        md2       0.00   0.00 1109.02  173.12 10881.59 1620.39     9.75     0.00  0.00  0.00  0.00
        md0       0.00   0.00    0.41    0.01     3.10    0.02     7.42     0.00  0.00  0.00  0.00

    What I've tried:
    - I've updated the firmware to DXM05B0Q (following reports of dramatic improvements for 840 Pros after this update).
    - I have looked for "hard resetting link" in dmesg to check for cable/backplane issues, but found nothing.
    - I have checked the alignment and I believe the drives are aligned correctly (1MB boundary, listing below).
    - I have checked /proc/mdstat and the array is optimal (second listing below).

        root [~]# fdisk -ul /dev/sda
        Disk /dev/sda: 512.1 GB, 512110190592 bytes
        255 heads, 63 sectors/track, 62260 cylinders, total 1000215216 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00026d59

        Device Boot      Start        End    Blocks     Id  System
        /dev/sda1         2048    4196351    2097152    fd  Linux raid autodetect
        Partition 1 does not end on cylinder boundary.
        /dev/sda2 *    4196352    4605951     204800    fd  Linux raid autodetect
        Partition 2 does not end on cylinder boundary.
        /dev/sda3      4605952  814106623  404750336    fd  Linux raid autodetect

        root [~]# fdisk -ul /dev/sdb
        Disk /dev/sdb: 512.1 GB, 512110190592 bytes
        255 heads, 63 sectors/track, 62260 cylinders, total 1000215216 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0003dede

        Device Boot      Start        End    Blocks     Id  System
        /dev/sdb1         2048    4196351    2097152    fd  Linux raid autodetect
        Partition 1 does not end on cylinder boundary.
        /dev/sdb2 *    4196352    4605951     204800    fd  Linux raid autodetect
        Partition 2 does not end on cylinder boundary.
        /dev/sdb3      4605952  814106623  404750336    fd  Linux raid autodetect

    /proc/mdstat:

        root # cat /proc/mdstat
        Personalities : [raid1]
        md0 : active raid1 sdb2[1] sda2[0]
              204736 blocks super 1.0 [2/2] [UU]
        md2 : active raid1 sdb3[1] sda3[0]
              404750144 blocks super 1.0 [2/2] [UU]
        md1 : active raid1 sdb1[1] sda1[0]
              2096064 blocks super 1.1 [2/2] [UU]
        unused devices: <none>

    Running a read test with hdparm:

        root [~]# hdparm -t /dev/sda
        /dev/sda: Timing buffered disk reads: 664 MB in 3.00 seconds = 221.33 MB/sec
        root [~]# hdparm -t /dev/sdb
        /dev/sdb: Timing buffered disk reads: 288 MB in 3.01 seconds = 95.77 MB/sec

    But look what happens if I add --direct:

        root [~]# hdparm --direct -t /dev/sda
        /dev/sda: Timing O_DIRECT disk reads: 788 MB in 3.01 seconds = 262.08 MB/sec
        root [~]# hdparm --direct -t /dev/sdb
        /dev/sdb: Timing O_DIRECT disk reads: 534 MB in 3.02 seconds = 176.90 MB/sec

    Both tests increase, but /dev/sdb doubles while /dev/sda increases by maybe 20%. I just don't know what to make of this. As suggested by Mr. Wagner, I've done another read test, with dd this time, and it confirms the hdparm test:

        root [/home2]# dd if=/dev/sda of=/dev/null bs=1G count=10
        10+0 records in
        10+0 records out
        10737418240 bytes (11 GB) copied, 38.0855 s, 282 MB/s
        root [/home2]# dd if=/dev/sdb of=/dev/null bs=1G count=10
        10+0 records in
        10+0 records out
        10737418240 bytes (11 GB) copied, 115.24 s, 93.2 MB/s

    So sda is 3 times faster than sdb. Or maybe sdb is also doing something else besides what sda does. Is there some way to find out if sdb is doing more than what sda does?

    UPDATE: Again as suggested by Mr. Wagner, I have swapped the 2 SSDs. And as he thought would happen, the problem moved from sdb to sda. So I guess I'll RMA one of the SSDs. I wonder if the cage might be problematic. What is wrong with this array? Please help!
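    On the question of whether sdb is doing more than sda, a hedged suggestion (an editorial addition, not from the thread): blktrace records every request a block device receives, which makes it possible to compare the two mirror members directly.

        # Trace both mirror members for 30 seconds, then summarize per device;
        # a healthy RAID-1 should show near-identical write traffic on both
        blktrace -w 30 -d /dev/sda -d /dev/sdb
        blkparse sda sdb | tail -n 40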

    Read the article

  • How to get rid of devices in the right panel of Nautilus?

    - by Patryk
    I would like to get rid of the unmounted NTFS partitions in Nautilus's right panel (I just want "352 GB Filesystem" - the d drive - to be there). First of all, "352 GB Filesystem" is in fact d, so I do not know why it is duplicated. Secondly, I have deliberately made Acer and SYSTEM RESERVED nouser mounts, so that I (or somebody else) will not format them (or worse) by accident. My /etc/fstab looks like this:

        #comments.......
        # <file system> <mount point> <type> <options> <dump> <pass>
        proc /proc proc nodev,noexec,nosuid 0 0
        UUID=1384cee0-6a71-4b83-b0d3-1338db925168 / ext4 noatime,errors=remount-ro 0 1
        UUID=e3729117-b936-4c1d-9883-aee73dab6729 none swap sw 0 0
        #------ MY WINDOWS D DRIVE----------
        UUID=98E8B14DE8B12A80 /media/d ntfs defaults,errors=remount-ro,user 0 0
        #-------ACER----------------
        UUID=01CBEA9D4476C2F0 /media/acer ntfs defaults,noauto,noexec,ro,nouser 0 0
        #-------SYSTEM RESERVED-----
        UUID=01CBEA95760F9330 /media/systemreserved ntfs defaults,noauto,noexec,ro,nouser 0 0
        #UUID=58F9-C17E /boot/efi vfat defaults 0 1

    blkid and fdisk -l:

        root@XXX:/home/YYY# fdisk -l
        ...
        Device Boot      Start        End    Blocks     Id  System
        /dev/sda1         4096   27262975   13629440    27  Hidden NTFS WinRE
        /dev/sda2     27262992   27467791     102400     7  HPFS/NTFS/exFAT
        /dev/sda3     27467792  232267775  102399992     7  HPFS/NTFS/exFAT
        /dev/sda4    232267793  976771071  372251639+    f  W95 Ext'd (LBA)
        /dev/sda5    232267795  918867967  343300086+    7  HPFS/NTFS/exFAT
        /dev/sda6 *  918870016  968044543   24587264    83  Linux
        /dev/sda7    968046592  976771071    4362240    82  Linux swap / Solaris

        root@XXX:/home/YYY# blkid
        /dev/sda1: LABEL="PQSERVICE" UUID="01CBEA95730D28A0" TYPE="ntfs"
        /dev/sda2: LABEL="SYSTEM RESERVED" UUID="01CBEA95760F9330" TYPE="ntfs"
        /dev/sda3: LABEL="Acer" UUID="01CBEA9D4476C2F0" TYPE="ntfs"
        /dev/sda5: UUID="98E8B14DE8B12A80" TYPE="ntfs"
        /dev/sda6: UUID="1384cee0-6a71-4b83-b0d3-1338db925168" TYPE="ext4"
        /dev/sda7: UUID="e3729117-b936-4c1d-9883-aee73dab6729" TYPE="swap"

    Read the article

  • How to re-add RAID-10 dropped drive?

    - by thiesdiggity
    I have a problem that I can't seem to solve. We have an Ubuntu server set up with RAID-10, and two of the drives dropped out of the array. When I try to re-add them using the following command:

        mdadm --manage --re-add /dev/md2 /dev/sdc1

    I get the following error message:

        mdadm: Cannot open /dev/sdc1: Device or resource busy

    When I do a cat /proc/mdstat, I get the following:

        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [r$
        md2 : active raid10 sdb1[0] sdd1[3]
              1953519872 blocks 64K chunks 2 near-copies [4/2] [U__U]
        md1 : active raid1 sda2[0] sdc2[1]
              468853696 blocks [2/2] [UU]
        md0 : active raid1 sda1[0] sdc1[1]
              19530688 blocks [2/2] [UU]
        unused devices: <none>

    When I run /sbin/mdadm --detail /dev/md2, I get the following:

        /dev/md2:
        Version : 00.90
        Creation Time : Mon Sep 5 23:41:13 2011
        Raid Level : raid10
        Array Size : 1953519872 (1863.02 GiB 2000.40 GB)
        Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
        Raid Devices : 4
        Total Devices : 2
        Preferred Minor : 2
        Persistence : Superblock is persistent
        Update Time : Thu Oct 25 09:25:08 2012
        State : active, degraded
        Active Devices : 2
        Working Devices : 2
        Failed Devices : 0
        Spare Devices : 0
        Layout : near=2, far=1
        Chunk Size : 64K
        UUID : c6d87d27:aeefcb2e:d4453e2e:0b7266cb
        Events : 0.6688691

        Number Major Minor RaidDevice State
        0      8     17    0          active sync /dev/sdb1
        1      0     0     1          removed
        2      0     0     2          removed
        3      8     49    3          active sync /dev/sdd1

    Output of df -h:

        Filesystem                    Size  Used Avail Use% Mounted on
        /dev/md1                      441G  2.0G  416G   1% /
        none                           32G  236K   32G   1% /dev
        tmpfs                          32G     0   32G   0% /dev/shm
        none                           32G  112K   32G   1% /var/run
        none                           32G     0   32G   0% /var/lock
        none                           32G     0   32G   0% /lib/init/rw
        tmpfs                          64G  215M   63G   1% /mnt/vmware
        none                          441G  2.0G  416G   1% /var/lib/ureadahead/debugfs
        /dev/mapper/RAID10VG-RAID10LV 1.8T  139G  1.6T   8% /mnt/RAID10

    When I do an fdisk -l, I can see all the drives needed for the RAID-10. The RAID-10 is part of /dev/mapper; could that be the reason why the device is coming back as busy? Does anyone have any suggestions on how to get the drives back into the array? Any help would be greatly appreciated. Thanks!
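    Before forcing anything, a hedged diagnostic sketch (an editorial addition; device names match the post): "Device or resource busy" usually means something else already claims the partition. Notably, the mdstat output above lists sdc1 as an active member of md0, which, if accurate, would by itself explain the error when adding it to md2.

        # Is sdc1 already an active member of another md array?
        grep sdc1 /proc/mdstat

        # Whose RAID superblock does the partition carry?
        mdadm --examine /dev/sdc1

        # Is any process or mapping holding the device open?
        lsof /dev/sdc1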

    Read the article

  • How to maintain base files for development environment central while allowing people to change their

    - by Ittai
    Hi, what I'd like to do is keep files in a central location so that when I add people to my development team they get the base version of these files, while the rest of the team can work with their own local versions. I know I can just put the files in source control (we use TortoiseSVN) and have my team change their local versions, but I'd rather not: the exclamation mark signaling that the file has been changed and needs to be committed, quite frankly, irritates me greatly. I'll give two examples of what I mean:

    1. We use quite a few build.xml files which refer to a single properties file containing many definitions. Some of them can differ between team members (mainly temporary working directories), and I'd like a new team member to get the properties file with the base config but have the ability to change it if they wish.

    2. Keep the Eclipse settings file in SVN, so that when a new team member joins they can just retrieve the files from the server and have a base system running. If they wish, they can change some of these settings.

    Thanks, Ittai
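    A common pattern for this (an editorial assumption, not from the post; file names are illustrative): commit a template with the base values, ignore the real file, and have each developer copy the template once. The local copy then never shows up as modified.

        # Commit the base version under a different name
        svn add build.properties.template
        svn commit -m "Base properties template"

        # Tell Subversion to ignore each developer's local copy
        svn propset svn:ignore "build.properties" .
        svn commit -m "Ignore local properties file"

        # Each developer creates their own local copy once
        cp build.properties.template build.properties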

    Read the article

  • What effects does working rotating shifts have on programming teams?

    - by eKek0
    I work in a bank, and the boss now wants the programming team to work rotating shifts: sometimes from 7am to 3pm, and sometimes from 11.30am to 7.30pm. He says that we will be more productive working this way, because he has worked with teams like that and he just knows it. Nobody on the team wants this change, but we don't know how to effectively push back against the new rule. I was trying to find some empirical (or nearly empirical) evidence about how rotating shifts affect the performance of programming teams, and I couldn't. I have read something about rotating shifts, but not specifically about their effect on programming teams. Do you know of any research about rotating shifts and programming teams? Do you have any experience with this kind of work?

    EDIT: Other teams in the company, like the database administrators, the help desk, the communications team, and the network administrators, already work rotating shifts; they don't like it, but they do it anyway. I think the boss wants us to work rotating shifts too because of them, but since only we do programming, I think the effects of rotating shifts could be, at least, different for us.

    Read the article

  • Dev Tools in IE6

    - by Dustin Digmann
    I would like to use dev tools like Firebug, or the developer tools built into IE8, for my IE6 testing. IE9 would provide this. Before creating an IE9 environment, I thought I would check here: do any of you have a solution for this type of problem?

    Read the article

  • Can Rational Team Concert and IntelliJ coexist?

    - by Paul McKenzie
    I am a long-time IntelliJ user. The company where I work is likely to introduce Rational Team Concert to our department shortly. I went to the RTC demo and it looks like a reasonable product, built around Eclipse, but I would rather not give up using IntelliJ. Does anyone have experience of using non-Eclipse IDEs with RTC?

    Read the article

  • Map /dev/bus/usb node to /sys node on Linux

    - by Cody Brocious
    I'm using libusb to find and access a USB device, but once I get the information I need from there, I need to map it to a /sys node. This could be the actual USB bus it's on, the /sys/bus/usb-serial node (which is where I'm going to end up eventually), or effectively anywhere else, since I can walk the tree from there. I can get to a /dev/bus/usb node easily enough, but I'm a bit lost from there. What would be the best route to perform this mapping?
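    One route, sketched under the assumption that the bus number and device address are available from libusb (libusb-1.0 exposes them via libusb_get_bus_number() and libusb_get_device_address()): each USB device directory under /sys/bus/usb/devices carries busnum and devnum files holding the same pair, so matching them identifies the sysfs node. The shell version below uses made-up example numbers:

        BUS=2; DEV=3   # illustrative values; in practice these come from libusb

        # Scan sysfs for the device directory whose busnum/devnum match
        for d in /sys/bus/usb/devices/*; do
            [ -r "$d/busnum" ] && [ -r "$d/devnum" ] || continue
            if [ "$(cat "$d/busnum")" -eq "$BUS" ] && [ "$(cat "$d/devnum")" -eq "$DEV" ]; then
                echo "sysfs node: $(readlink -f "$d")"
            fi
        done

    From that node, walking into its child interface directories (e.g. the ...:1.0 entries) leads to the bound drivers, such as usb-serial.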

    Read the article
