Search Results

Search found 3281 results on 132 pages for 'repo man'.


  • What's the best project management software for an internal dev 5-man shop?

    - by P.Brian.Mackey
    I work for a large corporation, but we do small intranet web application development. Our project management tracking sucks. It's custom software built by a jr. intern. For what it's worth, our development style is akin to agile, but there's nothing set in stone... a very customer-oriented approach. I need project tracking that meets these criteria: Intranet, internal products. Mostly maintenance, some new development. 5 developers, 12 products, 1 hands-off manager. He really just wants to know estimated man hours, due date for dev, QA and release, along with a short description of the project. Free or super cheap. Bonus: simple, pretty UI. Think pretty charts. Hope I covered everything. Please ask for any clarification. If you read Dreaming in Code, the company uses some project tracking software that sounds pretty sweet. Note, we do have Team Foundation Server. I already tried pushing its use as PM tracking, but it's too complicated. I can't get people to sit and train. So this software has to be easy.

    Read the article

  • How does a one-man developer do his games' sounds?

    - by Gustavo Maciel
    Before anything, this is not an "oh, where can I find resources?" question. Well, I've been curious about one thing in the indie games industry. In the development of a game, tasks like game design, art, sketches, code programming and so on can easily be done by just one person. You can just take up a paper and pencil and you're a game designer. You can just take software like Photoshop or Paint and you're an artist, a scanner and you're a sketcher, a compiler and you're a programmer. For sound it's different. You may tell me: well, follow the same line, take a lot of instruments and record them. But we all know that things don't work that way. I can list some of the problems: external noises are a big issue, sound effects can't be made with instruments, and it can't sound like a recorded and clipped sample. I can imagine how they do this in large companies, with their big studios and so on. But to summarise, my question is: what's the best way for a one-man indie to do all his sound? Does he have to synthesize everything? Record and buy some crazy program for editing sounds?

    Read the article

  • Museum of Modern Art Starts Video Game Collection; Acquires Myst, Pac-Man, and More

    - by Jason Fitzpatrick
    The Museum of Modern Art is weighing in on the video-games-as-art debate by starting a collection of iconic video games and putting them up for public display. Read on to see what games are included in the initial batch and the MoMA’s reasons behind starting a video game collection. Although the collection is slated to grow to over 40 titles, the seed batch is 14 titles, including: Pac-Man, Tetris, Sim City 2000, Myst, Portal, and Dwarf Fortress. In the announcement they explain the motivation for building a video game collection: Are video games art? They sure are, but they are also design, and a design approach is what we chose for this new foray into this universe. The games are selected as outstanding examples of interaction design—a field that MoMA has already explored and collected extensively, and one of the most important and oft-discussed expressions of contemporary design creativity. Our criteria, therefore, emphasize not only the visual quality and aesthetic experience of each game, but also the many other aspects—from the elegance of the code to the design of the player’s behavior—that pertain to interaction design. In order to develop an even stronger curatorial stance, over the past year and a half we have sought the advice of scholars, digital conservation and legal experts, historians, and critics, all of whom helped us refine not only the criteria and the wish list, but also the issues of acquisition, display, and conservation of digital artifacts that are made even more complex by the games’ interactive nature. This acquisition allows the Museum to study, preserve, and exhibit video games as part of its Architecture and Design collection. The above quote is only a small snippet of a much lengthier look at the benefits of examining and preserving video games; hit up the link below to check out the full post, including future titles the MoMA would like to include in their archive. Video Games: 14 in the Collection, for Starters [Inside/Out]

    Read the article

  • Setting up a transparent SSL proxy

    - by badunk
    I've got a linux box set up with 2 network cards to inspect traffic going through port 80. One card is used to go out to the internet, the other one is hooked up to a networking switch. The point is to be able to inspect all HTTP and HTTPS traffic on devices hooked up to that switch for debugging purposes. I've written the following rules for iptables: nat -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.2.1:1337 -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 1337 -A POSTROUTING -s 192.168.2.0/24 -o eth0 -j MASQUERADE On 192.168.2.1:1337, I've got a transparent http proxy using Charles (http://www.charlesproxy.com/) for recording. Everything's fine for port 80, but when I add similar rules for port 443 (SSL) pointing to port 1337, I get an error about an invalid message through Charles. I've used SSL proxying on the same computer before with Charles (http://www.charlesproxy.com/documentation/proxying/ssl-proxying/), but have been unsuccessful doing it transparently for some reason. Some resources I've googled say it's not possible - I'm willing to accept that as an answer if someone can explain why. As a note, I have full access to the described setup, including all the clients hooked up to the subnet - so I can accept self-signed certs generated by Charles. The solution doesn't have to be Charles-specific since, in theory, any transparent proxy will do. Thanks! Edit: After playing with it a little, I was able to get it working for a specific host. When I modify my iptables to the following (and open 1338 in Charles for reverse proxy): nat -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.2.1:1337 -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 1337 -A PREROUTING -i eth1 -p tcp -m tcp --dport 443 -j DNAT --to-destination 192.168.2.1:1338 -A PREROUTING -i eth1 -p tcp -m tcp --dport 443 -j REDIRECT --to-ports 1338 -A POSTROUTING -s 192.168.2.0/24 -o eth0 -j MASQUERADE I am able to get a response, but with no destination host. In the reverse proxy, if I just specify that everything from 1338 goes to a specific host that I wanted to hit, it performs the handshake properly and I can turn on SSL proxying to inspect the communication. The setup is less than ideal because I don't want to assume everything from 1338 goes to that host - any idea why the destination host is being stripped? Thanks again

    Read the article

  • What program should I use for SSL stripping and re-encrypting

    - by Sparksis
    I'm trying to strip an HTTP over SSL connection down to plain HTTP and then re-encrypt the channel (with signed certificate(s) I can provide). Of course I want to be able to store captures of all the un-encrypted data. The purpose of this is to reverse engineer an HTTP handshake that is used by a SIP program on my machine. I've tried SSLstrip but it doesn't support what I need it to. Edit: I want something to the effect of https://github.com/applidium/Cracking-Siri/blob/master/tcpProxy.rb only more generic and able to write to a pcap stream that wireshark will understand (I'm not sure if this does that). Edit2: upon further inspection this does not create pcap streams. I guess if need be I can write a compatible version, but that is not the desired choice.

    Read the article

  • Open Source project that does SSL Inspection

    - by specs
    I've been assigned to research and spec out a replacement for our old and decrepit HTTP content filtering system. There are several open source filtering packages available, but I've not come across one that does SSL inspection. The new system will scale to many branches of different sizes, from say 10 users to a few hundred, so purchasing an appliance for each branch isn't desirable. When we're further along, we will do custom programming, as we have a few unique needs in other aspects of filtering, so if the suggestion takes a bit of customization, it won't be a problem.
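    One commonly cited open source option here is Squid with its SslBump feature, which re-signs connections with certificates generated from a local CA. The fragment below is only a rough sketch, assuming a Squid 3.x build configured with --enable-ssl and --enable-ssl-crtd; directive names and paths vary between Squid versions, so treat it as a starting point rather than a working config.

        # squid.conf fragment (illustrative; verify each directive against your Squid version)
        http_port 3128 ssl-bump generate-host-certificates=on cert=/etc/squid/certs/proxyCA.pem
        sslcrtd_program /usr/lib/squid/ssl_crtd -s /var/lib/squid/ssl_db -M 4MB
        ssl_bump server-first all    # older 3.1/3.2 releases used "ssl_bump allow all" instead

    Clients would need the proxyCA certificate installed as trusted for this to work without warnings.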

    Read the article

  • Permission denied error while importing to a remote repository via svn...

    - by Usman Ajmal
    Hi, I am importing my project to another machine on my LAN, into the directory /srv/svn/repos/my-repo, where my-repo was created via the svnadmin create option. The permissions of /srv/svn/repos/my-repo are drwxr-xr-x 6 svn svn 4096 2010-04-19 17:30 my-repo I executed the following command to import myProject files to my-repo on the remote system sudo svn import -m "First import" myProject svn+ssh://[email protected]/srv/svn/repos/my-repo This command started 'Adding' files but gave the following error after 'Adding' 7 files svn: Can't open file '/srv/svn/repos/baltoros-valgrind/db/txn-current-lock': Permission denied Any idea what's going on...? Thanks a lot
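    Two things stand out: the error path names a different repository (baltoros-valgrind) than the one in the URL, and that repository's db directory is not writable by the ssh user. A minimal sketch of the usual permission fix follows; the user name "usman" and group "svn" are assumptions taken from the URL and the ls output, adjust to your setup.

        # run on the server holding the repositories
        sudo usermod -a -G svn usman                                      # let the ssh user write via the svn group
        sudo chgrp -R svn /srv/svn/repos/my-repo
        sudo chmod -R g+w /srv/svn/repos/my-repo
        sudo find /srv/svn/repos/my-repo -type d -exec chmod g+s {} \;    # keep new files owned by the svn group

    If the error really refers to baltoros-valgrind, it is also worth double-checking which repository the URL actually resolves to on the server.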

    Read the article

  • Git clone/pull across local network

    - by Tom Sarduy
    I'm trying to clone/pull a repository on another PC using Ubuntu Quantal. I have done this on Windows before, but I don't know what the problem is on Ubuntu. I tried these: git clone file:////pc-name/repo/repository.git git clone file:////192.168.100.18/repo/repository.git git clone file:////user:pass@pc-name/repo/repository.git git clone smb://c-pc/repo/repository.git git clone //192.168.100.18/repo/repository.git I always get: Cloning into 'intranet'... fatal: '//c-pc/repo/repository.git' does not appear to be a git repository fatal: The remote end hung up unexpectedly or fatal: repository '//192.168.100.18/repo/repository.git' does not exist More: The other PC has a username and password. It's not a networking issue, I can access and ping it. I just installed git by doing apt-get install git (dependencies installed). I'm running git from the terminal (I'm not using git-shell). What is causing this and how do I fix it? Any help would be great! UPDATE I have cloned the repo on Windows using git clone //192.168.100.18/repo/intranet.git without problems. So, the repo is accessible and exists! Maybe the problem is due to user credentials?
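    The file:// and smb:// forms only work when the path is a locally mounted filesystem; git itself has no SMB transport. A sketch of two alternatives (paths, user names and the share name are placeholders): clone over ssh, or mount the share first and clone from the mount point.

        # option 1: ssh transport (needs an ssh server running on 192.168.100.18)
        git clone ssh://user@192.168.100.18/home/user/repo/repository.git

        # option 2: mount the share, then treat it as a local path
        sudo mount -t cifs //192.168.100.18/repo /mnt/repo -o username=user
        git clone /mnt/repo/repository.git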

    Read the article

  • New cloud development workflow using Github, Cloud9ide and CloudFoundry.

    - by weng
    So the times are changing towards cloud development/computing. I'm trying to work out the new "cloud" workflow based on the services I'm going to use: Github, Cloud9ide and CloudFoundry. Here is what is on my mind: Github acts as the central (main) repo, just like yesterday's local filesystem. Every service will base its work on this main repo. Workflow: Github: I create a new Github repo that serves as the main repo for the project. Cloud9ide: I open my Github repo and write my tests and implementation (BDD/TDD). When I'm ready I save (commit) it to the main repo on Github. X: A running instance of Jenkins detects that someone has committed, fetches the latest commit, builds, deploys, tests (yeti and/or selenium) and reports whether the tests passed or not. If not, I make another commit until all tests are passing. X: I run the CloudFoundry commands to push the main Github repo to CloudFoundry's server and it will deploy my app automatically. What I'm still confused about is where this X environment will be. On a local server where I have to install Jenkins? Or could I install it on Cloud9ide (when Java is supported), or will it be on another cloud service? Also, that X environment has to be able to fetch (clone) the Github repo and run the build scripts. And since the concept of Cloud9ide is very new and there haven't been any predecessors, I really wonder what the workflow will look like. We all know Github's workflow. We now know CloudFoundry's workflow (deploy/scale with a RESTful API/command line tool). But how Cloud9ide will operate is still somewhat unclear to me. Someone on Cloud9ide mentioned that there will be buttons like deploy, so I can deploy with one click. But that, I guess, will depend on what services that deploy process hooks into, etc. Could someone shed light on this cloud workflow topic and fill in the gaps? Thanks.
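    As a sketch of what the "X" step could boil down to (assuming a Jenkins job or any shell-capable CI, and the vmc command line tool CloudFoundry shipped at the time; project and app names are illustrative):

        git clone git://github.com/you/yourproject.git && cd yourproject
        npm test                 # placeholder for the project's test suite (yeti and/or selenium in the post)
        vmc login                # CloudFoundry credentials
        vmc push yourapp         # deploy the tested revision

    Whether this runs on a self-hosted Jenkins box or a hosted CI service doesn't change the shape of the job.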

    Read the article

  • How do I share different files in a git repo with different people?

    - by David Faux
    In a single directory with a Git root folder, I have a bunch of files. I am working on one of those files, X.py, with my friend Alice. The other files I am working on with other people. I want Alice (and everyone else) to have access to X.py. I want Alice to only have access to X.py though. How can I achieve this with Git? Is there a way I can split a directory into two repos? That sounds rather cumbersome. Maybe I could add a remote repo that Alice can access containing X.py?
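    Git has no per-file access control, so the usual workaround is the split hinted at above: put X.py in its own small repository that only Alice (and you) can reach, and reference it from the main project, for example as a submodule. A minimal sketch, with illustrative paths:

        git init --bare /srv/git/shared-x.git            # the repo Alice gets access to
        cd ~/main-project
        git submodule add /srv/git/shared-x.git shared   # X.py lives under shared/ from now on
        git commit -m "Track the shared code as a submodule"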

    Read the article

  • Where to put git "remote" repo on purely local git setup?

    - by Mittenchops
    I overwrote and lost some important scripts and would like to setup version control to protect my stuff. I've used git before, and am familiar with commands, but don't understand where I would put my "remote" repository on an install set up on my own machine---the place I push/pull to. I don't intend to share or access remotely, I just want a little source control for my files. I followed the instructions here for setting up my staging area: http://stackoverflow.com/questions/4249974/personal-git-repository But where do I put git "remote" repo on purely local git setup? How does the workflow work then? On the command in the above: git remote add origin ssh://myserver.com:/var/repos/my_repo.git Where should I put/name something like this? If I have multiple different projects, would they go in different places? I'm running 11.10.
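    For a purely local setup the "remote" can simply be a bare repository somewhere on the same disk (or on the home server); no ssh:// URL is required. A minimal sketch, with illustrative paths:

        git init --bare ~/repos/my_repo.git          # plays the role of the server
        cd ~/projects/my_project
        git remote add origin ~/repos/my_repo.git    # a plain filesystem path is a valid remote
        git push -u origin master                    # afterwards: commit locally, push to back up

    Each project would get its own bare repo under ~/repos (or /var/repos, as in the linked question).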

    Read the article

  • Is there a way to show icons on a git repo folder like TortoiseGit?

    - by shengy
    Is there a way to see all the file statuses just by looking at the folder view, like TortoiseHg, TortoiseSVN and TortoiseGit do on Windows? Right now my git repo folder looks the same as other folders. If I want to view file status I have to type git status on the command line. I want some icons which could inform me of the file/folder status at first glance in the folder view. I'm using Ubuntu 12.04 EDIT I googled it, and what I'm asking for is called an overlay icon.
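    One tool often mentioned for this on GNOME/Nautilus is RabbitVCS, which provides Tortoise-style overlay emblems and context menus for git and svn. The PPA and package names below are assumptions and may differ on 12.04; verify before installing.

        sudo add-apt-repository ppa:rabbitvcs/ppa
        sudo apt-get update
        sudo apt-get install rabbitvcs-nautilus3 rabbitvcs-cli
        nautilus -q        # restart Nautilus so the emblems appear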

    Read the article

  • Find git branch that got pushed to a bare repository.

    - by Senthil A Kumar
    Let's say I have 2 repositories: one containing the actual data (the data repo) and a bare repository, which is loaded with deltas from the data repository by doing a git push from the data repo to the bare repo. I hope you understand the model I am using here. I am creating clones by cloning the bare repo, and I will be pushing from the branches in my local clone to the branches in the bare repository. When I push data from my branch to the bare repo, the data is automatically synced to the data repo by a hook. The question I have: is there a way to find from which branch the code came to the bare repo? I can see the source and target branch during a git push, but after pushing, can I see from logs or some other way which branch and repository the data has been pushed from? If there are 5 developers pushing to the bare repo, can I find out in the bare repo which branch and clone the code was pushed from?
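    As far as I know, a git push only transmits the target ref, not the name of the branch in the developer's clone, so the bare repo cannot recover the source branch by itself. What it can log is who pushed what to which ref, for example with a post-receive hook like this sketch (the log path is illustrative):

        #!/bin/sh
        # hooks/post-receive in the bare repo (make it executable);
        # git feeds one "<old-sha> <new-sha> <refname>" line per updated ref on stdin
        while read oldrev newrev refname; do
            echo "$(date '+%F %T') $USER pushed $oldrev..$newrev to $refname" >> /var/log/git-pushes.log
        done

    If per-developer attribution matters, giving each developer a distinct ssh account (or using gitolite-style access control) makes the $USER value meaningful.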

    Read the article

  • How do I get yum to see updates to a local repo without cleaning cache?

    - by Matt
    I have set up a local yum repository which I use to install test builds. For testing purposes, my packages are versioned by <svn version number>.<date>.<time> (e.g. 12345.20110908.150404). The trouble is, once I make a new RPM, copy it to the repository directory and run createrepo $REPO_DIR, yum does not see the new RPM as being available. $ cd $REPO_DIR $ ls -1 repodata package-12345.20110908.150404-1.x86_64.rpm package-12345.20110908.174329-1.x86_64.rpm $ createrepo . # ...snip... $ rpm -q package package-12345.20110908.150404-1.x86_64 $ yum list --showduplicates package Installed Packages package.x86_64 12345.20110908.150404-1 @repo Available Packages package.x86_64 12345.20110908.150404-1 repo I can see the updates and grab them if I run yum clean all and then re-fetch the metadata, but I think this just means I need to be doing something else for this repo, as I don't have to do that for other yum repos. How do I need to set up my local repository so that I only need to run yum update from the client without having to clean my yum cache?
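    This looks like yum's metadata cache: by default the client trusts downloaded repodata for a while (controlled by metadata_expire, roughly 90 minutes in stock yum if memory serves). A sketch of a client-side .repo entry that turns that off for just this repository (section name and baseurl are illustrative):

        [localtest]
        name=Local test builds
        baseurl=http://repohost/localtest/      # or file:///path/to/repo
        enabled=1
        gpgcheck=0
        metadata_expire=0                       # always re-check this repo's repodata
        # alternative without editing the repo file: run "yum clean expire-cache" after each createrepo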

    Read the article

  • Pulling in changes from a forked repo without a pull request on GitHub?

    - by Alec
    I'm new to the social coding community and don't know how to proceed properly in this situation: I created a GitHub repository a couple weeks ago. Someone forked the project and has made some small changes that have been on my to-do list. I'm thrilled someone forked my project and took the time to add to it. I'd like to pull the changes into my own code, but have a couple of concerns. 1) I don't know how to pull in the changes via git from a forked repo. My understanding is that there is an easy way to merge the changes via a pull request, but it appears as though the forker has to issue that request? 2) Is it acceptable to pull in changes without a pull request? This relates to the first one. I'd put the code aside for a couple of weeks and came back to find that what I was going to work on next was done by someone else, and I don't want to just copy their code without giving them credit in some way. Shouldn't there be a way to pull the changes in even if they don't explicitly ask you to? What's the etiquette here? I may be overthinking this, but thanks for your input in advance. I'm pretty new to the hacker community, but I want to do what I can to contribute!
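    Pulling without a pull request is perfectly acceptable; the fork is just another remote, and the forker's authorship is preserved in the commits themselves. A minimal sketch ("alice" and the URL are placeholders for the forker's account and repo):

        git remote add alice https://github.com/alice/yourproject.git
        git fetch alice
        git log HEAD..alice/master       # review what changed
        git merge alice/master           # a merge keeps their name on the commits they authored
        git push origin master

    A thank-you note on the fork or in the release notes covers the credit side.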

    Read the article

  • Fixing up Visual Studio's gitignore, using IFix

    - by terje
    Originally posted on: http://geekswithblogs.net/terje/archive/2014/06/13/fixing-up-visual-studiorsquos-gitignore--using-ifix.aspx (Download tool)

    Is there anything wrong with the built-in Visual Studio gitignore? Yes, there is! First, some background: When you set up a git repo, it should be small and not contain anything not really needed. One thing you should not have in your git repo is binary files. These binary files may come from two sources; one is the output files, in the bin and obj folders. If you have a gitignore file present, which you should always have (!!), these folders are excluded by the standard included file (the one included when you choose Team Explorer/Settings/GitIgnore – Add). The other source is the packages folder coming from your NuGet setup. You do use NuGet, right? Of course you do! But that gitignore file doesn't have any exclude clause for those folders. You have to add that manually. (It will very probably be included in some upcoming update or release.) This is one thing that is missing from the built-in gitignore. Adding those few lines is a no-brainer, you just include this:

    # NuGet Packages
    packages/*
    *.nupkg
    # Enable "build/" folder in the NuGet Packages folder since
    # NuGet packages use it for MSBuild targets.
    # This line needs to be after the ignore of the build folder
    # (and the packages folder if the line above has been uncommented)
    !packages/build/

    Now, if you are like me, and you probably are, you add git repos faster than you can code, and you end up with a bunch of repos, and then start to wonder: did I fix up those gitignore files, or did I forget it? The next thing you learn, for example by reading this blog post, is that the "standard" latest Visual Studio gitignore file exists at https://github.com/github/gitignore, where you will find it under the file name VisualStudio.gitignore. Here you will find all the new stuff; for example, the exclusion of the Roslyn ide folders was committed on May 24th. So, you think, all is well, Visual Studio will use this file... I am very sorry, it won't. Visual Studio comes with a gitignore file that is baked into the release, and that is by this time "very old". The one at github is the latest. The included gitignore misses the exclusion of the NuGet packages folder, and it also misses a lot of new stuff, like the Roslyn entries. So, how do you fix this while we wait for the next version? You can manually update it for every single repo you create, which works, but it does get boring after a few times, doesn't it?

    IFix: Enter IFix, install it from here. IFix is a command line utility (and the installer adds it to the system path, you might need to reboot), and one of the commands is gitignore. If you run it from a directory, it will check and optionally fix all gitignores in all git repos in that folder or below. So, start by running it from your C:/<user>/source/repos folder. To run it in check mode – which will not change anything, just do a check: IFix gitignore --check. What it will do is check if the gitignore file is present, and if it is, check if the packages folder has been excluded. If you want to see those that are ok, add the --verbose option too. The result may look like this:

    Fixing missing packages: Let us fix a single repo by adding the missing packages structure, using IFix --fix. We first check, then fix, then check again to verify that the gitignore is correct, and that the "packages/" part has been added.

    If we open up the .gitignore, we see that the block shown below has been added to the end of the .gitignore file.

    Comparing and fixing with the latest standard Visual Studio gitignore (from github): Now, this tells you if you miss the NuGet packages folder, but what about the latest gitignore from github? You can check for this too, just add the option --merge (why it is named so will become clear further down). So: IFix gitignore --check --merge. The result may come out like this (sorry, no colors, not got that far yet here): As you can see, one repo has the latest gitignore (test1), the others are missing either 57 or 150 lines. IFix has three ways to fix this: --add, --merge and --replace. The options work as follows: Add is used to add the standard gitignore in the cases where a .gitignore file is missing, and only that, which means it won't touch other existing gitignores. Merge is used to merge the missing lines from the standard into the gitignore file; if the gitignore file is missing, the whole standard will be added. Replace is used to force a complete replacement of the existing gitignore with the standard one. The Add and Replace options can be used without Fix, which means they will actually do the action. If you combine them with --check, it will not touch any files, just do a verification. So a Merge Check will tell you if there is any difference between the local gitignore and the standard gitignore – a Compare, in effect. When you do a Fix Merge, it will combine the local gitignore with the standard, and add what is missing to the end of the local gitignore. It may mean some things get doubled up if they are spelled a bit differently. You might also see some extra comments added, but they do no harm.

    Init a new repo with the standard gitignore: One cool thing is that with a new repo, or a repo that is missing its gitignore, you can grab the latest standard just by using either the Add or the Replace command; both will in effect do the same in this case. So, IFix gitignore --add will add it in, as in the complete example below, where we set up a new git repo and add in the latest standard gitignore.

    Notes: The project is open sourced at github, and you can also report issues there.

    Read the article

  • Cloning a git repository from an svn repository results in a file-less, remote-branch-less git repo.

    - by Tchalvak
    Working SVN repo I'm starting a git repo to interact with a svn repo. The svn repository is set and working fine, with a single commit of a basic README file in it. Checking it out works fine: tchalvak:~/test/svn-test$ svn checkout --username=myUsernameHere http://www.url.to/project/here/charityweb/ A charityweb/README Checked out revision 1. Failed git-svn clone of svn repo When I try to clone the repository in git, the first step shows no errors... tchalvak:~/test$ git svn clone -s --username=myUserNameHere http://www.url.to/project/here/charityweb/ Initialized empty Git repository in /home/tchalvak/test/charityweb/.git/ Authentication realm: <http://www.url.to/project/here:80> Charity Web Password for 'myUserNameHere': ...but results in a useless folder: tchalvak:~/test$ ls charityweb tchalvak:~/test$ cd charityweb/ tchalvak:~/test/charityweb$ ls tchalvak:~/test/charityweb$ ls -al total 12 drwxr-xr-x 3 tchalvak tchalvak 4096 2010-04-02 13:46 . drwxr-xr-x 4 tchalvak tchalvak 4096 2010-04-02 13:46 .. drwxr-xr-x 8 tchalvak tchalvak 4096 2010-04-02 13:47 .git tchalvak:~/test/charityweb$ git branch -av tchalvak:~/test/charityweb$ git status # On branch master # # Initial commit # nothing to commit (create/copy files and use "git add" to track) tchalvak:~/test/charityweb$ git fetch fatal: Where do you want to fetch from today? tchalvak:~/test/charityweb$ git rebase origin/master fatal: bad revision 'HEAD' fatal: Needed a single revision invalid upstream origin/master tchalvak:~/test/charityweb$ git log fatal: bad default revision 'HEAD' How do I get something I can commit back to? I expect I'm doing something wrong in this process, but what?
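    One thing worth checking, as a guess rather than a confirmed diagnosis: the -s/--stdlayout flag tells git-svn to expect trunk/branches/tags under the given URL, and if charityweb/ just has README at its root, the clone can come out empty exactly like the one shown. A sketch of what to try:

        # clone without assuming a standard layout
        git svn clone --username=myUserNameHere http://www.url.to/project/here/charityweb/

        # or, inside the existing empty clone, fetch explicitly and watch the output for errors
        git svn fetch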

    Read the article

  • How to push a new feature to a central Mercurial repo?

    - by Sly
    I'm assigned the development of a feature for a project. I'm going to work on that feature for several days over a period of a few weeks. I'll clone the central repo. Then I'm going to work locally for 3 weeks. I'll commit my progress to my repo several times during that process. When I'm done, I'm going to pull/merge/commit before I push. What is the right way to push my feature as a single changeset to the central repo? I don't want to push 14 "work in progress" changesets and 1 "merged" changeset to the central repo. I want other collaborators on the project to see only one changeset with a significant commit message (such as "Implemented feature ABC"). I'm new to Mercurial and DVCS, so don't hesitate to provide guidance if you think I'm not approaching this the right way. <My own answer> So far I came up with a way of reducing 15 changesets to 2 changesets. Suppose changesets 10 to 24 are "work in progress" changesets. I can 'hg collapse -r 10:24 -m "Implemented feature ABC"' (14 changesets collapsed into 1). Then, I must 'hg pull' + 'hg merge' + 'hg commit -m "Merged with most recent changes"'. But now I'm stuck with 2 changesets. I can no longer 'hg collapse', because pull/merge/commit broke my changeset sequence. Of course 2 changesets is better than 15, but still, I'd rather have 1 changeset. </My own answer>
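    One way to keep it to a single changeset is to collapse while rebasing onto the pulled changes instead of merging, using the bundled rebase extension. This is only a sketch: enable "rebase =" under [extensions] in ~/.hgrc, and only rewrite changesets that have never been pushed anywhere.

        hg pull                                          # bring in the new upstream changesets
        hg rebase --base . --dest default --collapse     # fold the local work-in-progress changesets into one on top of upstream
        hg push                                          # the central repo sees a single changeset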

    Read the article

  • A complete virtualization landscape on your own laptop – here's how

    - by Manuel Hossfeld
    If you want to take a closer look at the virtualization product Oracle VM in its current version 3.x, the obvious approach is to install your own environment for learning and testing purposes. But that is easier said than done: a closer look at the architecture quickly shows that several machines are needed just to cover all the components. First, the OVM server (or servers) themselves have to be installed. That is done quickly and easily, but since Oracle VM is a "type 1 hypervisor" – that is, it is installed directly on the machine ("bare metal") – your own work PC or laptop is rather unsuitable for this. (A dual-boot environment would be conceivable, but quite impractical.) Second, a machine is needed on which the OVM Manager is installed. Unlike the OVM server, it is not installed "bare metal" but on an existing Oracle Linux system. But what do you do if you have no Linux server at hand and do not want to sacrifice extra hardware for it? If you want to try out all of Oracle VM's features, you should additionally have shared storage available. It can be attached either via NFS or via a SAN (iSCSI or FibreChannel). You do not strictly need "real" storage hardware for testing, but even "simulating" these components requires additional hardware with enough free disk space. (Alternatively, ready-made "software storage appliances" such as OpenFiler or FreeNAS can be used.) Assuming that no "real" server and storage hardware is actually available, the three points above require three or four machines (PCs, laptops...), depending on whether you want to start one or two OVM servers.

    Fortunately, it can also be done with considerably less effort: as briefly described in the blog post on the last OVM release 3.1.1, the current version is able to run completely inside VirtualBox as a guest. If this "double virtualization" makes you think of Russian matryoshka dolls, you are exactly right. Oracle VM VirtualBox acts, so to speak, as the outer shell – and since VirtualBox, unlike Oracle VM Server, is a "type 2 hypervisor", this approach also works on a "normal" work PC or laptop without completely overwriting its operating system. And the best part: you do not have to carry out the installation of the individual VirtualBox VMs yourself. Both the OVM Manager and the OVM Server are already available for download from the Oracle Technology Network as prebuilt "VirtualBox appliances" and basically only need to be imported and configured. The following diagram illustrates the principle: the dark green areas each represent instances of the VirtualBox appliances for OVM Server and OVM Manager just mentioned. (The picture shows two OVM servers; one would of course suffice as a minimum, but then many features such as OVM HA cannot be tried out.)

    As a clever trick to save yet another VM for storage purposes, Wim Coekaerts (Senior Vice President of Linux and Virtualization Engineering at Oracle), the "builder" of the VirtualBox appliances, has already prepared the OVM Manager appliance in such a way that it can simultaneously serve as an NFS share (or even as an iSCSI target). He also describes this briefly on his blog. The light green ovals represent the VMs that can then run inside one of the virtualized OVM servers. Because this "double virtualization" loses the ability to do hardware virtualization, these payload VMs can consequently only be paravirtualized (PVM). The network interfaces drawn in blue are virtual interfaces that can be set up freely within VirtualBox. If you want to try out the different network roles within Oracle VM in detail, you can of course configure more than two of these interfaces. The advantages of this solution for test and demo purposes are obvious: with just one PC or laptop on which VirtualBox is installed, all of the components mentioned above can be installed and used – provided there is enough RAM. 8 GB should be considered the minimum; if you also want to keep working on the host environment (the PC running VirtualBox) and/or run several payload VMs in this simulated OVM server environment, 16 GB or more is recommended.

    Since the steps required to install and initially configure the environment are described in detail in a corresponding paper, I would like to use the rest of this article for a few additional tips and details that can make life a little easier: To approach the configuration as relaxed as possible and with an extra "safety net", it is advisable to make extensive use of VirtualBox's built-in VM snapshot functionality. This not only lets you roll back if something goes wrong, but also lets you repeat steps you have already completed as often as you like (for example to try out a different idea or variant of the environment). Use meaningful names both for the snapshots and for the VMs themselves. This ensures that you do not get confused and that even after a few weeks you still know which environment you are actually looking at. This also includes the exact version and build number of the respective OVM release. (See also the following screenshot.) Further information and details about the current state and purpose of each VM can be stored in the often-overlooked description field. It is advisable to create a note (or a text file) with the planned IP addresses and names for the VMs BEFORE the installation. (Do not forget: the server pool also needs its own IP.) While doing so, double-check and write down the actual networks of the VirtualBox interfaces you are going to use. Caution: during the installation there are some passwords that can be set by the user – and some that are initially fixed. The latter include the password for the ovs-agent and for the root user on the OVM servers, both of which default to "ovsroot".

    (All further password information can be found in the "Read me first" document on the desktop of the OVM Manager VM.) You may also need to pay attention in the initial "interview phase" that the VirtualBox VMs go through after they are booted for the first time. At that point the US keyboard layout is still active, so it is better, for example, not to use "y" or "z" in a password you choose yourself. Finally, since, as mentioned above, the OVM Manager also provides the shared storage, make sure that its VM is started before the OVM server VMs. (Otherwise the cluster underlying the OVM server pool will not "find" its so-called server pool file system.)
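    On the VirtualBox side, the import and the snapshot "safety net" mentioned above can also be scripted; the following is only a sketch with illustrative file and VM names (the GUI does the same thing):

        VBoxManage import OracleVM-Manager-3.1.1.ova                              # import the downloaded appliances
        VBoxManage import OracleVM-Server-3.1.1.ova
        VBoxManage snapshot "OVM-Manager-3.1.1" take "after-initial-config"       # snapshot before each risky step
        VBoxManage startvm "OVM-Manager-3.1.1" --type headless                    # start the manager/storage VM first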

    Read the article

  • Gravity stops when side-collision detected

    - by Adrian Marszalek
    Please, look at this GIF: The label on the animation says "Move button is pressed, then released". And you can see when it's pressed (and player's getCenterY() is above wall getCenterY()), gravity doesn't work. I'm trying to fix it since yesterday, but I can't. All methods are called from game loop. public void move() { if (left) { switch (game.currentLevel()) { case 1: for (int i = 0; i < game.lvl1.getX().length; i++) game.lvl1.getX()[i] += game.physic.xVel; break; } } else if (right) { switch (game.currentLevel()) { case 1: for (int i = 0; i < game.lvl1.getX().length; i++) game.lvl1.getX()[i] -= game.physic.xVel; break; } } } int manCenterX, manCenterY, boxCenterX, boxCenterY; //gravity stop public void checkCollision() { for (int i = 0; i < game.lvl1.getX().length; i++) { manCenterX = (int) game.man.getBounds().getCenterX(); manCenterY = (int) game.man.getBounds().getCenterY(); if (game.man.getBounds().intersects(game.lvl1.getBounds(i))) { boxCenterX = (int) game.lvl1.getBounds(i).getCenterX(); boxCenterY = (int) game.lvl1.getBounds(i).getCenterY(); if (manCenterY - boxCenterY > 0 || manCenterY - boxCenterY < 0) { game.man.setyPos(-2f); game.man.isFalling = false; } } } } //left side of walls public void colliLeft() { for (int i = 0; i < game.lvl1.getX().length; i++) { if (game.man.getBounds().intersects(game.lvl1.getBounds(i))) { if (manCenterX - boxCenterX < 0) { for (int i1 = 0; i1 < game.lvl1.getX().length; i1++) { game.lvl1.getX()[i1] += game.physic.xVel; game.man.isFalling = true; } } } } } //right side of walls public void colliRight() { for (int i = 0; i < game.lvl1.getX().length; i++) { if (game.man.getBounds().intersects(game.lvl1.getBounds(i))) { if (manCenterX - boxCenterX > 0) { for (int i1 = 0; i1 < game.lvl1.getX().length; i1++) { game.lvl1.getX()[i1] += -game.physic.xVel; game.man.isFalling = true; } } } } } public void gravity() { game.man.setyPos(yVel); } //not called from gameloop: public void setyPos(float yPos) { this.yPos += yPos; }

    Read the article

  • Guests can't access KVM host server by name although nslookup and dig return the correct record

    - by user190196
    So I have a KVM host that also runs an apache server with some yum repos. The VM guests are connected to the default virtual network, which is configured to offer DHCP and forwarding with NAT on virbr0 (192.168.12.1). The guests can successfully access the yum repos on the host by IP address, so for example curl 192.168.122.1/repo1 returns the content without problems. But I'd like to have the guests be able to reach the web server on the host by name rather IP address. I added the desired name record to the host's /etc/hosts file and libvirt's dnsmasq service seems to be serving that correctly to the guests since nslookup and dig successfully resolve the name on the guests: [root@localhost ~]# nslookup repo Server: 192.168.122.1 Address: 192.168.122.1#53 Name: repo Address: 192.168.122.1 [root@localhost ~]# dig repo ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6 <<>> repo ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 55938 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;repo. IN A ;; ANSWER SECTION: repo. 0 IN A 192.168.122.1 ;; Query time: 0 msec ;; SERVER: 192.168.122.1#53(192.168.122.1) ;; WHEN: Tue Sep 17 02:10:46 2013 ;; MSG SIZE rcvd: 38 But curl/ping/etc still fail: [root@localhost ~]# curl repo curl: (6) Couldn't resolve host 'repo' While a request via ip address works: [root@localhost ~]# curl 192.168.122.1 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"> <html> <head> <title>Index of /</title> [...] Same with ping: [root@localhost ~]# ping repo ping: unknown host repo [root@localhost ~]# ping 192.168.122.1 PING 192.168.122.1 (192.168.122.1) 56(84) bytes of data. 64 bytes from 192.168.122.1: icmp_seq=1 ttl=64 time=0.110 ms 64 bytes from 192.168.122.1: icmp_seq=2 ttl=64 time=0.146 ms 64 bytes from 192.168.122.1: icmp_seq=3 ttl=64 time=0.191 ms ^C --- 192.168.122.1 ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2298ms rtt min/avg/max/mdev = 0.110/0.149/0.191/0.033 ms I tried adding repo 192.168.122.1 to the guests' /etc/hosts files but still no dice. Also tried changing guests' /etc/nsswitch.conf with both: hosts: files dns and hosts: dns files I've read the relevant libvirt documentation and I'm not sure where else to learn more about this and be able to move forward with it.

    Read the article

  • Should we use Nexus or Artifactory for a Maven Repo?

    - by John Stauffer
    We are using Maven for a large build process (100 modules). We have been storing our external dependencies in source control, and using that to update a local repo. However, we are ready to graduate to a local repo that can cache central so that we don't have to proactively download all 3rd parties (but we can still have a local repo to pull from). In addition we want to publish our internal build artifacts from a nightly build so that developers don't have to build the world. We are considering Nexus and Artifactory. What are the reasons for preferring one over the other? Are there others we should be considering?

    Read the article

  • I am now an Oracle Certified Associate!

    - by britta.wolf
    Jan Peuker, a graduate of Augsburg University of Applied Sciences and the University of Melbourne, recently obtained the Oracle Database 10g Administrator Certified Associate certification. He kindly provided us with this short write-up: "Oracle certification usually starts with the Oracle Certified Associate. No deep practical experience is required for this certification yet. To obtain the title of Oracle Database 11g Administrator Certified Associate, you have to pass an SQL exam (e.g. 1Z0-051) as well as an administration exam (1Z0-045). Both exams take 2 hours and have about 80 questions, of which roughly three quarters must be answered correctly in order to pass. There is no grade. The exams are always taken electronically, and the software allows you to skip and flag questions. While working after my first degree I frequently dealt with the Oracle database system. When I did my postgraduate studies at the University of Melbourne, my course advisor suggested taking the course "Advanced Database Administration". It is based entirely on the official Oracle training materials for the Oracle administration exam and therefore qualifies you to take the official certification. Unlike the SQL exam, whose content you can easily learn on your own, a real course with a seminar helps enormously with the administrator certification. Many concepts are hard to learn from a book. The components of the SGA or creating users may be easy to pick up, but redo and undo management as well as backup and recovery can only be understood if you have examples and can try them out on a test system (not a "small" XE database, but a "real" database with Enterprise Manager). I certainly did not invest an excessive amount of time, because the base system is very logical. For the less intuitive areas, especially the new features, I wrote technical terms on flashcards and worked through the training materials on the system. The exam was surprisingly hard for me, because simple "day-to-day business" is clearly underrepresented. The multiple-choice questions ask about many special cases and use cases (you can find many sample questions online). Since both tests are in English, you should be well versed not only in the terminology of the Oracle database system but also in general database terms. Individual words often decide the right answer (e.g. redundant vs. synchronized, redo log vs. redo log buffer), and a significant share of the questions is based on drawings or diagrams that have to be described. For example, you have to judge from a log excerpt why the database was not shut down cleanly. Unfortunately, general knowledge about database systems does not help much, because a disproportionate number of questions cover Oracle-specific topics, such as optimization services (ADDM), Flashback, SQL Loader and a little PL/SQL. The SQL exam, by contrast, is very straightforward – which does not mean easier. Here it is more a matter of memorizing syntax, which personally does not suit me. Especially as an application programmer you often do not know proprietary SQL functions, and it is hard to remember individual date calculation functions, type conversions, namespaces or crude join methods. The exam, however, puts a lot of weight on all of this.

    Here, too, you are again confronted with ambiguous multiple-choice questions in which, for example, only the order of the parameters differs. Moreover, the parameters are not spelled out but given in an entity-relationship diagram, where you have to pay attention to the correct data types. For me personally the time was almost too short, because for many questions you first have to read a diagram, a data extract or a longer text before you can find the correct statements. Flashcards only help to a limited extent here – instead: practice, practice, practice. Thanks to the relatively low pass score of 70% you can afford to skip uncertain questions first and reconsider them only after all the ones you are sure about have been answered. The exam is definitely fair. I learned a lot through the Oracle certification program. The databases under my supervision run noticeably faster and deliver higher availability, because I was able to eliminate problems I had not been aware of before. A classic misconfiguration – full archive logs because they collide with flashback storage kept for too long – was something I could already resolve with my professor's help in one of the first hours of my course at the University of Melbourne. Both exams could easily be taken in parallel with other exams. I can recommend thorough online research, but also the Oracle Press books, which come with exam questions at the end of each chapter. That saves time while still leaving you well prepared. Even though I will not pursue a career as an administrator, I am glad to better understand the technology underlying many applications. For my daily work as an application developer it has above all helped me to understand Oracle concepts, for example in the area of transaction control and recovery, and thus to evaluate and recommend many open source products more sensibly." You can find an overview of the certification paths on the Oracle University website (simply select "Deutschland" and then click on "Zertifizierungen").

    Read the article

  • How to set up one-man research into the difference between BDD and Waterfall?

    - by Martijn van der Maas
    Earlier, I asked a question about how to measure the quality of a project. The outcome of that question was that the quality of a project can be divided into two parts: internal quality (code quality, measurable by code quality metrics) and external quality (acceptance tests, how well the software meets the requirements). So based on that, I want to set up some research and validate the outcome of the project. The problem is, I will conduct this research on my own, so it's not possible for me to run the project once in BDD style and once in waterfall. It's also not possible to compare BDD and waterfall projects on a larger scale, due to the fact that there are not enough BDD projects that can be measured because of the age of BDD. So, my question is: has anybody faced this problem? How could I execute my experiment in such a way that it is of scientific value?

    Read the article

  • git for personal (one-man) projects. Overkill?

    - by Anto
    I know, and use, two version control systems: Subversion and git. Subversion, as of now, gets used for personal projects where I am the only developer and git gets used for open source projects and projects where I believe others will also work on the project. This is mostly because of git's amazing forking and merging capabilities, where everyone may work on their own branch; very handy. Now, I use Subversion for personal projects, as I think git makes little sense there. It seems to be a little bit of overkill. It is OK for me if it is centralized (on my home server, usually) when I am the only developer; I take regular backups anyway. I don't need the ability to make my own branch, the main branch is my branch. Yes, SVN has simple support for branching, but much more powerful support for it makes no sense, I think. Merging can be a pain with it, or at least from my little experience. Is there any good reason for me to use git on personal projects, or is it just simply overkill?

    Read the article
