Search Results

Search found 3182 results on 128 pages for 'git branching'.


  • How to install Tor (Web Browser) in Ubuntu 12.10?

    - by Zignd
    I would like to install Tor, but I'm having some problems. I know someone will say "This question is an exact duplicate of How to install tor?", but it isn't, because that question cannot be applied to Ubuntu 12.10, as the deb command is not available anymore. I did some research, and even the resources available on Tor's official website cannot be applied to Ubuntu 12.10. I tried to use the deb command (as the above question says: deb http://deb.torproject.org/torproject.org <DISTRIBUTION> main) and the Terminal says deb: command not found, and when I try to install it I get E: Unable to locate package deb. I've also tried to use the ppa:ubun-tor PPA, but it's not compatible with Quantal Quetzal, because it's too old. I've also tried sudo apt-get install tor, but the browser icon doesn't show up after installation, and if I run the command tor in the Terminal I get the following error message:

        Nov 26 10:59:25.731 [notice] Tor v0.2.3.22-rc (git-4a0c70a817797420) running on Linux.
        Nov 26 10:59:25.731 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning
        Nov 26 10:59:25.731 [notice] Read configuration file "/etc/tor/torrc".
        Nov 26 10:59:25.737 [notice] Initialized libevent version 2.0.19-stable using method epoll (with changelist). Good.
        Nov 26 10:59:25.737 [notice] Opening Socks listener on 127.0.0.1:9050
        Nov 26 10:59:25.737 [warn] Could not bind to 127.0.0.1:9050: Address already in use. Is Tor already running?
        Nov 26 10:59:25.737 [warn] Failed to parse/validate config: Failed to bind one of the listener ports.
        Nov 26 10:59:25.737 [err] Reading config failed--see warnings above.

    Thanks in advance.
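    Worth noting: the log quoted above actually shows a working Tor. The "Address already in use" warning means a Tor daemon is already running in the background, so a second, manually started tor fails to bind its port. As for the repository, the "deb ... main" text is an apt source line, not a shell command. A minimal sketch of the usual setup, assuming the standard Tor project instructions apply to Quantal (the signing-key ID is from memory and worth verifying against torproject.org):

        # The "deb" line belongs in apt's configuration, not the Terminal.
        echo "deb http://deb.torproject.org/torproject.org quantal main" | \
            sudo tee /etc/apt/sources.list.d/torproject.list

        # Import the repository signing key (ID from the Tor docs; verify it).
        gpg --keyserver keys.gnupg.net --recv 886DDD89
        gpg --export A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89 | sudo apt-key add -

        sudo apt-get update
        sudo apt-get install tor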


  • Release notes for 9/25/2012

    Below are the release notes from today's deployment.

    1. With today's deployment we've made some significant changes to the source code experience. First of all, you'll notice that we moved the Source Code tab closer to the project home tab. We believe that this will help make source code more discoverable and emphasizes our focus on developer collaboration. The next thing you'll notice is that when you click on the Source Code tab, you will immediately be browsing code. We want to get you to the project source code in a minimum number of clicks, and this change helps get you there. The changeset history is still there, which brings us to the next change: we implemented an action bar in the source code section, which will make certain actions more discoverable, including forking, cloning, and downloading source code. The popups in the action bar will help you perform the tasks you need to do when contributing to projects, as well as managing your own projects. Take a look at how easy it is to find the clone/connection URL now!

    2. The second exciting thing we turned on this week is the ability to enable Windows Azure Web Sites to build and deploy your project source code (for Git source code projects). You can read more about how to do this in Mark's post here.

    3. We also made some improvements in other areas this week:

    - Made some improvements to screen reader accessibility
    - Fixed some minor UI issues in the browse source code page

    We'd love to have your feedback on the new changes to the source code tab. Please let us know what you think on our suggestions page, send us a message on Twitter @codeplex, or you can reach Mark Groves directly @mgroves84
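    For item 2, the Git-based flow boils down to pushing the project's code between two remotes. The integration described in Mark's post links the services directly; the sketch below is the manual equivalent, with placeholder URLs (the real ones come from the project's clone popup and the Azure portal):

        # Clone the CodePlex project (clone URL from the new action bar).
        git clone https://git01.codeplex.com/yourproject
        cd yourproject

        # Add the Windows Azure Web Site as a second remote.
        git remote add azure https://yoursite.scm.azurewebsites.net/yoursite.git

        # Pushing triggers the build and deployment.
        git push azure master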


  • Subterranean IL: Exception handling 2

    - by Simon Cooper
    Control flow in and around exception handlers is tightly controlled, due to the various ways the handler blocks can be executed. To start off with, I'll describe what SEH does when an exception is thrown.

    Handling exceptions

    When an exception is thrown, the CLR stops program execution at the throw statement and searches up the call stack looking for an appropriate handler; catch clauses are analyzed, and filter blocks are executed (I'll be looking at filter blocks in a later post). Then, when an appropriate catch or filter handler is found, the stack is unwound to that handler, executing successive finally and fault handlers in their own stack contexts along the way, and program execution continues at the start of the catch handler. Because catch, fault, finally and filter blocks can be executed essentially out of the blue by the SEH mechanism, without any reference to preceding instructions, you can't use arbitrary branches in and out of exception handler blocks. Instead, you need to use specific instructions for control flow out of handler blocks: leave, endfinally/endfault, and endfilter.

    Exception handler control flow

    try blocks: You cannot branch into or out of a try block or its handler using normal control flow instructions. The only way of entering a try block is by either falling through from preceding instructions, or by branching to the first instruction in the block. Once you are inside a try block, you can only leave it by throwing an exception or using the leave <label> instruction to jump to somewhere outside the block and its handler. The leave instruction signals the CLR to execute any finally handlers around the block. Most importantly, you cannot fall out of the block, and you cannot use a ret to return from the containing method (unlike in C#); you have to use leave to branch to a ret elsewhere in the method. As a side effect, leave empties the stack.

    catch blocks: The only way of entering a catch block is if it is run by the SEH. At the start of the block's execution, the thrown exception will be the only thing on the stack. The only way of leaving a catch block is to use throw, rethrow, or leave, in a similar way to try blocks. However, one thing you can do is use a leave to branch back to an arbitrary place in the handler's try block! In other words, you can do this:

        .try {
            // ...
            newobj instance void [mscorlib]System.Exception::.ctor()
            throw
        MidTry:
            // ...
            leave.s RestOfMethod
        }
        catch [mscorlib]System.Exception {
            // ...
            leave.s MidTry
        }
        RestOfMethod:
            // ...

    As far as I know, this mechanism is not exposed in C# or VB.

    finally/fault blocks: The only way of entering a finally or fault block is via the SEH, either as the result of a leave instruction in the corresponding try block, or as part of handling an exception. The only way to leave a finally or fault block is to use endfinally or endfault (both compile to the same binary representation), which continues execution after the finally/fault block or, if the block was executed as part of handling an exception, signals that the SEH can continue walking the stack.

    filter blocks: I'll be covering filters in a separate post. They're quite different to the others, and have their own special semantics.

    Phew! Complicated stuff, but it's important to know if you're writing or outputting exception handlers in IL. Dealing with the C# compiler is probably best saved for the next post.


  • How to promote an open-source project?

    - by Shehi
    First of all, I apologize if this is the wrong section of the network to post this question. If it is, please feel free to move it to a more appropriate location... Question: I would like to hear your ideas about the ways open-source projects are started and run. I have an open-source content management system project, and here some questions arise: How should I act? Shall I come up with a viable pre-alpha edition with working front- and back-ends first, and then announce the project publicly? Or shall I announce it right away, from scratch? As a developer I know that one should use a versioning system like Git or SVN, which I do; no problems there. And the merit of unit testing is also something to remember, which, to be frank, I am not into at all... Project management - I am a beginner in that, at best. Coding techniques and practices such as Agile development are something I want to explore... In short, any ideas for a developer who is new to the open-source world are most welcome.


  • What is the best way for an experienced developer to work on a WordPress blog

    - by nanothief
    I'm beginning to work on my first WordPress blog, but I've noticed most tutorials just have you make modifications (such as theme changes or installing plugins) on the production site. This worries me for a few reasons:

    - No backups
    - No version control
    - If you make a mistake, your production site is affected
    - Developing remotely is slower than local development, especially when tweaking CSS files

    I understand why WordPress works like this - it allows people with no development experience to manage their WordPress installation (or the one provided by their service provider). It also allows you to work on the WordPress installation without having ssh access to the server. However, as I am comfortable working with tools like git and ssh, and am using a virtual server for the blog, this isn't very important to me. So I was wondering what techniques experienced developers use when working on a WordPress blog. For example:

    - Do you develop locally, then push the changes to the live site? How do you do this?
    - How do you manage database changes and backups?
    - What do you store under version control (if anything)?
    - If a plugin changes the database, do you somehow track the changes it makes in version control, so you can roll back the changes done by the plugin if you need to?

    Or maybe I'm just overcomplicating everything, if working on the production site isn't as risky as I am thinking it would be. I would appreciate any answers either way.
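    One common pattern, sketched here under stated assumptions (hostnames, paths and database names are placeholders; the theme/plugin code lives in git, while the database is backed up separately, since WordPress and plugins write to it at runtime):

        # Work locally, commit, then deploy.
        git add -A && git commit -m "Tweak theme CSS"

        # Snapshot the live database before each deploy (names are placeholders).
        ssh user@example.com "mysqldump -u wpuser -p wordpress_db | gzip" \
            > backups/wordpress-$(date +%F).sql.gz

        # Push to a bare repo on the server; a post-receive hook can check the
        # new tree out into the live wp-content directory.
        git push live master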


  • Leveraging NuGet as a central repository for PowerShell modules

    - by cibrax
    We have been working a lot lately with PowerShell as part of our star product at Tellago Studios, "Moesion". One of the main features we provide in Moesion is the ability to execute PowerShell commands remotely on a given server using a mobile web interface (you can read more in my previous post about Moesion). One of the things we realized in all this time is that PowerShell lacks a central repository where IT guys, or we the developers, can easily grab and reuse commands. All the commands or modules are basically spread across multiple places or websites - personal blogs, TechNet, or CodePlex projects, to name a few - making searching for them very hard. You are usually limited to using your favorite search engine and copying what you find. In addition, there is no easy way to reuse, extend or version these commands, which also limits any contribution you could make to the community. My friend Jose wrote a great post the other day about the importance of reusing PowerShell modules, and what the mechanism to reuse them is. Jose, however, based his post on a custom implementation using a Git repository for storing the modules. We have NuGet in the .NET platform for sharing and reusing existing libraries or code, so why can't we just leverage it for reusing PowerShell modules as well? Some teams in Microsoft are using NuGet for distributing libraries and binaries, so it would be a great thing for all of us if they also distributed the scripting interfaces in PowerShell using NuGet. This applies to the .NET OSS community as well. In fact, it looks like Andrew Nurse had the same idea and implemented a project for this in BitBucket, PsGet.
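    As a rough sketch of the idea (the package ID below is hypothetical, and the layout of module files inside a package varies), the stock NuGet command line can already fetch and unpack a package into a local folder:

        # Fetch a hypothetical PowerShell-module package from the main feed.
        nuget install MyTeam.SamplePsModule -OutputDirectory packages
        # The module files then need to land in a folder PowerShell probes,
        # e.g. Documents\WindowsPowerShell\Modules\<ModuleName> on Windows.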


  • Two Cloudy Observations from Oracle OpenWorld

    - by GeneEun
    Now that the dust has settled from another amazing Oracle OpenWorld, I wanted to reflect back on a couple of key observations I made during the event. First, it was pretty clear that Cloud was again a big deal at this year's conference. Yes, the Oracle Database 12c announcement was also huge, but for most it was hard to not notice that Oracle continues to be "all-in" with respect to cloud computing. Just to give you an idea of the emphasis on Cloud, there were over 300 Cloud-related sessions at this year's OpenWorld. If you caught some of the demo booths in the Oracle Red Lounge, then you saw some of the great platform, application, and social services that are now part of Oracle Cloud, as well as numerous demos of private cloud products that Oracle offers. Second, during Thomas Kurian's keynote presentation on Oracle Cloud, he announced the Preview Availability of a new service called Oracle Developer Cloud Service. This new platform service will provide developers with instant access to environments to better manage the application development lifecycle in the cloud. It provides development project teams access to favorite tools like Hudson, Git, Github, wikis, and tasks to help make innovation faster, more collaborative, and more effective. There's also integration with IDEs like Eclipse, NetBeans, and JDeveloper. If you're a developer, it's an awesome addition to Oracle Cloud's platform services! Want more details about Oracle Developer Cloud Service? Click here.


  • Best Usage of Multiple Computers For a Developer

    - by whaley
    I have two MacBook Pros, comparable in hardware: one is a 17" and the other a 15". The 17" has a slightly faster CPU clock speed, but beyond that the differences are completely negligible. I tried a setup a while back where I had the 17" hooked up to an external monitor in the middle of my desk with the 15" laptop immediately to the right of it, and was using teleport to control the 15" from my 17". All development, terminal usage, etc. was being done on the 17", and the 15" was primarily used for email / IM / IRC... or anything secondary to what I was working on. I have a MobileMe account, so preferences were synced, but otherwise I didn't really use anything else to keep the computers in sync (I use dropbox/git, but probably not optimally). For reasons I can't put my finger on, this setup never felt quite right. A few things that irked me: the 15" was way under-utilized and the 17" was over-utilized, and having 2 laptops and a 21" monitor all on one desk actually took up lots of desk space, and it felt like I had too much to look at. I reverted back to just using the 17" and the external monitor, and keeping the 15" around the house (and using it very sparingly). For those of you who are using multiple laptops (or just multiple machines, for that matter), I'd like to see setups that work for you when you have 2 or more machines, that give you optimal productivity, and why. I'd like to give this one more shot, but with a different approach than my previous one - which was using the 15" as a machine for secondary things (communication, reading documentation, etc.).


  • Versioning APIs

    - by Sharon
    Suppose that you have a large project supported by an API base. The project also ships a public API that end(ish) users can use. Sometimes you need to make changes to the API base that supports your project. For example, you need to add a feature that needs an API change, a new method, or requires altering one of the objects, or the format of one of those objects, passed to or from the API. Assuming that you are also using these objects in your public API, the public objects will also change any time you do this, which is undesirable, as your clients may rely on the API objects remaining identical for their parsing code to work. (cough C++ WSDL clients...) So one potential solution is to version the API. But when we say "version" the API, it sounds like this must also mean versioning the API objects as well as providing duplicate method calls for each changed method signature. So I would then have a plain old CLR object for each version of my API, which again seems undesirable. And even if I do this, I surely won't be building each object from scratch, as that would end up with vast amounts of duplicated code. Rather, the API is likely to extend the private objects we are using for our base API, but then we run into the same problem, because added properties would also be available in the public API when they are not supposed to be. So what is some sanity that is usually applied to this situation? I know many public services, such as Git for Windows, maintain a versioned API, but I'm having trouble imagining an architecture that supports this without vast amounts of duplicate code covering the various versioned methods and input/output objects. I'm aware that processes such as semantic versioning attempt to put some sanity on when public API breaks should occur. The problem is more that it seems like many or most changes require breaking the public API if the objects aren't more separated, but I don't see a good way to do that without duplicating code.


  • The Home Stretch: NetBeans IDE 7.1 Release Candidate

    - by TinuA
    The first release candidate build of NetBeans IDE 7.1 is live and available for download, which means the big release (GA) is expected any day soon. NetBeans IDE 7.1 delivers support for JavaFX 2.0, enabling the full compile, debug and profile development cycle for JavaFX 2.0 applications and keeping developers in sync with the latest from the Java platform. Beyond JavaFX support, 7.1 also provides significant Swing GUI Builder enhancements, CSS3 support, and visual debugging tools for JavaFX and Swing user interfaces. And Git--a much anticipated feature--has been integrated into the IDE. "The entire NetBeans team is tremendously excited about this release, which provides developers with more state-of-the-art tools for building front-end clients," says NetBeans Engineering Director John Jullion-Ceccarelli. "Whether you are doing JavaFX, HTML5, Swing, or JSF, NetBeans 7.1 will let you quickly and easily develop great-looking and full-featured clients for your Java or PHP-based applications." But there's one more task to check off before the general availability: the NetBeans team has launched a Community Acceptance Survey to get user feedback about the release candidate. Download the RC build, test it and take the survey to let the team know if NetBeans IDE 7.1 is ready for its debut!


  • Why can't tuxboot and ubuntu play well together?

    - by mmr
    I'm trying to get Clonezilla to run off of a USB stick, and it seems that the right way to do that is via Tuxboot. Tuxboot is not compilable on Ubuntu: I used git to get it from the repository, but building it is apparently not supported there (the build script just tries to install Windows things), and when I run the 'install' script, qmake-linux wants my qmake executable to be in the same directory as the stuff I pulled down. Let's just say that if there's a way to do this easily, I ain't seein' it. So then I downloaded the prebuilt Linux file, the most recent of which is tuxboot-linux-25. When I try to run it, I get a failure that libpng12.so.0 isn't found. I installed that via instructions I found on the web (which Firefox seems to have already deleted from my history - yay!), and then added the /usr/local/lib directory to ldconfig via emacs (had to install that too, of course): http://ubuntuforums.org/showthread.php?t=369848 I still get the error that libpng12.so.0 cannot be opened because 'No such file or directory'. ldconfig -p | grep libpng shows that the library is there, but it still doesn't seem to be findable. What to do next? (For the record, doing this in Windows is painless - download, click, and it's done. But I'm trying to be all linuxy and get away from Windows for this...)
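    Two things worth checking, sketched below as a guess rather than a confirmed fix: Ubuntu ships libpng12 as a package, and if the tuxboot binary is 32-bit on a 64-bit install, the loader needs the i386 flavor of the library:

        # Install the packaged libpng12 rather than a hand-built copy.
        sudo apt-get install libpng12-0

        # If the binary is 32-bit, a 64-bit system needs the i386 build.
        file tuxboot-linux-25            # reports ELF 32-bit or 64-bit
        sudo apt-get install libpng12-0:i386

        # Refresh the linker cache after any manual library installs.
        sudo ldconfig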


  • Some PowerShell goodness

    - by KyleBurns
    Ever work somewhere where processes dump files into folders to maintain an archive? Me too, and Windows Explorer hates it. Very often I find myself needing to organize these files into subfolders so that I can go after files without locking up Windows Explorer, and my answer used to be to write a program in something like C# to do the job. These programs will typically enumerate the files in a folder and move each file to a subdirectory named based on a datestamp. The last such program I wrote had to use lower-level Win32 API calls to perform the enumeration, because it appears the standard .NET calls use the same method of enumerating the directories that Windows Explorer chokes on when dealing with a large number of entries in a particular directory, so a simple task was accomplished with a lot of code.

    Of course, this little utility was just something I used to make my life easier and "not a production app", so it was in my local source folder and not source control when my hard drive died. So... I was getting ready to re-create it and thought it might be a good idea to play with PowerShell a bit - something I had been wanting to do but had not yet met a requirement to make me do it. The resulting script was amazingly succinct, and even building in the flexibility for parameterization and adding line breaks for readability made it only about 25 lines long. Here's the code, with discussion following:

        param(
            [Parameter(
                Mandatory = $false,
                Position = 0,
                HelpMessage = "Root of the folders or share to archive.  Be sure to end with appropriate path separator"
            )]
            [String] $folderRoot="\\fileServer\pathToFolderWithLotsOfFiles\",

            [Parameter(
                Mandatory = $false,
                Position = 1
            )]
            [int] $days = 1
        )

        dir $folderRoot|?{(!($_.PsIsContainer)) -and ((get-date) - $_.lastwritetime).totaldays -gt $days }|%{
            [string]$year=$([string]$_.lastwritetime.year)
            [string]$month=$_.lastwritetime.month
            [string]$day=$_.lastwritetime.day
            $dir=$folderRoot+$year+"\"+$month+"\"+$day
            if(!(test-path $dir)){
                new-item -type container $dir
            }
            Write-output $_
            move-item $_.fullname $dir
        }

    The script starts by declaring two parameters. The first parameter holds the path to the folder that I am going to be sorting into subdirectories. The path separator is intended to be included in this argument because I didn't want to mess with determining whether this was local or UNC and picking the right separator in code, but this could be easily improved upon using Path.Combine, since PowerShell has access to the full framework libraries. The second parameter holds a minimum age in days for files to be removed from the root folder. The script then pipes the dir command through a query to include only files (by excluding containers) and, of those, only entries that meet the age requirement based on the last-modified datestamp. For each of those, the datestamp is used to construct a folder name in the format YYYY\MM\DD (if you're in an environment where even a day's worth of files needs further dividing, you could make this more granular), and the folder is created if it does not yet exist. Finally, the file is moved into the directory.

    One of the things that was really cool about using PowerShell for this task is that the new-item command is smart enough to create the entire subdirectory structure with a single call. In previous code that I have written to do this kind of thing, I would have to test the entire tree leading down to the subfolder I want, leading to a lot of branching code that detracted from being able to quickly look at the code and understand the job it performs. Overall, I have to say I'm really pleased with what has been done making PowerShell powerful and useful.
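    For reference, a hedged usage example: assuming the script is saved as Archive-OldFiles.ps1 (a made-up name), it could be invoked from a command prompt like so:

        # Archive files older than 7 days from the share (paths are placeholders).
        powershell.exe -File Archive-OldFiles.ps1 -folderRoot "\\fileServer\pathToFolderWithLotsOfFiles\" -days 7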


  • Project Management Software / 1 maybe 2 developers

    - by Ominus
    I am looking for software that I can use to "manage" multiple projects (5 - 10). Here are the features I would like, but any recommendation is welcome:

    - Bug/feature tracking on a per-project basis.
    - Some way to keep all documents, diagrams, specs, and requirements in one place with the project. Better yet, a tool where all these things (or most of them) could be authored.
    - Task management during the development phase, with milestones and estimates/actuals.
    - Git integration

    I have been doing contract work, and I have been doing really well for myself as far as getting projects, but it's becoming VERY hard to manage everything in an efficient manner. I am trying to learn about best practices when it comes to software programming methodologies, and the more I read, the more I realize that I am just managing these projects poorly. I am getting things done, but the more I take on, the less "solid" everything is. I am afraid that if I don't get some good solid tools and practices in place, I am going to do my customers and myself a disservice. The problem is that there are SO many options that it's hard to weed through them all. I was at a point today where I had decided that I would just code my own (there is some irony here)! Obviously everyone has their likes and dislikes, so I would love to hear from some of you lone programmers and how you manage everything, since our needs aren't exactly the same as what a large team might need. I also want a solution that can scale to 2, maybe 3, developers if I end up hiring some people to help with my workload. Thanks again for your usual insights!


  • Sneak Preview - New CodePlex UI

    We have been busy the last several months working to improve the overall experience for the CodePlex community. We have been working through some of the top requested items, such as our big announcement last week enabling Git. Something that is not explicitly on the feature request list is the set of requests to update the web site look and user experience. As Brian Harry mentioned, the Future of CodePlex is Bright, so it is time to start brightening up the place.

    Goals

    As with any sizeable change, you need to decide the scope of changes you want to tackle. We decided that we would optimize for incremental improvements versus taking months to get a completely new experience released. Our goals with this user experience work are to refresh the look and feel of the site, introduce new visual elements and set up the site for future structural changes. So this is not the end, just the beginning.

    Early Views

    I want to set a few expectations first: these screen shots are not final, and we are still working through the content and final element placement. Feedback is always welcome, just keep that in mind as you review the images.

    New CodePlex Home - The navigation changed a good bit on the home page, and we have moved the search to a more consistent location across the site.

    User Profile / Users Home Page - The goal was to make it easier to find and take action on common tasks, such as creating projects.

    Project Home / Issue Tracker

    This should give you a taste of where we are going with the new user experience. As always, we love the feedback: comment below, find us on Twitter @codeplex or @mgroves84, or create or vote up suggestions.


  • Procedural level generation for a platformer game (tile-based) using player physics

    - by Notbad
    I have been searching for information about how to build a 2D world generator (tile-based) for a platformer game I am developing. The levels should look like dungeons with a ceiling and a floor, and they will have a high probability of being made of just horizontal rooms, but sometimes they can have exits to a top/down room. Here is an example of what I would like to achieve - I'm referring only to the caves part. I know level design won't be that great when generated, but I think it is possible to have something good enough for people to enjoy the procedural maps (Note: Super Metroid spoiler!): http://www.snesmaps.com/maps/SuperMetroid/SuperMetroidMapNorfair.html

    Well, after spending some time thinking about this, I have some ideas to create the maps that I would like to share with you:

    1) I have read about cellular automata, and I would like to use them to carve the rooms, but instead of carving just one tile at a time I would like to carve full columns of tiles. Of course this carving system will have some restrictions, like how many tiles must be left for the roof and the ceiling, etc... This way I could get much cleaner rooms than with the usual automaton.

    2) I want some branching in the rooms. It will have a low probability of happening, but I definitely want it. Thinking about carving, I came to the conclusion that I could use some sort of path-creation algorithm that the carving system would follow to create a path through the rooms. This could be more noticeable if we make the carving system carve columns with the height of a corridor or the height of a wide room (this will be added to the system as a parameter). This way, at some point I could spawn a new automaton beside the main one to create branches. This new automaton should work side by side with the first one to create dead ends and islands (where both paths created by the automata meet at some point or lead to the same room).

    It would take too long to explain all the tests I have done here, so I will just try to summarize the problems, to see if anyone can bring some light to solve them (I don't mind sharing my successes, but I think they aren't too relevant):

    1) Zone reachability: how can I make sure that the player will be able to reach all the zones I created (mainly when branches happen or vertical rooms are created)? When branches are created, I have to make sure that there will be a way to get onto the newly created branch - I mean a bifurcation that the player can follow (the player follows the main path or jumps to a platform to get onto the other way). On the other hand, if an island is created by the meeting of both branches, I need to make sure the player will be able to get onto the island too.

    2) When a branch is created and corridors are generated for each branch, how can I make them both merge or repel, to create an island or just separated corridors?

    3) When I create a branch and an island is created because both corridors merge at some point or lead to the same room, is there any way to detect this and randomize where to create the platforms needed to get onto the island? These platforms could be created at the start of the island or at the end.

    I guess part of the problem could be solved using some sort of graph following the created paths, but I'm a bit lost in this sea of procedural content creation :). I don't expect a complete solution to the problem, but some information to get me moving forward again would be great. Thanks in advance.


  • Reformatting and version control

    - by l0b0
    Code formatting matters. Even indentation matters. And consistency is more important than minor improvements. But projects usually don't have a clear, complete, verifiable and enforced style guide from day 1, and major improvements may arrive any day. Maybe you find that

        SELECT id, name, address
        FROM persons
        JOIN addresses ON persons.id = addresses.person_id;

    could be better written as / is better written than

        SELECT persons.id, persons.name, addresses.address
        FROM persons
        JOIN addresses ON persons.id = addresses.person_id;

    while working on adding more columns to the query. Maybe this is the most complex of all four queries in your code, or a trivial query among thousands. No matter how difficult the transition, you decide it's worth it. But how do you track code changes across major formatting changes? You could just give up and say "this is the point where we start again", or you could reformat all queries in the entire repository history. If you're using a distributed version control system like Git, you can revert to the first commit ever and reformat your way from there to the current state. But it's a lot of work, and everyone else would have to pause work (or be prepared for the mother of all merges) while it's going on. Is there a better way to change history which gives the best of all results: the same style in all commits, and minimal merge work? To clarify, this is not about best practices when starting the project, but rather what should be done when a large refactoring has been deemed a Good Thing™ but you still want a traceable history. Never rewriting history is great if it's the only way to ensure that your versions always work the same, but what about the developer benefits of a clean rewrite? Especially if you have ways (tests, syntax definitions or an identical binary after compilation) to ensure that the rewritten version works exactly the same way as the original?
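    If you do decide to rewrite, git can replay a formatter over the whole history instead of you reformatting commit by commit. A minimal sketch, assuming some command-line SQL formatter exists for your dialect (my-sql-formatter below is a placeholder), and remembering that filter-branch rewrites every hash, so all collaborators must re-clone afterwards:

        # Rewrite every commit on every branch, reformatting SQL files in each tree.
        git filter-branch --tree-filter \
            'find . -name "*.sql" -exec my-sql-formatter {} \;' \
            -- --all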


  • How to join the World of Programming? [closed]

    - by litebread
    Name's Vlad, and I am currently in my third year of community college, studying Computer Science with an emphasis on programming in C++ and networking. I have completed a few programming courses with general ease, but have not gained an advanced understanding of programming through school. None of my friends are serious programmers working in the industry. Being an active lurker on many programming websites, and tech-oriented sites in general, I have noticed how little I know about the industry and its lingo and terminology (I have no clue how GitHub works, but I generally understand what it's for). So I am looking for help as to where I should look for information on the programming world and the industry, in which I am very interested. By that I mean: what sites should I utilize to gain information on programming practices, introductions to advanced C++, and resources that simply introduce a 20-something programming noob? I like programming, but I haven't dug my hands deep into it yet; I want to start doing so before I transfer to a university. All in all, where do I find information on becoming an actual programmer (information that lays out a path)? Thank you for reading. Have a great day!


  • Best practices in versioning

    - by Gerenuk
    I develop some scripts for data analysis in a small team. For the moment we use SVN, but not in a very structured way; we haven't even looked at how to use branches, even though we need that functionality. What do you suggest as the best practice to set up the following system:

    - two code bases (core and plugins)
    - versions can be incompatible with previous scripts
    - sometimes individual features are being developed and not yet finished, while other fixes have to be made urgently to the code

    In the end we don't deliver the code as a package, but rather place the Python scripts in some directory (with version names?). Some other Python script, which serves as a configuration, chooses the desired version, sets the path to these libraries and then starts to import the modules. I saw stable releases named "trunk", so I did the same. However, there are no version numbers yet. Core and plugins are different repositories, but we have to match versions for compatibility. Can you suggest some best practices or references to ease development and reduce chaos? :) Some have suggested Git. I haven't heard about it, but I'm free to change.
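    As a concrete illustration of the usual branch-per-feature answer (a sketch of standard git usage rather than a prescription; branch and tag names are arbitrary):

        # Stable work lives on master; tag releases so scripts can pin a version.
        git tag -a v1.0 -m "First stable release of core"

        # Unfinished features live on their own branches...
        git checkout -b feature/new-importer

        # ...so urgent fixes can still go out from master in the meantime.
        git checkout master
        git checkout -b hotfix/crash-on-empty-input
        # Merge the hotfix back and tag a point release.
        git checkout master && git merge hotfix/crash-on-empty-input
        git tag -a v1.0.1 -m "Urgent fix"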


  • Are there any good examples of open source C# projects with a large number of refactorings?

    - by Arjen Kruithof
    I'm doing research into software evolution and C#/.NET, specifically on identifying refactorings from changesets, so I'm looking for a suitable (XP-like) project that may serve as a test subject for extracting refactorings from version control history. Which open source C# projects have undergone a large number of refactorings?

    Criteria: A suitable project has its change history publicly available, has compilable code at most commits, and has had at least several refactorings applied in the past. It does not have to be well-known, and the code quality or number of bugs is irrelevant. Preferably the code is in a Git or SVN repository.

    The result of this research will be a tool that automatically creates informative, concise comments for a changeset. This should improve on the common development practice of just not leaving any comments at all.

    EDIT: As Peter argues, ideally all commit comments would be teleological (goal-oriented). Practically, if a comment is made at all, it is often descriptive, merely a summary of the changes. Sadly we're a long way from automatically inferring developer intentions!
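    Whatever project you pick, the extraction side is cheap to prototype with plain git; a minimal sketch that dumps one diff per commit for offline analysis:

        # Walk the full history oldest-first, one patch file per commit.
        mkdir -p diffs
        git rev-list --reverse HEAD | while read sha; do
            # --unified=0 keeps only the changed lines, without context.
            git show --unified=0 "$sha" > "diffs/$sha.patch"
        done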


  • Release Notes for 6/14/2012

    Here are the notes for this week's release:

    Diffs in Pull Requests and Commits

    We altered the way we display diffs across commits and pull requests to maximize the amount of vertical real estate devoted to the diff. Before, the viewport for diffs was always snapped to the height of the browser, which meant that on lower resolutions, the amount of space for viewing diffs could become very tiny. Now, the majority of the browser vertical space is devoted to viewing the diffs. Let us know what you think!

    Bug Fixes

    - Fixed an issue where returning to the list of files changed from a diff would sometimes not show the list of files.
    - Fixed the dialogs for approving and denying requests to join projects.
    - Fixed various issues around validation of project details when publishing a project.
    - Fixed an issue that caused the formatting of our tabs in pull requests to not display properly.
    - Fixed an issue where users browsing Unicode files in a Git project would see error pages.
    - Fixed various issues where the option to subscribe to notifications would not appear properly.

    Have ideas on how to improve CodePlex? Visit our ideas page! Vote for your favorite ideas or submit a new one. Got Twitter? Follow us and keep apprised of the latest releases and service status at @codeplex.


  • Web dev/programmer with 4.5 yrs experience. Better for career: self-study or master's degree? [closed]

    - by Anonymous Programmer
    I'm a 28-year-old web developer/programmer with 4.5 years of experience, and I'm looking to jump-start my career. I'm trying to decide between self-study and a 1-year master's program in CS at a top school. I'm currently making 65K in a high cost-of-living area that is NOT a hot spot for technology firms. I code almost exclusively in Ruby/Rails, PHP/CodeIgniter, SQL, and JavaScript. I've slowly gained proficiency with Git. Roughly half the time I am architecting/coding, and half the time I am pounding out HTML/CSS for static brochureware sites. I'd like to make more money while doing more challenging and interesting work, but I don't know where to start. I have an excellent academic record (math major with many CS credits, 3.9+ GPA), GRE scores, and recommendations, so I am confident that I could be admitted to a great CS master's program. On the other hand, there is the tuition and opportunity cost to consider. I feel like there are a number of practical languages, tools, and skills worth knowing that I could teach myself - shell scripting, .NET, Python, Node.js, MongoDB, natural language processing techniques, etc. That said, it's one thing to read about a subject and another thing to have experience with it, which structured coursework provides. So, on to the concrete questions:

    1. What programming skills/knowledge should I develop to increase my earning potential and make me competitive for more interesting jobs?
    2. Will a master's degree in CS from a top school help me develop the above skills/knowledge, and if so, is it preferable to self-study (possibly for other reasons, e.g., the degree's value as a credential)?


  • Which version management design methodology should be used for dependent system nodes?

    - by actiononmail
    This is my first question, so please tell me if it is too vague or hard to understand. My question is related to high-level design. We have a system (specifically an ATCA chassis) configured in a star topology, with a master node (MN) and other subordinate nodes (SN). All nodes are connected via Ethernet and run Linux along with other proprietary applications. I have to build a recovery framework design so that any software entity - whether Linux itself, the ramdisk or an application - can be rolled back to a previous good version if something bad happens. Thus I am thinking of maintaining a state version matrix on the MN, where each state (1, 2, ..., n) represents good kernel, ramdisk and application versions for each SN. It may happen that one SN's version depends on another SN's version. Please see the following diagram:

    So I am in a dilemma whether to use the package management methodology used by Debian distributions (like Ubuntu) or a Git repository methodology, in order to roll back to previous good versions on either one SN or on all the dependent SNs. The method should also make it easy to upgrade SNs along with MNs. Some of the features I am trying to achieve:

    1) Upgrade of even a single software entity is achievable without hindering others.
    2) Dependency checks must be done before applying a rollback or upgrade on each SN.
    3) A user prompt should be given if a dependency check fails. If the user still goes for the rollback, all the SNs should get a notification to roll back their own releases (if required).
    4) The binaries should be distributed to the SNs beforehand so that the recovery process is faster, rather than fetching them every time from the MN.
    5) Release patches from developers for bug fixes and feature enhancements can be applied on a running system.
    6) Each version can be easily tracked and distinguished.

    Thanks
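    For the Debian-package route, version pinning plus a per-state manifest file is one way to make the state version matrix concrete. A rough sketch under heavy assumptions (the manifest format, hostnames and package names are all invented for illustration):

        # states/state-42.manifest lists one "host package=version" pair per line:
        #   sn1 kernel-image=3.2.0-4
        #   sn1 ramdisk=1.7
        #   sn2 app-foo=2.3.1
        while read host pkgspec; do
            # apt-get can install, or downgrade to, an exact pinned version.
            ssh "root@$host" "apt-get install -y --force-yes $pkgspec"
        done < states/state-42.manifest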


  • Installing datacommons from sunlight

    - by Newben
    I know strictly nothing about Python, and I am installing datacommons from Sunlight Labs. I followed the README.md step by step: https://github.com/sunlightlabs/datacommons First, the doc says to add dc_data and dc_matcchbox to the virtualenv, but I didn't find them. I went on to the final step anyway, running ./manage.py runserver, and got the following message:

        (datacommons)newben@newben-VirtualBox:~/share-ubuntu/sunlightlabs-datacommons-e3ff1a3$ ./manage.py runserver
        fatal: Not a git repository (or any parent up to mount parent /home/mbenchoufi)
        Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
        Error: Can't find the file 'settings.py' in the directory containing './manage.py'. It appears you've customized things. You'll have to run django-admin.py, passing it your settings module. (If the file settings.py does indeed exist, it's causing an ImportError somehow.)

    I downloaded the files from GitHub and put them in the 'sunlightlabs-datacommons-e3ff1a3' folder. By the way, I didn't understand how to deal with the settings file. Could someone help me understand how to install datacommons?
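    Two observations, with a sketch of safe first steps (the example settings filename below is a guess - Django projects commonly ship one, but check what the repo actually contains): the "Not a git repository" error suggests the code was downloaded as a tarball rather than cloned, and Django is refusing to start without a settings.py next to manage.py.

        # Clone with git instead of downloading a tarball, so any git-based
        # tooling in the project works.
        git clone https://github.com/sunlightlabs/datacommons.git
        cd datacommons

        # Look for a shipped settings template (filename is a guess).
        ls *settings*
        # If one exists, copy it into place and edit it:
        # cp example_settings.py settings.py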


  • x.265 in ffmpeg

    - by Levan
    Today I found out that x265 support is already present in ffmpeg, so I compiled ffmpeg with this guide. Sadly, libx265 did not work on Ubuntu; however, on Windows I tried the same thing with a Zeranoe ffmpeg build and it worked without a problem. So do you think I did something wrong, or is it not yet implemented in the Linux build (using that guide)? The results of the command ffmpeg -codecs | grep -i hevc show:

        ffmpeg version 2.1.git Copyright (c) 2000-2014 the FFmpeg developers
        built on Feb 19 2014 19:00:17 with gcc 4.8 (Ubuntu/Linaro 4.8.1-10ubuntu9)
        configuration: --prefix=/home/levan/ffmpeg_build --extra-cflags=-I/home/levan/ffmpeg_build/include --extra-ldflags=-L/home/levan/ffmpeg_build/lib --bindir=/home/levan/bin --extra-libs=-ldl --enable-gpl --enable-libass --enable-libfdk-aac --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-nonfree --enable-x11grab
        libavutil      52. 64.100 / 52. 64.100
        libavcodec     55. 52.102 / 55. 52.102
        libavformat    55. 33.100 / 55. 33.100
        libavdevice    55. 10.100 / 55. 10.100
        libavfilter     4.  1.102 /  4.  1.102
        libswscale      2.  5.101 /  2.  5.101
        libswresample   0. 17.104 /  0. 17.104
        libpostproc    52.  3.100 / 52.  3.100
        D.V.L. hevc  H.265 / HEVC (High Efficiency Video Coding)

    Thank you for your time
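    Worth noting: the configuration line above contains no --enable-libx265, so this build can decode HEVC (the leading D in D.V.L.) but was not built against the x265 encoder. A hedged sketch of the extra build steps (the directory layout reflects the x265 project's usual tree and may differ; keep the other configure flags from the guide):

        # Build and install x265 into the same local prefix (x265 uses cmake).
        cd ~/ffmpeg_sources/x265/build/linux
        cmake -DCMAKE_INSTALL_PREFIX="$HOME/ffmpeg_build" ../../source
        make && make install

        # Reconfigure ffmpeg with the encoder enabled, then rebuild.
        cd ~/ffmpeg_sources/ffmpeg
        PKG_CONFIG_PATH="$HOME/ffmpeg_build/lib/pkgconfig" ./configure \
            --prefix="$HOME/ffmpeg_build" --enable-gpl --enable-nonfree \
            --enable-libx264 --enable-libx265   # ...plus the other flags above
        make && make install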


  • How should a team share/store game content during development?

    - by irwinb
    Other than Dropbox, what out there has been especially useful for storing and sharing game content like images during development (with a feature set similar to Dropbox: working offline, automatic syncing, and support for Windows/OS X)? We are looking into hosting our own SharePoint server, but it seems to be really focused on documents... Maybe Box.net would work?

    EDIT: For code, we are using Git. To be more precise, I was looking for an easy, automatic way for content produced by artists/audio engineers to be available to everyone. Features like approvals of assets don't hurt either. Following the answer linked by Tetrad, Alienbrain looked pretty interesting, but... it is way out of our budget (maybe something to invest in in the future).

    What we ended up doing: We were going to go with Box.net, but downloading the sync apps for desktop use required us to wait to be contacted by them for some reason. We did not have much time to wait, so we ended up going with Dropbox Teams. Box.net has a nice feature set, but we never really felt held back without them. Thanks for the help :).

