Search Results

Search found 6054 results on 243 pages for 'git extensions'.

Page 74 of 243

  • Unshorten.it! Unpacks Shortened URLs and Provides Safety Rating

    - by Jason Fitzpatrick
    Shortened URLs sure are convenient and compact, but they hide the destination URL. Unshorten.it! is a free extension, available for both Chrome and Firefox, that not only shows you the full URL but also gives you a safety rating, so there's no need to click blindly again. Install the extension and then, when you come across a shortened URL, simply right-click on it and choose “Unshorten this link” to see both the unpacked URL and a safety rating provided by Web of Trust and HPHosts. Unshorten.it! [via Gizmo's Freeware]

  • Bad idea to display mail server info in public github project?

    - by kentcdodds
    I have a project for work that requires me to send e-mails to people using our work mail server. The server doesn't require authentication. Part of my project is using a Java-Helper I'm developing on GitHub. I don't know if I completely understand how it all works, but I'm guessing it would be a bad idea to have the server information available on GitHub for the world to see. Is this correct? Afterthought: I'm not going to put it in the Java-Helper itself, because that wouldn't be helpful for anyone but me, but I'm still curious to know the answer to this question :) Thanks!
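
    A minimal sketch of the usual way to keep such details out of a public repo, assuming the helper reads its settings from a properties file at runtime (all file names here are hypothetical): commit a placeholder template and ignore the real file.

        # commit a template with placeholder values only (hypothetical file names)
        printf 'smtp.host=mail.example.com\nsmtp.port=25\n' > mail.properties.example
        # keep the real settings file out of version control
        echo 'mail.properties' >> .gitignore
        git add mail.properties.example .gitignore
        git commit -m "Track a mail config template, not the real server details"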

  • Why "Fork me on github"?

    - by NoBugs
    I understand how GitHub works, but one thing I've been confused about is why almost every OSS project lately has a "Fork me on GitHub" link on its homepage. For example, http://jqtjs.com/, http://www.daviddurman.com/flexi-color-picker/, and others. Why is this so common? Is it that they want/need code validation, checking for security/performance improvements that they may not know how to do? Is it meant to show that this is a collaborative project, and that you're welcome to add improvements? Do they work for GitHub, or want to promote its service? Oddly enough, I don't think I've seen a "Fork project on Bitbucket" logo recently. My first reaction to that logo was that the project probably needs to be modified (forked) in order to integrate it with anything useful, or that they are encouraging a fragmented codebase, with everyone making their own fork of the project. But I don't think that is the intent.

  • Is it costly to leave the Console and Script features enabled in Firebug?

    - by parisminton
    For some time now, I've run Firebug constantly enabled to do quick DOM inspections, leaving the Console and Script panels disabled. I'm just starting to use these two features so I don't have to keep using alerts for testing and debugging. I enable them while I use them and turn them back off when I'm done. I'd like to know if these particular features can slow things down enough that they shouldn't be left on round-the-clock. Do they slow down page loads, use inordinate chunks of memory, or anything like that? I don't see anything about it in the Firebug wiki.

  • Is it a good practice to use branches to maintain different editions of the same software?

    - by Tamás Szelei
    We have a product that has a few different editions. The differences are minor: different strings here and there, very little additional logic in one, very little difference in logic in the other. When the software is being developed, most changes need to be added to each edition; however, there are a few that don't and a few that need to differ. Is it a valid use of branches if I have release-editionA and release-editionB (etc.) branches? Are there any gotchas? Good practices? Update: Thanks for the insight everyone, lots of good answers here. The general consensus seems to be that it is a bad idea to use branches for this purpose. For anyone wondering, my final solution to the problem was to externalize the strings as configuration, and to externalize the differing logic as plugins or scripts.
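
    As an illustration of that externalization (the layout and file names are hypothetical, not from the original post), a build step can stamp one edition's strings and plugins into an otherwise identical package:

        # editions/editionA.properties etc. hold the strings that differ;
        # plugins/editionA/ holds the edition-specific logic
        EDITION=${1:-editionA}
        mkdir -p build/plugins
        cp "editions/$EDITION.properties" build/app.properties
        cp "plugins/$EDITION"/* build/plugins/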

  • Can DVCSs enforce a specific workflow?

    - by dukeofgaming
    So, I have this little debate at work where some of my colleagues (who are actually in charge of administrating our Perforce instance) say that workflows are strictly a process thing, and that the tools we use (in this case, the version control system) have no bearing on them. In other words, their point is that workflows (and their execution) are tool-agnostic. My take on this is that DVCSs are better at encouraging flexible yet well-defined workflows, because of the branching inherently occurring in the background (anonymous branches), and because you can enforce workflows through the deployment model you establish (e.g. pull requests through repository management, dictator/lieutenant roles with their machines set up as servers, etc.). I think that in CVCSs you have to enforce workflows through policies and policing, because there is only one way to share the code, while in DVCSs you just go with the flow based on the infrastructure/permissions that were set up for you. Even after making these arguments, I'm still unable to fully convince them. Am I saying something the wrong way? If not, what other arguments or examples do you think would be useful to convince them? Edit: The main workflow we have been focusing on, because it makes sense to both sides, is the dictator/lieutenants workflow. My argument for this particular workflow is that there is no pipeline in a CVCS (because there is just sharing work in a centralized way), whereas there is an actual pipeline in DVCSs depending on how you deploy read/write permissions. Their argument is that this workflow can be done through branching, and while they do this in some projects (due to policy/policing), in other projects they forbid developers from creating branches.
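
    To make the "pipeline through permissions" point concrete, here is a sketch of a lieutenant's remotes (host and remote names are made up for illustration); the dictator has read access only to the lieutenants' public repos, so the pipeline is enforced by topology rather than by policy:

        # on a lieutenant's machine: each developer's repo is a read-only remote
        git remote add alice git://dev-host/alice/project.git
        git remote add public ssh://lt-host/project.git   # only the dictator pulls from here
        git fetch alice
        git merge alice/master      # integrate one developer's work
        git push public master      # publish the integrated result upstream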

  • Strategy for versioning on a public repo

    - by biril
    Suppose I'm developing a (JavaScript) library which is hosted on a public repo (e.g. GitHub). My aim in terms of how version numbers are assigned and incremented is to follow the guidelines of semantic versioning. Now, there's a number of files in my project which compose the actual lib and a number of files that support it, the latter being docs, a test suite, etc. My perspective thus far has been that version numbers should only apply to the actual lib, not the project as a whole, since the lib alone is 'the unit' that defines the public API. However, I'm not satisfied with this approach, as, for example, a fix in the test suite constitutes an improvement to my project which will not be reflected in the version number (or the docs, which contain a reference to it). On a more practical level, various tools, such as package managers, may (understandably) not play along with this strategy. For example, when trying to publish a change which is not reflected in the version number, npm publish fails with the suggestion "Bump the 'version' field, set the --force flag, or npm unpublish". Am I doing it wrong?
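
    For what it's worth, a sketch of the npm-friendly route, which folds even support-only changes into a patch release (assuming a standard package.json):

        git commit -am "Fix flaky timeout in the test suite"
        npm version patch    # bumps package.json, commits, and tags (e.g. 1.2.3 -> 1.2.4)
        npm publish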

  • How can I refactor a code base while others rapidly commit to it?

    - by Incognito
    I'm on a private project that will eventually become open source. We have a few team members, talented enough with the technologies to build apps, but not dedicated developers who can write clean/beautiful and, most importantly, long-term maintainable code. I've set out to refactor the code base, but it's a bit unwieldy, as a team member in another country whom I'm not in regular contact with could be updating a totally separate part of it at the same time. I know one solution is to communicate rapidly or adopt better PM practices, but we're just not that big yet. I just want to clean up the code and merge nicely into what he has updated. Would a branch be a suitable plan? A best-effort merge? Something else?
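
    A sketch of the branch-based approach the question is weighing (branch names are illustrative): keep the refactoring on its own branch, but fold the remote developer's work in frequently so the final merge stays small.

        git checkout -b refactor          # isolate the cleanup work
        # ...refactoring commits...
        git fetch origin
        git merge origin/master           # fold colleagues' changes in early and often
        # once the refactoring settles:
        git checkout master
        git merge refactor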

  • Are forks treated differently by GitHub?

    - by IQAndreas
    I found that GitHub does not allow you to use the "search" feature on forks (issues are still searchable, just not code). [screenshot] Are there any other cases where forks are treated as "inferior", or at least differently, by GitHub? For instance (assuming you haven't created a website specific to your fork), will forks still show up in Google search results, or will GitHub only provide results for the parent repository?

  • Easy way to deploy PHP sites from git

    - by Leopd
    I'm looking for recommendations on how to automate / simplify deployment from a git repository (github) to a hosting service. The hosting service supports FTP (yuck) / SSH / SFTP access. Any good tools out there to give push-button deployment of new revisions? I know it's not a hard script to write, but when you start thinking about things like roll-back and multiple sites, it gets complicated enough that I'd rather not re-invent the wheel.
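
    Since the host allows SSH, one common sketch is a bare repository on the server with a post-receive hook that checks the pushed revision out into the web root (paths, user, and host are hypothetical):

        # on the server, once:
        git init --bare ~/site.git
        echo '#!/bin/sh' > ~/site.git/hooks/post-receive
        echo 'GIT_WORK_TREE=/var/www/site git checkout -f master' >> ~/site.git/hooks/post-receive
        chmod +x ~/site.git/hooks/post-receive

        # from the development machine:
        git remote add live user@host:site.git
        git push live master

    Rollback then amounts to checking out an earlier revision with the same GIT_WORK_TREE trick, though multi-site setups still need a wrapper script of some kind.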

  • Using gerrit (or similar tool) on a team where multiple devs work on a single feature

    - by Bacon
    We have a team of roughly ~8 devs who regularly work on the same feature over the course of a 3 week sprint. It isn't quite pair programming, but in our current workflow devs regularly push up incomplete code for a colleague to complete. This worked fine before we introduced Gerrit, but now our commits need to represent chunks of test-passing, complete, logical functionality, and so the model breaks. My only idea is to have everybody push up to a separate, untracked branch up until the functionality is ready for review, then squash everything into commits that make sense and push up. Is there another Gerrit-ized workflow that could work? I know this is a widely discussed topic on Google Groups, and that there has recently been some discussion of Gerrit topic reviews, but I wanted to see if there is anybody out there using Gerrit in this way, and what the suggested workflow would be.
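
    A sketch of the squash-before-review idea the question describes (branch names are illustrative); Gerrit only ever sees the collapsed commits pushed to its refs/for/ review ref:

        git checkout -b wip/feature           # shared, unreviewed work lives here
        # ...team members push to and pull from wip/feature freely...
        git fetch origin
        git rebase -i origin/master           # squash into logical, test-passing commits
        git push origin HEAD:refs/for/master  # submit only the tidy commits to Gerrit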

  • Organisation GitHub account. Secure to use for personal projects?

    - by Mackey18
    So a large client of mine gave me access to their organisation GitHub account. With it came a login for myself (on github.companyname.com) and, of course, access to certain repos on their company account (by switching the user to the company via the button in the top left). Now I was wondering: since I can create private repos for myself, is it safe for me to use these for unrelated projects, or can the company administrators access my user's repos despite them being private? My understanding of GitHub is limited as it is, so this extra layer of complexity from the organisation account isn't helping much. Thanks, Mike

  • git-daemon fails on VM suspend and resume

    - by fuzzy lollipop
    I have Gitorious running on a CentOS 5.3 install on a VMware virtual machine under VMware Server. Every time we take the server down via suspend to back up the image and then resume the VM, the git-daemon dies. All my other processes continue to function without any problems; this one process dies and has to be restarted manually. Does anyone have any ideas why this might be happening, or how to make sure this process never dies off?
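
    Not a root-cause fix, but a low-tech way to keep the service alive is a cron job that restarts it whenever it is gone. The daemon command below is a stand-in using stock git daemon; substitute whatever command and base path the Gitorious setup actually uses:

        # /etc/cron.d/git-daemon -- restart the daemon if it has died (checked every minute)
        * * * * * root pidof git-daemon >/dev/null || /usr/bin/git daemon --base-path=/srv/git --export-all --detach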

  • Reactive Extensions vs FileSystemWatcher

    - by Joel Mueller
    One of the things that has long bugged me about the FileSystemWatcher is the way it fires multiple events for a single logical change to a file. I know why it happens, but I don't want to have to care - I just want to reparse the file once, not 4-6 times in a row. Ideally, there would be an event that only fires when a given file is done changing, rather than every step along the way. Over the years I've come up with various solutions to this problem, of varying degrees of ugliness. I thought Reactive Extensions would be the ultimate solution, but there's something I'm not doing right, and I'm hoping someone can point out my mistake. I have an extension method:

        public static IObservable<IEvent<FileSystemEventArgs>> GetChanged(this FileSystemWatcher that)
        {
            return Observable.FromEvent<FileSystemEventArgs>(that, "Changed");
        }

    Ultimately, I would like to get one event per filename within a given time period, so that four events in a row with a single filename are reduced to one event, but I don't lose anything if multiple files are modified at the same time. BufferWithTime sounds like the ideal solution.

        var bufferedChange = watcher.GetChanged()
            .Select(e => e.EventArgs.FullPath)
            .BufferWithTime(TimeSpan.FromSeconds(1))
            .Where(e => e.Count > 0)
            .Select(e => e.Distinct());

    When I subscribe to this observable, a single change to a monitored file triggers my subscription method four times in a row, which rather defeats the purpose. If I remove the Distinct() call, I see that each of the four calls contains two identical events, so there is some buffering going on. Increasing the TimeSpan passed to BufferWithTime seems to have no effect; I went as high as 20 seconds without any change in behavior. This is my first foray into Rx, so I'm probably missing something obvious. Am I doing it wrong? Is there a better approach? Thanks for any suggestions...

  • File extensions and MIME Types in .NET

    - by Marc Climent
    I want to get a MIME Content-Type from a given extension (preferably without accessing the physical file). I have seen some questions about this, and the methods described to perform it can be summarized as:

    1. Use registry information.
    2. Use urlmon.dll's FindMimeFromData.
    3. Use IIS information.
    4. Roll your own MIME mapping function, based on a table like this one, for example.

    I've been using no. 1 for some time, but I realized that the information provided by the registry is not consistent and depends on the software installed on the machine. Some extensions, like .zip, tend not to have a Content-Type specified. Solution no. 2 forces me to have the file on disk in order to read the first bytes, which is slow but may get good results. The third method is based on Directory Services and all that stuff, which I don't like much because I have to add COM references, and I'm not sure it's consistent between IIS6 and IIS7; I also don't know how it performs. Finally, I didn't want to use my own table, but in the end it seems the best option if I want decent performance and consistent results across platforms (even Mono). Do you think there's a better option than using my own table, or is one of the other described methods better? What's your experience?

  • Trying to exclude certain extensions doing a recursive copy (MSBuild)

    - by Kragen
    I'm trying to use MSBuild to read in a list of files from a text file and then perform a recursive copy, copying the contents of those directories to some staging area while excluding certain extensions (e.g. .tmp files). I've managed to do most of the above quite easily using CreateItem and the MSBuild Copy task, but whatever I do, the CreateItem task just ignores my Exclude parameter:

        <PropertyGroup>
          <RootFolder>c:\temp</RootFolder>
          <ExcludeFilter>*.tmp</ExcludeFilter>
          <StagingDirectory>staging</StagingDirectory>
        </PropertyGroup>

        <ItemGroup>
          <InputFile Include="MyFile.txt" />
        </ItemGroup>

        <Target Name="Build">
          <ReadLinesFromFile File="@(InputFile)">
            <Output ItemName="AllFolders" TaskParameter="Lines" />
          </ReadLinesFromFile>
          <CreateItem Include="$(RootFolder)\%(AllFolders.RelativeDir)**" Exclude="$(ExcludeFilter)">
            <Output ItemName="AllFiles" TaskParameter="Include" />
          </CreateItem>
          <Copy SourceFiles="@(AllFiles)" DestinationFolder="$(StagingDirectory)\%(RecursiveDir)" />
        </Target>

    Example contents of 'MyFile.txt':

        somedirectory\
        someotherdirectory\

    (i.e. the paths are relative to $(RootFolder); I mention this because I read somewhere that it might be relevant). I've tried loads of different combinations of Exclude filters, but I never seem to be able to get it to correctly exclude my .tmp files. Is there any way of doing this with MSBuild without resorting to xcopy?

  • MonoRail: Testing, Route Extensions, Folder Structures

    - by Kezzer
    I've got a few questions related to the use of MonoRail.

    Testing: Does everyone tend to use NUnit for their testing? I haven't worked enough with testing to know if it is a good framework to use; I'm just looking to test my applications a lot more than before and wanted to know if there are any general guidelines. Are you supposed to copy the controller over to a test area, just rename it with "test" in the name, and re-run it? How do you ensure your test project and main project coincide with one another? Is it just a case of copying everything over again, or are there tools available to do it for you?

    Route Extensions: MonoRail tends to use <action>.rails; can you omit the .rails part if you configure your routing correctly? Why does this seem to be the standard?

    Folder Structures: I haven't found anywhere that really spells out a standard folder structure. Sure, you have Controllers, Models, and Views, but your Models folder should contain your data access objects as well. I've seen something like:

        Models
            DaoClasses
            Entities

    But what about custom structures used to get data out of views? And if you're using NHibernate, where's a good place to stick the mappings? I know it's entirely dependent on the developer, but I haven't really seen any standard approach. Cheers

  • iPhone app rejection for using ICU (Unicode extensions)

    - by nickbit
    I received the following mail from Apple concerning my application:

        Thank you for submitting your update to ??µ??es?a to the App Store. During our review of your application we found it is using private APIs, which is in violation of the iPhone Developer Program License Agreement section 3.3.1: "3.3.1 Applications may only use Documented APIs in the manner prescribed by Apple and must not use or call any private APIs." While your application has not been rejected, it would be appropriate to resolve this issue in your next update. The following non-public APIs are included in your application: u_isspace, ubrk_close, ubrk_current, ubrk_first, ubrk_next, ubrk_open. If you have defined methods in your source code with the same names as the above mentioned APIs, we suggest altering your method names so that they no longer collide with Apple's private APIs to avoid your application being flagged in future submissions. Please resolve this issue in your next update to ??µ??es?a. Sincerely, iPhone App Review Team

    The functions mentioned in this mail come from the ICU library (International Components for Unicode). Although my app has not been rejected at this point, I don't feel very secure about its future, because it relies heavily on Unicode handling and on these components in particular. Another thing is that I do not call these functions directly; they are called by a custom sqlite build (with the FTS3 extensions enabled). Am I missing something here? Any suggestions?

  • DBD::CSV: Problem with file-name-extensions

    - by sid_com
    In this script I have a problem with file-name extensions: if I use /home/mm/test_x it works, but with a file named /home/mm/test_x.csv it doesn't:

        #!/usr/bin/env perl
        use warnings;
        use strict;
        use 5.012;
        use DBI;

        my $table_1 = '/home/mm/test_1.csv';
        my $table_2 = '/home/mm/test_2.csv';
        #$table_1 = '/home/mm/test_1';
        #$table_2 = '/home/mm/test_2';

        my $dbh = DBI->connect( "DBI:CSV:" );
        $dbh->{RaiseError} = 1;
        $table_1 = $dbh->quote_identifier( $table_1 );
        $table_2 = $dbh->quote_identifier( $table_2 );
        my $sth = $dbh->prepare( "SELECT a.id, a.name, b.city FROM $table_1 AS a NATURAL JOIN $table_2 AS b" );
        $sth->execute;
        $sth->dump_results;
        $dbh->disconnect;

    Output with file-name extension:

        DBD::CSV::st execute failed: Execution ERROR: No such column '"/home/mm/test_1.csv".id' called from /usr/local/lib/perl5/site_perl/5.12.0/x86_64-linux/DBD/File.pm at 570.

    Output without file-name extension:

        '1', 'Brown', 'Laramie'
        '2', 'Smith', 'Watertown'
        2 rows

    Is this a bug?

  • Creating a REST client API using Reactive Extensions (Rx)

    - by Jonas Follesø
    I'm trying to get my head around the right use cases for Reactive Extensions (Rx). The examples that keep coming up are UI events (drag and drop, drawing), and suggestions that Rx is suitable for asynchronous applications/operations such as web service calls. I'm working on an application where I need to write a tiny client API for a REST service. I need to call four REST end-points: three to get some reference data (Airports, Airlines, and Statuses), and a fourth, the main service, that gives you flight times for a given airport. I have created classes exposing the three reference data services, and the methods look something like this:

        public IObservable<Airport> GetAirports()
        public IObservable<Airline> GetAirlines()
        public IObservable<Status> GetStatuses()
        public IObservable<Flights> GetFlights(string airport)

    In my GetFlights method I want each Flight to hold a reference to the Airport it's departing from and the Airline operating the flight. To do that I need the data from GetAirports and GetAirlines to be available. My initial thinking was something like this:

    1. Write an Rx query that subscribes to the three reference services (Airports, Airlines, and Statuses).
    2. Add the results into dictionaries (airline code to Airline object, and so on).
    3. When all three of GetAirports, GetAirlines, and GetStatuses are complete, return the GetFlights IObservable.

    Is this a reasonable scenario for Rx? I'm developing on Windows Phone 7, so I'm not sure if there are major differences in the Rx implementations across the different platforms.

  • SSE (SIMD extensions) support in gcc

    - by goldenmean
    Hi, I see code like the below:

        #include "stdio.h"

        #define VECTOR_SIZE 4

        typedef float v4sf __attribute__ ((vector_size(sizeof(float)*VECTOR_SIZE))); // vector of four single floats

        typedef union f4vector
        {
            v4sf v;
            float f[VECTOR_SIZE];
        } f4vector;

        void print_vector (f4vector *v)
        {
            printf("%f,%f,%f,%f\n", v->f[0], v->f[1], v->f[2], v->f[3]);
        }

        int main()
        {
            union f4vector a, b, c;
            a.v = (v4sf){1.2, 2.3, 3.4, 4.5};
            b.v = (v4sf){5., 6., 7., 8.};
            c.v = a.v + b.v;
            print_vector(&a);
            print_vector(&b);
            print_vector(&c);
        }

    This code builds fine and works as expected using gcc (via its built-in SSE/MMX extensions and vector data types); it performs a SIMD vector addition of four single floats. I want to understand in detail what each keyword/construct on this typedef line means and does:

        typedef float v4sf __attribute__ ((vector_size(sizeof(float)*VECTOR_SIZE)));

    1. What does vector_size() return?
    2. What is the __attribute__ keyword for?
    3. Is the float data type being typedef'd to the v4sf type?

    I understand the rest. Thanks, -AD

  • .htaccess Rewrite Rule to Keep .php File Extensions

    - by user2672112
    I'm upgrading my static website that had .php extensions on the content pages. I've created my own simple CMS, which from now on will retrieve data from a MySQL database, keeping the URL structure the same as the old one. The CMS has a get function to retrieve the URL structure from the database. Overall it started working fine with .html when I tested it, but when I change the .html extension to .php in my .htaccess code, the content pages start returning "Internal Server Error. The server encountered an internal error or misconfiguration and was unable to complete your request." Here is the .htaccess code I've used:

        RewriteBase /
        Options +FollowSymLinks
        RewriteEngine On
        RewriteRule ^([^?]*).php$ content.php?pid=$1

    Perhaps there is a conflict; here is the code with the .html extension that actually works fine:

        RewriteBase /
        Options +FollowSymLinks
        RewriteEngine On
        RewriteRule ^([^?]*).html$ content.php?pid=$1

    So basically, content pages with .html are working and .php are not, but I need my content pages to have .php extensions. Please help. Thanks in advance... :)

  • GitHub push to AWS Elastic Beanstalk

    - by nute
    I am using GitHub for code management. I am using Amazon AWS Elastic Beanstalk as a server. Amazon announced that you can use Git to push code to the application server. However, to do this I'd have to let go of GitHub, as they are essentially replacing the git server. Is there any way to have the best of both worlds? I don't necessarily need to "deploy" every time I push, but I'd like to have the push uploaded as a "Version", and then I can deploy whichever version I want at any time.
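
    One sketch of the best-of-both-worlds setup: a git clone can push to any number of remotes, so GitHub stays the source of truth while deployments go to Beanstalk separately. The aws.push subcommand below comes from Amazon's Beanstalk command-line DevTools of that era; treat its exact name and behavior as an assumption to verify:

        git push origin master   # source of truth stays on GitHub
        git aws.push             # uploads HEAD to Elastic Beanstalk as a new application version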

  • How Can I Point My Local Testing Server at My GitHub Repository?

    - by Goober
    Up until a few days ago, I had a particular setup that was as follows. Using SVN, all of the websites that I developed were committed to a source control drop box on a local testing server. Then using IIS, a new website was set up to point at the last revision of each particular website I developed and display it to the outside world using a specific URL. I have just moved over to using git and github, meaning all of my source controlled code is now no longer stored on a local testing server. As a result of this, I am not sure how I can go about doing a similar thing to what I did with the SVN setup, however I need to be able to essentially have that same setup again, just using Git. So basically, how can I go about getting my local testing server to point at the GitHub repository for that site? Help greatly appreciated.
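
    A sketch of one way to recreate that setup with Git (URL and paths are hypothetical): clone the GitHub repository into the folder IIS points at, then update it on a schedule or from a GitHub post-receive webhook.

        # one-time, on the testing server
        git clone https://github.com/goober/somesite.git C:/inetpub/wwwroot/somesite

        # refresh to the latest revision (run as a scheduled task or triggered by a webhook)
        cd C:/inetpub/wwwroot/somesite
        git pull --ff-only origin master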

  • redmine repository management

    - by Alex
    We are trying to set up a Redmine installation for our group which should work with both SVN and Git repos. Since we want to keep the repos on the server and avoid the whole privileges-and-hosting mess (root access, local repos, ...), we want to configure Redmine to manage repo creation and destruction by itself. In short, Redmine should create a repository automatically for a new project and delete it if the project is deleted, with no extra setup steps from our admin. So far I have found reposman for SVN and redmine_git_hosting for Git, but I am unsure if they match our requirements. Are these the tools we are looking for, or is there any other alternative? Thank you
