Search Results

Search found 111890 results on 4476 pages for 'git update server info'.


  • Set up an FTP repository with Git

    - by enboig
    I want to change my repository from Bazaar to Git. I installed Git (WinXP) and TortoiseGit with no problem, set the path variables, etc. I initialized my repository with:
        git init
    copied it using:
        cd ..
        git clone --bare project.git
    and uploaded it to FTP. When trying to access it:
        git clone *ftp_address*
        Initialized empty Git repository in D:/project/.git/
        Password:
        error: Access denied: 530 while accessing *ftp_address*/info/refs
        fatal: HTTP request failed
    I checked, and .../project.git/info/refs does not exist. What am I missing? Thanks. PS: *ftp_address* = 'ftp://user%[email protected]/git/project.git'
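
    A likely culprit, given the missing info/refs: the "dumb" FTP/HTTP transports rely on auxiliary index files that Git only writes on request. A minimal sketch, assuming the bare repo is project.git:

        cd project.git
        git update-server-info    # generates info/refs and objects/info/packs
        # or enable the bundled hook so this runs after every push:
        mv hooks/post-update.sample hooks/post-update

    After re-uploading the repository, info/refs should exist and the clone can proceed.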


  • SQL SERVER – PHP on Windows and SQL Server Training Kit

    - by pinaldave
    The PHP on Windows and SQL Server Training Kit includes a comprehensive set of technical content, including demos and hands-on labs, to help you understand how to build PHP applications using Windows, IIS 7.5 and SQL Server 2008 R2. This release includes the following:
    PHP & SQL Server Demos: Integrating SQL Server Geo-Spatial with PHP; SQL Server Reporting Services and PHP
    PHP & SQL Server Hands-On Labs: Introduction to Using SQL Server with PHP; Using SQL Server Full-Text Search and FILESTREAM Storage with PHP
    New: Getting Started with SQL Server Migration Assistant for MySQL
    Download: PHP on Windows and SQL Server Training Kit
    Reference: Pinal Dave (http://blog.SQLAuthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology


  • SQL SERVER – Determine if SSRS 2012 is Installed on your SQL Server

    - by Pinal Dave
    This example is from Beginning SSRS by Kathi Kellenberger. Supporting files are available with a free download from the www.Joes2Pros.com web site. Determine if SSRS 2012 is Installed on your SQL Server You may already have SSRS, or you may need to install it. Before doing any installation it makes sense to know where you are now. If you happened to install SQL Server with all features, you have the tools you need. There are two tools you need: SQL Server Data Tools and Reporting Services installed in Native Mode. To find out if SQL Server Data Tools (SSDT) is installed, click the Start button, go to All Programs, and expand SQL Server 2012. Look for SQL Server Data Tools. Now, let’s check to see if SQL Server Reporting Services is installed. Click Start > All Programs > SQL Server 2012 > Configuration Tools > SQL Server Configuration Manager. Once Configuration Manager is running, select SQL Server Services and look for SQL Server Reporting Services in the list of installed services. If you have both the SQL Server Reporting Services service and SQL Server Data Tools installed, you will not have to install them again. You may have SQL Server installed but be missing the Data Tools, the SSRS service, or both. In tomorrow’s blog post we will go over how to install based on where you are now.   Tomorrow’s Post Tomorrow’s blog post will show how to install and configure SSRS. If you want to learn SSRS in easy, simple words, I strongly recommend you get the Beginning SSRS book from Joes 2 Pros. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Reporting Services, SSRS
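
    For a quicker check than clicking through menus, the same information is available from the services list; a hedged sketch (the native-mode SSRS service is normally named ReportServer, with named instances appearing as ReportServer$InstanceName):

        sc query ReportServer
        :: for a named instance, quote the $ form:
        sc query "ReportServer$SQL2012"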


  • This update does not come from a source that supports changelogs

    - by blade19899
    When I get an update via update-manager for software like Blender or VLC, I like to see what has been fixed or changed. I added PPAs for Blender and VLC (this only applies to the software I added a PPA for):
        sudo add-apt-repository ppa:cheleb/blender-svn
        sudo apt-get update
        sudo apt-get install blender
    And VLC like this:
        sudo add-apt-repository ppa:videolan/stable-daily
        sudo apt-get update
        sudo apt-get install vlc
    When I run update-manager, or when it pops up, I see that VLC and Blender have updates, but I can't see what has been changed or fixed. This is the message I get; the screenshot below is for mupen, but it's the same thing. (I updated VLC and Blender rather than wait for the next update.)
        This update does not come from a source that supports changelogs.
    (By the way, I have a Dutch Ubuntu, so I translated the text above with Google!) It only shows which version you have and to which version you will be upgrading. So my question is: how do I get the changelog tab of update-manager working, if that's even possible?
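
    For background: update-manager pulls changelogs from changelogs.ubuntu.com, which only indexes packages in the official Ubuntu archive, so PPA builds come up empty there. A hedged command-line fallback (package names as above; the same archive-only limitation applies to PPA versions):

        apt-get changelog vlc        # fetches the changelog for archive packages
        aptitude changelog blender   # same idea via aptitude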


  • Can I recover lost commits in an SVN repository using a local tracking git-svn branch?

    - by Ian Stevens
    An SVN repo I track with git-svn was recently corrupted, and a backup was restored. However, a week's worth of commits were lost in the recovery. Is it possible to recover those lost commits using git-svn dcommit from my local git repo? Is it sufficient to run git svn dcommit with the SHA1 of the last recovered commit in SVN? E.g.:
        > svn info http://tracked-svn/trunk | sed -n "s/Revision: //p"
        252
        > git log --grep="git-svn-id:.*@252" --format=oneline | cut -f1 -d" "
        55bb5c9cbb5fe11a90ec2e9e1e0c7c502908cf9a
        > git svn dcommit 55bb5c9cbb5fe11a90ec2e9e1e0c7c502908cf9a
    Or will the git-svn-id need to be stripped from the intended commits? I tried this using --dry-run but couldn't tell whether it would try to submit all commits:
        > git svn dcommit --verbose --dry-run 55bb5c9cbb5fe11a90ec2e9e1e0c7c502908cf9a
        Committing to http://tracked-svn/trunk ...
        dcommitted on a detached HEAD because you gave a revision argument.
        The rewritten commit is: 55bb5c9cbb5fe11a90ec2e9e1e0c7c502908cf9a
    Thanks for your help.
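
    One hedged approach to the metadata question raised above, sketched only and worth rehearsing on a scratch clone first (revision range assumed): strip the stale git-svn-id lines from the lost commits so dcommit treats them as new work.

        # rewrite the messages of everything after the last surviving revision
        git filter-branch --msg-filter 'sed -e "/^git-svn-id:/d"' 55bb5c9..master
        # then replay the now-unmarked commits onto the restored trunk
        git svn dcommit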


  • Is there a way to lock a branch in Git?

    - by Senthil A Kumar
    I have an idea for locking a repository against users pushing files into it, by having a lock script in the Git update hook, since (as I understand it) the push only passes the user id as an argument and not the branch. That lets me lock the entire repo, which is just locking a directory. Is there a way to lock a specific branch in Git? Or is there a way an update hook can identify which branch the user is pushing from and which branch the code is being pushed to?
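
    For what it's worth, the update hook does receive the branch: Git calls it once per ref being pushed, with the ref name, old SHA1 and new SHA1 as its three arguments. A minimal sketch of a per-branch lock (branch name assumed):

        #!/bin/sh
        # hooks/update, invoked as: update <refname> <oldrev> <newrev>
        refname="$1"
        if [ "$refname" = "refs/heads/release" ]; then
            echo "error: branch 'release' is locked" >&2
            exit 1    # a non-zero exit rejects the push for this ref only
        fi
        exit 0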


  • Find the Git branch or branches from a commit id

    - by Senthil A Kumar
    Hi all, I am actually trying to get a report on merge conflicts. I used 'git blame' to see who changed what line, but I couldn't find the branch and repository name information. Is there a way to find the repository name, branch name and author name of a file from 'git blame' or from commit ids, so that whenever a merge conflict occurs I can send an email to the authors who have touched that file or those lines, asking them to resolve it? Thanks, Senthil A Kumar
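
    A hedged sketch of the pieces Git can answer directly, given a commit id from git blame:

        git branch --contains <sha>             # local branches whose history includes the commit
        git branch -r --contains <sha>          # the same, for remote-tracking branches
        git log -1 --format='%an <%ae>' <sha>   # the commit's author name and email

    The repository name is not stored in the commit itself; it usually has to come from the clone's configuration, e.g. git config --get remote.origin.url.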


  • Git <-> SVN interchangeable patch files

    - by pagid
    Hi, I maintain a subproject which lives on the main project's SVN server. I personally prefer to work with Git. The problem is that the entire community uses SVN, expects RFCs with an SVN-compatible patch file, and people familiar with SVN send bugfixes against that SVN repository too. Therefore my only problem is to create patch files which are compatible with Git and SVN at the same time. Is there some kind of smart shell script, or even a built-in feature I'm not aware of? Cheers
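
    The usual trick, as a hedged sketch: the only real incompatibility is the a/ and b/ path prefixes Git adds to its diffs, and git diff can omit them.

        git diff --no-prefix > fix.patch   # SVN-style paths, no a/ b/ prefixes
        patch -p0 < fix.patch              # applies in an SVN working copy
        git apply -p0 fix.patch            # the same file still applies in a Git tree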


  • How to sysprep SQL Server Express?

    - by Jim
    We plan to deploy a Hyper-V VHD with Windows Server 2008 R2 and SQL Server 2012 Express installed to multiple hosts. From my understanding, the correct way to do this is to install SQL Server in preparation mode, sysprep Windows, then complete the SQL Server installation when the VHD is deployed. I mostly followed the process in this blog post: http://sethusrinivasan.com/category/sysprep/ However, after the VHD is deployed, I'm unable to complete the SQL Server installation. It keeps saying "Upgrade matrix is incorrect". It seems that it's trying to upgrade itself to Enterprise edition (I was asked for a product key during install, but I skipped it). Could anyone share their experience in deploying VHDs with SQL Server (we're fine with either SQL Server 2008 R2 or 2012)? I think the source of my issue is that I can't select "Express Edition" when entering the product key at the completion stage, so the installation is trying to do an upgrade to Enterprise Edition. I have no idea why the drop-down list is empty.
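
    For reference, SQL Server's sysprep support is driven by two setup actions; a hedged sketch of the pair (feature list, instance names and exact flags assumed, and they can vary by version):

        :: before generalizing the VHD
        setup.exe /ACTION=PrepareImage /FEATURES=SQLENGINE /INSTANCEID=MSSQLSERVER /QUIET /IACCEPTSQLSERVERLICENSETERMS
        :: on the deployed host; Express embeds its edition, so no product key should be needed
        setup.exe /ACTION=CompleteImage /INSTANCENAME=SQLEXPRESS /INSTANCEID=MSSQLSERVER /QUIET /IACCEPTSQLSERVERLICENSETERMS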


  • SQL SERVER – Guest Posts – Feodor Georgiev – The Context of Our Database Environment – Going Beyond the Internal SQL Server Waits – Wait Type – Day 21 of 28

    - by pinaldave
    This guest post is submitted by Feodor. Feodor Georgiev is a SQL Server database specialist with extensive experience of thinking both within and outside the box. He has wide experience of different systems and solutions in the fields of architecture, scalability, performance, etc. Feodor has experience with SQL Server 2000 and later versions, and is certified in SQL Server 2008. In this article Feodor explains the server-client-server process, and concentrates on the mutual waits between client and SQL Server. This is essential in grasping the concept of waits in a ‘global’ application plan. Recently I was asked to write a blog post about the wait statistics in SQL Server, and since I had been thinking about writing it for quite some time now, here it is. It is a widespread idea that the wait statistics in SQL Server will tell you everything about your performance. Well, almost. Or should I say – barely. The reason for this is that SQL Server is always part of a bigger system; there are always other players in the game, whether it is a client application, a web service, some other kind of data import/export process and so on. In short, the SQL Server surroundings look like this: This means that SQL Server, aside from its internal waits, also depends on external waits and settings. As we can see in the picture above, SQL Server needs to have an interface in order to communicate with the surrounding clients over the network. For this communication, SQL Server uses protocol interfaces. I will not go into detail about which protocols are best, but you can read this article. Also, review the information about TDS (tabular data stream). As we all know, our system is only as fast as its slowest component. This means that when we look at our environment as a whole, the SQL Server might be a victim of external pressure, no matter how well we have tuned our database server performance. Let’s dive into an example: let’s say that we have a web server hosting a web application which is using data from our SQL Server, hosted on another server. The network card of the web server, for some reason, is malfunctioning (think of a hardware failure, driver failure, or just improper setup) and does not send/receive data faster than 10Mbps. On the other end, our SQL Server will not be able to send/receive data at a faster rate either. This means that the application users will notify the support team and will say: “My data is coming very slowly.” Now, let’s move on to a bit more exciting example: imagine that there is a similar setup as the example above – one web server and one database server – and the application is not using any stored procedure calls, but instead for every user request the application is sending an 80kb query over the network to the SQL Server. (I really thought this did not happen in real life until I saw it one day.) So, what happens in this case? To make things worse, let’s say that the 80kb query text is submitted from the application to the SQL Server at least 100 times per minute, and as often as 300 times per minute at peak times. Here is what happens: in order for this query to reach the SQL Server, it will have to be broken into a number of network packets (according to the packet size settings) and will travel over the network.
    On the other side, our SQL Server network card will receive the packets and pass them to the network layer, the packets will get assembled, and eventually SQL Server will start processing the query: parsing, algebrizing, generating the query execution plan and so on. So far, we have already had a serious network overhead by waiting for the packets to reach our Database Engine. There will certainly be some processing overhead, until the database engine deals with the 80kb query and its 20 subqueries. The waits you see in the DMVs are actually collected from the point the query reaches the SQL Server and the packets are assembled. Let’s say that our query is processed and it finally returns 15000 rows. These rows have a certain size as well, depending on the data types returned. This means that the data will have to be converted to packets (depending on the network packet size settings) and will have to reach the application server. There will also be waits; however, this time you will be able to see a wait type in the DMVs called ASYNC_NETWORK_IO. What this wait type indicates is that the client is not consuming the data fast enough and the network buffers are filling up. Recently Pinal Dave posted a blog on Client Statistics. What Client Statistics does is capture the physical flow characteristics of the query between the client (Management Studio, in this case) and the server, and back to the client. As you see in the image, there are three categories: Query Profile Statistics, Network Statistics and Time Statistics.
    Number of server roundtrips – a roundtrip consists of a request sent to the server and a reply from the server to the client. For example, if your query has three select statements separated by the ‘GO’ command, there will be three different roundtrips.
    TDS packets sent from the client – TDS (tabular data stream) is the language which SQL Server speaks, and in order for applications to communicate with SQL Server, they need to pack the requests in TDS packets. This is the number of packets sent from the client; if the request is large, it may need more buffers, and eventually might even need more server roundtrips.
    TDS packets received from server – the TDS packets sent by the server to the client during the query execution.
    Bytes sent from client – the volume of data sent to our SQL Server, measured in bytes; i.e. how big a query we have sent to the SQL Server. This is why it is best to use stored procedures, since the reusable code (which already exists as an object in the SQL Server) will only be called as the name of the procedure plus parameters, and this will minimize the network pressure.
    Bytes received from server – the amount of data the SQL Server has sent to the client, measured in bytes. Depending on the number of rows and the datatypes involved, this number will vary. But still, think about the network load when you request data from SQL Server.
    Client processing time – the amount of time, in milliseconds, between the first received response packet and the last received response packet by the client.
    Wait time on server replies – the time, in milliseconds, between the last request packet which left the client and the first response packet which came back from the server to the client.
    Total execution time – the sum of client processing time and wait time on server replies (the SQL Server internal processing time).
    Here is an illustration of the client-server communication model which should help you understand the mutual waits in a client-server environment. Keep in mind that a query with a large ‘wait time on server replies’ means the server took a long time to produce the very first row. This is usual for queries that have operators that need the entire sub-query to evaluate before they proceed (for example, sort and top operators). However, a query with a very short ‘wait time on server replies’ means that the query was able to return the first row fast. A long ‘client processing time’ does not necessarily imply that the client spent a lot of time processing and the server was blocked waiting on the client; it can simply mean that the server continued to return rows from the result, and this is how long it took until the very last row was returned. The bottom line is that developers and DBAs should work together and think carefully about resource utilization in the client-server environment. From experience I can say that so far I have seen only cases where the application developers and the database developers are on their own and do not ask questions about the other party’s world. I would recommend using the Client Statistics tool during new development to track the performance of the queries, and also to find a synchronous way of utilizing resources between the client, the server and back.
    Here is another example: think about a similar setup to the one above, but add another server to the game. Let’s say that we keep our media on a separate server, and together with the data from our SQL Server we need to display some images on the webpage requested by our user. No matter how simple or complicated the logic to get the images is, if the images are 500kb each, our users will get the page slowly and will still think that there is something wrong with our data. Anyway, I don’t mean to get carried away too far from SQL Server. Instead, what I would like to say is that DBAs should also be aware of ‘the big picture’. I wrote a blog post a while back on this topic, and if you are interested, you can read it here about the big picture. And finally, here are some guidelines for monitoring the network performance and improving it:
    Run a trace and outline all queries that return more than 1000 rows (in Profiler you can actually filter and sort the captured trace by number of returned rows). This is not a set number; it is more of a guideline. The general thought is that no application user can consume that many rows at once. Ask yourself and your fellow developers: ‘why?’.
    Monitor your network counters in Perfmon: Network Interface: Output queue length, Redirector: Network errors/sec, TCPv4: Segments retransmitted/sec and so on.
    Make sure to establish a good friendship with your network administrator (buy them coffee, for example :) ) and get into a conversation about the network settings. Have them explain to you how the network cards are set up: are they standalone, are they ‘teamed’, what are the settings (full duplex and so on).
    Find some time to read a bit about networking.
    In this short blog post I hope I have turned your attention to ‘the big picture’ and the fact that there are other factors affecting our SQL Server, aside from its internal workings.
    For further reading I would still highly recommend the Wait Stats series on this blog; I would also recommend having the coffee-break conversation with your network admin as soon as possible. This guest post is written by Feodor Georgiev. Read all the posts in the Wait Types and Queue series. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL
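
    A small illustration of where these mutual waits surface inside SQL Server; a hedged T-SQL sketch against a DMV available since SQL Server 2005:

        -- accumulated time the server has spent waiting for clients to consume results
        SELECT wait_type, waiting_tasks_count, wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type = 'ASYNC_NETWORK_IO';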


  • Replication: SQL Server 2008 Publisher with SQL Server Express 2005 Subscriber

    - by Jeremy
    Here is the setup: SQL Server 2008 Enterprise with a merge publication; SQL Server 2005 Express with a pull subscription. There is no web or FTP setup; this is direct merge replication. Using the RMO objects from C#, I get a "class cannot be found" COM error when accessing the MergePullSubscription.SynchronizationAgent property. I've tried with both the 2008 RMO DLLs (version 10) and the 2005 RMO DLLs (version 9). When trying to use replmerge.exe, I get the following:
        2010-04-10 04:12:05.263 Microsoft SQL Server Merge Agent 9.00.1399.06
        2010-04-10 04:12:05.294 Copyright (c) 2000 Microsoft Corporation
        2010-04-10 04:12:05.294
        2010-04-10 04:12:05.294 The timestamps prepended to the output lines are expressed in terms of UTC time.
        2010-04-10 04:12:05.294 User-specified agent parameter values:
        -Publisher SUN -PublisherDB PRIMROSE -PublisherSecurityMode 1 -Publication PRIMROSE -Distributor SUN -DistributorSecurityMode 1 -Subscriber PVILLE\SQLEXPRESS -SubscriberSecurityMode 1 -SubscriberDB PRIMROSE -SubscriptionType 1 -DistributorLogin sa -DistributorPassword ********** -DistributorSecurityMode 0 -PublisherLogin sa -PublisherPassword ********** -PublisherSecurityMode 0 -SubscriberLogin sa -SubscriberPassword ********** -SubscriberSecurityMode 0
        2010-04-10 04:12:05.325 Connecting to Subscriber 'PVILLE\SQLEXPRESS'
        2010-04-10 04:12:05.481 Connecting to Distributor 'SUN'
        2010-04-10 04:12:05.513 The version of SQL Server running at the Distributor (10.0.2531) is not compatible with the version of SQL Server running at the Subscriber (9.00.1399).
        2010-04-10 04:12:05.513 Category:NULL Source: Merge Process Number: -2147200979 Message: The version of SQL Server running at the Distributor (10.0.2531) is not compatible with the version of SQL Server running at the Subscriber (9.00.1399).
    Any ideas?


  • sql-server: Can I update two tables with a single query?

    - by RedsDevils
    How can I write a single UPDATE query to change the value of COL1 to ‘X’ if COL2 < 10, and otherwise to ‘Y’, where the following two tables are linked by ID?
        CREATE TABLE TEMP(ID TINYINT, COL1 CHAR(1))
        INSERT INTO TEMP(ID,COL1) VALUES (1,'A')
        INSERT INTO TEMP(ID,COL1) VALUES (2,'B')
        INSERT INTO TEMP(ID,COL1) VALUES (11,'A')
        INSERT INTO TEMP(ID,COL1) VALUES (17,'B')
        CREATE TABLE TEMP2(ID TINYINT, COL2 TINYINT)
        INSERT INTO TEMP2(ID,COL2) VALUES (1,1)
        INSERT INTO TEMP2(ID,COL2) VALUES (2,5)
        INSERT INTO TEMP2(ID,COL2) VALUES (11,10)
        INSERT INTO TEMP2(ID,COL2) VALUES (17,15)
    Thanks in advance!
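
    A hedged sketch of one common answer, using T-SQL's UPDATE ... FROM with a join and a CASE expression (semantics as stated in the question):

        UPDATE T
        SET COL1 = CASE WHEN T2.COL2 < 10 THEN 'X' ELSE 'Y' END
        FROM TEMP AS T
        INNER JOIN TEMP2 AS T2 ON T2.ID = T.ID;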


  • SQL Server MasterClass winner

    - by Testas
    The winner of the SQL Server MasterClass competition, courtesy of the UK SQL Server User Group and SQL Server Magazine, is Steve Hindmarsh. There is still time to register for the seminar yourself at www.regonline.co.uk/kimtrippsql.
    More information about the seminar:
    Where: Radisson Edwardian Heathrow Hotel, London
    When: Thursday 17th June 2010
    This one-day MasterClass will focus on many of the top issues companies face when implementing and maintaining a SQL Server-based solution. In the case where a company has no dedicated DBA, IT managers sometimes struggle to keep the data tier performing well and the data available. This can be especially troublesome when the development team is unfamiliar with the effect application design choices have on database performance. The Microsoft SQL Server MasterClass 2010 is presented by Paul S. Randal and Kimberly L. Tripp, two of the most experienced and respected people in the SQL Server world. Together they have over 30 years' combined experience working with SQL Server in the field and on the SQL Server product team itself. This is a unique opportunity to hear them present at a UK event which will:
    - Debunk many of the ingrained misconceptions around SQL Server's behaviour
    - Show you disaster recovery techniques critical to preserving your company's life-blood: the data
    - Explain how a common application design pattern can wreak havoc in the database
    - Walk through the top 10 points to follow around operations and maintenance for a well-performing and available data tier!
    Please note: the agenda may be subject to change.
    Session Abstracts
    KEYNOTE: Bridging the Gap Between Development and Production
    Applications are commonly developed with little regard for how design choices will affect performance in production. This is often because developers don't realize the implications of their design on how SQL Server will be able to handle a high workload (e.g. blocking, fragmentation) and/or because there's no full-time trained DBA who can recognize production problems and help educate developers. The keynote sets the stage for the rest of the day, discussing some of the issues that can arise, explaining how some can be avoided, and highlighting some of the features in SQL 2008 that can help developers and DBAs make better use of SQL Server and troubleshoot when things go wrong.
    SESSION ONE: SQL Server Mythbusters
    It's amazing how many myths and misconceptions have sprung up and persisted over the years about SQL Server; after many years helping people out on forums, newsgroups, and customer engagements, Paul and Kimberly have heard it all. Are there really non-logged operations? Can interrupting shrinks or rebuilds cause corruption? Can you override the server's MAXDOP setting? Will the server always do a table scan to get a row count? Many myths lead to poor design choices and inappropriate maintenance practices, so these are just a few of the many, many myths that Paul and Kimberly will debunk in this fast-paced session on how SQL Server operates and should be managed and maintained.
    SESSION TWO: Database Recovery Techniques Demo-Fest
    Even if a company has a disaster recovery strategy in place, they need to practice to make sure that the plan will work when a disaster does strike. In this fast-paced demo session Paul and Kimberly will repeatedly do nasty things to databases and then show how they are recovered, demonstrating many techniques that can be used in production for disaster recovery. Not for the faint-hearted!
    SESSION THREE: GUIDs: Use, Abuse, and How To Move Forward
    Since the addition of the GUID (Microsoft’s implementation of the UUID), my life as a consultant and "tuner" has been busy. I’ve seen databases designed with GUID keys run fairly well with small workloads but completely fall over and fail because they just cannot scale. And I know why GUIDs are chosen: it simplifies the handling of parent/child rows in your batches so you can reduce round-trips or avoid dealing with identity values. And, yes, sometimes it's even for distributed databases and/or security that GUIDs are chosen. I'm not entirely against ever using a GUID, but overusing and abusing GUIDs just has to be stopped! Please, please, please let me give you better solutions and explanations on how to deal with your parent/child rows, round-trips and clustering keys!
    SESSION FOUR: Essential Database Maintenance
    In this session, Paul and Kimberly will run you through their top ten database maintenance recommendations, with a lot of tips and tricks along the way. These are distilled from almost 30 years' combined experience working with SQL Server customers and are geared towards making your databases more performant, more available, and more easily managed (to save you time!). Everything in this session will be practical and applicable to a wide variety of databases. Topics covered include: backups, shrinks, fragmentation, statistics, and much more! Focus will be on 2005, but we'll explain some of the key differences for 2000 and 2008 as well.
    Speaker Biographies
    Paul S. Randal and Kimberly L. Tripp
    Paul and Kimberly are a husband-and-wife team who own and run SQLskills.com, a world-renowned SQL Server consulting and training company. They are both SQL Server MVPs and Microsoft Regional Directors, with over 30 years of combined experience on SQL Server. Paul worked on the SQL Server team for nine years in development and management roles, writing many of the DBCC commands, and ultimately with responsibility for the core Storage Engine for SQL Server 2008. Paul writes extensively on his blog (SQLskills.com/blogs/Paul) and for TechNet Magazine, for which he is also a Contributing Editor. Kimberly worked on the SQL Server team in the early 1990s as a tester and writer before leaving to found SQLskills and embrace her passion for teaching and consulting. Kimberly has been a staple at worldwide conferences since she first presented at TechEd in 1996, and she blogs at SQLskills.com/blogs/Kimberly. They have written Microsoft whitepapers and books for SQL Server 2000, 2005 and 2008, and are regular, top-rated presenters worldwide on database maintenance, high availability, disaster recovery, performance tuning, and SQL Server internals. Together they teach the SQL MCM certification and throughout Microsoft. In their spare time, they like to find frogfish in remote corners of the world.
    Speaker Testimonials
    "To call them good trainers is an epic understatement. They know how to deliver technical material in ways that illustrate it well. I had to stop Paul at one point and ask him how long it took to build a particular slide, because the animations were so good at conveying a hard-to-describe process."
    "These are not beginner presenters, and they put an extreme amount of preparation and attention to detail into everything that they do. Completely, utterly professional."
    "When it comes to the instructors themselves, Kimberly and Paul simply have no equal.
    Not only are they both ultimate authorities, but they have endless enthusiasm about the material, and spot-on delivery. If either ever got tired they never showed it, even after going all day and all week. We witnessed countless demos over the course of the week, some extremely involved, multi-step processes, and I can’t recall one that didn’t go the way it was supposed to."
    "You might think that with this extreme level of skill comes extreme levels of egotism and lack of patience. Nothing could be further from the truth. ... They simply know how to teach, and are approachable, humble, and patient."
    "The experience Paul and Kimberly have had with real live customers yields a lot more information and things to watch out for than you'd ever get from documentation alone."
    “Kimberly, I just wanted to send you an email to let you know how awesome you are! I have applied some of your indexing strategies to our website’s homegrown CMS and we are experiencing a significant performance increase. WOW... amazing tips delivered in an exciting way! Thanks again”


  • Git: Fixing a bug affecting two branches

    - by Aram Kocharyan
    I'm basing my Git repo on http://nvie.com/posts/a-successful-git-branching-model/ and was wondering what happens if you have this situation: say I'm developing on two feature branches, A and B, and B requires code from A. Node X introduces an error in feature A which affects branch B, but this is not detected at node Y, where features A and B were merged and testing was conducted before branching out again and working on the next iteration. As a result, the bug is found at node Z by the people working on feature B. At this stage it's decided that a bugfix is needed. This fix should be applied to both features, since the people working on feature A also need the bug fixed; it's part of their feature. Should a bugfix branch be created from the latest feature A node (the one branching from node Y) and then merged with feature A, after which both features are merged into develop again and tested before branching out? The problem with this is that it requires both branches to merge to fix the issue. Since feature B doesn't touch code in feature A, is there a way to change the history at node Y by implementing the fix, while still allowing the feature B branch to remain unmerged yet have the fixed code from feature A? Mildly related: Git bug branching convention.
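
    A hedged sketch of the fix-once-merge-twice pattern the question circles around (branch names assumed):

        git checkout -b bugfix/feature-A-error feature-A   # start the fix where the bug lives
        # ...commit the fix...
        git checkout feature-A
        git merge bugfix/feature-A-error    # feature A receives the fix
        git checkout feature-B
        git merge bugfix/feature-A-error    # feature B receives exactly the same commits

    Because both merges bring in the same commit, the shared fix is recorded without rewriting the history at node Y.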


  • How was Git designed?

    - by Mark Canlas
    My workplace recently switched to Git and I've been loving (and hating!) it. I really do love it, and it is extremely powerful. The only part I hate is that sometimes it's too powerful (and maybe a bit terse/confusing). My question is: how was Git designed? Just using it for a short amount of time, you get the feeling that it can handle many obscure workflows that other version control systems could not. But it also feels elegant underneath. And fast! This is no doubt due in part to Linus's talent. But I'm wondering, was the overall design of Git based on something? I've read about BitKeeper, but the accounts are scant on technical details. The compression, the graphs, getting rid of revision numbers, emphasizing branching, stashing, remotes... where did it all come from? Linus really knocked this one out of the park, and on pretty much the first try! It's quite good to use once you're past the learning curve.


  • How to structure git repositories for a project?

    - by littledynamo
    I'm working on a content synchronisation module for Drupal. There is a server module, which sits on one website and exposes content via a web service. There is also a client module, which sits on a different site and fetches and imports the content at regular intervals. The server is built on Drupal 6. The client is built on Drupal 7. There is going to be a need for a Drupal 7 version of the server, and then a need for a Drupal 8 version of both the client and the server once it is released next year. I'm fairly new to git and source control, so I was wondering what the best way is to set up the git repositories. Would it be a case of having a separate repository for each instance, i.e.:
    Drupal 6 server = 1 repository
    Drupal 6 client = 1 repository
    Drupal 7 server = 1 repository
    Drupal 7 client = 1 repository
    etc.
    Or would it make more sense to have one repository for the server and another for the client, then create branches for each Drupal version? Currently I have 2 repositories: one for the client and another for the server.
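
    A hedged sketch of the second option, which also mirrors how Drupal's own contributed modules are versioned (branch names assumed):

        # one repository per component, one branch per core version
        cd server
        git branch 6.x-1.x        # the existing Drupal 6 code line
        git checkout -b 7.x-1.x   # start the Drupal 7 port here
        # a Drupal 8 branch can be added the same way when the time comes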


  • Tracking contributions from contributors not using git

    - by alex.jordan
    I have a central git repo located on a server. I have many contributors who are not tech savvy, do not have server access, and do not know anything about git. But they are able to contribute via the project's web site. Each of them logs on via a web browser and contributes to the project. I have set things up so that when they log on, each user's contributions go into a cloned repo on the server that is specifically for that user. Periodically, I log on to the server, visit each of their repos, and do a git diff to make sure they haven't done anything bad. If all is well, I commit their changes and push them to the central repo. Of course I need to manually look at their changes so that I can add an appropriate commit message. But I would also like to track who made the changes. I am the one making the commits, and I (and the web server) are the only users actually writing anything to the server. I could track this in the commit messages. While this strikes me as wrong, if it is my only option, is there a way to make userx's cloned repo always include "userx: " before each commit message that I add, so that I do not have to remind myself which user's repo I am in? Or, even better, is there an easy way for me to make the commit in such a way that I credit the user whose cloned repo I am in?
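
    Git separates the author from the committer for exactly this case; a hedged sketch (contributor identity assumed):

        # you remain the committer, but the contributor is recorded as the author
        git commit --author="userx <userx@example.com>" -m "Summarize userx's change"

    This keeps the credit in metadata where git log and git blame can see it, rather than in a message prefix.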


  • Bad idea to display mail server info in public github project?

    - by kentcdodds
    I have a project for work that requires me to send e-mails to people using our work mail server. The server doesn't require authentication. Part of my project is using a Java helper I'm developing on GitHub. I don't know if I completely understand how it all works, but I'm guessing it would be a bad idea to have the server information available on GitHub for the world to see. Is this correct? Afterthought: I'm not going to put it in the Java helper, because that wouldn't be helpful for anyone but me, but I'm still curious to know the answer to this question :) Thanks!
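
    The usual precaution, sketched under the assumption the helper reads an ordinary properties file: commit a template, keep the real file out of the repository.

        # keep the real host details out of version control
        echo "mail.properties" >> .gitignore
        cat > mail.properties.example <<'EOF'
        smtp.host=CHANGE_ME
        smtp.port=25
        EOF

    A server that accepts unauthenticated mail is worth keeping private in any case, since anyone who learns the hostname can send through it.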


  • Single-developer Git workflow (moving from straightforward FTP)

    - by melat0nin
    I'm trying to decide whether moving to VCS is sensible for me. I am a single web developer in a small organisation (5 people). I'm thinking of VCS (Git) for these reasons: version control, offsite backup, and a centralised code repository (which I can access from home). At the moment I generally work on a live server. I FTP in, make my edits, save them, then re-upload and refresh. The edits are usually to theme/plugin files for CMSes (e.g. concrete5 or WordPress). This works well but provides no backup and no version control. I'm wondering how best to integrate VCS into this procedure. I would envisage setting up a Git server on the company's web server, but I'm not clear on how to push changes out to client accounts (usually VPSes on the same server); at the moment I simply log in via SFTP with their details and make the changes directly. I'm also not sure what would sensibly represent a repository: would each client's website get its own? Any insights or experience would be really helpful. I don't think I need the full power of Git by any means, but basic version control and de facto cloud access would be really useful.
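
    A hedged sketch of a common shape for this workflow, one bare repository per client site (paths and names assumed):

        # on the company server, once per client site
        git init --bare /srv/git/clientsite.git
        # in a local working copy of that site
        git remote add origin ssh://you@company-server/srv/git/clientsite.git
        git push origin master
        # deployment to the client VPS can then be a post-receive hook, for example:
        # GIT_WORK_TREE=/var/www/clientsite git checkout -f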


  • git networking for small team

    - by takeshin
    I'm trying to set up git for my programming team. My setup is:
    1. example.com (Ubuntu server), IP: 192.168.1.2 (public: xxx.yyy.yyy.zzz), main git repository in /var/www/testgit, user: mot (root)
    2. host2 (Ubuntu), IP: 192.168.1.101, git clone of main repo in ~/public_html/testgit1, user: nairda
    3. host3 (Ubuntu), IP: 192.168.1.102, git clone of main repo in ~/www/testgit2, user: mot
    4. host4 (Windows Vista, Samba, msysgit), IP: 192.168.1.103, git clone of main repo in c:\shared\testgit3, user: ataga
    I start a new main repo:
        cd /var/www/testgit1
        git init
    Now, a lot of questions:
    Which groups and users do I have to create?
    How do I set up the required ssh keys? (I'm playing with gitosis, but with no success so far.)
    How do I make the main repo visible to the other hosts?
    How do I clone this repo on the hosts?
    How do I pull changes from others into the main repo?
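
    A hedged sketch of the plain-ssh approach, without gitosis (group name and path assumed):

        # on example.com: a shared bare repo owned by a common group
        sudo groupadd gitusers
        sudo usermod -aG gitusers mot
        sudo usermod -aG gitusers nairda
        sudo git init --bare --shared=group /srv/git/testgit.git
        sudo chgrp -R gitusers /srv/git/testgit.git
        # on each host: clone over ssh, then pull and push as usual
        git clone ssh://mot@192.168.1.2/srv/git/testgit.git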


  • How to get --detect-branches to work with git-p4?

    - by Michael Brennan
    My p4 repository has a structure similar to:
        //depot/project/branch1
        //depot/project/branch2
        //depot/project/branch3
        ...etc
    However, when I use git-p4 to clone "project", the 3 branches are not recognized as branches and all get cloned into the single master branch. This is how I'm invoking git-p4:
        git-p4 clone --detect-branches //depot/project
    I was expecting git-p4 to create a git repository for "project" with three branches, with the root of the project mapped to the portion of the path after the branch name (for example: if //depot/project/branch1 has a subdirectory called "lib" (//depot/project/branch1/lib), then my local file system should have something like /git_project/lib, with 3 git branches). Is what I'm expecting wrong? Am I invoking git-p4 incorrectly?
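
    For what it's worth, --detect-branches leans on Perforce branch specs that relate the paths to each other; where those don't exist, the layout can be declared by hand. A hedged sketch following the git-p4.branchList convention (mappings assumed):

        git init project && cd project
        git config git-p4.branchList branch1:branch2
        git config --add git-p4.branchList branch1:branch3
        git p4 clone --detect-branches //depot/project@all .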


  • To update or not to update?

    - by Massimo
    Since starting work where I am working now, I've been in an endless struggle with my boss and coworkers in regard to updating systems. I of course totally agree that any update (be it firmware, O.S. or application) should not be applied carelessly as soon as it comes out, but I also firmly believe that there should be at least some reason why the vendor released it; and the most common reason is usually fixing some bug... which maybe you're not experiencing now, but you could be experiencing soon if you don't keep up with updates. This is especially true for security fixes; as an example, had administrators simply applied a patch that had already been available for months, the infamous SQL Slammer worm would have been harmless. I'm all for testing and evaluating updates before deploying them, but I strongly disagree with the "if it's not broken then don't touch it" approach to systems management, and it genuinely hurts me when I find production Windows 2003 SP1 or ESX 3.5 Update 2 systems, and the only answer I can get is "it's working, we don't want to break it". What do you think about this? What is your policy? And what is your company policy, if it doesn't match your own?


  • How to store etckeeper repositories on a central server via git

    - by andreash
    Hello, I would like to have one central git repository for all my servers' etckeeper .git repos. Here the suggestion was to use a file in /etc/etckeeper/commit.d, which basically looks like this, assuming that a git repo has been set up in somedir on somehost:
        #!/bin/sh
        cd /etc
        git push faruser@farhost:somedir
    The problem with this is that it would be really nice to have all servers in the same repo on the central server. I tried:
        git push faruser@farhost:somedir/server1
    but that failed. As you can see, I've never worked with git before... Any ideas on how this can be accomplished are greatly appreciated :) Cheers, Andreas
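
    A hedged sketch of one way to get all servers into a single repo: a single bare repository, with each server pushing its master to its own branch via a refspec (names assumed):

        # once, on the central server
        git init --bare /srv/etckeeper-all.git
        # in /etc on each server (server1, server2, ...):
        git remote add central faruser@farhost:/srv/etckeeper-all.git
        git push central master:server1   # local master becomes central branch "server1"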


  • using git on DOS command line asks for password - but not when using TortoiseGit or gitBash

    - by Sandy
    I would like to use the DOS command line to enter the command:
        git clone "git_path.git" myDir
    It asks me to enter a password, which I would like to avoid. I usually use TortoiseGit for all git-related operations. I would like to set up CruiseControl using Ant with a custom git task, so I need to perform git clone on the command line in Windows 7, but it only works using Git Bash and not DOS. Following other forum entries, I tried converting the key with PuTTYgen and putting the file id_rsa in C:/Users/myName/.ssh. I also added an authorized_keys file, but it still asks for a password. Any ideas? Thanks
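
    Two hedged possibilities, depending on which ssh binary command-line git is using (paths assumed): if it is PuTTY's plink, the key must be a .ppk loaded in pageant; if it is OpenSSH, id_rsa must be an OpenSSH-format key, not a .ppk.

        :: option 1: point git at plink and load the converted key into pageant
        set GIT_SSH=C:\Program Files\PuTTY\plink.exe
        start pageant.exe C:\Users\myName\.ssh\mykey.ppk
        :: option 2: force the OpenSSH client that ships with msysgit
        set GIT_SSH=C:\Program Files\Git\bin\ssh.exe
        git clone "git_path.git" myDir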

