Search Results

Search found 3761 results on 151 pages for 'revision history'.

Page 33 of 151

  • Get TortoiseSVN to give me filenames with the build number in the filename

    - by EricJLN
    I am on a Windows 7 box, and I have TortoiseSVN on my machine. After getting a little familiar with SVN and TortoiseSVN on a code repository, I set up a local repository to manage revisions of some Word and PowerPoint documents. I want to figure out some scripted way to output a set of files with the build/revision number embedded in the filename. I will then email the files to some business people to review. For example, say I have a group of files in my working directory (PresentA.pptx, PresentA-notes.docx and PresentB.pptx), and the TortoiseSVN repo browser tells me that I am currently at revision 21 for PresentA.pptx and PresentA-notes.docx but at revision 25 for PresentB.pptx. I would like some way to get 3 files with the following names: PresentA-r21.pptx, PresentA-notes-r21.docx and PresentB-r25.pptx. Alternatively, if revision 25 is the current value for the repository, having all the names appended with -r25 would work, too.
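    A scripted approach like the following is possible with the command-line Subversion client, which the TortoiseSVN installer can optionally add (or which can be installed separately). The sketch below is only an illustration, not a tested solution: it asks svn info for each file's last-changed revision (the --show-item flag needs Subversion 1.9 or newer) and copies the file to a name with -r<revision> appended; the file patterns are assumptions to adapt.

        import pathlib
        import shutil
        import subprocess

        def copy_with_revision(path: pathlib.Path) -> pathlib.Path:
            # Ask the svn client for the last-changed revision of this file
            # (requires Subversion 1.9+ for --show-item).
            rev = subprocess.check_output(
                ["svn", "info", "--show-item", "last-changed-revision", str(path)],
                text=True,
            ).strip()
            target = path.with_name(f"{path.stem}-r{rev}{path.suffix}")
            shutil.copy2(path, target)  # copy, so the working-copy file stays untouched
            return target

        if __name__ == "__main__":
            for pattern in ("*.pptx", "*.docx"):
                for f in pathlib.Path(".").glob(pattern):
                    print(copy_with_revision(f))

    For the alternative naming (everything stamped with the repository's current revision), the single value printed by svnversion could be used instead of the per-file query.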

    Read the article

  • Windows programs to create timeline charts?

    - by justshams
    I would like to create a chart for my source control depicting the trunk and all the branches, with various details, like creation date, merge date, created revision, merge revision, close revision etc. I want it to look like this: I have looked into an application called SmartDraw, but was unable to get the required kind of output from it. It would be awesome if the data could be generated by reading an Excel file as input. The software would need to run on Windows XP SP3.
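    If no off-the-shelf Windows tool turns up, a scripted fallback is possible: matplotlib, for instance, can draw a branch timeline as horizontal bars from tabular data, and pandas.read_excel could feed it from a spreadsheet. The sketch below is only an illustration under those assumptions (the branch data is invented, and whether a current Python stack still runs on Windows XP SP3 is a separate question):

        import matplotlib.pyplot as plt

        # Hypothetical branch data: (name, created_revision, closed_revision, merge_revision)
        branches = [
            ("trunk",      1, 120, None),
            ("feature-a", 20,  60, 61),
            ("feature-b", 35,  90, 92),
        ]

        fig, ax = plt.subplots(figsize=(8, 2 + 0.5 * len(branches)))
        for row, (name, created, closed, merged) in enumerate(branches):
            # One horizontal bar per branch, spanning creation to close revision.
            ax.broken_barh([(created, closed - created)], (row - 0.3, 0.6))
            if merged is not None:
                ax.annotate("merged r%d" % merged, xy=(merged, row),
                            xytext=(merged + 3, row + 0.35),
                            arrowprops=dict(arrowstyle="->"))
        ax.set_yticks(range(len(branches)))
        ax.set_yticklabels([name for name, *_ in branches])
        ax.set_xlabel("revision")
        plt.tight_layout()
        plt.savefig("branch-timeline.png")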

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #003

    - by pinaldave
    Here is the list of curated articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and have listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane. 2006 This was the first year of my blogging and I was learning lots of new things as I went. I was indeed an infant in blogging a few years ago. However, as time passed by I have learned a lot. This year was a year of experiments and new learning. 2007 Working as a full-time DBA I often encountered various errors, and I started to learn how to avoid those errors and document them. ERROR Msg 5174 Each file size must be greater than or equal to 512 KB Whenever I see this error I wonder why someone is trying to create a database which is extremely small. Anyway, it does not matter what I think; I keep on seeing this error often in the industry. Anyway, the solution to the error is equally interesting – just create a larger database. Dilbert Humor This was my very first encounter with database humor and I started to love it. It does not matter how many times we read this cartoon, it does not get old. Generate Script with Data from Database – Database Publishing Wizard Generating a schema script with data is one of the most frequently performed tasks among SQL Server Data Professionals. There are many ways to do the same. In the above article I demonstrated how we can use the Database Publishing Wizard to accomplish this. It was new to me at that time, but I have still not seen much adoption of it in the industry. Here is one of my videos where I demonstrate how we can generate data with schema. 2008 Delete Backup History – Cleanup Backup History Deleting backup history is important too but should be done carefully. If this is not carried out at regular intervals there is a good chance that MSDB will be filled up with all the old history. Every organization is different. Some would like to keep the history for 30 days and some for a year, but there should be some limit. One should regularly archive the database backup history. South Asia MVP Open Days 2008 This was my very first year as a Microsoft MVP. I indeed had a big blast at the event and the fun was incredible. After this event I have attended many different MVP events, but the fun and learning this particular event presented were amazing, and just like me many others are not able to forget it. Here are other links related to the event: South Asia MVP Open Day 2008 – Goa South Asia MVP Open Day 2008 – Goa – Day 1 South Asia MVP Open Day 2008 – Goa – Day 2 South Asia MVP Open Day 2008 – Goa – Day 3 2009 Enable or Disable Constraint This is a very simple script but I personally keep on forgetting it, so I blogged it. Till today, I keep on referencing this again and again, as sometimes a very little thing is hard to remember. Policy Based Management – Create, Evaluate and Fix Policies This article will cover the most spectacular feature of SQL 2008, Policy-Based Management, and how configuring SQL Server with the policy-based management architecture can make a powerful difference. Policy-based management is loaded with several advantages. It can help you implement various policies for reliable configuration of the system. It also provides additional administrative assistance to DBAs and helps them effortlessly manage various tasks of SQL Server across the enterprise.
    SQLPASS 2009 – My Very First SQLPASS Experience Just Brilliant! I had never experienced such a thing in my life. SQL, SQL and SQL – all around SQL! I am listing my own reasons here in order of importance to me. Networking with SQL fellows and experts Putting a face to the name or avatar Learning and improving my SQL skills Understanding the structure of the largest SQL Server Professional Association Attending my favorite training sessions Since then I have never missed this event a single time. This event is my favorite event and something keeps me going. Here are additional posts related to SQLPASS 2009. SQL PASS Summit, Seattle 2009 – Day 1 SQL PASS Summit, Seattle 2009 – Day 2 SQL PASS Summit, Seattle 2009 – Day 3 SQL PASS Summit, Seattle 2009 – Day 4 2010 Get All the Information of Database using sys.databases Even though we believe that we know everything about our database, we do not know a lot of things about it. This little script enables us to know so many details about our databases which we may not be familiar with. Run this on your server today and see how well you know your database. Reducing CXPACKET Wait Stats for High Transactional Database While engaging in a performance tuning consultation for a client, a situation occurred where they were facing a lot of CXPACKET wait stats. The client asked me if I could help them reduce this huge number of wait stats. I usually receive this kind of request from other clients as well, but the important thing to understand is whether this question has any merits or benefits, or not. I discuss this in the article – a bit long but insightful for sure. Error related to Database in Use There are so many database management operations in SQL Server which require exclusive access to the database, and it is not always possible to get it. When any database is online in SQL Server, application or system threads often access it. This means the database can’t have exclusive access, and the operations which require this access throw an error. There is a very easy method to overcome this minor issue – a single-line script can give you exclusive access to the database. Difference between DATETIME and DATETIME2 Developers have found the root reason of the problem when dealing with date functions – when date-time values are converted (implicitly or explicitly) between different data types, some precision is lost, so the results cannot match each other as expected. In this blog post I go over very interesting details and the differences between DATETIME and DATETIME2. History of SQL Server Database Encryption I recently met Michael Coles and Rodney Landrum, the authors of the one-of-a-kind book Expert SQL Server 2008 Encryption, at SQLPASS in Seattle. During the conversation we ended up discussing how Microsoft is evolving encryption technology. The same discussion led to talking about the history of encryption tools in SQL Server. Michael pointed me to page 18 of his book on encryption. He explicitly gave me permission to reproduce the relevant part of that history from his book. 2011 Functions FIRST_VALUE and LAST_VALUE with OVER clause and ORDER BY Sometimes an interesting feature and a smart audience make a total difference in places. For the last two days, I have been writing about the SQL Server 2012 features FIRST_VALUE and LAST_VALUE. I created a puzzle which was very interesting and got many people to attempt to resolve it.
    It was based on the following two articles: Introduction to FIRST_VALUE and LAST_VALUE Introduction to FIRST_VALUE and LAST_VALUE with OVER clause I even provided a hint about how one can solve this problem. The best part was that many people solved the problem without using the hints! Try your luck! A Real Story of Book Getting ‘Out of Stock’ to A 25% Discount Story Available This is a great problem and everybody would love to have it. We had it and we loved it. Our book went out of stock within 48 hours of release and stocks were empty. We faced many issues and learned many valuable lessons. Some we were able to avoid in the future and some we are still facing, as those problems have no solutions. However, since that day our books have never gone out of stock. This was an inspiring learning story for us and I am confident that you will love to read it as well. Introduction to LEAD and LAG – Analytic Functions Introduced in SQL Server 2012 SQL Server 2012 introduces the new analytic functions LEAD() and LAG(). These functions access data from a subsequent row (for LEAD) and a previous row (for LAG) in the same result set without the use of a self-join. It is very difficult to explain this in words, so I attempt a small example to explain the functions. I had a fantastic time writing this blog post and I am very confident that when you read it, you will like it. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Ivy resolve not working with dynamic artifact

    - by richever
    I've been using Ivy a bit but I seem to still have a lot to learn. I have two projects. One is a web app and the other is a library upon which the web app depends. The set up is that the library project is compiled to a jar file and published using Ivy to a directory within the project. In the web app build file, I have an ant target that calls the Ivy resolve ant task. What I'd like to do is have the web app using the dynamic resolve mode during development (on developer's local machines) and default resolve mode for test and production builds. Previously I was appending a time stamp to the library archive file so that Ivy would notice changes in file when the web app tried to resolve its dependency on it. Within Eclipse this is cumbersome because, in the web app, the project had to be refreshed and the build path tweaked every time a new library jar was published. Publishing a similarly named jar file every time would, I figure, only require developers to refresh the project. The problem is that the web app is unable to retrieve the dynamic jar file. The output I get looks something like this: resolve: [ivy:configure] :: Ivy 2.1.0 - 20090925235825 :: http://ant.apache.org/ivy/ :: [ivy:configure] :: loading settings :: file = /Users/richard/workspace/webapp/web/WEB-INF/config/ivy/ivysettings.xml [ivy:resolve] :: resolving dependencies :: com.webapp#webapp;[email protected] [ivy:resolve] confs: [default] [ivy:resolve] found com.webapp#library;latest.integration in local [ivy:resolve] :: resolution report :: resolve 142ms :: artifacts dl 0ms --------------------------------------------------------------------- | | modules || artifacts | | conf | number| search|dwnlded|evicted|| number|dwnlded| --------------------------------------------------------------------- | default | 1 | 0 | 0 | 0 || 0 | 0 | --------------------------------------------------------------------- [ivy:resolve] [ivy:resolve] :: problems summary :: [ivy:resolve] :::: WARNINGS [ivy:resolve] :::::::::::::::::::::::::::::::::::::::::::::: [ivy:resolve] :: UNRESOLVED DEPENDENCIES :: [ivy:resolve] :::::::::::::::::::::::::::::::::::::::::::::: [ivy:resolve] :: com.webapp#library;latest.integration: impossible to resolve dynamic revision [ivy:resolve] :::::::::::::::::::::::::::::::::::::::::::::: [ivy:resolve] :::: ERRORS [ivy:resolve] impossible to resolve dynamic revision for com.webapp#library;latest.integration: check your configuration and make sure revision is part of your pattern [ivy:resolve] [ivy:resolve] :: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS BUILD FAILED /Users/richard/workspace/webapp/build.xml:71: impossible to resolve dependencies: resolve failed - see output for details The web app resolve target looks like this: <target name="resolve" depends="load-ivy"> <ivy:configure file="${ivy.dir}/ivysettings.xml" /> <ivy:resolve file="${ivy.dir}/ivy.xml" resolveMode="${ivy.resolve.mode}"/> <ivy:retrieve pattern="${lib.dir}/[artifact]-[revision].[ext]" type="jar" sync="true" /> </target> In this case, ivy.resolve.mode has a value of 'dynamic' (without quotes). The web app's Ivy file is simple. 
It looks like this: <ivy-module version="2.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://ant.apache.org/ivy/schemas/ivy.xsd"> <info organisation="com.webapp" module="webapp"/> <dependencies> <dependency name="library" rev="${ivy.revision.default}" revConstraint="${ivy.revision.dynamic}" /> </dependencies> </ivy-module> During development, ivy.revision.dynamic has a value of 'latest.integration'. While, during production or test, 'ivy.revision.default' has a value of '1.0'. Any ideas? Please let me know if there's any more information I need to supply. Thanks!

    Read the article

  • SVN: Release branch headaches, how to merge in website revisions as and when cleared to go live?

    - by Pete Duncanson
    I need a sanity check here if we can; any ideas on correcting/changing the following are very welcome! We've been getting ourselves in knots of late with our SVN and are trying to correct it by putting a Trunk/Release system in place. We have a large website that we develop on and we store it all in SVN. Here's what we had in mind: We have trunk and a release branch. All work gets checked into Trunk. When a feature is deemed ready for the next release it is merged into a Release branch. We only have one release branch and just tag "Latest" when we do a push to live. We hope to be able to get all the files changed from Latest to Head to give us a zip that we can upload (any ideas on an easy way to do this via scripting?). So we set all this up and were very happy with ourselves. Except it's not working, and here's why. We work on lots of different features/fixes/problems at once and they don't all get nicely checked in feature complete (but always working at least). Then sometimes you have to wait for Clients to sign off. As a result you end up with revisions which are "ready for live" being scattered among ones which are "still being worked on" in trunk. That means that the completed revisions are not getting merged in sequentially but out of order. I thought SVN could handle this, clever little thing it is, but apparently not. Here's an example: Pete changes some CSS to make a new button look pretty (Revision 1). Dave adds some CSS to the bottom of the same CSS file as Pete's for a new feature (Revision 2). Dave's mod gets the nod so he merges it into Release and commits it with a log message mentioning the revision number and bug tracking id. Pete adds more buttons to finish this mod, no CSS changes here though (Revision 3). Pete then merges his mods (Revisions 1 and 3) into the Head of Release (which has Dave's merge in it), but this overwrites Dave's CSS additions, which now disappear completely. This leads to the site being broken and the Release branch being pretty much useless. So we tried some other ideas, like reverting Release back to "Latest" and then just merging in all the Revisions 1, 2 and 3 in order. This worked fine until we had Revision 4 which was not ready for live and Revision 5 which was. Suddenly we are getting ourselves in knots again with exactly the same problem! Ok, so take three. Revert to Latest, merge in Revision 5, then do an update back to Head. Tree conflicts galore! So that's a no-no. I cracked in the end and built it all up manually, but it's not something I want to do regularly; ideally I want to script our deployment, but I can't while Release is in such a mess. HELP! What the heck are we doing wrong? I can't seem to find any solutions to this problem of wanting different non-sequential revisions in Release. If it's not possible that's fine, but how the heck are we meant to get stuff live easily? We can't branch for every single change; the site takes 30+ minutes to check out, so it would take too long. Side note: we are using TortoiseSVN, so can we keep command line examples to a minimum in any answers? Latest version of TSVN and SVN version 1.6, so we have the funky merge tracking etc. EDIT: An excellent blog post which deals with the dev/release cycle (although using Git, it is still relevant); I thought everyone would like to read it if they found this question interesting.
    (http://nvie.com/git-model) EDIT 2: I wrote a blog post on how to show which branch you are working on in your website, which others have asked me about (http://www.offroadcode.com/2010/5/14/which-svn-branch-are-you-working-on.aspx). Hope that helps. In the meantime we are looking at Kiln and hoping to make the switch next month (gulp!)

    Read the article

  • WP7 listbox binding not working properly

    - by Marco
    A noob error for sure (I started yesterday afternoon developing in WP7), but I'm wasting a lot of time on it. I am posting my class and a small part of my code: public class ChronoLaps : INotifyPropertyChanged { private ObservableCollection<ChronoLap> laps = null; public int CurrentLap { get { return lap; } set { if (value == lap) return; // Some code here .... ChronoLap newlap = new ChronoLap() { // Some code here ... }; Laps.Insert(0, newlap); lap = value; NotifyPropertyChanged("CurrentLap"); NotifyPropertyChanged("Laps"); } } public ObservableCollection<ChronoLap> Laps { get { return laps; } set { if (value == laps) return; laps = value; if (laps != null) { laps.CollectionChanged += delegate { MeanTime = Laps.Sum(p => p.Time.TotalMilliseconds) / (Laps.Count * 1000); NotifyPropertyChanged("MeanTime"); }; } NotifyPropertyChanged("Laps"); } } } MainPage.xaml.cs public partial class MainPage : PhoneApplicationPage { public ChronoLaps History { get; private set; } private void butStart_Click(object sender, EventArgs e) { History = new ChronoLaps(); // History.Laps.Add(new ChronoLap() { Distance = 0 }); LayoutRoot.DataContext = History; } } MainPage.xaml <phone:PhoneApplicationPage> <Grid x:Name="LayoutRoot" Background="Transparent"> <Grid Grid.Row="2"> <ScrollViewer Margin="-5,13,3,36" Height="758"> <ListBox Name="lbHistory" ItemContainerStyle="{StaticResource ListBoxStyle}" ItemsSource="{Binding Laps}" HorizontalAlignment="Left" Margin="5,25,0,0" VerticalAlignment="Top" Width="444"> <ListBox.ItemTemplate> <DataTemplate> <StackPanel Orientation="Horizontal"> <TextBlock Text="{Binding Lap}" Width="40" /> <TextBlock Text="{Binding Time}" Width="140" /> <TextBlock Text="{Binding TotalTime}" Width="140" /> <TextBlock Text="{Binding Distance}" /> </StackPanel> </DataTemplate> </ListBox.ItemTemplate> </ListBox> </ScrollViewer> </Grid> </Grid> </phone:PhoneApplicationPage> The problem is that when I add one or more items to the History.Laps collection, my listbox is not refreshed and these items don't appear. But if I uncomment the // History.Laps.Add(new ChronoLap()... line, that item appears and so does every other item inserted later. More: if I remove that comment and then call History.Laps.Clear() (before or after setting the binding), the binding no longer works. It's like it goes crazy if the collection is empty. I really don't understand the reason... UPDATE AND SOLUTION: If I move History = new ChronoLaps(); LayoutRoot.DataContext = History; from butStart_Click to the MainPage() constructor, everything works as expected. Can someone explain the reason?

    Read the article

  • 8 Backup Tools Explained for Windows 7 and 8

    - by Chris Hoffman
    Backups on Windows can be confusing. Whether you’re using Windows 7 or 8, you have quite a few integrated backup tools to think about. Windows 8 made quite a few changes, too. You can also use third-party backup software, whether you want to back up to an external drive or back up your files to online storage. We won’t cover third-party tools here — just the ones built into Windows. Backup and Restore on Windows 7 Windows 7 has its own Backup and Restore feature that lets you create backups manually or on a schedule. You’ll find it under Backup and Restore in the Control Panel. The original version of Windows 8 still contained this tool, and named it Windows 7 File Recovery. This allowed former Windows 7 users to restore files from those old Windows 7 backups or keep using the familiar backup tool for a little while. Windows 7 File Recovery was removed in Windows 8.1. System Restore System Restore on both Windows 7 and 8 functions as a sort of automatic system backup feature. It creates backup copies of important system and program files on a schedule or when you perform certain tasks, such as installing a hardware driver. If system files become corrupted or your computer’s software becomes unstable, you can use System Restore to restore your system and program files from a System Restore point. This isn’t a way to back up your personal files. It’s more of a troubleshooting feature that uses backups to restore your system to its previous working state. Previous Versions on Windows 7 Windows 7′s Previous Versions feature allows you to restore older versions of files — or deleted files. These files can come from backups created with Windows 7′s Backup and Restore feature, but they can also come from System Restore points. When Windows 7 creates a System Restore point, it will sometimes contain your personal files. Previous Versions allows you to extract these personal files from restore points. This only applies to Windows 7. On Windows 8, System Restore won’t create backup copies of your personal files. The Previous Versions feature was removed on Windows 8. File History Windows 8 replaced Windows 7′s backup tools with File History, although this feature isn’t enabled by default. File History is designed to be a simple, easy way to create backups of your data files on an external drive or network location. File History replaces both Windows 7′s Backup and Previous Versions features. Windows System Restore won’t create copies of personal files on Windows 8. This means you can’t actually recover older versions of files until you enable File History yourself — it isn’t enabled by default. System Image Backups Windows also allows you to create system image backups. These are backup images of your entire operating system, including your system files, installed programs, and personal files. This feature was included in both Windows 7 and Windows 8, but it was hidden in the preview versions of Windows 8.1. After many user complaints, it was restored and is still available in the final version of Windows 8.1 — click System Image Backup on the File History Control Panel. Storage Space Mirroring Windows 8′s Storage Spaces feature allows you to set up RAID-like features in software. For example, you can use Storage Space to set up two hard disks of the same size in a mirroring configuration. They’ll appear as a single drive in Windows. When you write to this virtual drive, the files will be saved to both physical drives. If one drive fails, your files will still be available on the other drive. 
This isn’t a good long-term backup solution, but it is a way of ensuring you won’t lose important files if a single drive fails. Microsoft Account Settings Backup Windows 8 and 8.1 allow you to back up a variety of system settings — including personalization, desktop, and input settings. If you’re signing in with a Microsoft account, OneDrive settings backup is enabled automatically. This feature can be controlled under OneDrive > Sync settings in the PC settings app. This feature only backs up a few settings. It’s really more of a way to sync settings between devices. OneDrive Cloud Storage Microsoft hasn’t been talking much about File History since Windows 8 was released. That’s because they want people to use OneDrive instead. OneDrive — formerly known as SkyDrive — was added to the Windows desktop in Windows 8.1. Save your files here and they’ll be stored online tied to your Microsoft account. You can then sign in on any other computer, smartphone, tablet, or even via the web and access your files. Microsoft wants typical PC users “backing up” their files with OneDrive so they’ll be available on any device. You don’t have to worry about all these features. Just choose a backup strategy to ensure your files are safe if your computer’s hard disk fails you. Whether it’s an integrated backup tool or a third-party backup application, be sure to back up your files.

    Read the article

  • Using git subtree to clone a subdirectory of a project with versioning history then merge it back af

    - by D W
    I am a graduate student with many scripts, bibliography data in BibTeX, a thesis draft in LaTeX, presentations in OpenOffice, posters in Scribus, and figures and result data. I would like to put everything in one project under version control. Then when I need to work on a portion such as the bibliography data, I would like to check that subdirectory out, modify it as necessary and merge it back. I would like the ability to check out one version to my home computer, and a different one to my work computer, make changes to each independently and eventually merge them back. I would also like to be able to check out a piece of code from this big project and import it with versioning into a separate project. If I make changes I'd like to be able to merge them back to the original project. Based on my understanding git subtree can do this. http://github.com/apenwarr/git-subtree There is an example that is along the lines of what I'm trying to do at: http://psionides.jogger.pl/2010/02/04/sharing-code-between-projects-with-git-subtree/ This code is from that site: git clone git://git2.kernel.org/pub/scm/git/git.git newtree=$(git subtree split --prefix=gitweb --annotate='(split) ' \ 0a8f4f0^.. --onto=1130ef3 --rejoin) git branch latest_gitweb $newtree gitk latest_gitweb Say the trunk of my project contained the directories: (bib bin cfg data fig src todo). How would I use git-subtree to split off the bib (bibliography) directory with versioning? When I use git-subtree split --prefix=bib I get 884842f6f4e9896e2e4e9402ee0ef762cd617257 as output, but I don't know where to go from there.

    Read the article

  • Is the Scala 2.8 collections library a case of "the longest suicide note in history" ?

    - by oxbow_lakes
    First note the inflammatory subject title is a quotation made about the manifesto of a UK political party in the early 1980s. This question is subjective but it is a genuine question, I've made it CW and I'd like some opinions on the matter. Despite whatever my wife and coworkers keep telling me, I don't think I'm an idiot: I have a good degree in mathematics from the University of Oxford and I've been programming commercially for almost 12 years and in Scala for about a year (also commercially). I have just started to look at the Scala collections library re-implementation which is coming in the imminent 2.8 release. Those familiar with the library from 2.7 will notice that the library, from a usage perspective, has changed little. For example... > List("Paris", "London").map(_.length) res0: List[Int] List(5, 6) ...would work in either version. The library is eminently usable: in fact it's fantastic. However, those previously unfamiliar with Scala and poking around to get a feel for the language now have to make sense of method signatures like: def map[B, That](f: A => B)(implicit bf: CanBuildFrom[Repr, B, That]): That For such simple functionality, this is a daunting signature and one which I find myself struggling to understand. Not that I think Scala was ever likely to be the next Java (or C/C++/C#) - I don't believe its creators were aiming it at that market - but I think it is/was certainly feasible for Scala to become the next Ruby or Python (i.e. to gain a significant commercial user base). Is this going to put people off coming to Scala? Is this going to give Scala a bad name in the commercial world as an academic plaything that only dedicated PhD students can understand? Are CTOs and heads of software going to get scared off? Was the library re-design a sensible idea? If you're using Scala commercially, are you worried about this? Are you planning to adopt 2.8 immediately or wait to see what happens? Steve Yegge once attacked Scala (mistakenly in my opinion) for what he saw as its overcomplicated type-system. I worry that someone is going to have a field day spreading FUD with this API (similarly to how Josh Bloch scared the JCP out of adding closures to Java). Note - I should be clear that, whilst I believe that Josh Bloch was influential in the rejection of the BGGA closures proposal, I don't ascribe this to anything other than his honestly-held belief that the proposal represented a mistake.

    Read the article

  • How can I graph the Lines of Code history for git repo?

    - by dbr
    Basically I want to get the number of lines of code in the repository after each commit. The only (really crappy) ways I have found are to use git filter-branch to run "wc -l *", or a script that runs git reset --hard on each commit and then runs wc -l. To make it a bit clearer: when the tool is run, it would output the lines of code of the very first commit, then the second and so on. This is what I want the tool to output (as an example): me@something:~/$ gitsloc --branch master 10 48 153 450 1734 1542 I've played around with the ruby 'git' library, but the closest I found was using the .lines() method on a diff, which seems like it should give the added lines (but does not: it returns 0 when you delete lines, for example). require 'rubygems' require 'git' total = 0 g = Git.open(working_dir = '/Users/dbr/Desktop/code_projects/tvdb_api') last = nil g.log.each do |cur| diff = g.diff(last, cur) total = total + diff.lines puts total last = cur end
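    One way to get a per-commit running total without resetting the working copy is to walk the history once with git log --numstat and accumulate additions minus deletions. The sketch below is only an illustration of that idea in Python (not Ruby); it counts whatever text lines git reports, so the totals cover all tracked text files, not just source code.

        import subprocess

        def loc_history(repo=".", branch="HEAD"):
            """Yield (commit_sha, running_line_total) pairs, oldest commit first (sketch)."""
            out = subprocess.check_output(
                ["git", "-C", repo, "log", "--reverse", "--numstat",
                 "--pretty=format:@%H", branch],
                text=True,
            )
            total, commit = 0, None
            for line in out.splitlines():
                if line.startswith("@"):          # start of a new commit
                    if commit is not None:
                        yield commit, total
                    commit = line[1:]
                elif line.strip():                # "<added>\t<deleted>\t<path>"
                    added, deleted, _path = line.split("\t", 2)
                    if added != "-":              # "-" marks binary files
                        total += int(added) - int(deleted)
            if commit is not None:
                yield commit, total

        if __name__ == "__main__":
            for _sha, loc in loc_history():
                print(loc)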

    Read the article

  • SQL SERVER – Finding Out What Changed in a Deleted Database – Notes from the Field #041

    - by Pinal Dave
    [Note from Pinal]: This is the 41st episode of the Notes from the Field series. The real world is full of challenges. When we are reading theory or a book, we sometimes do not realize how the real world actually works, and that is why we have the Notes from the Field series, which is extremely popular with developers and DBAs. Let us talk about the interesting problem of how to figure out what has changed in a DELETED database. Well, you may think I am just throwing words around, but in reality this kind of problem makes a DBA's life interesting, and in this blog post we have an amazing story from Brian Kelley on the same subject. In this episode of the Notes from the Field series database expert Brian Kelley explains how to find out what has changed in a deleted database. Read the experience of Brian in his own words. Sometimes, one of the hardest questions to answer is, "What changed?" A similar question is, "Did anything change other than what we expected to change?" The First Place to Check – Schema Changes History Report: Pinal has recently written on the Schema Changes History report and its requirement for the Default Trace to be enabled. This is always the first place I look when I am trying to answer these questions. There are a couple of obvious limitations with the Schema Changes History report. First, while it reports what changed, when it changed, and who changed it, other than the base DDL operation (CREATE, ALTER, DELETE), it does not present what the changes actually were. This is not something covered by the default trace. Second, the default trace has a fixed size. When it hits that size, the changes begin to overwrite. As a result, if you wait too long, especially on a busy database server, you may find your changes rolled off. But the Database Has Been Deleted! Pinal cited another issue, and that's the inability to run the Schema Changes History report if the database has been dropped. Thankfully, all is not lost. One thing to remember is that the Schema Changes History report is ultimately driven by the Default Trace. As you may have guessed, it's a trace, like any other database trace. And the Default Trace does write to disk. The trace files are written to the defined LOG directory for that SQL Server instance and have a prefix of log_: Therefore, you can read the trace files like any other. Tip: Copy the files to a working directory. Otherwise, you may occasionally receive a file in use error. With the Default Trace files, if you ask the question early enough, you can see the information for a deleted database just the same as any other database. Testing with a Deleted Database: Here's a short script that will create a database, create a schema, create an object, and then drop the database. Without the database, you can't do a standard Schema Changes History report. CREATE DATABASE DeleteMe; GO USE DeleteMe; GO CREATE SCHEMA Test AUTHORIZATION dbo; GO CREATE TABLE Test.Foo (FooID INT); GO USE MASTER; GO DROP DATABASE DeleteMe; GO This sets up the perfect situation where we can't retrieve the information using the Schema Changes History report but where it's still available. Finding the Information: I've sorted the columns so I can see the Event Subclass, the Start Time, the Database Name, the Object Name, and the Object Type at the front, but otherwise, I'm just looking at the trace files using SQL Profiler. As you can see, the information is definitely there: Therefore, even in the case of a dropped/deleted database, you can still determine who did what and when.
You can even determine who dropped the database (loginame is captured). The key is to get the default trace files in a timely manner in order to extract the information. If you want to get started with performance tuning and database security with the help of experts, read more over at Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Can I version dotfiles within a project without merging their history into the main line?

    - by istrasci
    I'm sure this title is fairly obscure. I'm wondering if there is some way in git to tell it that you want to use different versions of a certain file when moving between branches, but to have it .gitignored from the repository overall. Here's my scenario: I've got a Flash Builder project (for a Flex app) that I control with git. Flex apps in Flash Builder projects create three files: .actionScriptProperties, .flexProperties, and .project. These files contain lots of local file system references (source folders, output folders, etc.), so naturally we .gitignore them from our repo. Today, I wanted to use a new library in my project, so I made a separate git branch called lib, removed the old version of the library and put in the new one. Unfortunately, this Flex library information gets stored in one of those three dot files (not sure which offhand). So when I had to switch back to the first branch (master) earlier, I was getting compile errors because master was now linked to the new library (which basically negated why I made lib in the first place). So I'm wondering if there's any way for me to continue to .gitignore these files (so my other developers don't get them), but tell git that I want it to use some kind of local "branch version" so I can locally use different versions of the files for different branches.

    Read the article

  • How do I use cookies to store users' recent site history (PHP)?

    - by ggfan
    I decided to make a recent view box that allows users to see what links they clicked on before. Whenever they click on a posting, the posting's id gets stored in a cookie and is displayed in the recent view box. In my ad.php, I have a definerecentview function that stores the posting's id (so I can call it later when trying to get the posting's information such as title and price from the database) in a cookie. How do I create a cookie array for this? **EXAMPLE:** user clicks on ad.php?posting_id='200' //this is in the ad.php function definerecentview() { $posting_id=$_GET['posting_id']; //this adds 30 days to the current time $Month = 2592000 + time(); $i=1; if (isset($posting_id)){ //lost here for($i=1,$i< ???,$i++){ setcookie("recentviewitem[$i]", $posting_id, $Month); } } } function displayrecentviews() { echo "<div class='recentviews'>"; echo "Recent Views"; if (isset($_COOKIE['recentviewitem'])) { foreach ($_COOKIE['recentviewitem'] as $name => $value) { echo "$name : $value <br />\n"; //right now just shows the posting_id } } echo "</div>"; } How do I use a for loop or foreach loop to make it so that whenever a user clicks on an ad, it adds to an array in the cookie? So it would be like: 1. clicks on ad.php?posting_id=200 --- setcookie("recentviewitem[1]",200,$month); 2. clicks on ad.php?posting_id=201 --- setcookie("recentviewitem[2]",201,$month); 3. clicks on ad.php?posting_id=202 --- setcookie("recentviewitem[3]",202,$month); Then in the displayrecentitem function, I just echo however many cookies were set? I'm just totally lost in creating a for loop that sets the cookies. Any help would be appreciated.
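    The loop the question is circling around gets simpler if the recent ids are kept as one serialized list in a single cookie rather than one numbered cookie per item: read the list, push the new id onto the front, drop duplicates, trim to a fixed length, and write it back. Below is a language-neutral sketch of just that data handling, written in Python rather than PHP purely for illustration (in PHP the read/write would be $_COOKIE and setcookie, and the cap of 10 items is an arbitrary assumption):

        import json

        MAX_RECENT = 10  # assumed cap on how many postings to remember

        def update_recent(cookie_value, posting_id):
            """Return the new cookie string after recording a visit to posting_id (sketch)."""
            recent = json.loads(cookie_value) if cookie_value else []
            # Newest first, no duplicates, bounded length.
            recent = [posting_id] + [pid for pid in recent if pid != posting_id]
            return json.dumps(recent[:MAX_RECENT])

        # Example: the user views postings 200, 201, 202, then 200 again.
        cookie = None
        for pid in (200, 201, 202, 200):
            cookie = update_recent(cookie, pid)
        print(cookie)  # -> [200, 202, 201]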

    Read the article

  • Parallel programming, are we not learning from history again?

    - by mezmo
    I started programming because I was a hardware guy that got bored; I thought the problems being solved on the software side of things were much more interesting than those in hardware. At that time, most of the electrical buses I dealt with were serial, some moving data as fast as 1.5 megabit!! ;) Over the years these evolved into parallel buses in order to speed communication up; after all, transferring 8/16/32/64 (whatever) bits at a time incredibly speeds up the transfer. Well, our ability to create and detect state changes got faster and faster, to the point where we could push data so fast that interference between parallel traces or cable wires made cleaning the signal too expensive to continue, and we still got reasonable performance from serial interfaces; heck, some graphics interfaces have even been happening over USB for a while now. I think I'm seeing a like trend in software now: our processors were getting faster and faster, so we got good at building "serial" software. Now we've hit a speed bump in raw processor speed, so we're adding cores, or "traces", to the mix, and spending a lot of time and effort on learning how to properly use those. But I'm also seeing what I feel are advances in things like optical switching and even quantum computing that could take us, far more quickly than I was expecting, back to the point where "serial programming" again makes the most sense. What are your thoughts?

    Read the article

  • How to define a history chart in crystal reports .net (2008)?

    - by hp
    Hi, I want to display a Bar Chart in a Report that shows the sum of a measure grouped by month for the last 24 months. The months that do not have any tuples do not show up in the graph. I do not want that: I want exactly 24 groups/bars, which are 0 if there are no tuples. What is the best way to do this? Thanks

    Read the article

  • Are a compound command's second and subsequent lines not affected by HISTCONTROL in bash?

    - by UniMouS
    When consulting bash's man page, I read this sentence about bash history: The second and subsequent lines of a multi-line compound command are not tested, and are added to the history regardless of the value of HISTCONTROL. But I have tried this: $ HISTCONTROL=ignorespace $ if [ -f /var/log/messages ] > then > echo "/var/log/message exists." > fi $ history | tail -2 18 HISTCONTROL=ignorespace 19 history | tail -2 Note that the if is preceded by a space. Why does the second line of this if compound command still not appear in the history?

    Read the article

  • Backbone.js: How to utilize router.navigate to manipulate browser history?

    - by Xavier_Ex
    I am writing something like a registration process containing several steps, and I want to make it a single-page-like system, so after some studying, Backbone.js is my choice. Every time the user completes the current step they will click on a NEXT button I create, and I use the router.navigate method to update the URL, as well as loading the content of the next page and doing some fancy transition with javascript. The result is that the URL is updated while the page is not refreshed, giving a smooth user experience. However, when the user clicks on the back button of the browser, the URL gets updated to that of a previous step, but the content stays the same. My question is: in what way can I capture such an event and load the content of the previous step to present to the user? Or even better, can I rely on the browser cache to load that previously loaded page? EDIT: in particular, I'm trying something like mentioned in this article.

    Read the article

  • How to migrate a codebase from one svn repo to another preserving history?

    - by chotchki
    I have a branch in a badly structured SVN repo that needs to be stripped out and moved to another SVN repository. (I'm trying to clean it up some.) If I do an 'svn log' and don't stop on copy/rename, I can see all 3427 commits that I care about. Is there some way to dump the revisions out, short of writing some major scripts? I would follow the advice in this question, but this branch has been moved all over the place and I would like to preserve the moves as well. Any ideas?

    Read the article

  • Combining Scrum, TFS2010 and Email to keep everyone in the loop

    - by Martin Hinshelwood
    Often you will receive rich information from your Product Owner (Customer) about tasks. That information can be in the form of Word documents, HTML Emails and Pictures, but you generally receive them in the context of an Email. You need to keep these so your Team can refer to them later, and so you can send a "done" when the task has been completed. This preserves the "history" of the task and allows you to keep relevant parties included in any future conversation. At SSW we keep the original email so that we can reply Done and delete the email. But keeping it in your email does not help other members of the team if they complete the task and need to send the "done". Worse yet, the description field in Team Foundation Server 2010 (TFS 2010) does not support HTML and images, nor does the default task template support an "interested parties" or CC field. You can attach this content manually, but it can be time consuming. Figure: Description only supports plain text, and History supports HTML with no images What should we do? At SSW we always follow the rules, and it just so happened that we have rules to both achieve this, and to make it easier. You should follow the existing Rules to Better Project Management and attach the email to your task so you can refer to and reply to it later when you close the task: Do you know what Outlook add-ins you need? Describe the work item request in an email Use Outlook Add-in to move the email to a TFS Work Item When replying to an email with "done" you should follow: Do you update Team Companion template, so the email "subject" doesn't change? Do you update Team Companion template, so you can generate a proper "done" mail? Following these simple rules will help your Product Owner understand you better, and allow your team to more effectively collaborate with each other. An added bonus is that we are keeping the email history in sync with TFS. When you "reply all" to the email, all of the interested parties to the Task are also included. This notifies those that may have been blocked by your task and keeps them up to date with its status. This has been published as Do you know to ensure that relevant emails are attached to tasks in our Rules to Better Scrum using TFS. What could we do better? I would like to see this process automated so that we capture the information correctly in the task without the need to use email. This would require a change to the process template in Team Foundation Server to add an "Interested Parties" field. Each reply to the email would need to be automatically processed into a Work Item. This could be done by adding a task identifier as the first item in the "Relates to" email header, and copying in an email address that you watch. This would then parse out the relevant information and add the new message to the history, update the "Interested parties" field and attach the Images. Upon reflection, it may even be possible, though more difficult, to do this using ONLY the History field and including some of the header information in there to build a done email with history. This would not currently deal with email "forks" well, but I think it would be adequate for our needs. It would be nice if we could find time to implement this, but currently it is but a pipe dream. Maybe Microsoft could implement something in the next version of Team Foundation Server, and in the meantime we have a process that works well. Technorati Tags: Scrum,SSW Rules,TFS 2010,TFS 2008

    Read the article

  • What would be the best approach to make revisions of user content?

    - by Kevin Simper
    I have searched and could not find any information about it. What is the best approach to storing revisions? I have a website where the user can write a document which can be fairly long (200-300 lines). How do you determine when to make a revision? It is not a scalable solution to make a new one whenever they save, because that would be useless to the user when they want to look back, and it would require quite a lot of space. You could use time and say that for every 15 minutes they are working on it there would be a revision, but that would sometimes capture nothing, and sometimes the whole document would have completely changed. I could make a diff from the previous revision, compare it by line and look at what percentage of the lines have changed. How are others doing revisions?
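    The "percentage of lines changed" idea from the question is cheap to compute with a standard diff, and a draft could be promoted to a stored revision only when that ratio (or the elapsed time) crosses a threshold. A minimal sketch of that heuristic in Python follows; the 20% cutoff is an arbitrary assumption, not a recommendation:

        import difflib

        def change_ratio(old_text, new_text):
            """Approximate fraction of lines that differ between two versions (0.0 to 1.0)."""
            matcher = difflib.SequenceMatcher(None, old_text.splitlines(), new_text.splitlines())
            # ratio() measures similarity, so 1 - ratio() is the changed share.
            return 1.0 - matcher.ratio()

        def should_store_revision(last_revision, current_draft, threshold=0.20):
            """Store a new revision once roughly 20% of the lines have changed (assumed cutoff)."""
            return change_ratio(last_revision, current_draft) >= threshold

        print(should_store_revision("a\nb\nc\nd\ne", "a\nb\nc\nd\ne"))  # False: nothing changed
        print(should_store_revision("a\nb\nc\nd\ne", "a\nX\nY\nd\ne"))  # True: 2 of 5 lines changed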

    Read the article

  • Should Git be used for documentation and project management? Should the code be in a separate repository?

    - by EmpireJones
    I'm starting up a Git repository for a group project. Does it make sense to store documents in the same Git repository as code? It seems like this conflicts with the nature of the git revision flow. Here is a summary of my question(s): Is the Git revisioning style going to be confusing if both code and documents are checked into the same repository? Experiences with this? Is Git a good fit for documentation revision control? I am NOT asking if a revision control system in general should or shouldn't be used for documentation - it should. Thanks for the feedback so far!

    Read the article
