Search Results

Search found 2767 results on 111 pages for 'history'.

Page 29 of 111

  • How to migrate from SourceGear Fortress 1.1.x to TFS 2010?

    - by Jaxidian
    I found this question, but it was only similar and, more importantly, dated by over a year. I'm hoping there is something out there that I can't find and that is better than what that answer points to. Requirements: Preserve source code history (even if only loosely, via plain text, since all of our prior users may not exist in the TFS repository). Preserve our item-tracking history (again, even if just loosely, since Fortress wasn't all that great about this). Ultimately, I want some searchable history of what we have done in the past and why. I don't necessarily need all of the hooks in place tying work items to source code or anything like that, but I do need the discussions and decisions associated with the work items kept around.

    Read the article

  • Access current_user in model

    - by LearnRails
    I have 3 tables: items (columns: name, type), history (columns: date, username, item_id), and user (columns: username, password). When a user, say "ABC", logs in and creates a new item, a history record gets created with the following after_create filter. How do I assign this username 'ABC' to the username field in the history table through this filter?
    class Item < ActiveRecord::Base
      has_many :histories
      after_create :update_history

      def update_history
        histories.create(:date => Time.now, :username => ?)
      end
    end
    My login method in session_controller:
    def login
      if request.post?
        user = User.authenticate(params[:username])
        if user
          session[:user_id] = user.id
          redirect_to(:action => 'home')
          flash[:message] = "Successfully logged in"
        else
          flash[:notice] = "Incorrect user/password combination"
          redirect_to(:action => "login")
        end
      end
    end
    I am not using any authentication plugin. I would appreciate it if someone could tell me how to achieve this without using a plugin (like userstamp, etc.) if possible.

    Read the article

  • SVN: is it possible to delete a branch that was copied, removed, etc. for good?

    - by dimus
    I have to remove a branch from SVN history for good. Normally I would use:
    svnadmin dump /path/to/repo | svndumpfilter --drop-empty-revs --renumber-revs exclude /branches/bad_branch
    However, this branch was not just created but also moved and then removed, and the dump script fails to process downstream information with messages like: Invalid copy source path '/branches/bad_branch' So I imagine two ways to cope with the problem: keep only the last few revisions of the history and put the current repository as an archive on the web, or make a dump up to the revision where the 'bad_branch' was created and apply the rest of the changes as a patch, therefore losing the history of a few recent commits. Is there a better, cleaner way to deal with this?

    Read the article

  • I need to calculate the date/time difference between rows in one datetime column

    - by Stringz
    Details: I have a notes table with the following columns:
    ID - INT(3), Date - DATETIME, Note - VARCHAR(100), Title - VARCHAR(100), UserName - VARCHAR(100)
    This table holds notes, along with the titles entered by UserName on the specified date/time. I need to calculate the date/time difference between TWO ROWS in the SAME COLUMN. For example, the table contains this piece of information:
    64, '2010-03-26 18:16:13', 'Action History', 'sending to Level 2.', 'Salman Khwaja'
    65, '2010-03-26 18:19:48', 'Assigned By', 'This is note one for the assignment of RF.', 'Salman Khwaja'
    66, '2010-03-27 19:19:48', 'Assigned By', 'This is note one for the assignment of CRF.', 'Salman Khwaja'
    Now I need the following result set in query reports using MySQL:
    TASK            - TIME Taken
    ACTION History  - 2010-03-26 18:16:13
    Assigned By     - 00:03:35
    Assigned By     - 25:00:00
    A smarter approach would be:
    TASK            - TIME Taken
    ACTION History  - 2010-03-26 18:16:13
    Assigned By     - 3 minutes 35 seconds
    Assigned By     - 1 day, 1 hour
    I would appreciate it if someone could give me the plain query, along with PHP code to embed it too.
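    One way to express this in MySQL (a sketch only: it assumes the notes table above, pairs each row with the immediately preceding row by Date, and leaves TimeTaken NULL for the earliest row, which has no predecessor):
        -- Sketch: each row joined to the row directly before it by Date.
        SELECT cur.Note AS Task,          -- or cur.Title, whichever column holds the task label
               TIMEDIFF(cur.Date, prev.Date) AS TimeTaken
        FROM notes cur
        LEFT JOIN notes prev
               ON prev.Date = (SELECT MAX(p.Date) FROM notes p WHERE p.Date < cur.Date)
        ORDER BY cur.Date;
    For the human-readable variant, TIMESTAMPDIFF(MINUTE, prev.Date, cur.Date) (or HOUR/DAY) gives numbers that are easy to format on the PHP side.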

    Read the article

  • How to use text columns in a trigger

    - by Jeremy
    I am trying to use an update trigger in SQL Server 2000 so that when an update occurs, I insert a row into a history table, so I retain all history on a table:
    CREATE TRIGGER trUpdate_MyTable ON MyTable
    FOR UPDATE
    AS
    INSERT INTO [MyTableHistory] (
        [AuditType], [MyTable_ID], [Inserted], [LastUpdated], [LastUpdatedBy],
        [Vendor_ID], [FromLocation], [FromUnit], [FromAddress], [FromCity],
        [FromProvince], [FromContactNumber], [Comment])
    SELECT [AuditType] = 'U', D.*
    FROM deleted D
    JOIN inserted I ON I.[ID] = D.[ID]
    GO
    Of course, I get the error "Cannot use text, ntext, or image columns in the 'inserted' and 'deleted' tables." I tried joining to MyTable instead of deleted, but because the trigger fires after the update, it ends up inserting the new record into the history table, when I want the original record. How can I do this and still use text columns?
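    One workaround often suggested for SQL Server 2000 is an INSTEAD OF UPDATE trigger: unlike AFTER triggers, INSTEAD OF triggers can reference text, ntext and image columns in inserted and deleted, and the base table still holds the pre-update values while the trigger runs. A rough sketch only, not a drop-in replacement (the column lists must match the real tables, and step 2 has to copy every column the update may change):
        CREATE TRIGGER trUpdate_MyTable ON MyTable
        INSTEAD OF UPDATE
        AS
        BEGIN
            -- 1. Archive the current (pre-update) rows, text columns included,
            --    by reading them from the base table rather than from deleted.
            INSERT INTO [MyTableHistory]   -- same column list as in the trigger above
            SELECT [AuditType] = 'U', M.*
            FROM MyTable M
            JOIN inserted I ON I.[ID] = M.[ID]

            -- 2. Apply the update that this trigger intercepted.
            UPDATE M
            SET M.[Comment] = I.[Comment]  -- ...and so on for every column the update may change
            FROM MyTable M
            JOIN inserted I ON I.[ID] = M.[ID]
        END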

    Read the article

  • Does a version control database storage engine exist?

    - by Zak
    I was just wondering if a storage engine type existed that allowed you to do version control on row level contents. For instance, if I have a simple table with ID, name, value, and ID is the PK, I could see that row 354 started as (354, "zak", "test")v1 then was updated to be (354, "zak", "this is version 2 of the value")v2 , and could see a change history on the row with something like select history (value) where ID = 354. It's kind of an esoteric thing, but it would beat having to keep writing these separate history tables and functions every time a change is made...
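    For comparison, the hand-rolled pattern the question is trying to avoid looks roughly like this (a sketch only; the items_history table and its columns are made up for illustration):
        -- A side table storing one row per version of each items row.
        CREATE TABLE items_history (
            id         INT          NOT NULL,  -- same id as in items
            version    INT          NOT NULL,  -- 1, 2, 3, ... per id
            name       VARCHAR(100) NOT NULL,
            value      VARCHAR(100) NOT NULL,
            changed_at DATETIME     NOT NULL,
            PRIMARY KEY (id, version)
        );

        -- The hoped-for "select history (value) where ID = 354", emulated by hand:
        SELECT version, value, changed_at
        FROM items_history
        WHERE id = 354
        ORDER BY version;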

    Read the article

  • Revision histories and documenting changes

    - by jasonline
    I work on legacy systems and I used to see the revision history of files or functions being modified every release in the source code, for example:
    //
    // Rev. No   Date         Author   Description
    // -------------------------------------------------------
    // 1.0       2009/12/01   johnc    <Some description>
    // 1.1       2009/12/24   daveb    <Some description>
    // -------------------------------------------------------
    void Logger::initialize()
    {
        // a = b;     // Old code, just commented and not deleted
        a = b + c;    // New code
    }
    I'm just wondering if this way of documenting history is still practiced by many today. If yes, how do you apply modifications to the source code - do you comment out the old code or delete it completely? If not, what's the best way to document these revisions? If you use version control systems, does it follow that your source files contain pure source code, except for comments when necessary (no revision history for each function, etc.)?

    Read the article

  • tsql proc logic help

    - by bacis09
    I am weak in SQL and need some help working through some logic with my proc. Three pieces: a stored procedure, table1, and table2.
    Table1 stores the most recent data for specific customer IDs:
    Customer_id   status_dte   status_cde   app_dte
    001           2010-04-19   Y            2010-04-19
    Table2 stores the history of data for specific customer IDs, for example:
    Log_id   customer_Id   status_dte   status_cde
    01       001           2010-04-20   N
    02       001           2010-04-19   Y
    03       001           2010-04-19   N
    04       001           2010-04-19   Y
    The stored procedure currently throws an error if the status date from table1 is less than app_date in table1:
    If @status_dte < app_date Error
    Note: @status_dte is a variable stored as the status_dte from table1.
    However, I want it to throw an error when the EARLIEST status_dte from table2 with a status_cde of 'Y' is less than the app_dte column in table1. Keep in mind that this earliest date is not stored anywhere; the history of data changes per customer. Another customer might have the following history:
    Log_id   customer_Id   status_dte   status_cde
    01       002           2010-04-20   N
    02       002           2010-04-18   N
    03       002           2010-04-19   Y
    04       002           2010-04-19   Y
    Any ideas on how I can approach this?
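    A sketch of how the check might look (assuming SQL Server; @customer_id is a hypothetical parameter identifying the customer being processed, and the table and column names come from the question):
        DECLARE @earliest_y_dte DATETIME

        -- Earliest status date with a 'Y' status code in the history table.
        SELECT @earliest_y_dte = MIN(t2.status_dte)
        FROM table2 t2
        WHERE t2.customer_id = @customer_id
          AND t2.status_cde = 'Y'

        -- Compare against app_dte in table1 instead of the single @status_dte value.
        IF @earliest_y_dte < (SELECT t1.app_dte
                              FROM table1 t1
                              WHERE t1.customer_id = @customer_id)
        BEGIN
            RAISERROR('Earliest ''Y'' status date is before the application date.', 16, 1)
        END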

    Read the article

  • How to handle splitting a file under source control?

    - by sharptooth
    I have a .cpp file and a .h file containing a class. Class.cpp contains the implementation and Class.h contains the definition. The class is overcomplicated, so I want to separate some code and move it into a separate class. So I create NewClass.cpp and NewClass.h and move the code there. How do I handle this when the files are under SVN? I can simply "svn add" the two new files, but then they will appear as new and will have no history. I could instead "svn copy and rename" the two initial files and edit the two old files and the two new files - then the two new files will have common history. Which approach is better from the point of view of version control? Should the new files share history with the old files or should they appear as new?

    Read the article

  • Segfault when calling a method c++

    - by shuttle87
    I am fairly new to C++ and I am a bit stumped by this problem. I am trying to assign a variable from a call to a method in another class, but it always segfaults. My code compiles with no warnings and I have checked that all variables are correct in gdb, but the function call itself seems to cause a segfault. The code I am using is roughly like the following:
    class History{
    public:
        bool test_history();
    };

    bool History::test_history(){
        std::cout<<"test"; //this line never gets executed
        //more code goes in here
        return true;
    }

    class Game{
    private:
        bool some_function();
    public:
        History game_actions_history;
        bool local_variable;
    };

    bool Game::some_function(){
        local_variable = game_actions_history.test_history();
        if (local_variable == true){
            return true;
        }
        else{
            return false;
        }
    }
    Any tips or advice is greatly appreciated!

    Read the article

  • Converting ID to column name and replacing NULL with the last known value

    - by stackoverflowuser
    TABLE_A
    Rev   ChangedBy
    -----------------
    1     A
    2     B
    3     C
    TABLE_B
    Rev   Words           ID
    --------------------------
    1     description_1   52
    1     history_1       54
    2     description_2   52
    3     history_2       54
    The Words column datatype is ntext.
    TABLE_C
    ID    Name
    -----------------
    52    Description
    54    History
    OUTPUT
    Rev   ChangedBy   Description     History
    --------------------------------------------
    1     A           description_1   history_1
    2     B           description_2   history_1
    3     C           description_2   history_2
    The Description and History columns will have the previous known values if they don't have a value for that Rev no.; i.e., since Description does not have an entry in TABLE_B for Rev no. 3, the last known value description_2 appears in that column for Rev no. 3 in the output.
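    One way to produce that output (a sketch only: it assumes SQL Server, hard-codes the two IDs listed in TABLE_C, and on SQL Server 2000 the ntext column would need different handling than the CAST shown here):
        SELECT a.Rev,
               a.ChangedBy,
               (SELECT TOP 1 CAST(b.Words AS NVARCHAR(MAX))   -- latest Description at or before this Rev
                FROM TABLE_B b
                WHERE b.ID = 52 AND b.Rev <= a.Rev
                ORDER BY b.Rev DESC) AS Description,
               (SELECT TOP 1 CAST(b.Words AS NVARCHAR(MAX))   -- latest History at or before this Rev
                FROM TABLE_B b
                WHERE b.ID = 54 AND b.Rev <= a.Rev
                ORDER BY b.Rev DESC) AS History
        FROM TABLE_A a
        ORDER BY a.Rev;
    A dynamic pivot over TABLE_C would avoid hard-coding 52 and 54, but the shape of the query stays the same.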

    Read the article

  • Hibernate Annotation for Entity existing in more than 1 catalog

    - by user286395
    I have a Person entity mapped by Hibernate to a database table in a database catalog "Active". After a period of time, records in this database table in the "Active" catalog are archived/moved to an exact copy of the table in a database Catalog "History". I have the need to retrieve from both the Active and History Catalogs. Is there a better way to model this with Hibernate annotations than making an abstract class that 2 classes extend from. This is what I have now. @MappedSuperclass public abstract class Person { @Id private Integer id; private String name; } @Entity @Table(name="Person", catalog="Active") public class PersonActive extends Person { } @Entity @Table(name="Person", catalog="History") public class PersonHistory extends Person { }

    Read the article

  • Online video tutorials for HTML 5

    - by Albers
    Here are some of the best introductory HTML5 videos I have found online/for free.
    Mix 2011:
    HTML5 for Skeptics - Scott Stansfield - channel9.msdn.com/Events/MIX/MIX11/EXT21
    Filling the HTML5 Gaps with Polyfills and Shims - Ray Bango - channel9.msdn.com/Events/MIX/MIX11/HTM04
    50 Performance Tricks to Make Your HTML5 Web Sites Faster - Jason Weber - channel9.msdn.com/Events/MIX/MIX11/HTM01
    TechEd 2011:
    HTML5 and CSS3 Techniques You Can Use Today - Todd Anglin - channel9.msdn.com/Events/TechEd/NorthAmerica/2011/DEV334
    Google IO:
    HTML5 Showcase for Web Developers: The Wow and the How - www.youtube.com/watch?v=WlwY6_W4VG8
    css-tricks:
    localStorage for Forms - Chris Coyier - css-tricks.com/video-screencasts/96-localstorage-for-forms/
    Best Practices with Dynamic Content - Chris Coyier (this one talks about hash tags - take a look at the History API too) - css-tricks.com/video-screencasts/85-best-practices-dynamic-content/
    Overview of HTML5 Forms Types, Attributes, and Elements - Chris Coyier - css-tricks.com/video-screencasts/99-overview-of-html5-forms-types-attributes-and-elements/
    Bruce Lawson - HTML5: Who, What, When, Why - www.ubelly.com/2011/10/bruce-lawson-html5-who-what-when-why/
    Bruce Lawson is an evangelist for Opera, and in this video he provides an overview including the history & philosophy of HTML5.

    Read the article

  • D Bitly Shortens Links on Android Phones

    - by Jason Fitzpatrick
    If you share a lot of links from your Android phone (or would share more if it was easier) D Bitly is an unofficial Bitly client that makes short work of URL shrinking. Not only can you shorten URLs with D Bitly but you can also access your URL shortening history at Bit.ly. Shared a link via IM or email earlier in the day and want to share it right now from your Android device? You can pull it up and one-click share it from D Bitly. Want to shorten a new URL? You can shorten it, share it, and add it to your shortened URL history. Hit up the link below to grab a free copy and take it for a test drive. D Bitly [Android Market via Addictive Tips]

    Read the article

  • Personal search – the future of search

    - by jamiet
    [Four months ago I wrote a meandering blog post on another blogging site entitled Personal search – the future of search. The points I made therein are becoming more relevant to what I'm reading about and hoping to get involved in in the future so I'm re-posting here to a wider audience to hopefully get some more feedback and gauge reaction to it. This has been prompted by the book Pull by David Siegel that is forming my current holiday reading (recommended to me by a commenter on my previous post Interesting things – Twitter annotations and your phone as a web server) and in particular by Siegel's notion of us all in the future having a personal online data vault.] My one-time colleague Paul Dawson recently wrote an article called The Future of Search and in it he proposed some interesting ideas. Some choice quotes: The growth of Chinese search giant Baidu is an indicator that fully localised and tailored content and offerings have great traction with local audiences This trend is already driving an increase in the use of specialist searches … Look at how Farecast is now integrated into Bing for example, or how Flightstats is now integrated into Google. Search does not necessarily have to begin with a keyword, but could start instead with a click or a touch. Take a look at Retrievr. Start drawing a picture in the box and see what happens. This is certainly search without the need for typing in keywords search technology has advanced greatly in recent years. The recent launch of Microsoft Live Labs' Pivot has given us a taste of what we can expect to see in the future This really got me thinking about where search might go in the future and as my mind wandered I realised that as the amount of data that we collect about ourselves increases so too will the need and the desire to search it. The amount of electronic data that exists about each and every person is increasing and in the near future I fully expect that we are going to be able to store personal data such as: A history of our location (in fact Google Latitude already offers this facility) Recordings of all our phone conversations Health information history (weight, blood pressure etc…) Energy usage Spending history What films we watch, what radio stations we listen to Voting history Of course, most of this stuff is already stored somewhere but crucially we don't have easy access to it. My utilities supplier knows how much electricity I'm using but if I want to know for myself I have to go and dig through my statements (assuming I have kept them). Similarly my doctor probably has ready access to all of my health records, my bank knows exactly what I have spent my money on, my cable supplier knows what I watch on TV and my mobile phone supplier probably knows exactly where I am and where I've been for the past few years. Strange then that none of this electronic information is available to me in a way that I can really make use of it; after all, it's MY information. It's MY data. I created it. That is set to change. As technologies mature and customers become more technically cognizant they will demand more access to the data that companies hold about them. The companies themselves will realise the benefit that they derive from giving users what they want and will embrace ways of providing it.
As a result the amount of data that we store about ourselves is going to increase exponentially and the desire to search and derive value from that data is going to grow with it; we are about to enter the era of the "personal datastore" and we will want, and need, to search through it in order to make sense of it all. It's interesting then that today when we think of search we think of search engines and yet in these personal datastores we're referring to data that search engines can't touch because WE own it and we (hopefully) choose to keep it private. Someone, I know not who, is going to lead in this space by making it easy for us to search our data and retrieve information that we have either forgotten or maybe didn't even know in the first place. We will learn new things about ourselves and about our habits; we will share these findings with whomever we choose; we will compare what we discover with others; we will collaborate for mutual benefit and, most of all, we will educate ourselves as to how to live our lives better. Search will be the means to that end; it will enable us to make sense of the wealth of information that we will collect day in day out. The future of search is personal; why would we be interested in anything else? @Jamiet

    Read the article

  • Beat detection and FFT

    - by Quincy
    So I am working on a platformer game which includes music with beat detection. I am currently using a simple check: if the energy stored in the history buffer is smaller than the current energy, there is a beat. The problem with this is that, of course, with songs like rock songs where you have a pretty steady amplitude, this isn't going to work. So I looked further and found algorithms splitting the sound into multiple bands using FFT. I then found this: http://en.literateprograms.org/Cooley-Tukey_FFT_algorithm_(C) The only problem I'm having is that I am quite new to audio and I have no idea how to use that to split the signal up into multiple signals. So my question is: how do you use an FFT to split a signal into multiple bands? Also, for those interested, this is my algorithm in C#:
    // C = threshold, N = size of history buffer / 1024
    public void PlaceBeatMarkers(float C, int N) {
        List<float> instantEnergyList = new List<float>();
        short[] samples = soundData.Samples;
        float timePerSample = 1 / (float)soundData.SampleRate;
        int sampleIndex = 0;
        int nextSamples = 1024;

        // Calculate instant energy for every 1024 samples.
        while (sampleIndex + nextSamples < samples.Length) {
            float instantEnergy = 0;
            for (int i = 0; i < nextSamples; i++) {
                instantEnergy += Math.Abs((float)samples[sampleIndex + i]);
            }
            instantEnergy /= nextSamples;
            instantEnergyList.Add(instantEnergy);

            if(sampleIndex + nextSamples >= samples.Length)
                nextSamples = samples.Length - sampleIndex - 1;

            sampleIndex += nextSamples;
        }

        int index = N;
        int numInBuffer = index;
        float historyBuffer = 0;

        //Fill the history buffer with n * instant energy
        for (int i = 0; i < index; i++) {
            historyBuffer += instantEnergyList[i];
        }

        // If instantEnergy / samples in buffer < instantEnergy for the next sample then add beatmarker.
        while (index + 1 < instantEnergyList.Count) {
            if(instantEnergyList[index + 1] > (historyBuffer / numInBuffer) * C)
                beatMarkers.Add((index + 1) * 1024 * timePerSample);

            historyBuffer -= instantEnergyList[index - numInBuffer];
            historyBuffer += instantEnergyList[index + 1];
            index++;
        }
    }

    Read the article

  • Benefits and features of different requirements-management systems and tools available?

    - by Gnark
    I am looking for a good comparison of the different professional requirement management tools available. I am especially interested in the features available within the different software solutions. In addition to the "obvious" features, I am looking for a professional Requirement Management System that supports:
    - multi-lingual use
    - customizable generation of documentation & history (graphs)
    - search features (e.g. full-text for comments), ordering, priorities
    - version history
    - bi-directional traceability of changes, artefacts, requirements, changes in requirements, etc.
    Any kind of integration of V-Model XT would be a really-nice-to-have feature... Besides, I'd like to hear any personally motivated recommendations and/or experiences with different requirement management systems. Any input is highly appreciated. Content consulted: similar question, reqm tool with v-model, nice but too old paper (pdf), Tools Journal

    Read the article

  • Benefits and features of different requirements-management systems and tools available?

    - by DevDevDave
    I am looking for a good comparison of the different professional requirement management tools available. I am especially interested in the features available within the different software solutions. In addition to the "obvious" features, I am looking for a professional Requirement Management System that supports:
    - multi-lingual use
    - customizable generation of documentation & history (graphs)
    - search features (e.g. full-text for comments), ordering, priorities
    - version history
    - bi-directional traceability of changes, artefacts, requirements, changes in requirements, etc.
    Any kind of integration of V-Model XT would be a really-nice-to-have feature. Besides, I'd like to hear any personally motivated recommendations and/or experiences with different requirement management systems. Any input is highly appreciated. Content consulted: similar question, reqm tool with v-model, ToolsJournal.com, nice but too old paper (pdf)

    Read the article

  • Fastest way to get up to speed on webapp development with ASP.NET?

    - by leeand00
    I'm trying to get better at C# ASP.NET 3.5 development (no, none of that MVC stuff :), and fast! My boss gave me a book to read on it from Wrox, but the thing reads like a history novel, telling you how things worked as far back as ASP.NET 1.0. The web application we are developing is completely in ASP.NET 3.5, so I don't need to read through any of the history (maybe I'm wrong about that... but I don't really have the time to read about that...). Do you have any suggestions for a faster way (a book, or a series of tutorials) to come up to speed on it? I'd like to learn about UI components, database access, etc. P.S. In a previous position I was a JSP/J2EE developer (and I used MVC all the time! :-D) P.P.S. I did take a course on it in 2008 at some point, but it seemed all very pointy and clicky. I wanna learn the code stuff! The how it works, and where the events are!

    Read the article

  • Gerrit, git and reviewing whole branch

    - by liori
    I'm now learning Gerrit (which is the first code review tool I have used). Gerrit requires a reviewed change to consist of a single commit. My feature branch has about 10 commits. The Gerrit-preferred way is to squash those 10 commits into a single one. However, this way, if the commit is merged into the target branch, the internal history of that feature branch will be lost. For example, I won't be able to use git-bisect to bisect into those commits. Am I right? I am a little bit worried about this state of things. What is the rationale for this choice? Is there any way of doing this in Gerrit without losing history?

    Read the article

  • How to commit a file conversion?

    - by l0b0
    Say you've committed a file of type foo in your favorite vcs: $ vcs add data.foo $ vcs commit -m "My data" After publishing you realize there's a better data format bar. To convert you can use one of these solutions: $ vcs mv data.foo data.bar $ vcs commit -m "Preparing to use format bar" $ foo2bar --output data.bar data.bar $ vcs commit -m "Actual foo to bar conversion" or $ foo2bar --output data.foo data.foo $ vcs commit -m "Converted data to format bar" $ vcs mv data.foo data.bar $ vcs commit -m "Renamed to fit data type" or $ foo2bar --output data.bar data.foo $ vcs rm data.foo $ vcs add data.bar $ vcs commit -m "Converted data to format bar" In the first two cases the conversion is not an atomic operation and the file extension is "lying" in the first commit. In the last case the conversion will not be detected as a move operation, so as far as I can tell it'll be difficult to trace the file history across the commit. Although I'd instinctively prefer the last solution, I can't help thinking that tracing history should be given very high priority in version control. What is the best thing to do here?

    Read the article

  • A Grand Unified Theory of AI

    A new approach unites two prevailing but often opposed strains in the history of AI research.

    Read the article
