Search Results

Search found 2767 results on 111 pages for 'history'.

Page 26/111 | < Previous Page | 22 23 24 25 26 27 28 29 30 31 32 33  | Next Page >

  • Parallel programming, are we not learning from history again?

    - by mezmo
    I started programming because I was a hardware guy who got bored; I thought the problems being solved on the software side of things were much more interesting than those in hardware. At that time, most of the electrical buses I dealt with were serial, some moving data as fast as 1.5 megabits!! ;) Over the years these evolved into parallel buses in order to speed communication up; after all, transferring 8, 16, 32, or 64 bits at a time speeds up the transfer enormously. Well, our ability to create and detect state changes got faster and faster, to the point where we could push data so fast that interference between parallel traces or cable wires made cleaning up the signal too expensive to continue, and we still got reasonable performance from serial interfaces; heck, some graphics interfaces have even been running over USB for a while now. I think I'm seeing a similar trend in software now: our processors were getting faster and faster, so we got good at building "serial" software. Now we've hit a speed bump in raw processor speed, so we're adding cores, or "traces", to the mix, and spending a lot of time and effort on learning how to use them properly. But I'm also seeing what I feel are advances in things like optical switching and even quantum computing that could take us back, far more quickly than I was expecting, to the point where "serial programming" again makes the most sense. What are your thoughts?

    Read the article

  • How to define a history chart in Crystal Reports .NET (2008)?

    - by hp
    Hi, I want to display a bar chart in a report that shows the sum of a measure grouped by month, for the last 24 months. Months that do not have any tuples do not show up in the graph; I do not want that. I want exactly 24 groups/bars, which should be 0 if there are no tuples. What is the best way to do this? Thanks

    Read the article

  • Are a compound command's second and subsequent lines not affected by HISTCONTROL in bash?

    - by UniMouS
    When consulting bash's man page, I read this sentence about bash history:

    The second and subsequent lines of a multi-line compound command are not tested, and are added to the history regardless of the value of HISTCONTROL.

    But I have tried this:

    $ HISTCONTROL=ignorespace
    $  if [ -f /var/log/messages ]
    > then
    > echo "/var/log/message exists."
    > fi
    $ history | tail -2
       18  HISTCONTROL=ignorespace
       19  history | tail -2

    Note that the if is preceded by a space. Why do the second and subsequent lines of this if compound command still not appear in the history?

    Read the article

  • Backbone.js: How to utilize router.navigate to manipulate browser history?

    - by Xavier_Ex
    I am writing something like a registration process containing several steps, and I want to make it a single-page-like system, so after some studying Backbone.js is my choice. Every time the user completes the current step, they will click on a NEXT button I create, and I use the router.navigate method to update the URL, as well as loading the content of the next page and doing some fancy transition with javascript. The result is that the URL is updated while the page is not refreshed, giving a smooth user experience. However, when the user clicks on the back button of the browser, the URL gets updated to that of a previous step, but the content stays the same. My question is: how can I capture such an event and correctly load the content of the previous step to present to the user? Or even better, can I rely on the browser cache to load that previously loaded page? EDIT: In particular, I'm trying something like what is mentioned in this article.

    Read the article

  • How to migrate a codebase from one svn repo to another preserving history?

    - by chotchki
    I have a branch in a badly structured svn repo that needs to be stripped out and moved to another svn repository (I'm trying to clean things up a bit). If I do an 'svn log' and don't stop on copy/rename, I can see all 3427 commits that I care about. Is there some way to dump the revisions out, short of writing some major scripts? I would follow the advice in this question, but this branch has been moved all over the place and I would like to preserve the moves as well. Any ideas?

    Read the article

  • Combining Scrum, TFS2010 and Email to keep everyone in the loop

    - by Martin Hinshelwood
    Often you will receive rich information from your Product Owner (Customer) about tasks. That information can be in the form of Word documents, HTML emails and pictures, but you generally receive them in the context of an email. You need to keep these so your team can refer to them later, and so you can send a "done" when the task has been completed. This preserves the "history" of the task and allows you to keep relevant parties included in any future conversation. At SSW we keep the original email so that we can reply "done" and delete the email. But keeping it in your email does not help other members of the team if they complete the task and need to send the "done". Worse yet, the description field in Team Foundation Server 2010 (TFS 2010) does not support HTML and images, nor does the default task template support an "interested parties" or CC field. You can attach this content manually, but it can be time consuming.

    Figure: Description only supports plain text, and History supports HTML with no images

    What should we do?
    At SSW we always follow the rules, and it just so happens that we have rules both to achieve this and to make it easier. You should follow the existing Rules to Better Project Management and attach the email to your task, so you can refer to it and reply to it later when you close the task:
    - Do you know what Outlook add-ins you need?
    - Describe the work item request in an email
    - Use the Outlook add-in to move the email to a TFS Work Item
    When replying to an email with "done" you should follow:
    - Do you update the Team Companion template, so the email "subject" doesn't change?
    - Do you update the Team Companion template, so you can generate a proper "done" mail?
    Following these simple rules will help your Product Owner understand you better, and allow your team to collaborate with each other more effectively. An added bonus is that we are keeping the email history in sync with TFS: when you "reply all" to the email, all of the interested parties to the Task are included. This notifies those who may be blocked by your task, keeping them up to date with its status. This has been published as "Do you know to ensure that relevant emails are attached to tasks" in our Rules to Better Scrum using TFS.

    What could we do better?
    I would like to see this process automated, so that we capture the information correctly in the task without the need to use email. This would require a change to the process template in Team Foundation Server to add an "Interested Parties" field. Each reply to the email would need to be automatically processed into a Work Item. This could be done by adding a task identifier as the first item in the "Relates to" email header, and copying in an email address that you watch. This would then parse out the relevant information, add the new message to the history, update the "Interested Parties" field and attach the images. Upon reflection, it may even be possible, though more difficult, to do this using ONLY the History field, including some of the header information in there to build a "done" email with history. This would not currently deal with email "forks" well, but I think it would be adequate for our needs. It would be nice if we could find time to implement this, but currently it is but a pipe dream. Maybe Microsoft could implement something in the next version of Team Foundation Server; in the meantime we have a process that works well.

    Technorati Tags: Scrum, SSW Rules, TFS 2010, TFS 2008

    Read the article

  • How to rollback/undo a yum update on Fedora after messing up the Fedora versions

    - by misteryes
    I want to install texlive on my Fedora 16 laptop with the following procedure:

    # yum remove tex-* texlive-*
    # cat > /etc/yum.repos.d/texlive.repo <<EOF
    [texlive]
    name=texlive
    baseurl=http://jnovy.fedorapeople.org/texlive/2012/packages.f17/
    enabled=1
    gpgcheck=0
    EOF
    # yum update
    # yum install texlive

    After yum update, I noticed that my laptop is Fedora 16, while I had used 2012/packages.f17/, so I modified /etc/yum.repos.d/texlive.repo to use 2011/packages.fc16 and ran yum update again. However, there are many errors:

    [root@kitty esolve]# yum update
    Loaded plugins: auto-update-debuginfo, langpacks, presto, refresh-packagekit
    http://repos.fedorapeople.org/repos/leigh123linux/cinnamon/fedora-16/x86_64/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found : http://repos.fedorapeople.org/repos/leigh123linux/cinnamon/fedora-16/x86_64/repodata/repomd.xml
    Trying other mirror.
    Setting up Update Process
    Resolving Dependencies
    --> Running transaction check
    ---> Package dvipng.x86_64 0:1.14-1.fc15 will be obsoleted
    ---> Package kpathsea.x86_64 0:2007-66.fc16 will be obsoleted
    --> Processing Dependency: libkpathsea.so.4()(64bit) for package: evince-dvi-3.2.1-2.fc16.x86_64
    ---> Package mkvtoolnix.x86_64 0:5.8.0-1 will be updated
    ---> Package mkvtoolnix.x86_64 0:6.3.0-1 will be an update
    ---> Package nautilus-dropbox.x86_64 0:1.4.0-1.fc10 will be updated
    ---> Package nautilus-dropbox.x86_64 0:1.6.0-1.fc10 will be an update
    ---> Package texlive-dvipng-bin.x86_64 2:svn26509.0-19.20130317_r29408.fc17 will be obsoleting
    --> Processing Dependency: texlive-kpathsea-lib = 2:2012-19.20130317_r29408.fc17 for package: 2:texlive-dvipng-bin-svn26509.0-19.20130317_r29408.fc17.x86_64
    --> Processing Dependency: texlive-base for package: 2:texlive-dvipng-bin-svn26509.0-19.20130317_r29408.fc17.x86_64
    --> Processing Dependency: tex-dvipng for package: 2:texlive-dvipng-bin-svn26509.0-19.20130317_r29408.fc17.x86_64
    --> Processing Dependency: libpng15.so.15()(64bit) for package: 2:texlive-dvipng-bin-svn26509.0-19.20130317_r29408.fc17.x86_64
    --> Processing Dependency: libkpathsea.so.6()(64bit) for package: 2:texlive-dvipng-bin-svn26509.0-19.20130317_r29408.fc17.x86_64
    ---> Package texlive-kpathsea.noarch 2:svn28792.0-19.fc17 will be obsoleting
    --> Processing Dependency: texlive-kpathsea-bin for package: 2:texlive-kpathsea-svn28792.0-19.fc17.noarch
    --> Running transaction check
    ---> Package kpathsea.x86_64 0:2007-66.fc16 will be obsoleted
    --> Processing Dependency: libkpathsea.so.4()(64bit) for package: evince-dvi-3.2.1-2.fc16.x86_64
    ---> Package texlive-base.noarch 2:2012-19.20130317_r29408.fc17 will be installed
    ---> Package texlive-dvipng.noarch 2:svn26689.1.14-19.fc17 will be installed
    ---> Package texlive-dvipng-bin.x86_64 2:svn26509.0-19.20130317_r29408.fc17 will be obsoleting
    --> Processing Dependency: libpng15.so.15()(64bit) for package: 2:texlive-dvipng-bin-svn26509.0-19.20130317_r29408.fc17.x86_64
    ---> Package texlive-kpathsea-bin.x86_64 2:svn27347.0-19.20130317_r29408.fc17 will be installed
    ---> Package texlive-kpathsea-lib.x86_64 2:2012-19.20130317_r29408.fc17 will be installed
    --> Finished Dependency Resolution
    Error: Package: evince-dvi-3.2.1-2.fc16.x86_64 (@fedora)
           Requires: libkpathsea.so.4()(64bit)
           Removing: kpathsea-2007-66.fc16.x86_64 (@so-updates)
               libkpathsea.so.4()(64bit)
           Obsoleted By: 2:texlive-kpathsea-svn28792.0-19.fc17.noarch (texlive)
               Not found
    Error: Package: 2:texlive-dvipng-bin-svn26509.0-19.20130317_r29408.fc17.x86_64 (texlive)
           Requires: libpng15.so.15()(64bit)
     You could try using --skip-broken to work around the problem
     You could try running: rpm -Va --nofiles --nodigest

    And when I do yum install texlive, it simply tries to install the f17 version, which fails. What can I do to install the fc16 version? How can I undo the yum update done with 2012/packages.f17/? I tried yum history, and for today's history I only have:

    Loaded plugins: auto-update-debuginfo, langpacks, presto, refresh-packagekit
    ID  | Login user           | Date and time    | Action(s) | Altered
    -------------------------------------------------------------------------------
    124 | esolve ... <esolve>  | 2013-09-12 18:35 | Erase     |   24
    123 | root <root>          | 2013-08-23 11:08 | Update    |    1
    122 | root <root>          | 2013-08-21 14:13 | Update    |    1 <
    121 | esolve ... <esolve>  | 2013-05-31 15:36 | Install   |    1 >
    120 | root <root>          | 2013-05-29 15:13 | Install   |    1 <
    119 | root <root>          | 2013-04-18 13:13 | Update    |    1 ><

    which does not seem to be related to the yum update. The shell history:

    1003 yum update
    1004 vim
    1005 vim /etc/yum.repos.d/texlive.repo
    1006 yum update
    1007 yum install texlive
    1008 vim /etc/yum.repos.d/texlive.repo
    1009 clear
    1010 yum history
    1011 yum history list
    1012 vim
    1013 vim /etc/yum.repos.d/texlive.repo
    1014 yum history list
    1015 history

    I also tried yum history undo 124, but it failed!

    [root@kitty esolve]# yum history undo 124
    Loaded plugins: auto-update-debuginfo, langpacks, presto, refresh-packagekit
    http://repos.fedorapeople.org/repos/leigh123linux/cinnamon/fedora-16/x86_64/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found : http://repos.fedorapeople.org/repos/leigh123linux/cinnamon/fedora-16/x86_64/repodata/repomd.xml
    Trying other mirror.
    Undoing transaction 124, from Thu Sep 12 18:35:31 2013
        Erase R-2.14.1-1.fc16.x86_64 ?
        Erase R-core-2.14.1-1.fc16.x86_64 ?
        Erase R-devel-2.14.1-1.fc16.x86_64 ?
        Erase a2ps-4.14-12.fc15.x86_64 ?
        Erase docbook-utils-pdf-0.6.14-29.fc16.noarch ?
        Erase html2ps-1.0-0.7.b7.fc15.noarch ?
        Erase jadetex-3.13-10.fc15.noarch ?
        Erase kile-2.1.1-1.fc16.x86_64 ?
        Erase linuxdoc-tools-0.9.66-9.fc15.x86_64 ?
        Erase tetex-dvipost-1.1-12.fc15.x86_64 ?
        Erase tex-cm-lgc-0.5-18.fc15.noarch ?
        Erase tex-preview-11.86-6.fc16.noarch ?
        Erase texinfo-tex-4.13a-15.fc15.x86_64 ?
        Erase texlive-2007-66.fc16.x86_64 ?
        Erase texlive-dvips-2007-66.fc16.x86_64 ?
        Erase texlive-latex-2007-66.fc16.x86_64 ?
        Erase texlive-texmf-2007-40.fc16.noarch ?
        Erase texlive-texmf-dvips-2007-40.fc16.noarch ?
        Erase texlive-texmf-fonts-2007-40.fc16.noarch ?
        Erase texlive-texmf-latex-2007-40.fc16.noarch ?
        Erase texlive-utils-2007-66.fc16.x86_64 ?
        Erase texmaker-1:3.2.2-1.fc16.x86_64 ?
        Erase texmf-RR-Inria-4.11-inria.0.noarch ?
        Erase xdvik-22.84.14-9.fc15.x86_64 ?
    Error: No package(s) available to install

    Read the article

  • How to bind up arrow in ~/.inputrc (readline) for vim insert mode?

    - by Pawel Goscicki
    When in Readline apps with vim mode enabled in ~/.inputrc (set editing-mode vi), is there a way to bind the up arrow key? To previous history, for example. It seems I have to press the ESC key first; only then does it work. Here's my attempt at making it work (~/.inputrc):

    $if mode=vi
    # INSERT MODE
    set keymap vi-insert
    "\e[A": history-search-backward # up-arrow
    "\e[B": history-search-forward # down-arrow

    Also note that when I press Ctrl+v and then <Up>, it prints ^[[A.

    Read the article

  • Oracle Develop Newbies

    - by Cassandra Clark
    There are a number of us in the Oracle Technology Network team that came over from the Sun acquisition, so we are true Oracle Develop "newbies." We are boning up on Oracle history and thought it would be fun to test your knowledge too. Below are a few Oracle history questions. Post your answers in the comment section of the blog, and if you answer all questions correctly you will be listed in the next post as an "Oracle Genius". Feel free to turn the tables on your fellow blog readers by posting your own Oracle history questions. If you stump the community we'll add your question to our next post as well.

    Oracle History Quiz:
    - In 2003, what Applications rival company did Oracle acquire?
    - In which year was JDeveloper first released?
    - In what language was Oracle v 1.0 written?
    - What Oracle program is designed to recognize and reward members of the Oracle Technology and Applications communities for their contributions back to the Oracle community?
    - What party event draws in nearly 4,000 attendees every year during Oracle OpenWorld, Oracle Develop and now JavaOne?

    See you in September!

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #007

    - by pinaldave
    Here is a list of selected articles from SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and have listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane.

    2006
    Find Stored Procedure Related to Table in Database – Search in All Stored Procedure
    In 2006 I wrote a small script which will help users find all the Stored Procedures (SPs) which are related to one or more specific tables. This was quite a popular script; however, in SQL Server 2012 the same can be achieved using the new DMV sys.sql_expression_dependencies. I recently blogged about it in Find Referenced or Referencing Object in SQL Server using sys.sql_expression_dependencies.

    2007
    SQL SERVER – Versions, CodeNames, Year of Release
    1993 – SQL Server 4.21 for Windows NT
    1995 – SQL Server 6.0, codenamed SQL95
    1996 – SQL Server 6.5, codenamed Hydra
    1999 – SQL Server 7.0, codenamed Sphinx
    1999 – SQL Server 7.0 OLAP, codenamed Plato
    2000 – SQL Server 2000 32-bit, codenamed Shiloh (version 8.0)
    2003 – SQL Server 2000 64-bit, codenamed Liberty
    2005 – SQL Server 2005, codenamed Yukon (version 9.0)
    2008 – SQL Server 2008, codenamed Katmai (version 10.0)
    2012 – SQL Server 2012, codenamed Denali (version 11.0)

    Search String in Stored Procedure
    Searching for a string in stored procedures is one of the most frequent tasks developers do. They might be searching for a table, a view or any other detail. I have written a script to do the same in SQL Server 2000 and SQL Server 2005. This is a blog post worth bookmarking. There is an alternative way to do the same as well; here is the example.

    2008
    SQL SERVER – Refresh Database Using T-SQL
    NO! Some questions have a single answer: NO! You may want to read the question in the original blog post. I had a great time saying No!

    SQL SERVER – Delete Backup History – Cleanup Backup History
    SQL Server stores the history of every backup taken, forever. The history of all backups is stored in the msdb database, and older history is often no longer required. The following Stored Procedure can be executed with a parameter which specifies the days of history to keep. In the following example 30 is passed, to keep a month of history.

    2009
    Stored Procedures are Compiled on First Run – SP taking Longer to Run First Time
    Are stored procedures pre-compiled? Why does a Stored Procedure take a long time to run the first time? These are very common questions often discussed by developers and DBAs. There is an absolutely definite answer, but the question has been discussed forever. There is a misconception that stored procedures are pre-compiled. They are not pre-compiled, but compiled only during the first run. Every subsequent run is, for sure, pre-compiled. Read the entire article for an example and demonstration.

    Removing Key Lookup – Seek Predicate – Predicate – An Interesting Observation Related to Datatypes
    This is one of the most important performance tuning lessons on my blog. I suggest you spend time this weekend reading the four part series, and let me know what you think about the concepts I have demonstrated. Part 1 | Part 2 | Part 3 | Part 4
    Seek Predicate is the operation that describes the b-tree portion of the Seek. Predicate is the operation that describes the additional filter using non-key columns. Based on the description, it is very clear that Seek Predicate is better than Predicate, as it searches the indexes, whereas with Predicate the search is on non-key columns, which implies that the search is on the data in the page files itself.

    Policy Based Management – Create, Evaluate and Fix Policies
    This article covers the most spectacular feature of SQL Server, policy-based management, and how configuring SQL Server with the policy-based management architecture can make a powerful difference. Policy based management is loaded with several advantages. It can help you implement various policies for reliable configuration of the system. It also provides additional administration assistance to DBAs and helps them effortlessly manage various tasks of SQL Server across the enterprise.

    2010
    Recycle Error Log – Create New Log file without Server Restart
    Once I observed a DBA restarting SQL Server whenever he needed a new error log file. This was funny and sad at the same time. There is no need to restart the server to create a new log file or recycle the log file. You can run sp_cycle_errorlog and achieve the same result.

    Get Database Backup History for a Single Database
    Simple but effective script!

    Reducing CXPACKET Wait Stats for High Transactional Database
    The subject is very complex and I have done my best to simplify the concept. In simpler words, when a parallel operation is created for a SQL query, there are multiple threads for a single query. Each thread deals with a different set of the data (or rows). For various reasons, one or more of the threads lag behind, creating the CXPACKET wait stat: threads which finished first have to wait for the slower thread to finish. The wait by a specific completed thread is called the CXPACKET Wait Stat.

    Information Related to DATETIME and DATETIME2
    There is quite a lot of confusion around DATETIME and DATETIME2. DATETIME2 is also one of the more underutilized datatypes in SQL Server. In this blog post I have written a follow up to my earlier datetime series, where I clarify a few of the concepts related to datetime.
    Difference Between GETDATE and SYSDATETIME
    Difference Between DATETIME and DATETIME2 – WITH GETDATE
    Difference Between DATETIME and DATETIME2

    2011
    Introduction to CUME_DIST – Analytic Functions Introduced in SQL Server 2012
    SQL Server 2012 introduces the new analytical function CUME_DIST(). This function provides the cumulative distribution value. It would be very difficult to explain this in words, so I will attempt a small example to explain the function. Instead of creating a new table, I will be using the AdventureWorks sample database, as most developers use that for experiments.

    Introduction to FIRST_VALUE and LAST_VALUE – Analytic Functions Introduced in SQL Server 2012
    SQL Server 2012 introduces the new analytical functions FIRST_VALUE() and LAST_VALUE(). These functions return the first and last value from a list. It would be very difficult to explain this in words, so I'd like to attempt to explain their function through a brief example. Instead of creating a new table, I will be using the AdventureWorks sample database, as most developers use that for experiment purposes.

    OVER clause with FIRST_VALUE and LAST_VALUE – Analytic Functions Introduced in SQL Server 2012 – ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
    "Don't you think there is a bug in your first example, where FIRST_VALUE remains the same but the LAST_VALUE changes on every line? I think the LAST_VALUE should be the highest value in the window or set of results."

    Puzzle – Functions FIRST_VALUE and LAST_VALUE with OVER clause and ORDER BY
    You can see that rows number 2, 3, 4, and 5 have the same SalesOrderID = 43667. The FIRST_VALUE is 78 and the LAST_VALUE is 77. Now, if these functions were working on the maximum and minimum values, they should have given the answers 77 and 80 respectively, instead of 78 and 77. Also, the value of FIRST_VALUE is greater than the LAST_VALUE of 77. Why? Explain in detail.

    Introduction to LEAD and LAG – Analytic Functions Introduced in SQL Server 2012
    SQL Server 2012 introduces the new analytical functions LEAD() and LAG(). These functions access data from a subsequent row (for LEAD) and a previous row (for LAG) in the same result set, without the use of a self-join. It would be very difficult to explain this in words, so I will attempt a small example to explain the functions. Instead of creating a new table, I will be using the AdventureWorks sample database, as most developers use that for experiments.

    A Real Story of a Book Getting 'Out of Stock', to a 25% Discount Story Available
    Our book was out of stock within 48 hours of arriving in stock! We got a call from the online store with a request for more copies within 12 hours. But we had printed only as many as we had sent them. There were no extra copies. We finally talked to the printer to get more copies. However, due to festivals and holidays, the copies could not be shipped to the online retailer for two days. We knew for sure that they were going to be out of the book for 48 hours. This is the story of how we overcame that situation!

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • What is the best approach for inline code comments?

    - by d1egoaz
    We are doing some refactoring of a 20-year-old legacy codebase, and I'm having a discussion with my colleague about the format of comments in the code (PL/SQL, Java). There is no default format for comments, but in most cases people do something like this in the comment:

    // date (year, year-month, yyyy-mm-dd, dd/mm/yyyy), (author id, author name, author nickname) and comment

    The proposed format for future and past comments that I want is:

    // {yyyy-mm-dd}, unique_author_company_id, comment

    My colleague says that we only need the comment, and must reformat all past and future comments to this format:

    // comment

    My arguments:
    - For maintenance reasons, it's important to know when and who made a change (even though this information is in the SCM).
    - The code is living, and for that reason has a history.
    - Without the change dates it's impossible to know when a change was introduced, without opening the SCM tool and searching through the long object history.
    - The author is very important; a change made by one author can be more credible than a change made by another.
    - Agility reasons: no need to open and navigate through the SCM tool.
    - People would be more afraid to change something that someone did 15 years ago than something that was recently created or changed.
    - etc.

    My colleague's arguments:
    - The history is in the SCM.
    - Developers must not be aware of the history of the code directly in the code.
    - Packages get 15k lines long, and unstructured comments make these packages harder to understand.

    What do you think is the best approach? Or do you have a better approach to solve this problem?

    Read the article

  • Reformatting and version control

    - by l0b0
    Code formatting matters. Even indentation matters. And consistency is more important than minor improvements. But projects usually don't have a clear, complete, verifiable and enforced style guide from day 1, and major improvements may arrive any day. Maybe you find that

    SELECT id, name, address
    FROM persons
    JOIN addresses ON persons.id = addresses.person_id;

    could be better written as / is better written than

    SELECT persons.id, persons.name, addresses.address
    FROM persons
    JOIN addresses ON persons.id = addresses.person_id;

    while working on adding more columns to the query. Maybe this is the most complex of all four queries in your code, or a trivial query among thousands. No matter how difficult the transition, you decide it's worth it. But how do you track code changes across major formatting changes? You could just give up and say "this is the point where we start again", or you could reformat all queries in the entire repository history. If you're using a distributed version control system like Git, you can revert to the first commit ever and reformat your way from there to the current state. But it's a lot of work, and everyone else would have to pause work (or be prepared for the mother of all merges) while it's going on. Is there a better way to change history which gives the best of all results:
    - Same style in all commits
    - Minimal merge work
    To clarify, this is not about best practices when starting the project, but rather what should be done when a large refactoring has been deemed a Good Thing™ but you still want a traceable history. Never rewriting history is great if it's the only way to ensure that your versions always work the same, but what about the developer benefits of a clean rewrite? Especially if you have ways (tests, syntax definitions or an identical binary after compilation) to ensure that the rewritten version works exactly the same way as the original?

    Read the article

  • Storing revisions of a document

    - by dev.e.loper
    This is a follow-up question to my original question. I'm thinking of going with generating diffs and storing those diffs in the database 'History' table. I'm using the diff-match-patch library to generate what is called a 'patch'. On every save, I compare the previous and new versions and generate this patch. The patch can be used to generate a document at a specific point in time. My dilemma is how to store this data. Should I:
    a. Insert a new database record for every patch?
    b. Store these patches in a javascript array and store that array in the History table, so there is only one db History record for the document, with an array of all the patches?
    Concerns with:
    a. Too many db records generated. It will be slow and CPU intensive to query.
    b. Only one record. If the record is somehow corrupted/deleted, the entire revision history is gone.
    I'm looking for suggestions and concerns with either approach.
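    For illustration, here is a minimal sketch of option (a) in Python, assuming the Python port of diff-match-patch; the in-memory patches list and the function names are hypothetical stand-ins for the 'History' table, not code from the original question:

    # Minimal sketch of option (a): store one serialized patch per save.
    # The "patches" list is a stand-in for rows in the 'History' table.
    from diff_match_patch import diff_match_patch

    dmp = diff_match_patch()
    patches = []  # one entry per save, oldest first

    def save_revision(previous, current):
        # Diff the two versions and keep only the serialized patch.
        patch_list = dmp.patch_make(previous, current)
        patches.append(dmp.patch_toText(patch_list))

    def document_at(revision):
        # Rebuild the document at a point in time by replaying patches.
        text = ""
        for serialized in patches[:revision]:
            text, _applied = dmp.patch_apply(dmp.patch_fromText(serialized), text)
        return text

    save_revision("", "Hello world")
    save_revision("Hello world", "Hello brave new world")
    print(document_at(2))  # Hello brave new world

    The replay loop makes the trade-off behind concern (a) explicit: restoring an old revision costs one patch_apply per stored record, which is why a periodic full snapshot is a common complement to either approach.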

    Read the article

  • Aggregating cache data from OCEP in CQL

    - by Manju James
    There are several use cases where OCEP applications need to join stream data with external data, such as data available in a Coherence cache. OCEP's streaming language, CQL, supports simple cache-key based joins of stream data with data in Coherence (more complex queries will be supported in a future release). However, there are instances where you may need to aggregate the data in Coherence based on input data from a stream. This blog describes a sample that does just that.

    For our sample, we will use a simplified credit card fraud detection use case. The input to this sample application is a stream of credit card transaction data. The input stream contains information like the credit card ID, transaction time and transaction amount. The purpose of this application is to detect suspicious transactions and send out a warning event. For the sake of simplicity, we will assume that all transactions with amounts greater than $1000 are suspicious. The transaction history is available in a Coherence distributed cache. For every suspicious transaction detected, a warning event must be sent with the maximum amount, total amount and total number of transactions over the past 30 days, as shown in the diagram below.

    Application Input
    The stream input to the EPN contains events of type CCTransactionEvent. This input has to be joined with the cache of all credit card transactions. The cache is configured in the EPN as shown below:

    <wlevs:caching-system id="CohCacheSystem" provider="coherence"/>
    <wlevs:cache id="CCTransactionsCache" value-type="CCTransactionEvent"
        key-properties="cardID, transactionTime"
        caching-system="CohCacheSystem">
    </wlevs:cache>

    Application Output
    The output that must be produced by the application is a fraud warning event. This event is configured in the spring file as shown below. The source for the cardHistory property can be seen here.

    <wlevs:event-type type-name="FraudWarningEvent">
      <wlevs:properties type="tuple">
        <wlevs:property name="cardID" type="CHAR"/>
        <wlevs:property name="transactionTime" type="BIGINT"/>
        <wlevs:property name="transactionAmount" type="DOUBLE"/>
        <wlevs:property name="cardHistory" type="OBJECT"/>
      </wlevs:properties>
    </wlevs:event-type>

    Cache Data Aggregation using Java Cartridge
    In the output warning event, the cardHistory property contains data from the cache aggregated over the past 30 days. To get this information, we use a java cartridge method. This method uses Coherence's query API on the credit card transactions cache to get the required information. Therefore, the java cartridge method requires a reference to the cache. This may be set up by configuring it in the spring context file as shown below:

    <bean class="com.oracle.cep.ccfraud.CCTransactionsAggregator">
      <property name="cache" ref="CCTransactionsCache"/>
    </bean>

    This is used by the java class to set a static property:

    public void setCache(Map cache)
    {
        s_cache = (NamedCache) cache;
    }

    The code snippet below shows how the total of all the transaction amounts in the past 30 days is computed. The rest of the information required by the CardHistoryData object is calculated in a similar manner. The complete source of this class can be found here. To find out more information about using Coherence's API to query a cache, please refer to the Coherence Developer's Guide.

    public static CardHistoryData execute(String cardID)
    {
        …
        Filter filter = QueryHelper.createFilter(
            "cardID = :cardID and transactionTime > :transactionTime", map);
        CardHistoryData history = new CardHistoryData();
        Double sum = (Double) s_cache.aggregate(filter,
            new DoubleSum("getTransactionAmount"));
        history.setTotalAmount(sum);
        …
        return history;
    }

    The java cartridge method is used from CQL as seen below:

    select cardID, transactionTime, transactionAmount,
           CCTransactionsAggregator.execute(cardID) as cardHistory
    from inputChannel
    where transactionAmount > 1000

    This produces a warning event, with history data, for every credit card transaction over $1000. That is all there is to it. The complete source for the sample application, along with the configuration files, is available here. In the sample, I use a simple java bean to load the cache with initial transaction history data. An input adapter is used to create and send transaction events for the input stream.

    Read the article

  • How to verify the Liskov substitution principle in an inheritance hierarchy?

    - by Songo
    Inspired by this answer: the Liskov Substitution Principle requires that
    - Preconditions cannot be strengthened in a subtype.
    - Postconditions cannot be weakened in a subtype.
    - Invariants of the supertype must be preserved in a subtype.
    - History constraint (the "history rule"): objects are regarded as being modifiable only through their methods (encapsulation). Since subtypes may introduce methods that are not present in the supertype, the introduction of these methods may allow state changes in the subtype that are not permissible in the supertype. The history constraint prohibits this.
    I was hoping someone would post a class hierarchy that violates these 4 points, and how to solve the violations accordingly. I'm looking for an elaborate explanation, for educational purposes, on how to identify each of the 4 points in the hierarchy and the best way to fix it. Note: I was hoping to post a code sample for people to work on, but the question itself is about how to identify the faulty hierarchies :)
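    To make the exercise concrete, here is a deliberately broken hierarchy as a hypothetical Python sketch (illustrative names, not from the question) that violates the first two rules; the comments mark each violation:

    # Base contract: save() accepts any non-empty payload and returns a
    # non-empty storage key.
    class FileStore:
        def save(self, data: bytes) -> str:
            if not data:
                raise ValueError("data must be non-empty")
            return "key-1"

    class CappedFileStore(FileStore):
        def save(self, data: bytes) -> str:
            # LSP violation 1, strengthened precondition: input that the
            # supertype accepted is now rejected.
            if len(data) > 1024:
                raise ValueError("data must be at most 1 KiB")
            return super().save(data)

    class LossyFileStore(FileStore):
        def save(self, data: bytes) -> str:
            # LSP violation 2, weakened postcondition: the supertype
            # guaranteed a non-empty key; this may not deliver one.
            super().save(data)
            return ""

    Substituting either subclass where a FileStore is expected breaks callers that were correct against the base contract, and the fix runs in the same direction both times: a subtype may accept more and promise more, never accept less or promise less. Invariant and history-rule violations are the stateful analogues, for example a subtype adding a mutator that drives a supertype-immutable object into a state the supertype could never reach.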

    Read the article

  • SSMS Tools Pack 2.7 is released. New website, improved licensing and features.

    - by Mladen Prajdic
    New website
    Nice, isn't it? Cleaner, simpler, better looking and more modern. If you have any suggestions for further improvements I'd be glad to hear them.

    Simpler licensing
    With SSMS Tools Pack 2.7 the licensing is finally where it should be. It is now based on the activate/deactivate model. This way you can move a license from machine to machine with simple deactivation on one and reactivation on another machine. Much better, no? Because of very good feedback I have added an option for 6 machines and lowered the 4-machine option to 3 machines. This should make it much simpler for you to choose the right option for yourself.

    Improved features
    Version 2.5.3 was already extremely stable and 2.7 continues that tradition. Because of that I could fully focus on features, which is why 3.0 will rock even more than 2.7! ;) In version 2.7 I have addressed quite a few improvements you had been requesting for a while.

    SQL History
    This is probably the biggest time saver out there, therefore it's only fair that it gets a few important updates. If you have an existing .sql file opened, the Window Content History now saves your code to that existing file and also makes a backup in the SQL History log default location. Search is still done through the SQL History log, but the Tab Sessions Restore opens your existing .sql file. This way you don't have to remember to save your existing files by yourself anymore. A bug where you couldn't search properly if you copied the log files to a new location was fixed. Unfortunately this removed the option to filter a search with the time component; the smallest search interval is now one day. The SSMS Tools Pack also now remembers the visibility of the Current Window History window when you exit SSMS.

    SQL Snippets
    You can now set the position of the cursor in your snippets by placing {C} somewhere in your snippet. It's a small improvement but can be a huge time saver, since you don't have to move through the snippet to the desired location anymore.

    Run script on multiple databases
    Database choices can now be saved with a name and then loaded again next time. You can also choose to run the script in a new window for each chosen database.

    Search through grid results
    You can now go to the previous/next search result with the Prev/Next control inside the search window. This is extremely useful if you have a large resultset. It saves you the scrolling.

    CRUD generator
    Four new variables have been added:
    - |CurrentDate| writes the current date in format yyyy-MM-dd to your script
    - |CurrentTime| writes the current time in 24h format HH:mm:ss to your script
    - |CurrentWinUser| writes the currently logged on Windows user to your script
    - |CurrentSqlUser| writes the currently logged on SQL login to your script
    This was actually quite a requested feature, so if you have any other ideas for extra variables, do let me know.

    That's about it. I hope you're going to enjoy this version as much as the previous ones. Have fun!

    Read the article

  • Tools for Maintaining Branches in SVN

    - by Chris Conway
    My team uses SVN for source control. Recently, I've been working on a branch with occasional merges from the trunk, and it's been a fairly annoying experience (cf. Joel Spolsky's "Subversion Story #1"), so I've been looking for alternative ways to manage branches and merging. Given that a centralized SVN repository is non-negotiable, what I'd like is a set of tools that satisfy the following conditions:
    - Complete revision history should be stored in SVN for both trunk and branches.
    - Merging in either direction (and potentially criss-crossing) should be relatively painless.
    - Merging history should be stored in SVN to the greatest extent possible.
    I've looked at both git-svn and bzr-svn, and neither seems to be up to the job: basically, given the revision history they can export from the SVN repository, they can't seem to do any better a job of handling merges than SVN can. For example, after cloning the repository with git, the revision history for my branch shows the original branch off of trunk, but git doesn't "see" any of the interim SVN merges as "native" merges; the revision history is one long line. As a result, any attempt to merge from trunk in git yields just as many conflicts as an SVN merge would. (Besides, the git-svn documentation explicitly warns against using git to merge between branches.) Is there a way to adjust my workflow to make git satisfy the above requirements? Maybe I just need tips or tricks (or a separate merging tool?) to help SVN be better at merging into branches?

    Read the article

  • Populate a list from xml using python

    - by Sam
    I have an xml file in the following format:

    <food>
      <desert> cake </desert>
    </food>
    <history> currently in my belly </history>

    I want to create two lists, food and history, populated with "cake" and "currently in my belly" as strings. Is there an easy way to do it in python?
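    Assuming the two top-level elements are wrapped in a single root element (the fragment as posted has no root, so it is not well-formed on its own), a minimal sketch with the standard library might look like this:

    # Minimal sketch using only the standard library. The <root> wrapper is
    # a hypothetical fix-up; the fragment as posted has no single root.
    import xml.etree.ElementTree as ET

    xml_data = """
    <root>
      <food>
        <desert> cake </desert>
      </food>
      <history> currently in my belly </history>
    </root>
    """

    root = ET.fromstring(xml_data)
    food = [e.text.strip() for e in root.iter("desert")]
    history = [e.text.strip() for e in root.iter("history")]
    print(food)     # ['cake']
    print(history)  # ['currently in my belly']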

    Read the article

  • Javascript Back Button - Stop the initial load of back button from working

    - by Evan
    Hi, I'm using a javascript back button link and forward button link to control the user's history inside a modal/lightbox window. The challenge I have is that when the modal window is launched and the "back" and "forward" buttons are present for the user to click, clicking the javascript back button right after the window opens actually closes the modal window, because the javascript history takes the user back to the page PRIOR to the opening of the modal window. So, in essence, I'm trying to stop the "back" button from working on the initial load of the modal/lightbox.

    <a href="javascript:history.go(-1)">Back Button</a>
    <a href="javascript:history.go(1)">Forward Button</a>

    Is this possible?

    Read the article

  • CSharp: Testing a Generic Class

    - by Jonas Gorauskas
    More than a question, per se, this is an attempt to compare notes with other people. I wrote a generic History class that emulates the functionality of a browser's history. I am trying to wrap my head around how far to go when writing unit tests for it. I am using NUnit. Please share your testing approaches below. The full code for the History class is here (http://pastebin.com/ZGKK2V84).
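    In the spirit of comparing notes: the pastebin source isn't reproduced here, but a browser-style history usually reduces to two stacks. Here is a hypothetical Python sketch of that behaviour (an assumed design, not the author's C# code), mainly to suggest which invariants are worth unit testing:

    # Hypothetical two-stack model of browser-like history; not the
    # author's C# implementation.
    class History:
        def __init__(self):
            self._back = []
            self._forward = []
            self.current = None

        def visit(self, page):
            if self.current is not None:
                self._back.append(self.current)
            self.current = page
            self._forward.clear()  # a new visit invalidates the forward stack

        def back(self):
            if self._back:  # no-op at the oldest entry
                self._forward.append(self.current)
                self.current = self._back.pop()
            return self.current

        def forward(self):
            if self._forward:  # no-op at the newest entry
                self._back.append(self.current)
                self.current = self._forward.pop()
            return self.current

    The tests then fall out of the invariants: back() after a single visit() is a no-op, visit() clears the forward stack, and back() followed by forward() returns to the same page.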

    Read the article

  • Javascript back button for iframe parent window

    - by DisgruntledGoat
    I have some pages with iframes in them. I want to add a link/button inside the iframe, to make the browser go back one page in history. But I want the PARENT to go back, not the iframe itself. I originally had this, which makes the iframe page go back (if it exists):

    <a href="javascript:history.back()">&laquo; Go back</a>

    I've tried window.parent.history.back() and window.parent.document.history.back(), but neither one works. There are no cross-domain issues accessing the iframe from the parent and vice versa.

    Read the article

  • Mercurial: What is the benefit of fixing errors in earlier versions

    - by Ken Earley
    According to the guide, under the heading "Fixing errors in earlier revisions", it states this: "When you find a bug in some earlier revision you have two options: either you can fix it in the current code, or you can go back in history and fix the code exactly where you did it, which creates a cleaner history." How does going back in history make it cleaner? It still makes a new changeset at tip. Does it have something to do with what is recorded as its parent? Is there a way to view the logs so that the newly inserted changeset appears in that order? This lesson is under the main heading of "Lone developer with nonlinear history". Is this good practice when working on a team?

    Read the article

  • Mercurial central server file discrepancy (using 'diff to local')

    - by David Montgomery
    Newbie alert! OK, I have a working central Mercurial repository that I've been working with for several weeks. Everything has been great until I hit a really bizarre problem: my central server doesn't seem to be synced to itself? I only have one file that seems to be out of sync right now, but I really need to know how this happened, to prevent it from happening in the future. Scenario:
    1. Created the Mercurial repository on the server using an existing project directory. The directory contained the file 'mypage.aspx'.
    2. On my workstation, I cloned the central repository.
    3. I made an edit to mypage.aspx.
    4. hg commit, then hg push from my workstation to the central server.
    5. Now if I look at mypage.aspx in the server's repository using TortoiseHg's repository explorer, I see the change history for mypage.aspx: an initial check-in and one edit. However, when I select 'Diff to local', it shows that the current version on the server's disk is the original version, not the edited version!
    I have not experimented with branching at all yet, so I'm sure I'm not getting a branch problem. 'hg status' on the server or client returns no pending changes. If I create a clone of the server's repository in a new location, I see the same change history as I would expect, but the file on disk doesn't contain my edit. So, to recap:
    - Central repository = original file, but shows the change in revision history (bad)
    - Local repository 'A' = updated file, shows the change in revision history (good)
    - Local repository 'B' = original file, but shows the change in revision history (bad)
    Help please! Thanks, David

    Read the article

  • Data in column not changed

    - by shanks
    I have SQL Server 2005, and when I run the query below, data from the RealTimeLog table is transferred to History. But when new data comes into RealTimeLog, the old data in History is not replaced by the new data; that is, OutTime is not updated with the new data from RealTimeLog.

    insert into History (UserID, UserName, LogDate, [InTime], [OutTime])
    SELECT UserID, UserName, [LogDate],
           CONVERT(nvarchar, MIN(CONVERT(datetime, [LogTime], 108)), 108),
           CONVERT(nvarchar, MAX(CONVERT(datetime, [LogTime], 108)), 108)
    From RealTimeLog
    where not Exists
          (select * from History H
           Where H.UserID = RealTimeLog.UserID
             AND H.UserName = RealTimeLog.UserName
             AND H.LogDate = RealTimeLog.LogDate)
    GROUP BY UserID, UserName, [LogDate]
    ORDER BY UserID, [LogDate]

    For example: 1 Shanks 02/05/2010 9:00 10:00. If a new max time, suppose 11:00, is generated in RealTimeLog, it is not inserted into the History table and the output remains the same as above.

    Read the article
