Search Results

Search found 1213 results on 49 pages for 'sparse checkout'.

Page 44/49 | < Previous Page | 40 41 42 43 44 45 46 47 48 49  | Next Page >

  • questions about using svn

    - by ajsie
    I have set up an SVN repository on a remote Ubuntu server using WebDAV (http://ip/svn), but I have some questions. Should I create one SVN repository for each project folder, i.e. run "svnadmin create" 5 times for 5 projects, or should I import them into separate folders of a single repository? After creating a repository and importing a project folder into it with "svn import", should I DELETE my original local folder, so the project only exists in the repository on the remote Ubuntu server? Is this safe, or should I still keep a local copy for some reason (even though I won't work in it)? I use NetBeans to check projects out of the repository, then edit the files and commit the changes, but how should I update the live web server content (in /var/www)? Should I run "svn checkout" on the Ubuntu server and check it out to /var/www, or should I use NetBeans to check out to a local folder and then upload the files to /var/www with FTP or WebDAV (and which of those should I use)? Say 4 programmers are working with the repository, checking out and committing: how do I see what is happening in the repository, who made which changes, and all the changes from day 1 to day 213? Do NetBeans/Eclipse let me see this kind of information, or do I need another application for it? Should I have one SVN user per programmer on the Ubuntu server, i.e. run "htpasswd" 4 times for 4 programmers? And how do I put all these users in the same group so that I can set file access permissions for the SVN group and all its members?
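
    A minimal command sketch of one common arrangement (one repository per project, a server-side deployment checkout in /var/www, one htpasswd entry per programmer). The paths and user names here are illustrative assumptions, not taken from the question:

      svnadmin create /var/svn/project1                                    # repeat once per project
      svn import ./project1 http://ip/svn/project1/trunk -m "initial import"
      svn checkout http://ip/svn/project1/trunk /var/www/project1          # deployment working copy, refreshed later with "svn up"
      htpasswd -c /etc/apache2/dav_svn.passwd alice                        # -c only the first time; repeat (without -c) per programmer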

    Read the article

  • Compile subversion on CentOs

    - by peter
    I have downloaded, compiled and installed the following so far: apr-1.3.9, apr-util-1.3.9, sqlite-3.6.23, zlib-1.2.4, libtool-2.2.6b. Now, after downloading subversion-1.6.9, the configure step works fine but compiling ends with the following error:

      cd subversion/svn && /bin/sh /root/subversion-1.6.9/libtool --tag=CC --silent --mode=link gcc -g -O2 -g -O2 -pthread -rpath /usr/local/lib -o svn add-cmd.o blame-cmd.o cat-cmd.o changelist-cmd.o checkout-cmd.o cleanup-cmd.o commit-cmd.o conflict-callbacks.o copy-cmd.o delete-cmd.o diff-cmd.o export-cmd.o help-cmd.o import-cmd.o info-cmd.o list-cmd.o lock-cmd.o log-cmd.o main.o merge-cmd.o mergeinfo-cmd.o mkdir-cmd.o move-cmd.o notify.o propdel-cmd.o propedit-cmd.o propget-cmd.o proplist-cmd.o props.o propset-cmd.o resolve-cmd.o resolved-cmd.o revert-cmd.o status-cmd.o status.o switch-cmd.o tree-conflicts.o unlock-cmd.o update-cmd.o util.o ../../subversion/libsvn_client/libsvn_client-1.la ../../subversion/libsvn_wc/libsvn_wc-1.la ../../subversion/libsvn_ra/libsvn_ra-1.la ../../subversion/libsvn_delta/libsvn_delta-1.la ../../subversion/libsvn_diff/libsvn_diff-1.la ../../subversion/libsvn_subr/libsvn_subr-1.la /usr/local/apr/lib/libaprutil-1.la -lexpat /usr/local/apr/lib/libapr-1.la -lrt -lcrypt -lpthread -ldl
      /usr/bin/ld: cannot find -lexpat
      collect2: ld returned 1 exit status
      make: *** [subversion/svn/svn] Error 1

    The file /usr/local/apr/lib/libapr-1.la exists and seems to be OK (from a permissions perspective). What could be the problem here? Thanks, Peter
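
    A likely fix on CentOS, assuming the cause is simply that no expat development library is visible to the linker (the package name is the usual CentOS one, not something from the question):

      yum install expat-devel              # provides libexpat.so so that -lexpat can resolve
      cd /root/subversion-1.6.9 && make    # re-run the failed link step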

    Read the article

  • Eclipse CDT setup for remote build

    - by Posco Grubb
    Is there a better way to set up Eclipse CDT for local editing and remote building? I am working on a C++ project that uses GNU make on Linux. The code is under CVS on a Linux server. When I'm in the lab, I use Eclipse CDT on a Linux-x64 PC. The project is built on a Linux-x86 PC. All the computers in the lab (including the CVS server) have NFS mounts. When I'm at home, I use Eclipse CDT on a Windows 7 PC. The Windows PC connects to the Linux CVS server via an SSH tunnel. To edit source, I rsync the C++ project under the Linux Eclipse workspace back to my Windows Eclipse workspace. (I can also do a remote CVS checkout on the Windows PC.) To build from home, I use a custom build command that SSHes to the Linux-x86 PC, rsyncs the C++ project from my Windows Eclipse workspace to my Linux Eclipse workspace, and then runs make on the Linux-x86 PC, specifying the correct path for the Makefile. To go back and forth between lab and home without committing my changes to CVS every time, I use rsync: when I transition from lab to home, I rsync sources to my Windows Eclipse workspace, and when I build from home, the sources get rsynced back to the Linux Eclipse workspace. Is there a better, less wonky way to do this? (I'm NOT interested in remote debugging.)
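
    For reference, the remote-build step described above boils down to something like the following sketch (the host and workspace paths are made up for illustration):

      rsync -az --delete -e ssh ./project/ build-host:workspace/project/    # push local edits to the build box
      ssh build-host 'make -C workspace/project'                            # run GNU make remotely in that directory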

    Read the article

  • Using SVN post-commit hook to update only files that have been committed

    - by fondie
    I am using an SVN repository for my web development work. I have a development site set up which holds a checkout of the repository, and an SVN post-commit hook so that whenever a commit is made to the repository the development site is updated:

      cd /home/www/dev_ssl
      /usr/bin/svn up

    This works fine, but due to the size of the repository the updates take a long time (approx. 3 minutes), which is rather frustrating when making regular commits. What I'd like is to change the post-commit hook to update only those files/directories that have been committed, but I don't know how to go about doing this. Updating the "lowest common directory" would probably be the best solution, e.g. if committing the following files:

      /branches/feature_x/images/logo.jpg
      /branches/feature_x/css/screen.css

    it would update the directory /branches/feature_x/. Can anyone help me create a solution that achieves this? Thanks! Update: The repository and development site are located on the same server, so network issues shouldn't be involved. CPU usage is very low, and I/O should be OK (it's running on a high-spec dedicated server). The development site is approx. 7.5 GB in size and contains approx. 600,000 items, mainly due to having multiple branches/tags.
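
    A sketch of a post-commit hook that only updates the directories touched by the new revision. It is untested and assumes the hook's two standard arguments plus the same /home/www/dev_ssl checkout as above:

      #!/bin/sh
      REPOS="$1"
      REV="$2"
      WC=/home/www/dev_ssl
      # svnlook lists each changed directory for this revision, e.g. "branches/feature_x/css/"
      /usr/bin/svnlook dirs-changed -r "$REV" "$REPOS" | while read dir; do
          /usr/bin/svn up --quiet "$WC/$dir"
      done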

    Read the article

  • Paypal Encrypted Website payments

    - by John Isaacks
    I am trying to integrate the PayPal Website Payments Standard "Cart Upload" payment type into my shopping cart. I integrated Google Checkout a while back and did not find it nearly as confusing as I'm finding PayPal. I am getting info on how to encrypt the cart from here: https://cms.paypal.com/us/cgi-bin/?&cmd=_render-content&content_ID=developer/e_howto_html_encryptedwebpayments#id08A3I0P017Q. PayPal says I need to generate a private key and a public certificate using OpenSSL. I went to OpenSSL and downloaded the latest release, which is just a folder containing various files; I see no application I can use and am not sure what to do here. Even if I were to get OpenSSL to generate a private key and public cert, the next step is to download either an MS or Java command-line tool to create the encrypted cart ahead of time with the cart total, tax, etc., which sounds crazy to me - am I supposed to do this manually before every order? Obviously I do not know beforehand which items the customer is going to buy, so I need this to be done on the fly on my website using PHP. But I am completely lost. There has to be a way to set up dynamic secure cart uploads to PayPal. Can someone please point me in the right direction?
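
    For the key/certificate part, the usual OpenSSL command-line invocation looks roughly like this (the file names are arbitrary, and the Windows builds of OpenSSL expose the same openssl binary):

      openssl genrsa -out my-prvkey.pem 1024                                    # private key
      openssl req -new -key my-prvkey.pem -x509 -days 365 -out my-pubcert.pem   # self-signed public certificate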

    Read the article

  • Git submodules not updating?

    - by DavidYell
    I have a project in which I've included some libraries as submodules. They work fine on the machine they were added on, but when I get home and check out the repo, I get the folders for the submodules but they are empty. My .gitmodules:

      Neon@Neon-PC /cygdrive/c/xampp/htdocs/learning-lithium
      $ cat .gitmodules
      [submodule "libraries/lithium"]
          path = libraries/lithium
          url = git://github.com/UnionOfRAD/lithium.git
      [submodule "app/webroot/css/elements"]
          path = app/webroot/css/elements
          url = https://github.com/dmitryf/elements.git
      [submodule "app/libraries/li3_markdown"]
          path = app/libraries/li3_markdown
          url = https://github.com/sandelius/li3_markdown.git
      [submodule "app/webroot/markitup"]
          path = app/webroot/markitup
          url = https://github.com/markitup/1.x.git

    Config and status commands:

      $ git submodule
      -af14f48b419310935446176290e1f9dc641400e0 app/libraries/li3_markdown
      -ebdcd8ca09c874f5e2ef81ec198cc441f37a4f74 app/webroot/css/elements
      -328291e49a3c7e1fb76b3342f112734864836205 app/webroot/markitup
      -4980010526d05c556c496ff63951da31828c5943 libraries/lithium
      $ git submodule update
      $ git submodule status
      -af14f48b419310935446176290e1f9dc641400e0 app/libraries/li3_markdown
      -ebdcd8ca09c874f5e2ef81ec198cc441f37a4f74 app/webroot/css/elements
      -328291e49a3c7e1fb76b3342f112734864836205 app/webroot/markitup
      -4980010526d05c556c496ff63951da31828c5943 libraries/lithium

    I added these as you normally would, with "git submodule add <repo> <path>" followed by "git submodule init". The submodules are hosted on GitHub and my repo is hosted on Bitbucket, although I'm not sure whether that is relevant.
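
    Worth noting: in "git submodule status" output, a leading "-" means the submodule is registered but not initialized in that particular clone. A sketch of the usual step on the second machine:

      git submodule update --init --recursive   # init copies the .gitmodules entries into .git/config, update then clones and checks out the recorded commits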

    Read the article

  • How to find if a branch is a locally tracked branch or user created local branch?

    - by Senthil A Kumar
    I have a remote branch tracked locally in my repository using 'git branch -b branch-name origin/branch-name'. My remote branch is test2/test2 (origin/branch-name), which is being tracked locally as test2. The origin is also named test2. I haven't checked out my local tracking branch test2. When I do a 'git pull origin remote-branch:local-tracked-branch' I get this error:

      [test2]$ git pull test2 test2:test2
      From /gitvobs/git_bare/test2
       ! [rejected]        test2 -> test2  (non fast forward)

    Whereas when I check out my local tracking branch test2 and pull with 'git pull test2 test2', I don't get the error:

      From /gitvobs/git_bare/test2
       * branch            test2 -> FETCH_HEAD
      Auto-merging a.txt
      Automatic merge failed; fix conflicts and then commit the result.

    I know that adding a + (git pull test2 +test2:test2) would help, but it overwrites local changes. So how do I know which of my local branches were created by me locally with 'git branch new-branch-name' and which are tracking remote branches via 'git branch -b branch-name origin/branch-name'?
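
    A quick way to tell the two kinds of branches apart: branches set up to track something get branch.<name>.remote and branch.<name>.merge entries in the config, plain local branches do not (branch names below are illustrative):

      git config --get branch.test2.remote   # prints the remote name for a tracking branch, nothing for a purely local one
      git config --get branch.test2.merge    # prints refs/heads/test2 for a tracking branch
      git for-each-ref --format='%(refname:short) %(upstream)' refs/heads/   # recent git: every local branch with its upstream, if any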

    Read the article

  • git crlf configuration in mixed environment

    - by Jonas Byström
    I'm running a mixed environment and keep a central, bare repository where I pull and push most of my stuff. This central repository runs on Linux, and I check out onto Windows XP/7, Mac and Linux. In all repositories I put the following in my .git/config:

      [core]
          autocrlf = true

    I don't have the flag safecrlf=true anywhere. When I modify stuff on my first Windows machine (XP) there is no problem, and when I look at the diff it looks fine. But when I do the same on the other Windows machine (7), all lines are shown as changed, even though the local line endings are \r\n as expected (checked in a hex editor). The same applies to a Mac OS X machine. Sometimes I get the feeling that the different systems are wrestling over line endings, but I can't be sure (I'm losing track of all the times I change specific files). I didn't originally have autocrlf set; I set the flag many months back. Could that be causing my current problems? Do I need to clone everything again to lose some old baggage? Or are there other things that need configuring too? I tried git checkout -- . about a million times, but with no success.
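
    One commonly suggested recipe (a sketch, not taken from the question) is to stop relying on each machine's autocrlf and normalize once via .gitattributes; it rewrites tracked files, needs git 1.7.2+ for text=auto, and is worth trying on a scratch clone first:

      echo '* text=auto' >> .gitattributes   # let git decide text vs binary and store text files with LF endings
      git rm --cached -r .                   # empty the index; the working tree is untouched
      git reset --hard                       # rebuild index and working tree with the new attribute in effect
      git add .                              # re-stage everything; files being normalized produce "CRLF will be replaced by LF" warnings
      git commit -m "Normalize line endings"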

    Read the article

  • TortoiseSVN lists files as modified, but they are identical

    - by BJ Safdie
    I am merging a hot fix from our QA branch back into our Dev branch. Five files have changed. I do a fresh checkout of the Dev branch, then do a merge (range of revisions) from QA into the Dev working copy. It brings in five files, and there is a conflict on an externals property and an ignore property, which I resolve by "using local" (Dev). When I check modifications or commit, I expect to see the five files I merged as the only changes. However, close to 700 "modified" files show up in the commit dialog. If I select one of these files and "Compare with base", WinMerge comes up and says the files are identical. I have tried this with the file dates set to "last committed" and not. Why are all of these files showing up as modified when they are identical? What in the merge is causing this? How do I prevent SVN/TortoiseSVN from getting confused this way in the future?
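
    A quick command-line check of whether those 700 "modifications" are property-only changes (typically svn:mergeinfo written by the merge) rather than text changes; the file path below is just a placeholder:

      svn status | less               # a flag in the second column (e.g. " M") means only a property changed, not the contents
      svn diff some/identical/file    # property-only changes appear as "Property changes on: ..." blocks with no text diff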

    Read the article

  • Is there a way to easily convert a series of tarballs of a source tree into a git repository?

    - by Hotei
    I'm new to git and I have a moderately large number of weekly tarballs from a long-running project. Each tarball has on average a few hundred files in it. I'm looking for a git strategy that will allow me to add the expanded contents of each tarball to a new git repository, starting from version 1.001 and going through version 1.650. At this stage of the project 99.5% of tarball(n) is just a copy of version(n-1) - in other words, a perfect candidate for git. The desired end result is to have only the master branch remaining at the end of the process. I think I know git well enough to do this "by hand". As I understand it there is no possibility of a merge conflict, since there will be no opportunity to change master before the next version is added and committed. A shell script is my first guess, but I'm not sure how well bash will like it when git checkout branch_n gets processed while bash is executing in branch_n-1. For the purposes of this project the host environment is Ubuntu 10.4; available resources are 8 GB RAM, 500 GB of free disk space and a 4-CPU processor at 3 GHz. I don't need someone else to solve the problem, but I could use a nudge in the right direction as to how a git expert would approach it. Any advice from someone who's "been there, done that" would be appreciated. Hotei PS: I have looked at the site's suggested "related questions" and found nothing relevant.
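
    Since everything ends up on master anyway, one straightforward sketch skips branches entirely and just replays the tarballs as successive commits. The paths and the --strip-components assumption (one top-level directory per tarball) are guesses:

      git init imported && cd imported
      for tb in /path/to/tarballs/project-*.tar.gz; do
          find . -mindepth 1 -maxdepth 1 ! -name .git -exec rm -rf {} +   # clear the previous version, keep .git
          tar xzf "$tb" --strip-components=1                              # unpack this week's snapshot in place
          git add -A                                                      # stages additions, modifications and deletions
          git commit -m "Import $(basename "$tb" .tar.gz)"
      done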

    Read the article

  • m2eclipse: How to set Eclipse project settings when importing a maven project?

    - by Marius Andreiana
    Using the m2eclipse Eclipse plugin, everybody on the dev team should be able to check out the source code, import the Maven project into Eclipse and be good to go. I saw that m2eclipse is being merged into Eclipse 3.7 and that maven-eclipse-plugin won't be maintained any longer, so I'm looking for an m2eclipse-based solution (without running "mvn eclipse:clean eclipse:eclipse" before project import, which is what maven-eclipse-plugin does). maven-eclipse-plugin allows this in pom.xml:

      <additionalConfig>
        <file>
          <name>.settings/com.google.gdt.eclipse.core.prefs</name>
          <content><![CDATA[
      eclipse.preferences.version=2
      jarsExcludedFromWebInfLib=
      warSrcDir=${project.build.directory}/${project.build.finalName}
      warSrcDirIsOutput=true
      ]]>
          </content>
        </file>
      </additionalConfig>

    The more general question is: how would m2eclipse do something similar? For some cases, just saving the Eclipse .settings prefs file works (e.g. org.eclipse.jdt.ui.prefs), but in this case com.google.gdt.eclipse.core.prefs is always overwritten on m2eclipse project import. A specific question is asked here, with no reply. Thanks! UPDATE: Not possible now, see request

    Read the article

  • How to automatically split git commits to separate changes to a single file

    - by Hercynium
    I'm just plain stuck as to how to accomplish this, or whether it's even possible. Even if it can be done, I wonder if it could be setting us up for a messed-up, unmanageable repository. I have set up two branches of the code base. One is "master" and the other is "prod". The HEAD of prod is always the latest code in production, and master is the main development branch. Here's the problem, though: we're converting from CVS here at $work and most of the developers are still getting used to git. Their CVS workflow involved tagging versions of individual files for production, then updating the servers using the tag. Unfortunately, this has led to sloppy practices like committing unrelated changes together and then tagging the files after the fact... and the devs want to know how they can do the following: in their local repos, they hack and commit to their hearts' delight, then at the end of the day run a command that takes a list of files and merges the day's commits to those files - and only those files - into their local prod, even if those commits combine changes to other files. I know how to split commits with git rebase --interactive, but I have no clue how I would automate splitting commits at all, never mind the way I want to. I do realize the simplest thing would be to just tell them to switch to their prod branches, check the files out from their master branches into the working tree and then commit to prod. My problem with that is losing the history of their commits over the day.
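
    For reference, the "simplest thing" mentioned at the end looks roughly like this (file names are placeholders); it promotes the current content of the chosen files without preserving their individual commit history:

      git checkout prod
      git checkout master -- path/to/file1 path/to/file2   # copy those files' content from master into the working tree and index
      git commit -m "Promote file1 and file2 to prod"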

    Read the article

  • How can this code throw a NullReferenceException?

    - by Davy8
    I must be going out of my mind, because my unit tests are failing because the following code is throwing a null reference exception:

      int pid = 0;
      if (parentCategory != null)
      {
          Console.WriteLine(parentCategory.Id);
          pid = parentCategory.Id;
      }

    The line that's throwing it is "pid = parentCategory.Id;". The Console.WriteLine is just there for debugging in the NUnit GUI, but it outputs a valid int. Edit: it's single-threaded, so parentCategory can't be assigned null from some other thread, and the fact that the Console.WriteLine successfully prints the value shows that the next line shouldn't throw. Edit: relevant snippets of the Category class:

      public class Category
      {
          private readonly int id;

          public Category(Category parent, int id)
          {
              Parent = parent;
              parent.PerformIfNotNull(() => parent.subcategories.AddIfNew(this));
              Name = string.Empty;
              this.id = id;
          }

          public int Id
          {
              get { return id; }
          }
      }

    Well, if anyone wants to look at the full code, it's on Google Code at http://code.google.com/p/chefbook/source/checkout. I think I'm going to try rebooting the computer... I've seen pretty weird things fixed by a reboot. Will update after reboot. Update: Mystery solved. It looks like NUnit shows the error line as the last successfully executed statement... Copy/pasting the test into a new console app and running it in VS showed that it was actually a line after the if block (not shown) that contained the null reference. Thanks for all the ideas, everyone. +1 to everyone who answered.

    Read the article

  • ISO/IEC Website and Charging for C and C++ Standards

    - by Michael Aaron Safyan
    The ISO C Standard (ISO/IEC 9899) and the ISO C++ Standard (ISO/IEC 14882) are not published online; instead, one must purchase the PDF for each of those standards. I am wondering what the rationale behind this is... is it not detrimental to both the C and C++ programming languages that the authoritative specification for these languages is not made freely available and searchable online? Doesn't this encourage the use of possibly inaccurate, non-authoritative sources for information regarding these standards? While I understand that much time and effort has gone into developing the C and C++ standards, I am still somewhat puzzled by the choice to charge for the specification. The Open Group Base Specifications, for example, are available for free online; they make money by charging for certification. Does anyone know why the ISO standards committees don't make their revenue by certifying standards compliance instead of charging for these documents? Also, does anyone know whether the ISO standards committee's atrocious-looking website is intentionally made to look that way? It's as if they don't want people visiting and buying the spec. One last thing... the C and C++ standards are generally described as "open standards". While I realize that this means anyone is permitted to implement the standard, should that definition of "open" be revised? Charging for the standard rather than making it openly available seems contrary to the spirit of openness. P.S. I do have copies of ISO/IEC 9899:1999 and ISO/IEC 14882:2003, so please no remarks about being cheap or anything... although if you are tempted to say such things, you might want to consider the high school, undergraduate, and graduate students who might not have all that much extra cash. Also, you might want to consider the fact that the ISO website is really sketchy and doesn't even tell you the cost until you proceed to the checkout... which doesn't exactly encourage one to go and get a copy, now does it?

    Read the article

  • Max Daily Budget exceeded and Billing Status "Changing Daily Budget"

    - by draftpik
    We've exceeded the Max Daily Budget for our app, but we can't increase the budget due to a serious flaw in Google's billing system. Google App Engine and Google Wallet do not have very capable support for multiple sign-in. As a result, when I went to change the budget it used the wrong Google Wallet account (a different Google Account I was signed in as). I had to go back and try again, but now our GAE app shows the following status:

      Billing Status: Changing Daily Budget
      Your account has been locked while we process your budget changes. If you were redirected to
      Google Checkout but did not complete the process, your settings will remain unchanged. (You will
      be able to make changes to your budget settings again once the outstanding payment is processed.)

    Now I'm completely prevented from making any billing changes, our app is shut off (over quota), and there is NOTHING I can do to fix it. This is a fundamental flaw in App Engine's billing system and its Google Wallet integration. Has anyone run into this before? Is there a workaround anyone is aware of? Right now, our production app is completely down thanks to this issue, so any help you can offer would be greatly appreciated. If you're from Google and might be able to help on the backend, our app id is "nhldraftpik". Thanks! Brian

    Read the article

  • At first I could access the repository, the next time I cannot

    - by Banani
    Hi! I have configured svnserve (1.6.5, plain, without Apache) on Fedora 12. Right after configuration I could run svn subcommands that access the repository, such as commit, update, checkout and list. But the next time I tried those commands (after stopping svnserve with Ctrl-C and then starting it again), I could not access the repository. This happens both from the local machine and from a remote machine. I ran svn and svnserve as below:

      svn commit svn://127.0.0.1/myrepository/                                      (from a local client)
      svnserve -d --foreground --listen-port=3690 -r /path-to-repository/myrepository/

    To understand the problem better, I created another repository and found the same behavior: first I could access the repository, and then I could not. I tried running strace on svnserve, but I don't understand much of it. Below is partial output:

      accept(3, {sa_family=AF_INET, sin_port=htons(54425), sin_addr=inet_addr("127.0.0.1")}, [16]) = 4
      fcntl64(4, F_GETFD) = 0
      fcntl64(4, F_SETFD, FD_CLOEXEC) = 0
      waitpid(-1, 0xbfcdf31c, WNOHANG|WSTOPPED) = -1 ECHILD (No child processes)
      clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0xb7743758) = 9737
      close(4) = 0
      accept(3, 0xbfcdf2bc, [128]) = ? ERESTARTSYS (To be restarted)
      --- SIGCHLD (Child exited) @ 0 (0) ---
      sigreturn() = ? (mask now [])
      waitpid(-1, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], WNOHANG|WSTOPPED) = 9737
      waitpid(-1, 0xbfcdf31c, WNOHANG|WSTOPPED) = -1 ECHILD (No child processes)

    My questions: Why are users not able to access the repository anymore? What does the strace output say about this problem? Any help is much appreciated. Thanks. Banani

    Read the article

  • SVN 409 conflict on commits and updates

    - by bhefny
    We have been using SVN for the past year, and when we migrated to an online server we started getting this error:

      Commit: Commit failed (details follow):
      File or directory 'x.php' is out of date; try updating
      resource out of date; try updating
      CHECKOUT of '/!svn/ver/491/x.php': 409 Conflict (http://svn.example.com)

    We are currently using SmartSVN 6.5 and have also tested with RapidSVN and Syncro (we can't use TortoiseSVN as we have a lot of Ubuntu users). At the beginning I thought the question "How do you fix an SVN 409 Conflict Error" would help, but it didn't; we are still facing the same error and it's even more absurd now. The main problem is that after you get the error, you can't shake it off. Updating doesn't solve it, reverting doesn't solve it; you are just stuck with the error. The only thing that works is removing the file from SVN and adding your own version, but that would defeat the point of using SVN in the first place. This is our Apache config (and yes, autoversioning is on):

      <Location />
        DAV svn
        SVNPath /home/example/svn
        SVNAutoversioning on
        AuthType Basic
        AuthName "Access Restricted"
        AuthUserFile /home/example/svn-auth-file
        Require valid-user
      </Location>
      <Directory />
        <Files ~ "^\.ht">
          Order allow,deny
          Allow from all
          Satisfy All
        </Files>
        <Files ~ "^error_log">
          Order allow,deny
          Allow from all
          Satisfy All
        </Files>
      </Directory>

    And here are some observations: we don't receive real conflicts anymore, we just get this 409 conflict. You can somewhat avoid the error if you always update before committing. When committing a modified file plus a newly added file, you get the error - as if the added file incremented the revision by one and you are then committing the other file against an older revision. Please advise, we are about to go insane.
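
    One quick diagnostic sketch: compare what the working copy and the repository each believe about the failing file (the path and the example.com URL are placeholders from the report above):

      svn info x.php | grep -E 'Revision|Last Changed Rev'               # what the working copy is based on
      svn info http://svn.example.com/x.php | grep 'Last Changed Rev'    # what the repository HEAD has; a gap here explains "out of date"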

    Read the article

  • How do I put an ASP.NET website project and class library projects in one .sln file on Subversion

    - by JustinP8
    My company has several class libraries that we use in multiple website projects (not web application projects). Website projects don't have .sln files, but I'm sure I've read in my past research that you can make a blank solution and put your website and class library projects in it. After the answers to my previous questions, this is the direction I'm going (based loosely on http://amadiere.com/blog/2009/06/multiple-subversion-projects-in-one-visual-studio-solution-using-svnexternals/):

      /websites
        /website1
          /trunk
            /website1
      /libraries
        /library1
          /trunk
            /library1
        /library2
          /trunk
            /library2
      /etc...

    Then I planned on using svn:externals to copy /library1, /library2, and so on into the working_copy/websites/website1/ folder. I want my team members to be able to check out the /trunk folder for website1 and get a .sln file, a /library1 external, a /library2 external, etc. I want that .sln file to contain the website1 website project and all of the library external projects. Hopefully that would look something like:

      /working_copy
        /websites
          /website1
            /trunk
              /website1
              /library1   (svn:external of libraries/library1/trunk/library1)
              /library2   (svn:external of libraries/library2/trunk/library2)
              /etc.
              website1.sln

    So, at the end of all of this, the goal is that my teammates check out the trunk, open the solution, and everyone has the exact same solution. When we commit, everything is committed to the appropriate place on the repo (the website code and the libraries). How have others solved these issues? How can I make a .sln file that my team members and I can share in this manner?
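
    A sketch of how the externals themselves might be set, using the SVN 1.5+ "URL then local directory" syntax with repository-root-relative ^/ URLs; the working-copy paths are assumptions based on the layout above:

      printf '%s\n' \
          '^/libraries/library1/trunk/library1 library1' \
          '^/libraries/library2/trunk/library2 library2' > externals.txt
      svn propset svn:externals -F externals.txt websites/website1/trunk
      svn commit -m "Add library externals to the website trunk" websites/website1/trunk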

    Read the article

  • Setup SVN/LAMP/test server on Linux - where to start?

    - by John Isaacks
    I have an Ubuntu machine I have set up. I installed Apache 2 and PHP 5 on it, and I can access the web server from other machines on the network via http://linux-server. I have Subversion installed on it, and also vsftpd so I can FTP to it from another computer on the network. Myself and other users currently use Dreamweaver to check files in and out directly from our live site to make changes. What I want is to connect to the Linux server from a PC, make changes on the test server until they're ready, and then push them to the live site. I also want to work Subversion into this workflow, but I'm not sure what the best workflow is or how to set it up. I have no experience with Linux, SVN, or even using a test server; the check-in/check-out we are currently doing is the way I have always done it. I have already hit many snags just getting this far because of my lack of knowledge in the area. Dreamweaver 5 has integration with Subversion, but I can't figure out how to get it to work. I want to set up the best workflow possible. I don't expect anyone to give me an answer that will enlighten me enough to know everything I need to know to do what I want (although if possible that would be great); instead I am looking for something like a knowledge path: a general outline of what I need to do, accompanied by links on how to do it - "read this book to learn Linux, then read this article to learn SVN", etc. I would be happy just getting it all set up, but I would also like to understand what I am actually doing while setting it up.

    Read the article

  • Unable to create a breakpoint in Eclipse and Tomcat

    - by Traroth
    Since yesterday, I have been getting a strange error message when I start my Tomcat (6.0.35) under Eclipse Juno (build 20120614-1722). Among the things I tried in order to get rid of it: checking in Preferences - Java - Compiler that all "Classfile Generation" checkboxes were checked (I did that for both the general preferences and the project preferences); unchecking them, building, checking them again and building again (found in another question); adding org.eclipse.jdt.core.compiler.debug.lineNumber=generate to my .settings/org.eclipse.jdt.core.prefs file; and using a fresh CVS checkout (same symptoms). Now I don't know what to do anymore. The problem is really stopping me from getting anything done; I can't work anymore. The crazy part is that the problem doesn't happen on every class, only on some of them, and it doesn't happen in my other Eclipse projects either. It also didn't happen before yesterday, even though I can't remember having done anything unusual. I have never seen a problem like this in almost 10 years of using Eclipse... If you have any idea, I would be really grateful. Edit: I also tried to ignore the message and go on with my tests: if I create another breakpoint upstream from the problematic class, then when I step into that class Eclipse tries to open a $Proxy132 class, which means it actually opens an empty page with a "source not found" message.

    Read the article

  • Storing Credit Card Numbers in SESSION - ways around it?

    - by JM4
    I am well aware of PCI compliance, so I don't need an earful about storing CC numbers (and especially CVV numbers) in our company database during the checkout process. However, I want to be as safe as possible when handling sensitive consumer information, and I am curious how to avoid passing CC numbers from page to page WITHOUT using SESSION variables, if at all possible. My site is built this way: Step 1) Collect the credit card information from the customer. When the customer hits submit, the information is first run through JS validation, then through PHP validation; if everything passes he moves to step 2. Step 2) The information is displayed on a review page so the customer can confirm the details of the upcoming transaction. Only the first 6 and last 4 digits of the CC are shown on this page, but the card type and expiration date are shown in full. If he clicks proceed: Step 3) The information is sent to another PHP page which runs one last validation, sends the information through the secure payment gateway, and gets a string back with the details. Step 4) If all is good and well, the consumer information (personal, not CC) is stored in the DB and he is redirected to a completion page. If anything is bad, he is informed and told to revisit the CC processing page to try again (max of 3 times). Any suggestions?

    Read the article

  • Document management, SCM ?

    - by tsunade
    Hello. This might not be a hard-core programming question, but it's related to some of the tools programmers use, I suspect. We're a bunch of people, each with a bunch of documents and a bunch of different computers running two operating systems (Linux and Windows). The documents need to be available offline (the laptop might not always be online) but also kept synchronized between all the machines. Having a server with extra-reliable storage act as a "base repository" seems like a good idea to me. Using an SCM comes to mind, and I've tried Subversion; its centralized repository seems like a good fit, but when checking out, the total size of the checkout is roughly double the original size, and big files or big repositories seem to slow it down. I've also tried rsync, which might work, but it's a bit rough when it comes to potential conflicts. Finally I've tried Unison (which is a wrapper around rsync, I think), and while it works, it becomes horribly slow for the big directories we have here since it has to scan everything. So the question is: is there an SCM tool out there that is actually practical to use for a big bunch of both small and big files? If the answer is no, does anyone know other tools that do this job? Thanks for reading :)

    Read the article

  • Safari Web Inspector is not updating when I update an element with ajax

    - by Ashley
    I have a checkout page, http://www.oipolloi.com/oipolloi/shop/viewbasket.php, with multiple ajax calls that fire after certain items are updated (e.g. look up the postage cost when the country is changed, then update the discount boxes, etc.). I've asked for help in the past about the best method of making sure ALL calls have returned before allowing the form to be submitted for payment processing: http://stackoverflow.com/questions/2290372/how-do-i-prevent-form-submission-until-multiple-ajax-calls-have-finished-jquery. I was fairly happy that the logic in the finished solution was correct, but I have still been receiving reports that people using Safari are able to submit the form without the ajax calls returning properly. I have tried using the Safari Web Inspector to debug, but it seems that when you Inspect Element and then update an element with an ajax call, the Inspector doesn't update. I am updating hidden fields, so it's hard to know whether the problem lies with the DOM not being updated properly or with the Inspector itself. I'm using Safari 4.0.5 on PC, and you can reproduce the problem above by looking for a div id="countryFieldsBilling" with the Web Inspector. It should contain three hidden fields that are initially empty. You can try to make it update (or not) by choosing a country from the select menu at the bottom of the 'Shipping Address' box, and then clicking the 'click to use Shipping Address' link at the top of the 'Billing Address' box just below. The behaviour I am seeing is that the country chosen in the shipping select gets copied correctly to the billing select, but the hidden inputs shown in the Web Inspector do not get updated. When these hidden inputs do not get updated, that causes the problem Mac Safari users report. If you can let me know either how to get the Web Inspector to update properly, or anything else I may have missed in the behaviour of Mac Safari that could cause these problems, that would be great. Thanks in advance

    Read the article

  • rails not recognizing project

    - by tipu
    I can create a new project using Rails, and I can use commands like "rails migration ...", and I (correctly) get an error because the sqlite gem is missing. But when I try "rails migration ..." with a project I checked out from GitHub, it doesn't recognize that it is a Rails project; instead I get:

      Usage:
        rails new APP_PATH [options]

      Options:
        -d, [--database=DATABASE]    # Preconfigure for selected database
                                     # (options: mysql/oracle/postgresql/sqlite3/frontbase/ibm_db)
                                     # Default: sqlite3
        -O, [--skip-active-record]   # Skip Active Record files
            [--dev]                  # Setup the application with Gemfile pointing to your Rails checkout
        -J, [--skip-prototype]       # Skip Prototype files
        -T, [--skip-test-unit]       # Skip Test::Unit files
        -G, [--skip-git]             # Skip Git ignores and keeps
        -b, [--builder=BUILDER]      # Path to an application builder (can be a filesystem path or URL)
            [--edge]                 # Setup the application with Gemfile pointing to Rails repository
        -m, [--template=TEMPLATE]    # Path to an application template (can be a filesystem path or URL)
        -r, [--ruby=PATH]            # Path to the Ruby binary of your choice
                                     # Default: /usr/bin/ruby1.8
            [--skip-gemfile]         # Don't create a Gemfile

    ...and it goes on. Any ideas? Edit: it's probably an important detail that earlier my Rails wasn't working at all; I had to cp /usr/bin/ruby to /usr/bin/local/ruby.
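
    A sanity check worth running, on the guess that the GitHub project targets Rails 2.x while the installed Rails 3 only understands "rails new ..." outside an application (the migration name is a placeholder):

      grep RAILS_GEM_VERSION config/environment.rb   # present in Rails 2.x apps; shows which Rails the project expects
      ls script/                                     # Rails 2.x keeps generate, console, server here
      ./script/generate migration AddSomething       # Rails 2.x equivalent of "rails generate migration"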

    Read the article

  • Netbeans 6.1 Incorrect CVS Status on a file that does not exist

    - by Coder
    Hi, I have been trying to figure this out off and on for a few hours now, without success. I committed a lot of binary (jar) files to CVS and they worked fine, but in one of the six directories NetBeans thinks there is a file that it keeps trying to commit, even though it doesn't actually exist in the file system. There is also another file in the same directory that I did commit, and NetBeans' CVS status says it's an unknown file; yet when I delete the directory and check it out again, the file shows up fine, but NetBeans still can't get the correct CVS status for it. I looked in the repository and all looks fine: there is only one file present, as it should be. Looking at the CVS directory in the checkout folder also reveals nothing suspicious. I don't know what to do about this, or why NetBeans thinks there is a file in that directory that is not actually there. I did a search in my working directory and my NetBeans project directory for any file containing a reference to this file, but there is nothing.
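
    Two low-level checks that can help here, independent of the IDE (the directory name is a placeholder; CVS/Entries is where a working copy records which files it believes it has):

      cat thatdir/CVS/Entries    # a stale line here can make a client think a nonexistent file is tracked
      cvs -nq update thatdir     # -n = dry run: shows what CVS itself considers modified (M), unknown (?), etc.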

    Read the article
