Search Results

Search found 3281 results on 132 pages for 'repo man'.

Page 97/132

  • Git push origin master

    - by user306472
    I am new to Git and recently set up a new account with GitHub. I'm following Michael Hartl's Rails tutorial online ( http://www.railstutorial.org/book#fig:github_first_page ) and followed his instructions to set up Git, which were also in line with the setup instructions at GitHub. Anyway, the "Next Steps" section on GitHub was: mkdir sample_app; cd sample_app; git init; touch README; git add README; git commit -m 'first commit'; git remote add origin git@github.com:rosdabos55/sample_app.git; git push origin master. I got all the way to the last instruction (git push origin master) without any problem. When I entered that last line into my terminal, however, I got this error message: "fatal: No path specified. See 'man git-pull' for valid url syntax." What might I be doing wrong?
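
    That "No path specified" error usually means the URL recorded for origin is empty or malformed, typically from a typo or stray character in the "git remote add origin ..." step. A quick check worth trying (a sketch, not a guaranteed fix; the URL shown is the one from the question):

        # Show the URL 'origin' actually points to.
        git remote -v
        # If it is wrong or missing, replace it and push again.
        git remote rm origin
        git remote add origin git@github.com:rosdabos55/sample_app.git
        git push origin master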

    Read the article

  • Controlling clone access to multiple mercurial repos served via hgwebdir.cgi

    - by chrislawlor
    I'm trying to host multiple hg repositories for my clients. I need to control access to each repository individually - not just push access, but clone as well. I've got an .htaccess set which requires authentication globally: AuthUserFile /path/to/hgweb.passwd AuthGroupFile /dev/null AuthName "Chris Lawlor Client Mercurial Repositories" AuthType Basic <Limit GET POST PUT> Require valid-user </Limit> <FilesMatch "\.(htaccess|passwd|config|bak)$"> Order Allow,Deny Deny from all </FilesMatch> Then in each repository, I've got a .hg/hgrc file requiring a valid user: [web] allow_push = <comma separated user list> This almost does what I need. The problem is that I need to add ALL my clients to hgweb.passwd, which gives them clone access to ALL of the repositories. The only solution I can think of is to have another .htaccess and .passwd file in EACH repository. I don't really want to do that, though; it seems a little convoluted. I can already specify a list of authorized users for each repository in that repo's hgrc file with the allow_push setting. If only there were an allow_clone setting as well... All the documentation I've found for hgwebdir.cgi is incomplete. I've read: http://mercurial.selenic.com/wiki/HgWebDirStepByStep http://hgbook.red-bean.com/read/collaborating-with-other-people.html#sec:collab:cgi http://hgbook.red-bean.com/read/collaborating-with-other-people.html and others. I've yet to find a comprehensive list of hgrc settings. I guess this is as much an Apache question as a Mercurial question. Unless I can find a better approach, I'll be going with a separate .htaccess and .passwd file for each repo. This is a virtual host on Webfaction if it matters - set up roughly like this: http://docs.webfaction.com/software/mercurial.html
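
    For what it's worth, Mercurial's hgweb does appear to have per-repository read control as well: the [web] section of hgrc understands allow_read and deny_read alongside allow_push (worth confirming in "man hgrc" for the installed version). Assuming it is available, the existing per-repo hgrc approach could be extended, e.g.:

        # Sketch: whitelist readers for one repository (user names are placeholders).
        # Repeating the [web] header is fine; Mercurial merges repeated sections.
        printf '\n[web]\nallow_read = client1, client2\n' >> /path/to/repo/.hg/hgrc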

    Read the article

  • How to compile a program which needs a newer version of glib

    - by michael
    Hi, I am trying to compile WebKit on Ubuntu 8.04, but when I run autogen.sh I get the following error saying it needs a newer version of glib. So what is the safest way to install glib without screwing up the rest of my OS (since the rest needs 2.16 while the WebKit build needs 2.21)? checking for GLIB... configure: error: Package requirements (glib-2.0 >= 2.21.3 gobject-2.0 >= 2.0 gthread-2.0 >= 2.0) were not met: Requested 'glib-2.0 >= 2.21.3' but version of GLib is 2.16.6 Consider adjusting the PKG_CONFIG_PATH environment variable if you installed software in a non-standard prefix. Alternatively, you may set the environment variables GLIB_CFLAGS and GLIB_LIBS to avoid the need to call pkg-config. See the pkg-config man page for more details.
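
    The error message itself points at the usual answer: build the newer glib into a private prefix and point PKG_CONFIG_PATH at it, so the system copy in /usr is never touched. A rough sketch, assuming the glib 2.21.3 sources are already unpacked locally (the /opt prefix is an arbitrary choice):

        cd glib-2.21.3
        ./configure --prefix=/opt/glib-2.21
        make
        sudo make install
        # Make the WebKit build pick up the private copy instead of the system one:
        export PKG_CONFIG_PATH=/opt/glib-2.21/lib/pkgconfig:$PKG_CONFIG_PATH
        export LD_LIBRARY_PATH=/opt/glib-2.21/lib:$LD_LIBRARY_PATH
        ./autogen.sh    # run from the WebKit source tree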

    Read the article

  • Understanding encryption keys

    - by claws
    Hello, I'm really embarrassed to ask this question, but the fact is that I don't know anything about encryption. I always avoided it. I don't understand the concept of encryption keys (public key, private key, RSA key, DSA key, PGP key, SSH key & what not). I encounter these on a regular basis, but as I said, I've always avoided them. Here are a few instances where I encountered them: Creating an account: "A public RSA or DSA key will be needed for an account. Send the key along with your desired account name to [email protected]" I really don't know what RSA/DSA are, or how to get their keys. Do I need to register somewhere for that? Mailing: I can't recall exactly, but I've seen some mails with attachments like a signature, or the mail footer will have something called a PGP signature, etc. I really don't get the concept. Git version control: I created an account on assembla.com (for a private Git repo) and it asked me to add "SSH keys" to my profile. Where am I going to get these? Why do I need them? Isn't SSH related to remote login (like remote desktop or telnet)? How are these two SSHs related, and how do they differ? I don't know in how many more situations I'm going to encounter these things. I'm really confused and have no clue where to start or how to proceed to learn them. Could someone kindly point me in the right direction? Note: I have absolutely zero interest in encryption-related topics, so there is no way I'm going to read a graduate-level book on this subject. I just want to clear up my concepts without going into much depth.
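
    For the "SSH keys" part specifically: the key pair is generated locally rather than registered anywhere; the public half is what gets pasted into the Assembla/GitHub profile or mailed to an admin, while the private half never leaves your machine. A typical sketch (the comment string is a placeholder):

        # Generate an RSA key pair under ~/.ssh (accept the default path, pick a passphrase).
        ssh-keygen -t rsa -C "you@example.com"
        # id_rsa is the private key and stays here; id_rsa.pub is the part you hand out.
        cat ~/.ssh/id_rsa.pub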

    Read the article

  • What options to use for Accurate bacula backup?

    - by Kiss Stefan
    It's actually two questions in one. The first is a bit more theoretical. When specifying Accurate options, how does Bacula figure out whether a file needs to be backed up? Is it a simple AND? As in, if the options are Accurate = sm5, Bacula will not back up the file if ((size = old size) AND (modtime = old modtime) AND (md5 = old md5)). Is that correct? Do any of the options take precedence? As in, would a file be skipped if its modification time is different but it has the same md5sum? Are there any implied options that you cannot ignore? Practical case (Bacula 5.0.1): I have to back up an svn repo. In order to make incremental backups as simple as possible, I am hotcopying it (client run before) to another location, which Bacula then backs up (and then deletes with client run after). Now in the fileset I have Accurate = spnd5. This should tell Bacula to take into consideration size, permission bits, number of links, decreases in size and md5sum. However, an incremental also includes a full copy of the svn. What am I doing wrong? It seems that it takes creation time into account even though I have not specified it.

    Read the article

  • How to limit the wordpress tagcloud by date?

    - by Nordin
    Hello, I've been searching for quite a while now to find a way to limit WordPress tags by date and order them by the number of times they appeared in the selected timeframe, but I've been rather unsuccessful. What I'm trying to achieve is something like the trending topics on Twitter - but in this case, 'trending tags'. By default the WordPress tag cloud displays the most popular tags of all time, which makes no sense in my case, since I want to track current trends. Ideally it would be something like: Most popular tags of today: Obama (18 mentions), New York (15 mentions), Iron Man (11 mentions), Robin Hood (7 mentions). And then multiplied for 'most popular this week' and 'most popular this month'. Does anyone know of a way to achieve this?

    Read the article

  • Why is this not a bug in qmail?

    - by jemfinch
    I was reading DJB's "Some thoughts on security after ten years of Qmail 1.0" and he listed this function for moving a file descriptor: int fd_move(to,from) int to; int from; { if (to == from) return 0; if (fd_copy(to,from) == -1) return -1; close(from); return 0; } It occurred to me that this code does not check the return value of close, so I read the man page for close(2), and it seems it can fail with EINTR, in which case the appropriate behavior would seem to be to call close again with the same argument. Since this code was written by someone with far more experience than I in both C and UNIX, and additionally has stood unchanged in qmail for over a decade, I assume there must be some nuance that I'm missing that makes this code correct. Can anyone explain that nuance to me?

    Read the article

  • Converting Fahrenheit to Celsius programmatically

    - by Doom
    In my project, I want to show the weather in Fahrenheit first; then, if the user clicks on conversion, I need to show the weather in Celsius. My code is: NSNumber *metric = [[NSUserDefaults standardUserDefaults] objectForKey:@"metric"]; NSLog(@"Metric is %@", metric); CGFloat aFloat = [speed floatValue]; CGFloat tFloat = [temperature floatValue]; CGFloat tempFloat = (tFloat-30)/2; NSNumber * p_Number = [NSNumber numberWithFloat:tempFloat]; //Convert mph to kmph if ([metric boolValue]) { [windValueLabel setText:[NSString stringWithFormat:@"%.2f kmph", aFloat * 1.6] ]; temperatureLabel.text = [NSString stringWithFormat:@"%@", p_Number]; } else{ [windValueLabel setText:[NSString stringWithFormat:@"%.2f mph", aFloat / 1.6]]; temperatureLabel.text = [NSString stringWithFormat:@"%@", temperature]; } When you start the app, it works and shows the temperature in Fahrenheit, but it crashes on Celsius. Is that the correct conversion? Help me out, guys.
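
    For reference, the (F - 30) / 2 in the code is only a rough mental-math shortcut; the exact conversion, with a quick worked check, is:

        C = (F - 32) * 5 / 9
        77 °F  ->  (77 - 32) * 5 / 9 = 25 °C    (the shortcut gives (77 - 30) / 2 = 23.5)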

    Read the article

  • Changing <img src="XXX" />, js event when new image has finished loading?

    - by carillonator
    I have a photo gallery web page where a single <img src="XXX" /> element's src is changed (on a click) with javascript to show the next image—a poor man's ajax I guess. Works great on faster connections when the new image appears almost immediately. Even if it takes a few seconds to load, every browser I've tested it on keeps the old image in place until the new one is completely loaded. It's a little confusing waiting those few seconds on a slow connection, though, and I'm wondering if there's some javascript event that fires when the new image is done loading, allowing me to put a little working... animated gif or something up in the meantime. I know I could use AJAX for real (I'm using jQuery already), but this is such a nice and simple solution. Besides this lag, is there any other reason I should stay away from this approach to changing images? thanks.

    Read the article

  • port forwarding problem

    - by Claudiu
    I want to set up an svn server on my computer, so it's available from anywhere. I think I set up the repository correctly, using CollabSVN. If I go to Repo-Browser with TortoiseSVN and point it to svn://localhost:3690, it shows the proper repository. The problem now is that I'm behind a router. My local IP is 192.168.1.45. Doing svn://192.168.1.45:3690 also works. My global IP is, say, x.x.x.x. Just doing svn://x.x.x.x:3690 doesn't work, which makes sense, since I have to set up port forwarding. I'm using a Verizon router. Using their web interface (on 192.168.1.1) I added the following port forwarding rule: IP Address forward to: 192.168.1.45 Source Ports: Any Dest Ports: 3690 Forward to: 3690 Protocol: TCP However, even after applying this rule, going to svn://x.x.x.x:3690 doesn't work. It takes a few seconds to fail, then says that the connection couldn't be established because the server connected to didn't respond properly after a period of time. What's interesting is that a random port, like svn://x.x.x.x:36904, fails immediately, saying that the target machine actively refused the connection. So I figure that the forwarding rule did something, but not fully what was necessary. Any ideas on how to get this working? The router model is MI424-WR and the firmware version is 4.0.16.1.56.0.10.12.3. UPDATE: I also tried setting the destination port to 45000, and still forwarding to 3690, in case something was wrong with the lower-numbered ports, but to no avail. I also tried port 80 to port 3690, still all in vain.
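
    Two things worth ruling out before blaming the forwarding rule itself: many consumer routers do not support NAT loopback, so testing svn://x.x.x.x:3690 from inside the same LAN can fail even when the forward is fine, and svnserve may be listening on 127.0.0.1 only. Rough checks (Windows netstat syntax shown for the server side, since TortoiseSVN suggests a Windows box; adjust for your OS):

        # On the machine running svnserve: is it bound to 0.0.0.0:3690 or only 127.0.0.1?
        netstat -an | findstr 3690
        # From a connection outside your network (friend's machine, phone tether, shell account):
        telnet x.x.x.x 3690
        # A greeting starting with something like "( success (" means the forward works end to end.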

    Read the article

  • correct file permissions for trac and git user to access gitolite server repos

    - by klemens
    Hi, this sounds like a stupid question (to me), but I couldn't find any info. On my server I host some git repositories via gitolite, and have a Trac instance for every repository. I have a user called git to push/pull from the server (git clone git@server:repo), and Trac is an Apache vhost with mod_wsgi, which runs as the www-data user. So what puzzles me (maybe because I don't have much of a clue about file permissions at all) is: what is the best permissions setup (chown, chmod) for the git repositories (/home/git/repositories/...)? www-data (or Trac) needs at least read permissions (I think), and git (or gitolite) obviously needs read/write permissions to push changesets. I tried a few things (i.e. adding www-data and/or git to the www-data/git group), but didn't get it right; at least one of the two doesn't work (git or Trac). Any suggestions are highly appreciated. Regards, klemens
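
    One common arrangement (a sketch, assuming the repositories live under /home/git/repositories and gitolite's own access control stays as it is): keep everything owned by git, make it group-readable, and put www-data into the git group. Gitolite also has a umask option in its rc file so that newly written objects stay group-readable - worth checking in the docs for the installed version.

        # Let Apache's user read the repositories via the git group.
        usermod -a -G git www-data
        chgrp -R git /home/git/repositories
        chmod -R g+rX /home/git/repositories                        # read + traverse for the group
        find /home/git/repositories -type d -exec chmod g+s {} \;   # new files inherit the git group
        # Restart Apache afterwards so www-data picks up its new group membership.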

    Read the article

  • installing wxGTK-devel on CentOS 5.4

    - by jackhab
    I'm trying to install wxGTK-devel on CentOS, and since it's not in the base repo I added RPMForge. But now I'm getting these broken dependencies. I don't want to start tampering with separate rpms because I suspect it will make things worse. I remember installing this package from RPMForge without a problem several months ago. Please advise. ... wxGTK-2.8.10-1.el4.rf.x86_64 from rpmforge has depsolving problems -- Missing Dependency: libgstreamer-0.8.so.1()(64bit) is needed by package wxGTK-2.8.10-1.el4.rf.x86_64 (rpmforge) wxGTK-2.8.10-1.el4.rf.x86_64 from rpmforge has depsolving problems -- Missing Dependency: libgstgconf-0.8.so.0()(64bit) is needed by package wxGTK-2.8.10-1.el4.rf.x86_64 (rpmforge) wxGTK-2.8.10-1.el4.rf.x86_64 from rpmforge has depsolving problems -- Missing Dependency: libgstinterfaces-0.8.so.0()(64bit) is needed by package wxGTK-2.8.10-1.el4.rf.x86_64 (rpmforge) Error: Missing Dependency: libgstreamer-0.8.so.1()(64bit) is needed by package wxGTK-2.8.10-1.el4.rf.x86_64 (rpmforge) Error: Missing Dependency: libgstinterfaces-0.8.so.0()(64bit) is needed by package wxGTK-2.8.10-1.el4.rf.x86_64 (rpmforge) Error: Missing Dependency: libgstgconf-0.8.so.0()(64bit) is needed by package wxGTK-2.8.10-1.el4.rf.x86_64 (rpmforge)

    Read the article

  • Installing a .deb file manually?

    - by stef
    apt-get install gitosis --fix-missing on my Linode still leads to a 404 (Failed to fetch http://ftp.debian.org/debian/pool/main/g/gitosis/gitosis_0.2+20080825-2_all.deb 404 Not Found [IP: 130.89.148.12 80]). The correct file location seems to be http://ftp.debian.org/debian/pool/main/g/gitosis/gitosis_0.2+20090917-11_all.deb Is there any way I can install this without apt-get, or point apt-get in the right direction somehow? Several other packages on my Debian Linode also point to a 404, both from the command line and Virtualmin. EDIT: Machine details: Debian 5.0 64bit (Latest 2.6 (2.6.39.1-x86_64-linode19)) EDIT 2: My sources list: # main repo deb http://ftp.debian.org/debian/ lenny main contrib non-free deb-src http://ftp.debian.org/debian/ lenny main contrib non-free deb http://security.debian.org/ lenny/updates main contrib non-free deb-src http://security.debian.org/ lenny/updates main contrib non-free deb http://volatile.debian.org/debian-volatile lenny/volatile main contrib non-free deb-src http://volatile.debian.org/debian-volatile lenny/volatile main contrib non-free # contrib & non-free repos #deb http://ftp.debian.org/debian/ lenny contrib non-free #deb-src http://ftp.debian.org/debian/ lenny contrib non-free #deb http://security.debian.org/debian/ lenny/updates contrib non-free #deb-src http://security.debian.org/debian/ lenny/updates contrib non-free deb http://software.virtualmin.com/gpl/debian/ virtualmin-lenny main deb http://software.virtualmin.com/gpl/debian/ virtualmin-universal main
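
    A 404 on a specific pool URL often just means the local package index is stale - the mirror has moved on to a newer revision of the package - so refreshing the index is the first thing to try. A sketch (the .deb URL is the one quoted in the question):

        apt-get update
        apt-get install gitosis
        # If the single .deb really has to be installed by hand instead:
        wget http://ftp.debian.org/debian/pool/main/g/gitosis/gitosis_0.2+20090917-11_all.deb
        dpkg -i gitosis_0.2+20090917-11_all.deb
        apt-get -f install    # pull in any dependencies dpkg complains about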

    Read the article

  • CC.NET + SVN : Server certificate issue

    - by MSI
    I am trying to set up Continuous Integration in our office. Being a puny little developer, I am facing this supposedly infamous problem: "Source control operation failed: svn: OPTIONS of 'https://trunkURL': Server certificate verification failed: issuer is not trusted". So I tried the following solution: run the CC.NET service (the server runs as a Windows service) under a domain account (rather than the default LOCAL SYSTEM) and accept the certificate permanently from a command prompt under that user by running svn log/list on the repo. It doesn't help :(. I am getting the following in my artifact/log files (and on the dashboard): ThoughtWorks.CruiseControl.Core.CruiseControlException: Source control operation failed: svn: OPTIONS of 'https://TrunkURL': Server certificate verification failed: issuer is not trusted (https://ServerAdd) . Process command: E:\(svn.exe Path) log https://TrunkURL -r "{2010-11-08T02:12:20Z}:{2010-11-08T02:13:21Z}" --verbose --xml --no-auth-cache --non-interactive at ThoughtWorks.CruiseControl.Core.Sourcecontrol.ProcessSourceControl.Execute(ProcessInfo processInfo) at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Svn.GetModifications(IIntegrationResult from, IIntegrationResult to) at ThoughtWorks.CruiseControl.Core.Sourcecontrol.QuietPeriod.GetModificationsWithLogging(ISourceControl sc, IIntegrationResult from, IIntegrationResult to) at ThoughtWorks.CruiseControl.Core.Sourcecontrol.QuietPeriod.GetModifications(ISourceControl sourceControl, IIntegrationResult lastBuild, IIntegrationResult thisBuild) at ThoughtWorks.CruiseControl.Core.IntegrationRunner.GetModifications(IIntegrationResult from, IIntegrationResult to) at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Integrate(IntegrationRequest request) We are using VisualSVN Server and CC.NET for this adventure. Tips and suggestions will be highly appreciated. Thanks
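
    Subversion caches accepted certificates per Windows user, so the certificate has to be accepted by the exact account the CC.NET service runs as, not by whoever happens to be logged in. A sketch, assuming the service account is DOMAIN\ccnet (a placeholder):

        runas /user:DOMAIN\ccnet cmd
        svn list https://TrunkURL

    Answer (p)ermanent when prompted about the untrusted certificate; after that the cached acceptance should be visible to the service. Subversion 1.6+ clients can also bypass the prompt in non-interactive runs with "svn log https://TrunkURL --non-interactive --trust-server-cert".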

    Read the article

  • Is the JavaScript RegExp implicit method deprecated?

    - by Eric
    So everyone knows what I mean by "implicit methods"? They're like those default properties from the Windows COM days of yore, where you could type something like val = obj(arguments) and it would be interpreted as val = obj.defaultMethod(arguments) I just found out JavaScript has the same thing: the default method of a RegExp object appears to be 'exec', as in /(\w{4})/('yip jump man')[1] ==> jump This even works when the RegExp object is assigned to a variable, and even when it's created with the RegExp constructor, instead of /.../, which is good news to us fans of referential transparency. Where is this documented, and/or is it deprecated?

    Read the article

  • How do I install php 5.3 on CentOS?

    - by fivelitresofsoda
    Hi, I have to install PHP 5.3 on my CentOS server. If I do yum install php, the base repo installs 5.1.6, which is too old for the apps I need to install. So I've been trying to use the IUS repository, following the official instructions from IUS: root@linuxbox ~]# wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/ius-release-1-2.ius.el5.noarch.rpm root@linuxbox ~]# wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/epel-release-1-1.ius.el5.noarch.rpm root@linuxbox ~]# rpm -Uvh ius-release*.rpm epel-release*.rpm OK. Now I simply do yum install php53, etc. for everything I need... but I get this error: Running rpm_check_debug Running Transaction Test Finished Transaction Test Transaction Check Error: file /usr/bin/php from install of php53u-cli-5.3.4-3.ius.el5.x86_64 conflicts with file from package php-cli-5.1.6-27.el5_5.3.x86_64 file /usr/bin/php-cgi from install of php53u-cli-5.3.4-3.ius.el5.x86_64 conflicts with file from package php-cli-5.1.6-27.el5_5.3.x86_64 file /usr/share/man/man1/php.1.gz from install of php53u-cli-5.3.4-3.ius.el5.x86_64 conflicts with file from package php-cli-5.1.6-27.el5_5.3.x86_64 file /etc/php.ini from install of php53u-common-5.3.4-3.ius.el5.x86_64 conflicts with file from package php-common-5.1.6-27.el5_5.3.x86_64 Error Summary ------------- I have no idea how to solve this. I think I have to remove the base packages; however, as a Linux noob, I don't know how to do that. Please help. Thank you.
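
    The conflict is simply that the stock 5.1.6 packages and the IUS php53u packages both want to own /usr/bin/php and /etc/php.ini, so the old ones have to be removed first. A sketch (the php53u-* names follow the error output above; list the installed php-* packages first so you know which extensions to reinstall as their php53u equivalents):

        rpm -qa 'php*'
        yum remove php php-cli php-common        # plus any other php-* packages the query listed
        yum install php53u php53u-cli php53u-common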

    Read the article

  • GIT : I keep having to merge my new branch

    - by mnml
    Hi, I have created a new branch and I'm working on it with other devs, but for some reason, when I want to push my new commits, I always have to run git merge origin/mynewbranch first. Otherwise I get these errors: ! [rejected] mynewbranch -> mynewbranch (non-fast-forward) error: failed to push some refs to '[email protected]/repo.git' To prevent you from losing history, non-fast-forward updates were rejected Merge the remote changes before pushing again. See the 'Note about fast-forwards' section of 'git push --help' for details. You asked me to pull without telling me which branch you want to merge with, and 'branch.mynewbranch.merge' in your configuration file does not tell me, either. Please specify which branch you want to use on the command line and try again (e.g. 'git pull <repository> <refspec>'). See git-pull(1) for details. If you often merge with the same branch, you may want to use something like the following in your configuration file: [branch "mynewbranch"] remote = <nickname> merge = <remote-ref> [remote "<nickname>"] url = <url> fetch = <refspec> See git-config(1) for details. Why is it not automatic? Thanks
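
    Part of the answer to "why is it not automatic" is that git push never merges on the remote: once another dev has pushed to mynewbranch, the remote has commits the local branch lacks, so a fetch + merge (a pull) has to happen locally before the push is accepted. The second message only says the branch has no upstream configured yet. A sketch:

        git pull origin mynewbranch     # fetch and merge the remote branch explicitly
        git push origin mynewbranch
        # Record the upstream so a plain 'git pull' works from now on
        # (newer git can do the same at push time: git push -u origin mynewbranch):
        git config branch.mynewbranch.remote origin
        git config branch.mynewbranch.merge refs/heads/mynewbranch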

    Read the article

  • In a web app, is it wise to give log files ".txt" suffix?

    - by Pekka
    I am building a logging mechanism in a web application. Being a Windows man, I tend to give files with textual content the .txt ending. The suffix is automatically registered to be opened in a text editor in any Windows environment, and is just a nice convention. The app is going to be redistributed, and running mostly on Linux, though. The Linux convention for log files is .log. Is there any good reason on the Linux end, besides convention, why I should use .log? Any filters, real-life applications that could become relevant and that will work only with a .log suffix? Or can I merrily call it error_log.txt?

    Read the article

  • Intercept windows open file

    - by HyLian
    Hello, I'm trying to make a small program that could intercept the opening of a file. The idea is that when a user double-clicks a file in a given folder, Windows would notify my software, which would process that request and return the file's data to Windows. Maybe there is another solution, like monitoring Open messages and forcing Windows to wait while the program prepares the contents of the file. One application of this concept could be to handle decryption of a file in a way that is transparent to the user. In this context, the encrypted file would be on disk, and when the user opens it (by double-clicking it or with some application such as Notepad), the background process would intercept that open event, decrypt the file and give the contents of the file to the asking application. It's a slightly strange concept; it's like the "man in the middle" concept from networking, but with files instead of network packets. Thanks for reading.

    Read the article

  • RHEL 6 vs latest vanilla kernel differences?

    - by Yanko Hernández Álvarez
    What are the differences between the RHEL 6 kernel and the latest kernel.org one? I know RHEL is based on 2.6.32 with some features backported from newer kernels, and that it also has other features that are not yet part of the latest vanilla kernel. Is there any comparison of the features of both kernels, so I can tell how advanced the RHEL 6 kernel is vs. the latest vanilla, and vice versa? It doesn't have to be the latest kernel at all, but the more recent the vanilla version, the better. What I want to know is: What features do I lose/gain if I swap the RHEL kernel for the latest kernel.org one? What features are less mature/developed in the latest vanilla kernel than in RHEL's (and vice versa)? (I guess KVM virtualization is one of them, but I'm not so sure.) What things (libraries / programs / etc.) don't interact as well with the latest vanilla kernel as with RHEL's? In a related note: Is there ANY way to stay as up to date (kernel-wise) as possible (using RHEL 6) without losing too much in the process? (Any way except doing the patching myself - I don't have the necessary expertise.) Any repo I don't know of? Any alternative? Update: The srpm doesn't include patches (see comments), so that way is not possible. Clarification: I'm interested in how "old" the RHEL kernel gets as time goes by, and in knowing when the latest upstream kernel includes all the improvements in the RHEL version.

    Read the article

  • OwnCloud RSA certificate configured for SERVER- ISSUE, webpage has a redirect loop

    - by jmituzas
    I had ownCloud running on a server that died. I remember the install being easy; I have since migrated servers and ownCloud is one of the last apps to reinstall. OK, I just downloaded and installed the newest version of ownCloud on an Ubuntu 14.04 server with PHP 5.5.9-1, and I am trying the manual install. I tried adding the repo and installing with apt-get install owncloud, but that did not work for me :/ - whereis owncloud reported nothing. It was installed, but I was never able to bring up the site. Now for my issue: I finished the manual install from the .tar.bz2, and when it came time to log in I get "This webpage has a redirect loop". I receive the error in both the Chrome and Safari web browsers. I can't log in at all; with no user, I get the error page. Don't know if it is related or not, but here's a look at owncloud-error.log: "RSA certificate configured for "mysite.com" Does NOT include an ID which matches the server name". I installed a new SSL cert with the CN matching the ServerName directive in the vhost config file - same error :/ Reinstalled ownCloud, same issue... Out of ideas. Thanks in advance, jmituzas

    Read the article

  • A Simulator for a non-deterministic Push-Down Automaton

    - by shake
    Well, I need to make a simulator for a non-deterministic push-down automaton. Everything else is okay; I know I need to use recursion or something similar, but I don't know how to write the function that actually simulates the automaton. I have everything else under control: the automaton generator, the stack... I'm doing it in Java, so this is maybe the only issue one can bump into here, and I did. If anyone has done something similar, I could use advice. This is my current organisation of the code: Classes: class transit: list - contains the non-deterministic transitions, state, input sign, stack sign; class generator: generates the automaton from a file; class NPA: public boolean start() - this is the function I'm having trouble with. Of course there is the problem of separate stacks and input for every branch. I tried to solve it with a collection of NPA objects and starting every object, but it doesn't work :((

    Read the article

  • Network Security and Encryption explained in laymen terms

    - by Ehrann Mehdan
    Although I might pretend very well that I know a thing about networks or security, and it might help me pass an interview or fix a bug, I don't really feel I'm fooling anyone. I'm looking for a layman's-terms explanation of present-day network security concepts and solutions. The information is scattered around, and I haven't found a resource for "dummies" like me (e.g. experienced Java developers who can speak the jargon but have no real clue what it means). Topics I have a weak notion about and want to understand better as a Java developer: PGP Public / Private keys RSA / DES SSL and 2-way SSL (keystore / truststore) Protecting against man-in-the-middle fraud Digital signatures and certificates Is there a resource out there that really explains these in a way that doesn't require a Cisco certification, Linux lingo, knowing what subnet masking is, or other plumbing skills?

    Read the article

  • How do you get "in the zone"?

    - by Wayne Werner
    Hi, I've just started my first real programming job and am pleased to discover that this is exactly what I want to do for the rest of my life. When it comes round to ~1 hour before it's time to go home and I think "Man, do I have to go home already?" I'd say that's A Good Thing(tm). One thing I've discovered though is that it takes a little while for my brain to get "in gear" or "in the Zone", so I'm curious what other folks do to get programming at their prime. My current flow is when I get here I visit SO and look at the interesting problems - I find it helps get my brain moving. After 20-30 minutes I start looking at my code/specs/etc to decide what I want/need to work on first. So how do you get started?

    Read the article
