Search Results

Search found 62161 results on 2487 pages for 'set difference'.


  • What is the difference between AF_INET and PF_INET constants?

    - by Denilson Sá
    Looking at examples of socket programming, we can see that some people use AF_INET while others use PF_INET, and sometimes both are used in the same example. The question is: is there any difference between them? Which one should we use? And if you can answer that: why are there two similar (but equal) constants at all? What I've discovered so far:

    The socket manpage

    In (Unix) socket programming, we have the socket() function, which takes the following parameters:

        int socket(int domain, int type, int protocol);

    The manpage says:

        The domain argument specifies a communication domain; this selects the protocol family which will be used for communication. These families are defined in <sys/socket.h>.

    and cites AF_INET as well as some other AF_ constants for the domain parameter. Also, in the NOTES section of the same manpage, we can read:

        The manifest constants used under 4.x BSD for protocol families are PF_UNIX, PF_INET, etc., while AF_UNIX etc. are used for address families. However, already the BSD man page promises: "The protocol family generally is the same as the address family", and subsequent standards use AF_* everywhere.

    The C headers

    sys/socket.h does not actually define those constants, but instead includes bits/socket.h. That file defines around 38 AF_ constants and 38 PF_ constants like this:

        #define PF_INET 2   /* IP protocol family. */
        #define AF_INET PF_INET

    Python

    The Python socket module is very similar to the C API. However, there are many AF_ constants but only one PF_ constant (PF_PACKET). Thus, in Python we have no choice but to use AF_INET. I think this decision to include only the AF_ constants follows one of the guiding principles: "There should be one-- and preferably only one --obvious way to do it." (The Zen of Python)
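    In practice the two constants are interchangeable on modern systems, since the headers define them as equal. A minimal sketch, assuming a POSIX platform, of creating a TCP socket with the AF_INET form that current standards recommend:

        #include <stdio.h>
        #include <sys/socket.h>
        #include <unistd.h>

        int main(void) {
            /* AF_INET selects the IPv4 address family; passing PF_INET
               here would yield the same descriptor, as the two constants
               are defined to the same value. */
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd == -1) {
                perror("socket");
                return 1;
            }
            close(fd);
            return 0;
        }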


  • Understanding JavaScript's difference between calling a function and referencing the function but executing it later.

    - by Squeegy
    I'm trying to understand the difference between

        foo.bar()

    and

        var fn = foo.bar;
        fn();

    I've put together a little example, but I don't totally understand why the failing ones actually fail.

        var Dog = function() {
            this.bark = "Arf";
        };

        Dog.prototype.woof = function() {
            $('ul').append('<li>'+ this.bark +'</li>');
        };

        var dog = new Dog();

        // works, obviously
        dog.woof();

        // works
        (dog.woof)();

        // FAILS
        var fnWoof = dog.woof;
        fnWoof();

        // works
        setTimeout(function() { dog.woof(); }, 0);

        // FAILS
        setTimeout(dog.woof, 0);

    Which produces:

        Arf
        Arf
        undefined
        Arf
        undefined

    On JSFiddle: http://jsfiddle.net/D6Vdg/1/

    So it appears that snapping off a function causes it to lose its context. OK. But why then does (dog.woof)(); work? It's all just a bit confusing figuring out what's going on here. There are obviously some core semantics I'm just not getting.
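    For what it's worth, the usual workaround is to fix the receiver before passing the function around; a minimal sketch using ES5's Function.prototype.bind (variable names are illustrative):

        // A bare function reference carries no receiver; "this" is supplied
        // at call time. bind() produces a wrapper with the receiver baked in.
        var boundWoof = dog.woof.bind(dog);
        boundWoof();              // appends "Arf"
        setTimeout(boundWoof, 0); // also appends "Arf"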


  • Relational vs. Dimensional Databases, what's the difference?

    - by grautur
    I'm trying to learn about OLAP and data warehousing, and I'm confused about the difference between relational and dimensional modeling. Is dimensional modeling basically relational modeling, but allowing for redundant/un-normalized data?

    For example, let's say I have historical sales data on (product, city, # sales). I understand that the following would be a relational point of view:

        Product | City          | # Sales
        Apples  | San Francisco | 400
        Apples  | Boston        | 700
        Apples  | Seattle       | 600
        Oranges | San Francisco | 550
        Oranges | Boston        | 500
        Oranges | Seattle       | 600

    While the following is a more dimensional point of view:

        Product | San Francisco | Boston | Seattle
        Apples  | 400           | 700    | 600
        Oranges | 550           | 500    | 600

    But it seems like both points of view would nonetheless be implemented in an identical star schema:

        Fact table: Product ID, Region ID, # Sales
        Product dimension: Product ID, Product Name
        City dimension: City ID, City Name

    And it's not until you start adding some additional details to each dimension that the differences start popping up. For instance, if you wanted to track regions as well, a relational database would tend to have a separate region table, in order to keep everything normalized:

        City dimension: City ID, City Name, Region ID
        Region dimension: Region ID, Region Name, Region Manager, # Regional Stores

    While a dimensional database would allow for denormalization to keep the region data inside the city dimension, in order to make it easier to slice the data:

        City dimension: City ID, City Name, Region Name, Region Manager, # Regional Stores

    Is this correct?
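    For concreteness, a sketch of the star schema described above in SQL (table and column names are illustrative):

        -- Fact table: one row per (product, city) combination with the measure
        CREATE TABLE fact_sales (
            product_id INT,
            city_id    INT,
            sales      INT
        );

        CREATE TABLE dim_product (
            product_id   INT PRIMARY KEY,
            product_name VARCHAR(50)
        );

        -- The dimensional (denormalized) variant folds the region attributes
        -- into the city dimension instead of referencing a region table.
        CREATE TABLE dim_city (
            city_id         INT PRIMARY KEY,
            city_name       VARCHAR(50),
            region_name     VARCHAR(50),
            region_manager  VARCHAR(50),
            regional_stores INT
        );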


  • What exactly is the difference between the Dreamhost IDE and Netbeans?

    - by mikemick
    I just started using NetBeans about a week ago and really like it so far. Now I'm seeing something about Dreamhost IDE, which I guess is a program built on the NetBeans platform. I use Dreamhost as the hosting company for many of my projects. What is the benefit of using Dreamhost IDE over NetBeans? Documentation on the software is non-existent from what I can tell (not even a mention in the Dreamhost wiki). All I was able to find was a short description on a Sourceforge download page, and a short silent video on YouTube demoing it. So I guess I'm asking: what features does it bring to the table, and what is the difference between it and NetBeans? The description on the Sourceforge page is as follows (typos retained)...

        DreamHost IDE is php and ruby integrated development environment built on NetBeans IDE and provides easy deploy of your applications to the DreamHost services. Also provides you an easy eay hew to setup these services.

    Maybe the answer is in the description, and I just don't comprehend it?


  • Is there a significant mechanical difference between these faux simulations of default parameters?

    - by ccomet
    C# 4.0 introduced a very fancy and useful thing by allowing default parameters in methods, but C# 3.0 doesn't have them. So if I want to simulate "default parameters", I have to create two versions of a method: one with the extra arguments and one without. There are two ways I could do this.

    Version A - call the other method:

        public string CutBetween(string str, string left, string right, bool inclusive)
        {
            return str.CutAfter(left, inclusive).CutBefore(right, inclusive);
        }

        public string CutBetween(string str, string left, string right)
        {
            return CutBetween(str, left, right, false);
        }

    Version B - copy the method body:

        public string CutBetween(string str, string left, string right, bool inclusive)
        {
            return str.CutAfter(left, inclusive).CutBefore(right, inclusive);
        }

        public string CutBetween(string str, string left, string right)
        {
            return str.CutAfter(left, false).CutBefore(right, false);
        }

    Is there any real difference between these? This isn't a question about optimization or resource usage (though part of it is my general goal of remaining consistent); I don't even think there is any significant effect in picking one method or the other, but I find it wiser to ask about these things than perchance faultily assume.
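    For comparison, a sketch of what C# 4.0's optional parameters make of this, collapsing both overloads into one method (CutAfter and CutBefore are the asker's own extension methods):

        // C# 4.0: the compiler substitutes the default at each call site,
        // so a single declaration serves both call shapes.
        public string CutBetween(string str, string left, string right, bool inclusive = false)
        {
            return str.CutAfter(left, inclusive).CutBefore(right, inclusive);
        }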


  • VBA: Difference between two ways of declaring a new object? (Trying to understand why my solution works)

    - by Matt
    I was creating a new object within a loop and adding that object to a collection; but when I read back the collection afterward, it was always filled entirely with the last object I had added. I've come up with two ways around this, but I simply do not understand why my initial implementation was wrong.

    Original:

        Dim oItem As Variant
        Dim sOutput As String
        Dim i As Integer
        Dim oCollection As New Collection

        For i = 0 To 10
            Dim oMatch As New clsMatch
            oMatch.setLineNumber i
            oCollection.Add oMatch
        Next

        For Each oItem In oCollection
            sOutput = sOutput & "[" & oItem.lineNumber & "]"
        Next
        MsgBox sOutput

    This resulted in every lineNumber being 10; I was obviously not creating new objects, but instead using the same one each time through the loop, despite the declaration being inside of the loop. So, I added Set oMatch = Nothing immediately before the Next line, and this fixed the problem; it was now 0 to 10. So if the old object was explicitly destroyed, then it was willing to create a new one? I would have thought the next iteration through the loop would cause anything declared within the loop to be destroyed due to scope?

    Curious, I tried another way of declaring a new object: Dim oMatch As clsMatch: Set oMatch = New clsMatch. This, too, results in 0 to 10. Can anyone explain to me why the first implementation was wrong?
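    The usual fix is to separate declaration from instantiation and create the instance explicitly on every pass; a minimal sketch (clsMatch and setLineNumber are the asker's own class and method):

        Dim oMatch As clsMatch            ' declaration only, no auto-instancing
        For i = 0 To 10
            Set oMatch = New clsMatch     ' a genuinely new object each iteration
            oMatch.setLineNumber i
            oCollection.Add oMatch
        Next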


  • Difference between HttpContext.Current.User and Thread.CurrentPrincipal, and when to use them?

    - by yamspog
    I have just recently run into an issue running an ASP.NET web app under Visual Studio 2008: I get the error 'type is not resolved for member...customUserPrincipal'. Tracking down various discussion groups, it seems that there is an issue with Visual Studio's web server when you assign a custom principal to Thread.CurrentPrincipal. In my code, I now use:

        HttpContext.Current.User = myCustomPrincipal
        //Thread.CurrentPrincipal = myCustomPrincipal

    I'm glad that I got the error out of the way, but it begs the question: what is the difference between these two ways of setting a principal? There are other Stack Overflow questions related to the differences, but they don't get into the details of the two approaches. I did find one tantalizing post with the following grandiose comment, but no explanation to back up its assertions:

        Use HttpContext.Current.User for all web (ASPX/ASMX) applications.
        Use Thread.CurrentPrincipal for all other applications like WinForms, console and Windows service applications.

    Can any of you security/.NET gurus shed some light on this subject?
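    A common convention in ASP.NET code is to assign both so that anything reading either property sees the same identity; a hedged sketch placed in Global.asax (the user and role names are illustrative):

        // Requires: using System.Security.Principal; using System.Threading;
        protected void Application_AuthenticateRequest(object sender, EventArgs e)
        {
            IPrincipal principal = new GenericPrincipal(
                new GenericIdentity("someUser"), new[] { "SomeRole" });
            HttpContext.Current.User = principal; // per-request, web-aware
            Thread.CurrentPrincipal = principal;  // per-thread, framework-wide
        }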


  • What is the difference between these two linq implementations?

    - by Mahesh Velaga
    I was going through Jon Skeet's Reimplementing LINQ to Objects series. In the article implementing Where, I found the following snippets, but I don't get what advantage we gain by splitting the original method into two.

    Original method:

        // Naive validation - broken!
        public static IEnumerable<TSource> Where<TSource>(
            this IEnumerable<TSource> source,
            Func<TSource, bool> predicate)
        {
            if (source == null)
            {
                throw new ArgumentNullException("source");
            }
            if (predicate == null)
            {
                throw new ArgumentNullException("predicate");
            }
            foreach (TSource item in source)
            {
                if (predicate(item))
                {
                    yield return item;
                }
            }
        }

    Refactored method:

        public static IEnumerable<TSource> Where<TSource>(
            this IEnumerable<TSource> source,
            Func<TSource, bool> predicate)
        {
            if (source == null)
            {
                throw new ArgumentNullException("source");
            }
            if (predicate == null)
            {
                throw new ArgumentNullException("predicate");
            }
            return WhereImpl(source, predicate);
        }

        private static IEnumerable<TSource> WhereImpl<TSource>(
            this IEnumerable<TSource> source,
            Func<TSource, bool> predicate)
        {
            foreach (TSource item in source)
            {
                if (predicate(item))
                {
                    yield return item;
                }
            }
        }

    Jon says it is for eager validation, while deferring the rest of the work. But I don't get it. Could someone please explain in a little more detail what the difference is between these two functions, and why the validation is performed eagerly in one and not in the other?

    Conclusion/Solution: I was confused because I didn't understand which methods the compiler turns into iterator blocks. I had assumed it was based on a method's signature, such as returning IEnumerable<T>. Based on the answers, I now get it: a method is an iterator block if it uses yield statements.
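    The practical difference only shows at call time; a small sketch of when the ArgumentNullException surfaces with each version:

        IEnumerable<int> source = null;

        // Naive version: the whole method is an iterator block, so this call
        // merely constructs the state machine -- no exception is thrown yet.
        // The refactored version would throw right here instead.
        var query = source.Where(x => x > 0);

        // With the naive version, the null check only runs once iteration starts:
        foreach (var x in query) { } // throws ArgumentNullException here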


  • Is it possible to create a Mac OS specific CSS to fix font differences?

    - by Gabriel
    I'm working on a project with a designer, and he insisted on using a specific font for titles and various elements on the page, so we're using a font kit embedded with @font-face. It works perfectly on PC (Firefox, IE 7 and 8, Chrome, Safari), but on Mac OS (Safari and Firefox) the fonts are not vertically aligned the same way. After looking around the Web, I didn't find any solution for this except "there have always been differences between browsers and platforms; live with it". I know that fonts are never rendered exactly the same across platforms, but this time it's not something subtle like the font looking bolder. The font looks as if its baseline is completely different between Windows and Mac OS X: at a size of 16px, the font sits 3px higher on Mac OS than on PC.

    So I'm looking for a backup solution: is there a way to create a CSS file specifically for Mac OS users? I do not want to target only Safari, because Safari on PC is fine and Firefox on Mac is not. Or if you have a solution to fix the baseline difference that does not require a specific CSS file, I'd be happy to hear it. Thanks!
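    One pragmatic approach is to detect the platform in JavaScript and hang a class on the root element for the stylesheet to target; a minimal sketch (the class name is illustrative):

        // navigator.platform reports "MacIntel"/"MacPPC" on Mac OS X,
        // so any Mac browser -- Safari or Firefox -- picks up the class.
        if (navigator.platform.indexOf('Mac') !== -1) {
            document.documentElement.className += ' mac';
        }

    A rule such as .mac h1 { ... } can then adjust the baseline on Mac only.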


  • How to combine two rows and calculate the time difference between two timestamp values in MySQL?

    - by Nadar
    I have a situation that I'm sure is quite common, and it's really bothering me that I can't figure out how to do it or what to search for to find a relevant example/solution. I'm relatively new to MySQL (having used MSSQL and PostgreSQL earlier) and every approach I can think of is blocked by some feature lacking in MySQL.

    I have a "log" table that simply lists many different events with their timestamp (stored as a datetime type). There are lots of columns in the table not relevant to this problem, so let's say we have a simple table like this:

        CREATE TABLE log (
            id INT NOT NULL AUTO_INCREMENT,
            name VARCHAR(16),
            ts DATETIME NOT NULL,
            eventtype VARCHAR(25),
            PRIMARY KEY (id)
        )

    Let's say that some rows have eventtype = 'start' and others have eventtype = 'stop'. What I want to do is to somehow couple each "start" row with its "stop" row and find the time difference between the two (and then sum the durations per name, but that's not where the problem lies). Each "start" event should have a corresponding "stop" event occurring at some stage later than the "start" event, but because of problems/bugs/crashes with the data collector it could be that some are missing. In that case I would like to disregard the event without a "partner". That means that given the data:

        foo, 2010-06-10 19:45, start
        foo, 2010-06-10 19:47, start
        foo, 2010-06-10 20:13, stop

    ...I would like to just disregard the 19:45 start event, and not get two result rows that both use the 20:13 stop event as the stop time.

    I've tried to join the table with itself in different ways, but the key problem for me seems to be finding a way to correctly identify the corresponding "stop" event for the "start" event of a given "name". The problem is exactly the same as if you had a table of employees stamping in and out of work and wanted to find out how long they actually were at work. I'm sure there must be well-known solutions to this, but I can't seem to find them...
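    One way to sketch this in MySQL: pair each 'stop' row with the latest earlier 'start' for the same name, which automatically disregards a 'start' superseded by a newer one (a hedged sketch against the log table above; it assumes 'stop' events themselves are not duplicated):

        SELECT p.name,
               MAX(s.ts) AS start_ts,
               p.ts      AS stop_ts,
               TIMESTAMPDIFF(SECOND, MAX(s.ts), p.ts) AS duration_sec
        FROM   log p
        JOIN   log s
               ON  s.name = p.name
               AND s.eventtype = 'start'
               AND s.ts < p.ts
        WHERE  p.eventtype = 'stop'
        GROUP  BY p.id, p.name, p.ts;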


  • What is the difference between _tmain() and main() in C++?

    - by joshcomley
    If I run my C++ application with the following main() function, everything is OK:

        int main(int argc, char *argv[])
        {
            cout << "There are " << argc << " arguments:" << endl;
            // Loop through each argument and print its number and value
            for (int i = 0; i < argc; i++)
                cout << i << " " << argv[i] << endl;
            return 0;
        }

    I get what I expect and my arguments are printed out. However, if I use _tmain:

        int _tmain(int argc, char *argv[])
        {
            cout << "There are " << argc << " arguments:" << endl;
            // Loop through each argument and print its number and value
            for (int i = 0; i < argc; i++)
                cout << i << " " << argv[i] << endl;
            return 0;
        }

    it just displays the first character of each argument. What is the difference causing this?
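    The cause is that in a Unicode build _tmain expands to wmain, whose arguments are wide strings; read through a char*, only the first byte of each UTF-16 code unit survives, hence "the first character of each argument". A sketch of the matching signature, assuming a Visual C++ TCHAR build:

        #include <tchar.h>
        #include <iostream>

        // _TCHAR expands to wchar_t when UNICODE is defined, char otherwise,
        // so the argument type always matches what the runtime passes in.
        int _tmain(int argc, _TCHAR *argv[])
        {
            for (int i = 0; i < argc; i++)
                std::wcout << i << L" " << argv[i] << std::endl;
            return 0;
        }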


  • Is there any fundamental difference between piping in mac and linux?

    - by Mohammad Moghimi
        ps -e | grep bash

    Sample output from a Linux machine:

         1128 pts/14   00:00:00 bash
         7491 pts/7    00:00:00 bash
        12651 pts/14   00:00:00 bash
        16145 pts/2    00:00:00 bash

    Sample output from a Mac machine:

        58352 ttys000    0:00.09 login -pfl username /bin/bash -c exec -la bash /bin/bash
        58353 ttys000    0:00.02 -bash
        58390 ttys000    0:00.00 grep bash
        20372 ttys005    0:00.06 login -pfl username /bin/bash -c exec -la bash /bin/bash
        20373 ttys005    0:00.18 -bash

    My question is: why do we see "grep bash" in the second case but not the first?
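    Incidentally, the usual trick for keeping grep's own process out of the listing works on both systems; a small sketch:

        # The character class keeps the grep process's own command line
        # ("grep [b]ash") from matching the pattern bash.
        ps -e | grep '[b]ash'

        # pgrep, available on both Linux and Mac OS X, avoids the issue entirely:
        pgrep -l bash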


  • What is the difference between Anycast and GeoDNS / GeoIP wrt HA?

    - by Riyad
    Based on the Wikipedia description of Anycast, it includes both the distribution of a domain-name-to-many-IP mapping across many DNS servers, and replying to clients with the most geographically close (or fastest) server. In the context of a globally distributed, highly available site like google.com (or any CDN service with many global edge locations), these sound like the two key features one would need.

    DNS services like Amazon's Route53, EasyDNS and DNSMadeEasy all advertise themselves as Anycast-enabled networks. My assumption was therefore that each of these DNS services transparently offers those two killer features: multi-IP-to-domain mapping AND routing clients to the closest node. However, each of these services seems to separate out the two functionalities, referring to the second one (routing clients to the closest node) as "GeoDNS", "GeoIP" or "Global Traffic Director" and charging extra for it. If a core tenet of an Anycast-capable system is to already do this, why is this functionality being earmarked as an extra feature? What is this "GeoDNS" feature doing that a standard Anycast DNS service won't do (according to the definition of Anycast from Wikipedia -- I understand what is being advertised, just not why it isn't implied already)?

    I get extra confused when a DNS service like Route53, which doesn't support this nebulous "GeoDNS" feature, lists functionality like:

        Fast -- Using a global anycast network of DNS servers around the world, Route 53 is designed to automatically route your users to the optimal location depending on network conditions. As a result, the service offers low query latency for your end users, as well as low update latency for your DNS record management needs.

    ...which sounds exactly like what GeoDNS is intended to do, even though geographically directing clients is something they explicitly don't support yet. Ultimately I am looking for the two following features from a DNS provider:

        1. Map multiple IP addresses to a single domain name (like google.com, amazon.com, etc. do).
        2. Respond to client requests for that domain with the IP address of the server nearest to the requester.

    As mentioned, it seems like this should all be part of an "Anycast" DNS service (which all of these services are), but the features and marketing I see from them suggest otherwise, making me think I need to learn a bit more about how DNS works before making a deployment choice. Thanks in advance for any clarifications.


  • iproute2 rules and iptables NAT... what is the difference?

    - by Jakobud
    We have two different ISP connections. Our previous "IT guy" set up our firewall like so: when /etc/rc.local was executed on startup, it ran a bunch of ip rule add and ip route add commands in order to route certain internal hosts over certain ISP connections. Then, at the end of /etc/rc.local, it executed our iptables firewall rules, which were generated by Firewall Builder. These iptables rules have both policy and NAT rules set up in them.

    What I don't understand is: why did he use iproute2 to specify rules and routes, but also specify NAT rules in iptables? Why didn't he just do it all in one or the other? Could he have gotten rid of the iproute2 rules and routes and just put those same rules into the iptables NAT settings?
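    For context, the two mechanisms act at different layers: ip rule/ip route decide which path a packet leaves by, while iptables NAT rewrites addresses on whichever path was chosen, so a multi-ISP setup typically needs both. A hedged sketch of the pairing (interfaces, addresses, and the table name are illustrative; the table must be declared in /etc/iproute2/rt_tables):

        # Policy routing: traffic from one internal subnet leaves via ISP B.
        ip rule add from 192.168.2.0/24 table ispb
        ip route add default via 203.0.113.1 dev eth2 table ispb

        # NAT: masquerade whatever exits each ISP-facing interface.
        iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
        iptables -t nat -A POSTROUTING -o eth2 -j MASQUERADE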


  • What is the difference between the "Network install" and "Network Boot" options in virt-manager when installing a new virtual machine?

    - by Marwan
    From my understanding of PXE (Preboot Execution Environment), I know that there must first be some negotiation between the booting client and a DHCP server to obtain network parameters (IP address, etc.) so that the client can fetch the boot loader and kernel image from the boot server. In other words, even though it is a "virtual" machine, at this point it behaves like a "bare metal" machine, so there must be some pre-boot mechanism for those negotiations to take place, and this is exactly what PXE is all about.

    When I think about the "Network install" option, I can't figure out how the new VM would be able to fetch the boot images (bootloader and kernel) without the previously mentioned mechanism. So, here is a short version of the question: when provisioning a new virtual machine, how do you expect the "Network install" option in virt-manager to work behind the scenes? Many thanks.
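    Roughly speaking, the two options map onto different virt-install mechanisms; a hedged sketch (the URL and names are illustrative):

        # "Network install": the management tool itself downloads the kernel
        # and initrd from an install tree over HTTP/FTP/NFS and boots the
        # guest from them directly -- no PXE infrastructure is involved.
        virt-install --name demo --ram 1024 --disk size=8 \
                     --location http://mirror.example.com/distro/os/x86_64/

        # "Network boot": the guest's virtual NIC PXE-boots, so a DHCP+TFTP
        # server must exist on the guest's network.
        virt-install --name demo --ram 1024 --disk size=8 --pxe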


  • How do I set up ZScreen to upload images to my MediaWiki?

    - by Johan
    I've set up a MediaWiki with all the correct settings and enabled image uploading. When I do this manually, it all works OK. I want to be able to upload screenshots automatically into my MediaWiki using ZScreen, and there is an option to do this. I press Test..., and this works OK; however, I'm unable to tell ZScreen to actually send the picture to my MediaWiki, as there's no option to select MediaWiki as my destination. How do I set up ZScreen to upload to my MediaWiki?


  • Can you set up a gaming LAN using OpenVPN installed in a VMware guest OS and play the game on the host OS?

    - by Coder
    I would like to set up a gaming VPN. That is, I have some games that work over LAN and would like to play them with people who are not on my LAN. I know I can do this with OpenVPN. My ultimate goal would be to run OpenVPN portably on my host OS and not even need any virtualization. As such I don't want to install it on my host, but I'm fine with running it portably. I'm even fine with temporarily adding registry keys and then running a .reg file to remove those entries once I'm done.

    To this effect I have installed OpenVPN on a virtual machine and diffed the registry. I then manually (using a .reg file) added all the keys that seem important on my host OS and copied the OpenVPN installation folder onto my host machine. When I try to run OpenVPN GUI 1.0.3 as a test, it says "Error opening registy for reading (HKLM\SOFTWARE\OpenVPN). OpenVPN is probably not installed". I verified that that key is indeed in the registry with all subkeys, and it looks correct. I have tried running the GUI as an administrator and in compatibility mode with no success. I am running Windows 7.

    If this fails, I would be happy with installing OpenVPN on a virtual machine in VMware, but the key point is that I will be running the game installed on my host machine. The first question for this option is whether this is even possible. The second is that I can't get the VM to have internet access if I use bridging, but I can if I use NAT. Is it possible to do this game-VPN setup with a VMware guest OS running under NAT?

    Summary of questions:

    - Is it possible to run OpenVPN portably, and if so, what did I miss above?
    - If it's not possible to run it portably, can I set up a gaming LAN by installing OpenVPN in a guest OS with NAT, and how can I do this?
    - If the above is not possible, can I install OpenVPN in a guest using bridging, and if so, how can I set this up with a Windows 7 host and Windows XP guest? Currently I can't get the guest to access the internet in bridging mode, though it works in NAT mode.
    - In general, is there any good documentation on setting up a gaming LAN with OpenVPN (I am using 2.1.4)? I have never set up a VPN of any sort before, so any help would be much appreciated.

    Thanks!
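    On the gaming side itself, LAN discovery usually relies on Ethernet broadcasts, which only cross the tunnel in tap (layer-2) mode; a minimal sketch using OpenVPN 2.x static-key mode (addresses, hostname, and file names are illustrative):

        # server.ovpn -- one end of a point-to-point layer-2 tunnel.
        # "dev tap" carries Ethernet frames, so game discovery broadcasts pass.
        dev tap
        ifconfig 10.8.0.1 255.255.255.0
        # Generate the shared key once with: openvpn --genkey --secret static.key
        secret static.key

        # client.ovpn -- the other end.
        dev tap
        remote vpn.example.com
        ifconfig 10.8.0.2 255.255.255.0
        secret static.key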


  • Is it possible to set the date on a Linux machine to the year 2040?

    - by Daryl Spitzer
    I need to be able to set the date on Ubuntu (8.04.4 LTS) to the year 2040 (to test something that isn't relevant to this question). Is that possible? I can run:

        $ sudo date -s "15 JAN 2038 18:00:00"
        Fri Jan 15 18:00:00 PST 2038

    ...but:

        $ sudo date -s "15 JAN 2039 18:00:00"
        date: invalid date `15 JAN 2039 18:00:00'

    Is the limit somewhere in 2038 (or prior to Jan. 15, 2039)? Does this change with different versions of Linux?
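    The cutoff is the classic year-2038 problem: a signed 32-bit time_t overflows 2^31 - 1 seconds after the Unix epoch. A quick sketch of the arithmetic with GNU tools:

        $ echo $(( 2**31 - 1 ))
        2147483647
        $ date -u -d @2147483647    # the last representable instant
        Tue Jan 19 03:14:07 UTC 2038

    A 64-bit kernel and userland (with a 64-bit time_t) can represent dates far beyond 2040.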


  • Why do I need to set up Autologon values in the registry twice before it works, and can I fix this?

    - by jJack
    Background: As part of an automated testing suite I am building, I need to set up Autologon on my virtual machines "on demand". By on demand, I mean that I don't want to pre-configure my VM or any snapshot to have Autologon set up already, for security reasons and also a huge business case.

    My solution so far: I'm copying a script to the guest machine and then using Sysinternals PsExec to execute it. The script is:

        reg add "hklm\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /f /v DefaultUserName /t REG_SZ /d myusername
        reg add "hklm\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /f /v DefaultPassword /t REG_SZ /d myfakepassword
        reg add "hklm\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /f /v DefaultDomainName /t REG_SZ /d mydomain
        reg add "hklm\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /f /v ForceAutoLogon /t REG_SZ /d 1
        reg add "hklm\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /f /v AutoAdminLogon /t REG_SZ /d 1
        reg add "hklm\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\AutoLogonChecked" /f /ve /d 1

    Note: I don't believe AutoLogonChecked is required for machines post Windows 2000, but I'm doing it just in case for now. Maybe ForceAutoLogon isn't required either; not sure yet.

    The problem: I see PsExec execute this properly, and all the values are in the registry; however, when I restart the machine, the user isn't automatically logged on. When I run this a second time and then restart the machine, the user is finally logged on. A diff between the registry states shows that the first time I run this, it is missing both the "1" for AutoAdminLogon and the DefaultPassword key. The second time I execute it, these values are correctly intact as I intended.

    So, what is going on here? Is this expected? This post claims in the end that it really all just works (the problem there was that a logoff script was resetting the values), but that doesn't seem to apply to me. Note this seems unique to Windows 7; it does not occur in Windows XP. Also note that you don't need PsExec to recreate the issue -- just modify the registry yourself.

    EDIT/update:

    - Log in interactively and run the script (so, not executing it remotely): logging off automatically logs me back in (so, it works).
    - Remotely execute the script in the guest while I'm interactively logged in: logging off automatically logs me back in (so, it works).
    - Remotely execute the script in the guest with a non-interactive session: if I log in afterwards (so, interactive now) and then log back off, it logs me back in (so, it then works).

    EDIT/update 2: This only occurs for Win7 x86, Win7 x64 and Win8 x64. It does not occur for Windows XP.


  • How to set per user mail quota for postfix using policyd v2?

    - by ACHAL
    I have configured Cluebringer 2.0.7 (policyd v2) with MySQL and httpd, and all services are running well. But now I want to set a per-user quota for outgoing mail, restricting each user to a fixed number of messages. I have tried to set up a quota for my host r10.4reseller.org, but it is not working.

    Quota List:

        Policy:   Default Outbound
        Name:     Default Outbound
        Track:    Sender:user@domain
        Period:   60
        Verdict:  REJECT
        Data:     (empty)
        Disabled: no

    Quota Limits:

        Type:          MessageCount
        Counter Limit: 1
        Disabled:      no

    Do I need to change any other settings for the quota to take effect?


  • What's the difference between a wifi access point and station?

    - by Earlz
    I noticed that my (rooted) modem has some hidden modes for wifi. It has the default (and, without rooting, the only) wireless access point mode, but it also has settings for repeater, ad-hoc, and station. What I'm really curious about is this station mode and how it differs from access point. I did a cursory search and didn't come up with any significant differences, other than that they are two distinct modes on many wireless chipsets. What is this station mode and how does it differ from access point?


  • Where does $PATH get set in OS X 10.6 Snow Leopard?

    - by Andrew
    I type echo $PATH on the command line and get:

        /opt/local/bin:/opt/local/sbin:/Users/andrew/bin:/usr/local/bin:/usr/local/mysql/bin:/usr/local/pear/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin:/opt/local/bin:/usr/local/git/bin

    I'm wondering where this is getting set, since my .bash_login file is empty. I'm particularly concerned that, after installing MacPorts, it installed a bunch of junk in /opt. I don't think that directory even exists in a normal Mac OS X install.

    Update: Thanks to jtimberman for correcting my echo $PATH statement.
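    On Mac OS X the baseline PATH for shells is assembled by /usr/libexec/path_helper from /etc/paths and the files in /etc/paths.d, and per-user startup files then prepend to it; a quick sketch of where to look:

        # System-wide pieces, concatenated by path_helper:
        cat /etc/paths
        ls /etc/paths.d/

        # Per-user additions for login shells -- bash reads only the first
        # of these that exists: ~/.bash_profile, ~/.bash_login, ~/.profile.
        # (The MacPorts installer typically appends its /opt/local entries
        # to one of them, which would explain the /opt paths.)
        grep -n PATH ~/.bash_profile ~/.bash_login ~/.profile 2>/dev/null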


  • In Apache, how do I set up password protection?

    - by rphello101
    I'm attempting to set up a server using Apache. In the conf file, I inserted the following in order to make Apache require a password:

        <Directory />
            Options FollowSymLinks
            AllowOverride AuthConfig
            AuthType Basic
            AuthName "Restricted Files"
            AuthBasicProvider file
            AuthUserFile C:\...\serverpass.txt
            Require user Admin
        </Directory>

    I created the username and password with htpasswd -c. When I go to localhost, though, why doesn't it prompt me for a username and password?
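    For comparison, a hedged sketch of a more conventional layout: protect the DocumentRoot (or the specific directory) rather than the filesystem root, and use forward slashes in the Windows paths (all paths here are illustrative):

        <Directory "C:/Apache/htdocs">
            AuthType Basic
            AuthName "Restricted Files"
            AuthBasicProvider file
            AuthUserFile "C:/Apache/conf/serverpass.txt"
            Require user Admin
        </Directory>

    Note that Apache only rereads its configuration on restart, so restart the service after editing, and the AuthUserFile path must match where htpasswd -c wrote the file.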


  • What's the difference between Host and HostName in SSH Config?

    - by Bill Jobs
    The man page says this:

        Host
            Restricts the following declarations (up to the next Host keyword) to be only for those hosts that match one of the patterns given after the keyword. If more than one pattern is provided, they should be separated by whitespace. A single `*' as a pattern can be used to provide global defaults for all hosts. The host is the hostname argument given on the command line (i.e. the name is not converted to a canonicalized host name before matching). A pattern entry may be negated by prefixing it with an exclamation mark (`!'). If a negated entry is matched, then the Host entry is ignored, regardless of whether any other patterns on the line match. Negated matches are therefore useful to provide exceptions for wildcard matches. See PATTERNS for more information on patterns.

        HostName
            Specifies the real host name to log into. This can be used to specify nicknames or abbreviations for hosts. If the hostname contains the character sequence `%h', then this will be replaced with the host name specified on the command line (this is useful for manipulating unqualified names). The default is the name given on the command line. Numeric IP addresses are also permitted (both on the command line and in HostName specifications).

    For example, when I want to create an SSH config for GitHub, what should Host and HostName be, respectively?
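    For the GitHub case, a minimal sketch: Host is the nickname you type, HostName is the real address it resolves to (the alias and key path are illustrative):

        # ~/.ssh/config
        # "ssh github" now connects to github.com as user git.
        Host github
            HostName github.com
            User git
            IdentityFile ~/.ssh/id_rsa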

