Search Results

Search found 15558 results on 623 pages for 'basic authentication'.


  • Multiple WAN interfaces on SonicWall TZ 100?

    - by Chad Decker
    I'm using a SonicWall TZ 100 with a basic configuration of X0 for the LAN and X1 for the WAN. The WAN uses DHCP to obtain its routable IP address. I want to obtain a second routable IP from my ISP. I'm in luck because my cable company will provide me with an additional dynamic IP for $5/mo. How do I bind this IP to my SonicWall? My additional dynamic IP will not be consecutive to the original one. It won't even be on the same class C. I think what I want to do is to use one of the empty ports/interfaces (X2, X3, or X4), tell that interface to use DHCP, and then add that interface to the WAN "zone". I can't figure out how to do this though. Here's what I've tried so far:
    (1) I've looked in Network Interfaces. I see X0 and X1 but the other unused interfaces don't show up. I don't see an "Add" button to add the new interfaces.
    (2) I've looked in Network Zones. I see that X0, X2, X3, X4 are in the LAN zone. I tried to drag X3 into the WAN zone but I can't. Nor does clicking the "Configure" button allow me to move an unused interface from LAN to WAN.
    (3) I've read the post entitled Splitting up multiple WAN's on Sonicwall. This doesn't seem applicable to me.
    Any thoughts?

    Read the article

  • OS X borked; need to backup outside of Time Machine

    - by rlbgator
    Quick Background: iMac G5 (the white one; 4 years old?) running Leopard 10.5.something. Time Machine started failing on me, and every time I touch the Finder, things beachball like crazy. Booting from the install disk and then using Disk Utility to "Repair Disk" also fails. I'm left with the conclusion that I have a corrupt file somewhere important that's (i) keeping TM from working and (ii) messing with basic functionality. I am not (yet) savvy enough in OS X to know which logs to look in, or how to decipher them - but 'corrupt file' seems to be the likely case, based on my reading of apple.com forum threads. So I think I need to back up outside of Time Machine, then install a fresh OS X on a new drive (or maybe SpinRite the current drive?). I'm able to attach a (non-Time Machine) external USB drive, so I dragged all 3 users' folders to it... am I done backing up? Am I going to have a massive permissions problem trying to put things back together after a reinstall? Thanks for reading.
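    For the "am I done backing up?" part, a hedged sketch of a more metadata-faithful copy than drag-and-drop (the volume name /Volumes/BackupDrive is a placeholder): on Leopard, the bundled rsync can preserve ownership, permissions, and extended attributes/resource forks, which is what limits the permissions mess after a reinstall:
        sudo rsync -aE --progress /Users/ /Volumes/BackupDrive/Users/
    Running it with sudo keeps the original owners. After reinstalling, recreating the accounts in the same order (the first account gets UID 501, the next 502, and so on) and copying back with the same flags should line ownership up again.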

    Read the article

  • Building a Web proxy to get around same-origin restrictions for collaborative Webapp based on a MEAN stack

    - by Lew Cohen
    Can anyone point to books, articles, blogs, or even applications - open-source or proprietary - that detail building a Web proxy? This specific proxy will exist to get around the same-origin restrictions that prevent, for instance, loading a given Website into an <iframe> in a Webapp. This Webapp is a collaborative application in which a group of users log in to the app's Website and can then load different Websites into this app's <iframe> and do various collaborative things (e.g., several users simultaneously browsing a Website, in synch). The Webapp itself is built on a MEAN stack (MongoDB, Express, AngularJS, and Node.js). The purpose of this proxy is not to do anonymous browsing or to bypass censorship. Information on how to build such a vehicle seems not to be readily available from my research. I've come across Glype but am not sure whether this is a feasible solution. I don't want to reinvent the wheel, so if a product is available for purchase, great. Else, we'd need to build one. The one that seems to be close is http://www.corsproxy.com. In effect, we'd like to re-create this since it evidently does what's needed. I don't care what server-side technology is used. Our app is MEAN-based, if that has any bearing. Also, the proxy has to obviously honor basic security considerations (user cookies, etc.) and eventually be scalable. So, anyone know of any sources that would detail how to build one of these? Is it even worth building if something already exists? If so, what would be a good candidate? Any other issues that should be considered with this proxy/application? Thanks a lot!

    Read the article

  • Squid: The request or reply is too large

    - by Ueli
    I have set up a reverse proxy with an Apache in the background (on the same server). Everything works great, but I can't open one page; I get the error "The request or reply is too large." My cache.log contains:
        2010/12/09 15:28:29| WARNING: http.c:971: HTTP header too large
        2010/12/09 15:29:03| ctx: enter level 0: 'http://server/admin/cms/nav'
        2010/12/09 15:29:03| httpProcessReplyHeader: Too large reply header
        2010/12/09 15:29:03| ctx: exit level 0
    In my squid.conf I disabled the limits on the request and reply header, without success:
        reply_body_max_size 0 allow all
        request_body_max_size 0
    Does someone know why that doesn't work? Thank you very much. Squid version:
        Squid Cache: Version 2.7.STABLE3
        configure options: '--prefix=/usr' '--exec_prefix=/usr' '--bindir=/usr/sbin' '--sbindir=/usr/sbin' '--libexecdir=/usr/lib/squid' '--sysconfdir=/etc/squid' '--localstatedir=/var/spool/squid' '--datadir=/usr/share/squid' '--enable-async-io' '--with-pthreads' '--enable-storeio=ufs,aufs,coss,diskd,null' '--enable-linux-netfilter' '--enable-arp-acl' '--enable-epoll' '--enable-removal-policies=lru,heap' '--enable-snmp' '--enable-delay-pools' '--enable-htcp' '--enable-cache-digests' '--enable-underscores' '--enable-referer-log' '--enable-useragent-log' '--enable-auth=basic,digest,ntlm,negotiate' '--enable-negotiate-auth-helpers=squid_kerb_auth' '--enable-carp' '--enable-follow-x-forwarded-for' '--with-large-files' '--with-maxfd=65536' 'amd64-debian-linux' 'build_alias=amd64-debian-linux' 'host_alias=amd64-debian-linux' 'target_alias=amd64-debian-linux' 'CFLAGS=-Wall -g -O2' 'LDFLAGS=' 'CPPFLAGS='
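    The two directives shown above limit message bodies, not headers; in Squid 2.7 the header limits are separate directives. A minimal sketch of raising them (64 KB is an arbitrary illustrative value, not a recommendation):
        request_header_max_size 64 KB
        reply_header_max_size 64 KB
    followed by squid -k reconfigure (or a restart) for the change to take effect.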

    Read the article

  • Missing BootMgr in Vista

    - by Selase
    I'm in really deep trouble here and would appreciate any advice available from everyone out there. A message just popped up on my screen and I had to restart my laptop; upon restarting, the BootMgr got corrupted. I'm running Windows Vista 32-bit, by the way. I got onto Google with a friend's PC and found two basic ways of fixing it. The first one, which has Windows fix it automatically using Startup Repair, ends with the error message: "Startup Repair cannot repair this computer automatically". The second option, which requires me to rebuild the BCD, scans my system and finds the operating system on drive D:\Windows, which I believe should be C:. If I hit Y (yes) for the rebuild process to take place, I get the message "The required system device cannot be found". I then try the other sub-option, which requires me to recreate the BCD store; it ends with an error message that says "The store export operation has failed. The requested system device cannot be found". Proceeding from there is meaningless since the system device cannot be found. I somehow believe the device cannot be found because it's identifying the Windows installation on D: instead of C:, but I have no idea how to change that. I don't know how it happens to identify an OS on D: when there's none there. How do I go about fixing the BootMgr? I have very important files on my system and can't afford to reinstall Windows. I really need to fix this. Please help.
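    For reference, the sequence usually suggested from the Vista install disc's Command Prompt (System Recovery Options) looks like the sketch below; the drive letter is whatever the recovery environment assigns to the Windows volume, which, as described above, may well be D: rather than C:, so treat the paths as assumptions to adjust:
        bootrec /fixmbr
        bootrec /fixboot
        bcdedit /export D:\BCD_Backup
        attrib -h -r -s D:\boot\bcd
        ren D:\boot\bcd bcd.old
        bootrec /rebuildbcd
    If bootrec /rebuildbcd still reports that the system device cannot be found, running the disc's Startup Repair two or three times in a row is the other commonly suggested fallback.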

    Read the article

  • smtpd_helo_restrictions = ..., reject_unknown_helo_hostname occasionally rejects mail I care about, how to handle?

    - by lkraav
    I have configured my Postfix as follows:
        smtpd_helo_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_unknown_helo_hostname
    This is working well, because most spambots don't seem to have correct reverse lookups. But every once in a while I run into mail I care about getting rejected, because the mail source server's admin doesn't care about configuring his server correctly. For example, here the server introduces itself as "srv1.xbmc.org", which has no DNS record and fails my basic check:
        Jan 6 04:42:36 mail postfix/smtpd[660]: connect from xbmc.org[205.251.128.242]
        Jan 6 04:42:37 mail postfix/smtpd[660]: NOQUEUE: reject: RCPT from xbmc.org[205.251.128.242]: 450 4.7.1 <srv1.xbmc.org>: Helo command rejected: Host not found; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<srv1.xbmc.org>
    I have tried to contact the server admin several times, but there is no response. What is the optimal way to handle this from my side? Is adding these "special" hosts to mynetworks = my only option? Is perhaps my whole smtpd_helo_restrictions setup wrong in some significant way?
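    One way to keep the restriction while carving out exceptions (a sketch, not the only option) is a check_helo_access lookup evaluated before reject_unknown_helo_hostname:
        smtpd_helo_restrictions =
            permit_sasl_authenticated,
            permit_mynetworks,
            check_helo_access hash:/etc/postfix/helo_access,
            reject_unknown_helo_hostname
    with /etc/postfix/helo_access containing, for this example, a single whitelist entry:
        srv1.xbmc.org    OK
    followed by postmap /etc/postfix/helo_access and postfix reload. The map's file name is arbitrary; any hash: path works.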

    Read the article

  • GeekTool logs "command not found" for commands that work fine in Terminal

    - by Kevin Dowling
    I'm trying to run simple commands so I can have GeekTool output date/time etc. to my desktop. Should be simple enough to do but it never actually outputs anything into the boxes. Console log shows it's getting spammed by GeekTool to say 'command not found', though the same command (e.g. date +"%H:%M") works fine in Terminal. All I want to achieve is the ability to output a clock displaying time/date on my desktop that fits into my wallpaper. I've tried changing the format of the commands, using the built-in editor window as well as the command line box on the Properties tab. I had a look at the permissions in '/' (because GeekTool runs commands from there) and nothing unusual comes up. None of these solved the issue. When I use a command that simply echo's a string it works (e.g. echo "hello" displays the word hello). Does anyone have experience with GeekTool, and understand why it won't run basic commands? As I say, it's spamming my console with 'command not found' despite them working in terminal... Running OS X 10.6.6 on a MacBook Pro (mid-2010).
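    A guess worth checking (not something the post confirms): GeekTool runs geeklets through a plain shell with a minimal PATH, so commands that resolve fine in an interactive Terminal can come back as "command not found". Absolute paths sidestep that:
        /bin/date +"%H:%M"
        /bin/date +"%A %d %B %Y"
    Running which date in Terminal shows the path to use for any other command that misbehaves.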

    Read the article

  • localhost/127.0.0.1 not working, "Unable to connect"

    - by redconservatory
    I am running some pretty basic php sites on Snow Leopard. Usually I just go to my browser and type anything like:
        localhost
        http://localhost
        127.0.0.1
        mycomputername.local
    But suddenly, after installing a gem file (compass), none of this is working. I tried
        sudo apachectl restart
    thinking that I just needed to restart apache, but no luck. My error log looks like:
        [Mon Mar 26 09:39:08 2012] [warn] child process 45443 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:10 2012] [warn] child process 45223 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:10 2012] [warn] child process 45043 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:10 2012] [warn] child process 45438 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:10 2012] [warn] child process 45049 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:10 2012] [warn] child process 45439 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:10 2012] [warn] child process 45224 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:10 2012] [warn] child process 45440 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:10 2012] [warn] child process 45441 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:10 2012] [warn] child process 45442 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:10 2012] [warn] child process 45443 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:11 2012] [notice] caught SIGTERM, shutting down
    I also tried
        sudo apachectl -k start
    and I got the error:
        Syntax error on line 182 of /private/etc/apache2/httpd.conf: Illegal option
    When I look at the code around that line, I see:
        <Directory />
            Options Indexes MultiViews + FollowSymLinks
            AllowOverride All
            Order allow, deny
            Allow from all
        </Directory>
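    The "Illegal option" is consistent with that <Directory /> block (an inference, not stated in the post): Apache refuses an Options line that mixes prefixed and unprefixed keywords, "+ FollowSymLinks" with a space is not a valid token, and "Order allow, deny" may not contain a space either. A corrected block would look like:
        <Directory />
            Options Indexes MultiViews FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>
    with apachectl configtest to confirm the syntax before sudo apachectl restart.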

    Read the article

  • Using my old PC as a web/file server?

    - by Garrett
    I have an old desktop computer that I've been trying to sell for AGES. I guess nobody is looking for computers because it was advertised at a dirt cheap price on craigslist, local papers, etc. Anyways, I was wondering if it would be worth it to set it up as a home file server, a web dev server (I have a web host for actual production use), and maybe host a few server applications (ex: ventrillo). The computer is actually an old Dell that I cannibalized after the motherboard being destroyed by lightning, so it has fairly new parts in it. The specs are: P4 3.4GHz w/ HT and Artic Cooling Freezer 7 3GB DDR2 533 RAM 80GB hdd (will upgrade the hard drive if it's even worth using as a server) basic dvd rom 430 Watt Thermaltake PSU (it might be important to note that it is only 60% efficiency) ATI Radeon x600 256MB Antec 300 case It's not a really beefy machine, I just can't see giving it away or putting it in the corner to just collect dust. I have Windows Server 2008 R2 Standard and I am confident in my skills in operating most Linux operating systems. I'd also be using it to tinker with when I learn new things in my server admin classes (I'm finishing my 2nd year in college at the moment so I'm still learning) Also, my house is quite old and the electrical wiring is pretty poor (it MIGHT be up to code, then again, where I live most people don't even know what regulations are or let alone know how to spell it...) Would it be safe to leave it running all day and is it going to run up my electric bill because of the PSU efficiency? I only have 5mbit cable internet, but I won't be running very bandwidth intense services on it so it should be ok. I should elaborate on why I am concerned about the power. The circuits should be fine, but I'm more concerned about fire hazard. What is the likelihood that the server could cause an electrical fire? Again, thank you all for the feedback!

    Read the article

  • Windows : Map-a-network-drive to a remote Shared-Folder (on QNAP NAS) using OpenVPN

    - by spelltox
    Given my lack of networking knowledge, I've been struggling with this issue for quite a few days now. I have a QNAP-TS212 NAS on which I've created a shared folder (mostly Excel files). All the computers in the local network (Windows) are able to access it without any problem. Now I want to access that shared folder remotely (from a Windows client), so:
    1. I enabled OpenVPN (and PPTP) in the QNAP admin.
    2. Installed OpenVPN on the remote client.
    3. Applied the configuration file that the QNAP generated (openvpn.ovpn):
        client
        dev tun
        script-security 3
        proto udp
        remote ***MY_WAN_IP*** 1194
        resolv-retry infinite
        nobind
        ca ca.crt
        auth-user-pass
        reneg-sec 0
        cipher AES-128-CBC
        comp-lzo
    OpenVPN connects successfully from the remote client. Now, here's my problem: I can ping the NAS (it got IP 10.8.0.1) from the remote client, but when I try to map a network drive, I don't see the shared folder or the NAS or any of the other computers in the network... I checked - all computers are in the "WORKGROUP" workgroup. I'm probably missing some basic knowledge, so any help would be greatly appreciated! Many thanks.
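    A sketch of the usual workaround, assuming the cause is that NetBIOS browsing doesn't traverse the tunnel while direct SMB by IP does: map the drive against the NAS's tunnel address instead of browsing for it. From the remote Windows client, with "Shared" standing in for the actual share name:
        ping 10.8.0.1
        net use Z: \\10.8.0.1\Shared /user:qnapusername /persistent:yes
    If that also fails, the next things to check would be whether the QNAP allows SMB connections from the 10.8.0.x tunnel subnet and whether a firewall on either end blocks TCP 445.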

    Read the article

  • Unattended Kickstart Install

    - by Eric
    I've looked around quite a bit and have seen similar setups and questions, but none seem to work for me. I'm using the following command to create a custom ISO:
        /usr/bin/livecd-creator --config=/usr/share/livecd-tools/test.ks --fslabel=TestAppliance --cache=/var/cache/live
    This works great and it creates the ISO with all of the packages and configs I want on it. My issue is that I want the install to be unattended. However, every time I start the CD, it asks for all of the info such as keyboard, time zone, root password, etc. These are the basic settings I have in my kickstart script prior to the packages section:
        cdrom
        install
        autopart
        autostep
        xconfig --startxonboot
        rootpw testpassword
        lang en_US.UTF-8
        keyboard us
        timezone --utc America/New_York
        auth --useshadow --enablemd5
        selinux --disabled
        services --enabled=iptables,rsyslog,sshd,ntpd,NetworkManager,network --disabled=sendmail,cups,firstboot,ip6tables
        clearpart --all
    After looking around, I was told that I need to modify my isolinux.cfg file to use either "ks=http://X.X.X.X/location/to/test.ks" or "ks=cdrom:/test.ks". I've tried both methods and it still forces me to go through the install process. When I tail the Apache logs on the server, I see that the ISO never even tries to get the file. Below is the exact syntax I'm trying in my isolinux.cfg file:
        label http
          menu label HTTP
          kernel vmlinuz0
          append initrd=initrd0.img ks=http://192.168.56.101/files/test.ks ksdevice=eth0
        label localks
          menu label LocalKS
          kernel vmlinuz0
          append initrd=initrd0.img ks=cdrom:/test.ks
        label install0
          menu label Install
          kernel vmlinuz0
          append initrd=initrd0.img root=live:CDLABEL=PerimeterAppliance rootfstype=auto ro liveimg liveinst noswap rd_NO_LUKS rd_NO_MD rd_NO_DM
          menu default
        EOF_boot_menu
    The first two give me a "dracut: fatal: no or empty root=" error until I give them a root= option, and then they just skip the kickstart completely. The last one is my default option that works fine, but it requires a lot of user input. Any help would be greatly appreciated.
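    One thing to try (an assumption based on the dracut error, not a tested recipe): keep the full set of boot arguments from the working install0 entry and add the kickstart pointer to them, so dracut can still find its live root while the installer picks up the ks= file:
        label autoinstall
          menu label AutoInstall
          kernel vmlinuz0
          append initrd=initrd0.img root=live:CDLABEL=PerimeterAppliance rootfstype=auto ro liveimg liveinst noswap rd_NO_LUKS rd_NO_MD rd_NO_DM ks=cdrom:/test.ks
    The label name is made up, and marking this entry menu default (and removing that flag from install0) would make it the automatic choice; whether this particular live image honors ks=cdrom:/ is exactly the open question, so the ks=http://... form with the same root= arguments is the variant to try next if it does not.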

    Read the article

  • Backing up default windows installation with dd from linux running on another partition - is this feasible?

    - by Marek
    I am preparing to reinstall my system. I am thinking about creating a multi boot with a linux distro+Windows 7 to choose from when starting up. I would love to be able to skip all the hassle of reinstalling Windows and all programs when it starts becoming too slow in the future, thus I would like to mirror my fresh Windows system partition with some programs preinstalled. I am thinking about installing Ubuntu, making a partition for windows, installing windows with the basic environment (Visual Studio, Office, etc.) then booting into Linux and making an image of the windows partition with dd. I am not familiar with linux at all so I am a little afraid something may go wrong along the way. Is it possible to do it this way? Will I be able to partition my existing disk for multi boot easily after I install Ubuntu? Will I be able to recover the Windows partition easily using dd when I will need to re-create a fresh windows partition in the future? What other (better) approach can you recommend to achieve the goal of easy disk mirroring (for free)?
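    For what it's worth, a minimal sketch of the dd approach under the setup described above (device names are assumptions; the Windows partition might be /dev/sda2 on a typical dual-boot layout, so check with sudo fdisk -l first):
        # optional, but makes the image compress far better: zero the free space from inside
        # Windows first with Sysinternals SDelete ("sdelete -z C:") before booting into Ubuntu
        sudo dd if=/dev/sda2 bs=4M conv=sync,noerror | gzip -c > /path/to/win7-fresh.img.gz
        # restoring later, onto a partition of at least the same size and position:
        gunzip -c /path/to/win7-fresh.img.gz | sudo dd of=/dev/sda2 bs=4M
    The partition must not be mounted while it is imaged or restored, and restoring onto a smaller or relocated partition will not boot, so the partition table should be left alone between backup and restore.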

    Read the article

  • Backing up SQL NetApp Snapshots using TSM

    - by WerkkreW
    In our environment we have a 3-node SQL 2005 cluster which is on NetApp storage. We are currently using SMSQL (NetApp SnapManager for SQL) to take snapshot backups of the data. This works great, but due to some audit requirements we are also forced to maintain some copies on tape. We have used NDMP in other places across the enterprise, but we do not want to use it in this specific instance. Basically what I need to do is get the most recent snapshot copy of the databases onto tape, via Tivoli Storage Manager (TSM). What I have done is obtain a basic Windows Server 2003 VM with SnapDrive installed, which is SAN-attached and zoned to the NetApp, and I have written a batch file to do the following:
    1. Mount the latest __RECENT snapshot LUN to the host, using a specific drive letter
    2. Perform a TSM-based incremental backup
    3. Dismount the LUN
    This seems to work fine, except sometimes the LUNs do not mount due to some sort of timeout. Also, due to my limited knowledge of Windows batch scripting, I have no way to monitor the success or failure of these backups, since I do not know how to send a valid return code back to the TSM scheduling service. Is there a more efficient/elegant way to accomplish this without NDMP?
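    On the return-code part alone, a hedged batch sketch (the mount/dismount script names and the S: drive letter are made up; they stand in for the existing SnapDrive commands, whose exact syntax isn't shown in the post):
        @echo off
        call C:\scripts\mount_recent_lun.cmd
        if errorlevel 1 exit /b 1
        dsmc incremental S:\* -subdir=yes
        set BACKUP_RC=%ERRORLEVEL%
        call C:\scripts\dismount_lun.cmd
        exit /b %BACKUP_RC%
    Ending with exit /b %BACKUP_RC% is what lets the TSM scheduler record the command as failed instead of always reporting success; the dsmc client itself returns 0 for success, 4 for success with skipped files, and 8 or 12 for errors.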

    Read the article

  • All commands stopped working in centos 6.5

    - by Michael
    I have made a big mistake while removing some duplicate packages that appeared to be broken in yum:
        1036  rpm -e --nodeps glibc-2.12-1.132.el6_5.2.x86_64
        1037  rpm -e --nodeps nscd-2.12-1.132.el6_5.2.x86_64
        1038  rpm -e --nodeps glibc-common-2.12-1.132.el6_5.2.x86_64
        1040  rpm -e --nodeps glibc-common-2.12-1.132.el6.x86_64 glibc-devel-2.12-1.132.el6.x86_64 glibc-headers-2.12-1.132.el6.x86_64
        1041  rpm -e glibc.x86_64
        1042  rpm -e --nodeps glibc.x86_64
    The issue happened after step 1042. No commands work (including yum, rpm, ls, cp, etc.) and I'm getting the error
        /lib64/ld-linux-x86-64.so.2: bad ELF interpreter: No such file or directory
    I thought that installing glibc after removing all the current ones would help to resolve the duplicate-package error :( Now I realise that it is used as the C library in the GNU system and most systems with the Linux kernel. It defines the "system calls" and other basic facilities such as open, malloc, printf, exit, etc. Are there any possible solutions other than a reinstall? I have lost SSH access. Maybe something can be done using a rescue CD? Thanks
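    A rescue-media sketch (standard CentOS recovery steps, not something the post verifies for this exact state): boot the CentOS 6.5 install DVD, choose "Rescue installed system", let it mount the installed root under /mnt/sysimage, and reinstall the removed packages into it using the rescue environment's own rpm. chroot /mnt/sysimage only becomes usable again after glibc is back, so the --root form run from outside the chroot is the important part:
        # with the matching .rpm files copied somewhere reachable, e.g. /mnt/sysimage/root
        rpm --root=/mnt/sysimage -ivh --force \
            glibc-2.12-1.132.el6_5.2.x86_64.rpm \
            glibc-common-2.12-1.132.el6_5.2.x86_64.rpm \
            nscd-2.12-1.132.el6_5.2.x86_64.rpm
    After that, a reboot and a yum reinstall of anything else touched by the --nodeps removals would be the follow-up.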

    Read the article

  • Resize a RAID 1 volume on OS X Snow Leopard - how? (Note: software raid)

    - by Emmel
    I've scoured the Internet in search of an answer to this question, and as usual with OSX-related topics, I often don't find any deep-dive technical explanations sufficient enough to feel confident doing dangerous things. Here is my question: I have a Mac Pro, running OS X 10.6.2. I have, as my main root/boot disk, a RAID 1 volume called "Mirror1". Mirror1 is comprised of two 1 TB disks. Mirror1, however, is fixed at 640 GB. That's because I originally took a 640GB disk, bought a terabyte disk, mirrored it (using diskutil appleraid enable), when it synced I removed the 640GB and replaced it with a second 1 TB disk, and synced again. Voila! A single 640 GB replaced by two 1 TB disks in a mirror... Actually, no. There's still something missing from the equation: Mirror1 needs to be expanded from 640GB to 1 TB to match the partition sizes on each of those disks. How do I do this? Perhaps the diskutil output will help:
        -> diskutil list
        /dev/disk0
           #:                       TYPE NAME          SIZE       IDENTIFIER
           0:      GUID_partition_scheme               *1.0 TB    disk0
           1:                        EFI                209.7 MB  disk0s1
           2:                 Apple_RAID                999.9 GB  disk0s2
           3:                 Apple_Boot Boot OSX       134.2 MB  disk0s3
        /dev/disk1
           #:                       TYPE NAME          SIZE       IDENTIFIER
           0:      GUID_partition_scheme               *1.0 TB    disk1
           1:                        EFI                209.7 MB  disk1s1
           2:                 Apple_RAID                999.9 GB  disk1s2
           3:                 Apple_Boot Boot OSX       134.2 MB  disk1s3
        /dev/disk2
           #:                       TYPE NAME          SIZE       IDENTIFIER
           0:      GUID_partition_scheme               *640.1 GB  disk2
           1:                        EFI                209.7 MB  disk2s1
           2:                  Apple_HFS Mac Disk 2     536.7 GB  disk2s2
           3:       Microsoft Basic Data BOOTCAMP       103.1 GB  disk2s3
        /dev/disk3
           #:                       TYPE NAME          SIZE       IDENTIFIER
           0:                  Apple_HFS Mirror1       *639.8 GB  disk3

        -> diskutil appleraid list
        AppleRAID sets (1 found)
        ===============================================================================
        Name:                 Macintosh HD
        Unique ID:            1953F864-B474-4EB6-8E69-41834EBD0247
        Type:                 Mirror
        Status:               Online
        Size:                 639.8 GB (639791038464 Bytes)
        Rebuild:              manual
        Device Node:          disk3
        -------------------------------------------------------------------------------
        #  Device Node   UUID                                   Status
        -------------------------------------------------------------------------------
        0  disk1s2       25109BAE-5697-40EA-B612-0217851444F7   Online
        1  disk0s2       11B83AB0-8148-4DB6-8761-DEF08C855F8D   Online
        ===============================================================================
    Thanks in advance.

    Read the article

  • fail2ban block ports rules iptable

    - by J Spen
    I just installed Ubuntu Server 14.04 and don't have much experience with iptables. I am trying to get a basic setup going where I only accept SSH connections on ports 22 and 2222. I actually have that working with no problem using fail2ban's ssh jail. Then I wanted to block all other ports except 423 and 4242, but either method of DROPping all connections that are not listed seems not to work and blocks me out of everything. Below is the setup that works:
        -P INPUT ACCEPT
        -P FORWARD ACCEPT
        -P OUTPUT ACCEPT
        -N fail2ban-ssh
        -A INPUT -p tcp -m multiport --dports 22,2222 -j fail2ban-ssh
        -A fail2ban-ssh -j RETURN
    I tried to change it either to:
        -P INPUT DROP
        -P FORWARD ACCEPT
        -P OUTPUT ACCEPT
        -N fail2ban-ssh
        -A INPUT -p tcp -m multiport --dports 22,2222 -j fail2ban-ssh
        -A fail2ban-ssh -j RETURN
    or:
        -P INPUT ACCEPT
        -P FORWARD ACCEPT
        -P OUTPUT ACCEPT
        -N fail2ban-ssh
        -A INPUT -p tcp -m multiport --dports 22,2222 -j fail2ban-ssh
        -A INPUT -j DROP
        -A fail2ban-ssh -j RETURN
    I have noticed that the rules for fail2ban-ssh are automatically added to my iptables on boot, because if I save them with iptables-persistent they are entered twice. How do I go about blocking everything except those two ports using fail2ban? Is it a bad fail2ban configuration, or do I need to add the fail2ban-ssh -j RETURN somewhere else in my rules?
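    A sketch of why both attempts lock everything out, plus one possible layout (the loopback and state rules are additions not mentioned in the post, and 423/4242 are assumed to be plain TCP services): -j RETURN at the end of fail2ban-ssh does not accept a packet, it only hands it back to INPUT, so with a DROP policy or a trailing "-A INPUT -j DROP" the un-banned SSH traffic falls through and is dropped with everything else. Explicit ACCEPTs after the fail2ban jump fix that:
        -P INPUT DROP
        -P FORWARD ACCEPT
        -P OUTPUT ACCEPT
        -N fail2ban-ssh
        -A INPUT -i lo -j ACCEPT
        -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        -A INPUT -p tcp -m multiport --dports 22,2222 -j fail2ban-ssh
        -A INPUT -p tcp -m multiport --dports 22,2222 -j ACCEPT
        -A INPUT -p tcp -m multiport --dports 423,4242 -j ACCEPT
        -A fail2ban-ssh -j RETURN
    Without the lo and ESTABLISHED,RELATED rules, a default-DROP policy tends to break DNS replies and local services as well, which can look like being locked out even when SSH itself is allowed.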

    Read the article

  • One Apache VirtualHost entry overrides another?

    - by johnlai2004
    I can't tell why one Apache virtual host entry keeps overriding another. The following file (filename: cbl):
        <VirtualHost 74.207.237.23:80>
            ServerAdmin [email protected]
            ServerName completebeautylist.com
            ServerAlias www.completebeautylist.com
            DocumentRoot /srv/www/cbl/production/public_html/
            ErrorLog /srv/www/cbl/production/logs/error.log
            CustomLog /srv/www/cbl/production/logs/access.log combined
        </VirtualHost>
    keeps overriding this file (filename: theccco.org):
        <VirtualHost 74.207.237.23:80>
            SuexecUserGroup "#1010" "#1010"
            ServerName theccco.org
            ServerAlias www.theccco.org
            ServerAlias webmail.theccco.org
            ServerAlias admin.theccco.org
            DocumentRoot /home/theccco/public_html
            ErrorLog /var/log/virtualmin/theccco.org_error_log
            CustomLog /var/log/virtualmin/theccco.org_access_log combined
            ScriptAlias /cgi-bin/ /home/theccco/cgi-bin/
            DirectoryIndex index.html index.htm index.php index.php4 index.php5
            <Directory /home/theccco/public_html>
                Options -Indexes +IncludesNOEXEC +FollowSymLinks
                allow from all
                AllowOverride All
            </Directory>
            <Directory /home/theccco/cgi-bin>
                allow from all
            </Directory>
            RewriteEngine on
            RewriteCond %{HTTP_HOST} =webmail.theccco.org
            RewriteRule ^(.*) https://theccco.org:20000/ [R]
            RewriteCond %{HTTP_HOST} =admin.theccco.org
            RewriteRule ^(.*) https://theccco.org:10000/ [R]
            Alias /dav /home/theccco/public_html
            <Location /dav>
                DAV On
                AuthType Basic
                AuthName theccco.org
                AuthUserFile /home/theccco/etc/dav.digest.passwd
                Require valid-user
                ForceType text/plain
                Satisfy All
                RewriteEngine off
            </Location>
        </VirtualHost>
    I tried a2ensite, a2dissite, and reloading, and I get this message:
        * Reloading web server config apache2
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
        [Thu Apr 15 10:47:36 2010] [warn] NameVirtualHost 74.207.237.23:443 has no VirtualHosts
    Aside from that, I don't know what else could be wrong. Can anyone tell me what to do?
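    A hedged reading of that warning: NameVirtualHost is declared for 74.207.237.23:443 but apparently not for 74.207.237.23:80, and on Apache 2.2 name-based selection only happens for address:port pairs that are declared - otherwise the first matching <VirtualHost> wins for every request, which matches the symptom. The sketch of a fix is a single line, placed once before both vhost files are included (e.g. in ports.conf or the main config):
        NameVirtualHost 74.207.237.23:80
    followed by a reload; apache2ctl -S then shows how the vhosts are actually being grouped.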

    Read the article

  • Configure one IIS site to handle two separate SSL certificates using external Load Balancing or SSL Acceleration Servers

    - by bmccleary
    I have one web application on our server that needs to be referenced by two different domain names, both of which have their own SSL certificates. The application is exactly the same for both domains, but we have to keep the two domain names for legal reasons. The problem is that, since both domains need their own SSL certificate, inside our IIS 7.5 configuration we have to have two separate IIS applications (both pointing to the same physical location), each with its own unique IP address and SSL certificate installed. Now, I know that, due to the nature of SSL communications, this is by design and that you can't assign more than one SSL certificate per IP address and domain name. My question is… is there any way around this limitation to keep one web application in IIS and have it service two SSL certificates based on host name? I know that with the basic IIS configuration this is not possible, but I was thinking that with some sort of combination of external load balancing and/or SSL acceleration servers/services we could have those servers process the SSL requests and leave IIS clean with one single application. I am not familiar at all with these technologies, hence the reason I am asking if it is theoretically possible. If not, does anyone else know how to achieve this?
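    To make the "external SSL acceleration" idea concrete, here is a minimal sketch of a TLS-terminating reverse proxy in front of the single IIS application, using nginx purely as an example of such a device (host names, certificate paths, and the backend address are placeholders, not taken from the post):
        server {
            listen 443 ssl;
            server_name www.domain-one.example;
            ssl_certificate     /etc/ssl/domain-one.crt;
            ssl_certificate_key /etc/ssl/domain-one.key;
            location / {
                proxy_pass http://iis-backend:80;
                proxy_set_header Host $host;
            }
        }
        server {
            listen 443 ssl;
            server_name www.domain-two.example;
            ssl_certificate     /etc/ssl/domain-two.crt;
            ssl_certificate_key /etc/ssl/domain-two.key;
            location / {
                proxy_pass http://iis-backend:80;
                proxy_set_header Host $host;
            }
        }
    IIS then hosts one plain-HTTP site with two host-header bindings, the proxy owns both certificates, and a hardware load balancer or SSL accelerator would play the same role with its own configuration syntax.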

    Read the article

  • How to set up RAID-0 first time on new PC?

    - by jasondavis
    I have built basic PCs in the past but have never used a RAID array at all. So now I am buying parts to build my new PC; it will have an Intel i7 processor. My motherboard will have RAID support, which I will use instead of an aftermarket RAID controller for now. Also, I plan to use 2 SSD drives in RAID-0 for my Windows 7 OS. (Please note that I am aware of the issues with doing this, including lack of TRIM support when using RAID with SSD drives. I am OK with that, as I can just replace the drives in a year or so or whenever they become more sluggish.) So here is my question. If I assemble the motherboard, PSU, processor, RAM, video card, etc. and then go to turn the PC on, it will have the 2 SSD drives hooked up, so I assume I will then see the BIOS screen before I install Windows? How do I go about making the 2 drives work in RAID-0 at that point? I do the RAID part before installing my OS, right? Please help with the steps involved, from assembling the parts of the PC and turning it on, to getting the RAID-0 set up between the 2 drives and then installing my Windows 7 OS from an optical drive. All advice, instructions, and tips are appreciated as long as they are on topic. I do not need to be told that this is a bad idea as far as losing everything if 1 drive fails; I plan on having a disk image to be able to restore my OS and software to a new set of drives at any time in the event of drive failure. Same goes for the lack of TRIM support. Thanks for reading and helping =)

    Read the article

  • How to grant read/write to specific user in any existent or future subdirectory of a given directory? [migrated]

    - by Samuel Rossille
    I'm a complete newbie in system administration and I'm doing this as a hobby. I host my own git repository on a VPS. Let's say my user is john. I'm using the ssh protocol to access my git repository, so my URL is something like ssh://[email protected]/path/to/git/myrepo/. Root is the owner of everything that's under /path/to/git. I'm attempting to give read/write access to john to everything which is under /path/to/git/myrepo. I've tried both chmod and setfacl to control access, but both fail the same way: they apply rights recursively (with the right options) to all the currently existing subdirectories of /path/to/git/myrepo, but as soon as a new directory is created, my user can not write in the new directory. I know that there are hooks in git that would allow me to reapply the rights after each commit, but I'm starting to think that I'm going the wrong way, because this seems too complicated for a very basic purpose. Q: How should I set up my rights to give rw access to john to anything under /path/to/git/myrepo and make it resilient to tree-structure changes? Q2: If I should take a step back and change the general approach, please tell me.
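    A sketch of the usual ACL answer (assuming the filesystem is mounted with ACL support): set both an effective ACL for what already exists and a default ACL, which newly created subdirectories and files inherit - which is exactly the "resilient to tree structure change" part:
        setfacl -R -m u:john:rwX /path/to/git/myrepo
        setfacl -R -d -m u:john:rwX /path/to/git/myrepo
        getfacl /path/to/git/myrepo    # the "default:" lines are what new entries inherit
    The capital X grants traverse/execute only on directories (and files that are already executable). An alternative worth weighing instead of ACLs is a dedicated group plus git's core.sharedRepository=group setting on the repository.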

    Read the article

  • Solaris 10: How to image a machine?

    - by nonot1
    I've got a Solaris 10 workstation that I'd like to create a full image backup from. The machine has 2 drives: one UFS for the system root, and 1 ZFS for data storage. I intend to add a third HD to keep the backup images of both primary drives (including any ZFS snapshots). The purpose is not disaster recovery, but rather to allow me to easily blow away a series of application installation/configuration changes I intend to try. What's the best way to do this? I'm not too familiar with Solaris, but have some basic Linux knowledge. I looked at CloneZilla, but it does not support Solaris. I'm OK with just a dd | gzip > image style solution, but I'd need some way to first zero out the unused blocks on the primary drives to aid gzip. They are much larger than my 3rd drive, but hardly have any real data. Update to clarify: I specifically want to avoid relying on any file-system snapshot functionality, because part of the app configuration changes involve/depend slightly on existing and new snapshots. Ideally the full collection of snapshots should be part of the backup. Virtualization is not an option, because the goal is to do performance evaluation on a very specific HW configuration. For the same reason, the spurious "back up" snapshots could skew performance data. Thank you
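    A rough sketch of the dd-based route under those constraints (device names are placeholders - echo | format lists the real ones - and s2 is the conventional whole-disk slice on a VTOC-labeled disk; an EFI-labeled ZFS disk may need a different device name):
        # zero unused blocks on the UFS root so the image compresses well, then remove the filler
        dd if=/dev/zero of=/var/tmp/zerofill bs=1024k ; rm /var/tmp/zerofill
        # image both drives onto the third disk, mounted here at /backup
        dd if=/dev/rdsk/c0t0d0s2 bs=1024k | gzip -c > /backup/ufs-root-disk.img.gz
        dd if=/dev/rdsk/c0t1d0s2 bs=1024k | gzip -c > /backup/zfs-data-disk.img.gz
    Imaging the raw devices captures the ZFS pool together with every snapshot on it, which matches the requirement of not relying on snapshot tooling for the backup itself; the caveat is that the filesystems should be quiet (ideally the pool exported or the box booted from other media) while a disk is read, or the image may be inconsistent.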

    Read the article

  • How CPU communicates with HW

    - by b-gen-jack-o-neill
    Good day. I am new here, but I could not find an answer to my question using Google, so I hope I do not violate any rules. Basically, all I want to ask is: how does the CPU communicate with other hardware, such as printers, the graphics card, sound card, LAN card, etc.? I know that for basic system I/O you can use BIOS interrupts; INT 10h, I believe, is for display output. But what I would like to know is what actually happens when you execute the instruction int 10h. From the description of the int instruction, it should jump to a routine whose address is stored in the interrupt vector table. But how does this routine get into RAM? Does the BIOS save those routines to RAM? And what does that routine actually do? I mean, the CPU can only access RAM, right? So how can it access some other hardware? Is there some special instruction for it? Or is the CPU somehow connected to the BIOS, so that the BIOS actually does the work? And the last thing: does an OS like Windows or GNU/Linux use BIOS interrupts, or can the OS access hardware directly? Thanks.
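    To make the INT 10h part concrete, here is a tiny real-mode sketch (16-bit x86, NASM-style syntax) of calling the BIOS teletype service to print one character. The routine it jumps to lives in the BIOS ROM, which is mapped into the address space (and often shadowed into RAM) at boot; inside that routine the video hardware is reached through port I/O (the in/out instructions) and memory-mapped video RAM:
        ; print 'A' using the BIOS video services
        mov ah, 0x0E    ; AH = 0Eh selects the teletype-output function of INT 10h
        mov al, 'A'     ; character to print
        int 0x10        ; the CPU reads vector 0x10 from the interrupt vector table (starting at 0000:0000) and jumps there
    This only works in real mode (DOS, a boot sector, or early boot). Protected-mode operating systems such as Windows and Linux generally do not use BIOS interrupts at runtime; their drivers talk to devices directly through port I/O, memory-mapped registers, DMA, and hardware interrupts.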

    Read the article

  • Has anyone had luck running 802.1x over ethernet using the stock Windows or other free supplicant?

    - by maxxpower
    I just wanted to see if anyone else has had luck implementing 802.1x over ethernet. So here's my basic setup. The switch sends out 3 EAPOL messages spaced 5 seconds apart; if there's no response, the machine gets put on a guest VLAN with restricted access. If the machine is properly configured, it will authenticate and be placed into a secure VLAN. About 10% of my Windows XP users are getting self-assigned 169.254.x.x addresses. I've used the Odyssey Access Client and it worked without a hitch. I'm using the setting to automatically use the user's Windows login to authenticate, but it's working on 90% of the machines, so I don't think that's the issue. Checking the logs on the DC, it seems that the machines are trying to authenticate with computer credentials even though they are configured not to. I'm running Juniper switches with IAS for RADIUS. I have RADIUS configured for PEAP and MSCHAPv2. Macs and Linux boxes seem to have no issues authenticating. One last thing to add: unplugging the ethernet cable and plugging it back in usually resolves the issue, but I'd hardly call that acceptable for production. Kinda long-winded and specific for a discussion, but I just want to see if anyone else has had similar issues or experiences, or if anyone knows of a free XP supplicant that actually works with 802.1x over ethernet.

    Read the article

  • Trying to run a codeigniter app on custom php

    - by hamstar
    I have a CodeIgniter app that I deployed to a server with PHP 5.2, while my dev box has 5.3, and some stuff doesn't work anymore. I didn't want to upgrade PHP and risk the other app on the server having issues. Anyway, I compiled a custom PHP and added the following to a single .conf file, /etc/httpd/conf.d/zcid.conf, alongside all the other conf files:
        <VirtualHost *:80>
            DocumentRoot /var/www/cid/app
            ServerName sub.example.co.nz
        </VirtualHost>
        <Directory "/var/www/cid/app">
            authtype Basic
            authname "oh dear how did this get here i am no good with computer"
            authuserfile /path/to/auth
            require valid-user
            RewriteEngine on
            RewriteCond $1 !^(index\.php|robots\.txt|createEvent\.php|/cgi-bin)
            RewriteRule ^(.*)$ /index.php/$1 [L]
            AddHandler custom-php .php
            Action custom-php /cgi-bin/php53.cgi
        </Directory>
    In /var/www/cid/app I have the cgi-bin folder and the php53.cgi that I copied from /usr/local/php53/bin/php-cgi. But now when I navigate to the subdomain it says:
        The requested URL /cgi-bin/php53.cgi/index.php/ was not found on this server.
    And if I try to browse to /cgi-bin it says (as it is supposed to?):
        You don't have permission to access /cgi-bin/ on this server.
    Quite confused now. Anyone know what to do here? Thanks :)
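    A hedged sketch of one way to wire this up (the directory /var/www/cid/cgi-bin is an assumption - it means moving the copied php53.cgi out of the app folder - and the /php53-cgi/ alias name is made up): the Action URL has to resolve to something Apache treats as a CGI program and that the rewrite rules leave alone, which a ScriptAlias outside the app's DocumentRoot provides:
        ScriptAlias /php53-cgi/ /var/www/cid/cgi-bin/
        <Directory "/var/www/cid/cgi-bin">
            Options +ExecCGI
            Order allow,deny
            Allow from all
        </Directory>
        # and inside the existing <Directory "/var/www/cid/app"> block:
        AddHandler custom-php .php
        Action custom-php /php53-cgi/php53.cgi
    The 404 above suggests /cgi-bin/php53.cgi is currently resolving against the server's default cgi-bin alias (or being swallowed by the CodeIgniter rewrite) rather than the copied binary, which is what this layout avoids; php53.cgi also needs to be executable (chmod 755).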

    Read the article

  • Apache HTTPd FollowSymLinks path permission

    - by apast
    Hi, I'm configuring my development environment with a basic Apache HTTPd configuration. But, to avoid a common problem, I want to map my test URL to my development folder. I'm using Ubuntu. My development path is located under the following example path:
        /home/myusername/myworkspace/hptargetpath/src/pages
    Consider the following symbolic link mapping:
        # ls -l /opt/share/www/mydevelopmentrootpath:
        lrwxrwxrwx 1 root root 77 2011-02-13 18:53 /opt/share/www/mydevelopmentrootpath -> /home/myusername/myworkspace/hptargetpath/src/pages
    With this folder mapping, I configured Apache HTTPd with the following configuration:
        <VirtualHost *:*>
            ServerName local.server.com
            ServerAdmin [email protected]
            DirectoryIndex index.html
            DocumentRoot /opt/share/www/mydevelopmentrootpath
            <Directory /opt/share/www/mydevelopmentrootpath/ >
                Options +Indexes
                Options +FollowSymLinks
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>
    But I'm receiving a 403 Forbidden error when I try to access index.html at the address http://local.server.com/index.html:
        403 Forbidden
        You don't have permission to access /index.html on this server.
    In the httpd debug log, I see the following message:
        [Sun Feb 13 19:34:47 2011] [error] [client 127.0.1.1] Symbolic link not allowed or link target not accessible: /opt/share/www/mydevelopmentrootpath
    I'm thinking that this problem is being generated by some path permission - not a direct permission on the directory, but on some intermediate directory in the path. There's a directive in httpd core, Options SymLinksIfOwnerMatch: "The server will only follow symbolic links for which the target file or directory is owned by the same user id as the link." But I tested it without effect. Can somebody help me? I think it's a trivial configuration for a development environment. Best regards, And Past
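    The "link target not accessible" wording usually points at exactly the suspicion above: the Apache worker (www-data on Ubuntu) cannot traverse one of the directories leading to the link target. A quick way to check, and - if that is the cause - fix it (a sketch; substitute the real path components):
        # show the permissions of every component along the resolved path
        namei -m /opt/share/www/mydevelopmentrootpath/index.html
        # grant traverse (execute) permission to "other" on each directory that lacks it
        chmod o+x /home/myusername /home/myusername/myworkspace /home/myusername/myworkspace/hptargetpath /home/myusername/myworkspace/hptargetpath/src
    An alternative that avoids loosening permissions on the home directory is to point DocumentRoot (or an Alias) directly at the development folder instead of going through the symlink.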

    Read the article
