Search Results

Search found 29495 results on 1180 pages for 'cross site scripting'.


  • Can't set up printing from Mac OS X (10.5.7) to an HP PSC 2410 shared from PC running Ubuntu 9.10

    - by Weston C
    I've got an HP PSC 2410 printer shared from a fresh Ubuntu 9.10 installation. I'm able to send documents to this printer over the network from another Ubuntu machine. But so far, I haven't been able to find a setup where I can send documents to that printer from a MacBook running 10.5.7.

    On the Mac side, when setting things up, I go into System Prefs > Print & Fax, click on the "+" mark, select "IP", pick "IPP", enter the IP address of the Ubuntu box, leave the queue blank, enter the Name and location, and I think it's when I get to the "Print Using" (driver selection) part that I'm running into issues. If I use "Auto Select", it defaults to "Generic PostScript Printer", which I doubt the PSC 2410 is (and sure enough, if I print, the jobs don't go through). If I try "Select a driver to use...", there's not an option for an HP PSC 2400. This seems a little odd: I can plug the printer directly into one of our Macs and it immediately figures out the driver and I can print no problem, but that's apparently the way things work. So, that leaves one option: "Other", which, when selected, brings up a dialog apparently for the purpose of manually locating a driver.

    I've tried visiting HP's web site. They have drivers for earlier versions of Mac OS X, but state that after 10.4, Mac OS X should just come with the relevant drivers. I've also tried setting things up by interacting with the CUPS server on the Mac through a browser: I go to http://localhost:631/, select "Add New Printer", pick "Internet Printing Protocol (http)" for the Device selection, enter "http://ubuntu.machine.ip.address:631/printers/hp-psc-2400-series" for the Device URI, select "HP" for Make, and then on the next screen, we're back to the problem where the PSC 2400 just doesn't show up. There's an option to "provide a PPD file", which I assume would be the printer driver I can't find.

    A Google search for "HP PSC 2410 ppd Leopard" doesn't seem to yield much other than a reminder that the printer is supposed to just work out of the box on Leopard. A local search for ".ppd" or "2410" on either Mac also doesn't yield anything that looks like a relevant print driver. I'm totally stuck at this point. Any advice?
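    For reference, once a usable PPD for the PSC 2400 series has been obtained (for example from an HPIJS or Gutenprint package), the queue can also be created from the Mac's command line instead of the Print & Fax dialog. This is only a sketch: the IP address, queue name and PPD path below are placeholders, not values from the question.

        # Sketch only -- IP address, queue name and PPD path are placeholders.
        lpadmin -p HP_PSC_2410 -E \
                -v ipp://192.168.1.50:631/printers/hp-psc-2400-series \
                -P /path/to/hp_psc_2400_series.ppd
        lpstat -p HP_PSC_2410         # confirm the queue was created
        lp -d HP_PSC_2410 /etc/hosts  # send a small test job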


  • Further Performance Tuning on Medium SharePoint Farm?

    - by elorg
    I figured I would post this here, since it may be related more to the server configuration than to the SharePoint configuration, or to a combination of both. I'm open to ideas to try, or even feedback on things that may have been configured incorrectly as far as performance is concerned.

    We have a medium MOSS 2007 install prepped and ready to receive the WSS 2003 data for the upgrade. The environment was originally architected by a previous coworker, and I have since added a few configuration modifications to assist with performance before we finally performed the install. When testing the new site collections and the SharePoint install (no actual data yet), things seemed a bit slow. I had assumed that it was because I was accessing it remotely. Apparently the client is still experiencing this, and it is unacceptably slow.

    The environment:
    1x SQL Server running SQL Server 2008
    2x SharePoint WFEs - hosting queries (no index)
    1x SharePoint Index - hosting the index (no queries)
    MOSS 2007 installed and patched up through December '09 on the WFEs and Index

    All 4 servers are VMs, should have more than sufficient disk space and RAM (I don't recall the exact numbers at the moment), and are running Windows Server 2008 - everything is 64-bit. The WFEs have Windows NLB configured, with a DNS name and IP for the NLB cluster. There is a single NIC on each server (virtual, since this is VMware). The Index server is configured as a WFE (outside of the NLB cluster) so that it can index itself and replicate the indexes to the WFEs that serve the queries.

    Everything is configured and working properly - it just takes a minute or two to load a page on the local LAN. The client is still using their old portal (we haven't started the migration/upgrade just yet), so there's virtually no data and there are no users. We need to either further tune the configuration, or fix anything that may have been configured incorrectly and is causing this slowness. I've already reviewed and taken into account everything relevant that I could find before we even started the install. Does anyone have ideas or pointers? Perhaps there's something that I've missed?


  • Complete Active Directory redesign and GPO application

    - by Wolfgang Kuehne
    After much testing, hundreds of tries and many hours invested, I decided to consult you experts here.

    Overview: I want to apply a GPO to our users which will add a specific site to the Trusted Sites in the Internet Explorer settings for all users. However, the more I try, the more confusing the results become. The GPO is either applied to one group of users, or to another one. Finally, I came to the conclusion that this weird behavior is caused by the poor organization of Users and Groups in Active Directory. As such, I want to attack the problem at the root: redesign the Active Directory Users and Groups.

    Scenario: There is one Domain Controller, and we use Terminal Services (so there is a Terminal Server as well). Users usually log on to the Terminal Server using Remote Desktop to perform their daily tasks. I would classify the users in the following way:
    IT: Admins, Software Development
    Business: Administration, Management

    The current structure of the Active Directory Users and Groups is a result of the previous IT management. The company used Small Business Server, which created multiple default user groups and containers. Unfortunately, the guys working before me did no documentation at all. Now, as I inherit this structure, I am in no man's land - no idea which direction to head first. As you can see, the Active Directory Users and Groups have become a bit confusing. There is no SBS anymore, but when migrating from SBS to the current Windows Server 2008 R2 environment, the guys before me simply copied the same structure.

    The real question: Where should I start cleaning, ensuring that I won't totally break the current infrastructure? What is a sensible organization for the scenario I have explained above?

    Possibly useful info on the current structure:
    Computers folder: contains the Terminal Services Computers user group
        Members: TerminalServer computer located at Server -> Terminalserver OU
        Member of: NONE
    Foreign Security Principals: EMPTY
    Managed Service Accounts: EMPTY
    Microsoft Exchange Security Groups: not sure if needed; our email is administered by an external service provider
    Distribution Groups: not sure if needed
    Security Groups: there are a couple of groups which are needed
    SBS users: contains all the users
    Terminalserver: contains only the TerminalServer machine
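    As background for the Trusted Sites part: adding a site by hand in Internet Options writes a per-user ZoneMap registry entry like the sketch below (GPO-delivered zone assignments may land under the Policies hive instead), so checking the registry on the Terminal Server is a quick way to see whether any policy actually arrived for a given user. The host name is a placeholder; zone 2 is Trusted Sites.

        REM Sketch only -- "intranet.example.com" is a placeholder host.
        REM Zones: 1 = Intranet, 2 = Trusted Sites, 3 = Internet, 4 = Restricted.
        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains\example.com\intranet" /v https /t REG_DWORD /d 2 /f
        REM See what the logged-on user currently has:
        reg query "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains" /s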


  • Do glue records in non-circular dns-lookups speed up domain resolution or not?

    - by Joe Hopfgartner
    Doing a lookup for my domain on http://www.intodns.com/ I noticed these two messages.

    In the Parent section:
        DNS Parent sent Glue
        The parent nameserver g.gtld-servers.net is not sending out GLUE for every nameserver listed, meaning he is sending out your nameservers' host names without sending the A records of those nameservers. It's ok but you have to know that this will require an extra A lookup that can delay a little the connections to your site. This happens a lot if you have nameservers on a different TLD (domain.com for example with nameserver ns.domain.org.)

    and in the NS section:
        Glue for NS records
        INFO: GLUE was not sent when I asked your nameservers for your NS records. This is ok but you should know that in this case an extra A record lookup is required in order to get the IPs of your NS records. The nameservers without glue are:
        109.230.225.96
        84.201.40.52
        You can fix this for example by adding A records to your nameservers for the zones listed above.

    I perfectly understand that the primary objective of glue records is to resolve circular dependencies. The classic use case: my domain is example.com and I want to have the nameserver ns1.example.com. This will never work, because I cannot know the IP of ns1.example.com if I don't fetch example.com, and in order to do that I need to fetch it from ns1.example.com. To resolve this deadlock I add a glue record for ns1.example.com containing the IP address of the nameserver, so this can work out.

    So this problem does not occur if the nameservers are in a different TLD than the domain I want to look up. However, in order to fetch the zone information from the nameservers I need to know their IP addresses, right? And in order to know that, I need to fetch the zone the nameservers are in from their respective nameservers, right? (Or rather, my ISP needs to do that in the background.) So that is an extra lookup that takes time? If I now have glue records, I know the IP address right away without the need to look it up - so this should speed up the resolution of my domain, shouldn't it?

    However, my DNS zone provider (tecserver.at) replied that this would make no sense because:
        "we are not running ns1.ourdomain.com and ns2.ourdomain.com as authoritative NS for ourdomain.com. This would be the only sense for glue records. Tecserver has a glue record because the NS for tecserver.at are ns1.tecserver.at and ns2.tecserver.at. Therefore a glue record is needed for resolution."
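    A quick way to see exactly what the parent is (or isn't) handing out is to ask a parent-zone server directly and look at the ADDITIONAL section of the answer; glue, when present, shows up there as A records. The names below are placeholders (use a parent server for your own TLD):

        # Ask the .com parent directly, without recursion; glue appears in ADDITIONAL.
        dig +norecurse @g.gtld-servers.net example.com NS

        # Time the extra A lookup the intodns report is describing:
        dig ns1.example.org A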


  • Where to get laptop replacement parts?

    - by wai
    I am sure some of us have older laptops that still works but just one or two parts started not working very well. However, to send it back to the manufacturer for repair might not be worth the money (as some of them even charge consultation fees just to have it checked) as these old laptops are most probably well beyond warranty. Replacing these one or two parts could give your laptop a new lease of life. For example, I have a Dell Inspiron 640m which the fan recently failed. Everything else still works. I know enough to open up the laptop (after reading the service manual) and put it back together again, hence a replacement fan could save it. However, finding these parts turned out to be difficult. I guess I wouldn't be able to get it from Dell since I have been searching around the Dell forums and concluded if I were going to get Dell to at least talk to me, I would have to pay some money (since my warranty expired). I found this online:- http://www.parts-people.com/index.php?action=item&id=3725 However, as I don't live in the United States, shipping alone would cost me 3 times more than the part in question. In addition, if there's a problem, it would be difficult because I probably would have to mail the parts back (costing more money). One of my friend spilled water on her Macbook, she sent it back to Apple and they told her the cost of repairing it exceeds that of buying a new Macbook. They probably have to change everything except the LCD screen. I opened it up and found that most of the parts are still working, only the RAM had fried. Replaced it and it's working again. For some time before discovering only the RAM fried, I was toying with the idea of replacing the logic board myself but looks like I didn't have to. =) Now I know there's a site for Macbooks used replacement parts:- http://www.ifixit.com/ The question is, does anyone know where to get replacement parts at their own locality? It might benefit the community as a whole. Someone might know a local computer store that sells precisely just the things we needed (for now, I hope to find a shop that sells just the fan). If you own a laptop which you had an easy experience finding the replacement parts, do post that as well. PS: I wish laptop manufacturers have outlets where they sell these replacement parts (or at least let you place an order).


  • Creating a USB stick for installing centos 6.x using DVD1 and DVD2 iso files

    - by user250563
    First, we create 2 partitions on the USB stick, which is, let's say, 16GB. The first partition is only 1GB and the second partition is the rest of what is available. After we "w" (write) the changes, the USB stick now has 2 partitions: one of 1GB and one of more than 14GB. So we now have sdb1 and sdb2.

    Now we need to turn these partitions into filesystems. Some say I should run these commands after those procedures:
        mkfs.vfat -F 32 /dev/sdb1
        mkfs.ext3 /dev/sdb2
    but some web pages recommend using:
        mkfs.vfat -n BOOT /dev/sdb1
        mkfs.ext2 -m 0 -b 4096 -L DATA /dev/sdb2
    Which is it?

    Let's say the DVDs are called:
        CentOS-6.4-x86_64-bin-DVD1.iso
        CentOS-6.4-x86_64-bin-DVD2.iso
    So we make a directory:
        mkdir -p /mnt/dvd1
    and then mount it:
        mount -o loop CentOS-6.4-x86_64-bin-DVD1.iso /mnt/dvd1
    And I suppose we don't make a directory for DVD2 and we don't have to mount it?

    At this point I do not know what should be done, but I think this step might be next: we make the USB stick bootable by finding the file named mbr.bin and then writing it there via these commands:
        dd conv=notrunc bs=440 count=1 if=/usr/lib/syslinux/mbr.bin of=/dev/sdb
        parted /dev/sdb set 1 boot on
    In other words, we are dd-ing it to 'sdb', not 'sdb1' or 'sdb2', and then we use parted to set the boot flag on for sdb. So far everything looks good?

    Here is the confusing part: how exactly do I move these ISO files to the USB drive? EVERYTHING BELOW IS A GUESS. At this point, should I copy the folder /mnt/dvd1/isolinux to the USB's sdb1 or sdb2, and rename it to syslinux? And then inside this syslinux folder there will be a file called isolinux.cfg, which should be renamed to syslinux.cfg? And then copy the contents of /mnt/dvd1/images/* to the USB's sdb2? But I think I am also supposed to copy both CentOS-6.4-x86_64-bin-DVD1.iso and CentOS-6.4-x86_64-bin-DVD2.iso somewhere onto the USB's sdb2 partition, correct? Almost like a drag-and-drop kind of thing? Or do they go into particular folders?

    CentOS's own web site has some instructions, but those instructions do not work: http://wiki.centos.org/HowTos/InstallFromUSBkey
    I once got this working, but things got ruined; I have to do it again, and this time take notes.
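    Purely as a sketch of the steps described above (not the official procedure), assuming /dev/sdb really is the stick - double-check before running anything destructive - the syslinux package is installed, and both ISOs sit in the current directory. The `syslinux /dev/sdb1` step is not in the list above, but it is normally what puts ldlinux.sys on the boot partition so the mbr.bin code has something to hand off to.

        # SKETCH ONLY -- verify /dev/sdb is the USB stick before running!
        mkfs.vfat -n BOOT /dev/sdb1
        mkfs.ext2 -m 0 -b 4096 -L DATA /dev/sdb2

        # Boot loader: syslinux on the first partition, MBR code on the device
        syslinux /dev/sdb1
        dd conv=notrunc bs=440 count=1 if=/usr/lib/syslinux/mbr.bin of=/dev/sdb
        parted /dev/sdb set 1 boot on

        # Copy the boot files and the ISOs
        mkdir -p /mnt/dvd1 /mnt/boot /mnt/data
        mount -o loop CentOS-6.4-x86_64-bin-DVD1.iso /mnt/dvd1
        mount /dev/sdb1 /mnt/boot
        mount /dev/sdb2 /mnt/data
        cp -r /mnt/dvd1/isolinux/* /mnt/boot/             # the contents, not the folder itself
        mv /mnt/boot/isolinux.cfg /mnt/boot/syslinux.cfg
        cp -r /mnt/dvd1/images /mnt/boot/
        cp CentOS-6.4-x86_64-bin-DVD*.iso /mnt/data/      # DVD2 is only needed as an ISO
        umount /mnt/dvd1 /mnt/boot /mnt/data

    During the install, the installer is then pointed at the partition holding the ISOs ("Hard drive" as the installation source).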


  • Centos 5.5 [Read-only file system] issue after rebooting

    - by canu johann
    I have a virtual server running CentOS 5.5 (hosted by a Japanese company called Sakura). Since yesterday, a connection through SSH couldn't be established. I contacted the support center, who told me to restart the VS from the control panel. After restarting, I got the messages below:

        Connected to domain wwwxxxxxx.sakura.ne.jp
        Escape character is ^]
        [ OK ]
        Setting hostname localhost.localdomain: [ OK ]
        Setting up Logical Volume Management: No volume groups found [ OK ]
        Checking filesystems
        Checking all file systems.
        [/sbin/fsck.ext4 (1) -- /] fsck.ext4 -a /dev/vda3
        / contains a file system with errors, check forced.
        /: Inodes that were part of a corrupted orphan linked list found.
        /: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
        (i.e., without -a or -p options)
        @@cat: /proc/self/attr/current: Invalid argument
        Welcome to CentOS
        Starting udev: @[ OK ]
        Setting hostname localhost.localdomain: [ OK ]
        Setting up Logical Volume Management: No volume groups found [ OK ]
        Checking filesystems
        Checking all file systems.
        [/sbin/fsck.ext4 (1) -- /] fsck.ext4 -a /dev/vda3
        / contains a file system with errors, check forced.
        /: Inodes that were part of a corrupted orphan linked list found.
        /: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
        (i.e., without -a or -p options)
        [FAILED]
        *** An error occurred during the file system check.
        *** Dropping you to a shell; the system will reboot
        *** when you leave the shell.
        *** Warning -- SELinux is active
        *** Disabling security enforcement for system recovery.
        *** Run 'setenforce 1' to reenable.
        /etc/rc.d/rc.sysinit: line 53: /selinux/enforce: Read-only file system
        Give root password for maintenance
        (or type Control-D to continue):
        bash: cannot set terminal process group (-1): Inappropriate ioctl for device
        bash: no job control in this shell
        bash: cannot create temp file for here-document: Read-only file system
        (the line above repeats ten times)
        (Repair filesystem) 1 # setenforce 1
        setenforce: SELinux is disabled
        (Repair filesystem) 2 # echo 1
        (Repair filesystem) 4 # /etc/init.d/sshd status
        openssh-daemon is stopped
        (Repair filesystem) 5 # /etc/init.d/sshd start
        Starting sshd: NET: Registered protocol family 10
        lo: Disabled Privacy Extensions
        touch: cannot touch `/var/lock/subsys/sshd': Read-only file system
        (Repair filesystem) 6 # sudo /etc/init.d/sshd start
        sudo: sorry, you must have a tty to run sudo
        (Repair filesystem) 7 #

    I have 4 sites in production and I need to restart the server quickly (SSH + HTTPD, ...). Thank you for your time.
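    For what it's worth, the boot messages themselves say what they want: fsck has to be run by hand against /dev/vda3 (the device named in the log) from that "(Repair filesystem)" shell, and the root stays read-only until that has happened. A minimal sketch, using only the device name shown above:

        # From the (Repair filesystem) shell:
        fsck.ext4 -f -y /dev/vda3    # answer "yes" to all repairs; can take a while
        reboot

        # If the shell itself needs to be writable before rebooting:
        mount -o remount,rw /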


  • Certain Programs cannot access internet

    - by Cindy
    Operating System: Windows 7 (x64)
    Problem: Certain programs are unable to access the internet. They claim that there is no connection when you are already connected.

    Hello, before we start, just letting you know I'm new here, and I'm very new to Windows 7 - I installed it two days ago. I just installed Windows 7 on my laptop and I have a few problems. I play World of Warcraft, as well as a variety of other games. When I first attempt to log into the game, I get a Windows error message, but it doesn't stop there. I thought World of Warcraft got corrupted during the upgrade, but it seems that I am unable to access the internet from other online games as well. Most say something along the lines of "Cannot connect to patch server, try again later." I cannot use a downloader either.

    Also, about Internet Explorer: the x32 version of the browser cannot connect to the internet, and when I try to enter "google.com", it says the same thing. I'm only accessing this site through Internet Explorer x64, which I would have been fine with if it were compatible with Adobe Flash. The only things that seem to connect to the internet are Internet Explorer x64 and Windows Live Messenger.

    Here are the steps I have taken, but none worked:
    1. Disabled Windows Firewall.
    2. Enabled Windows Firewall, but allowed the specific programs to access the internet, and allowed all incoming access.
    3. Disabled UAC, ran the programs as an admin, and set compatibility to Vista.
    4. Uninstalled an anti-virus program (McAfee Security Suite 2010).
    5. Reinstalled the programs.
    6. Reinstalled Windows 7.
    7. Repeated the steps on the Administrator account.

    Please assist me with this problem. I need to get back into the game. Thanks so much in advance.


  • "No route to host" with ssl but not with telnet

    - by Clemens Bergmann
    I have a strange problem with connecting to an https site from one of my servers. When I type:
        telnet puppet 8140
    I am presented with a standard telnet console and can talk to the server as always:

        Connected to athena.hidden.tld.
        Escape character is '^]'.
        GET / HTTP/1.1

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>400 Bad Request</title>
        </head><body>
        <h1>Bad Request</h1>
        <p>Your browser sent a request that this server could not understand.<br />
        Reason: You're speaking plain HTTP to an SSL-enabled server port.<br />
        Instead use the HTTPS scheme to access this URL, please.<br />
        <blockquote>Hint: <a href="https://athena.hidden.tld:8140/"><b>https://athena.hidden.tld:8140/</b></a></blockquote></p>
        <hr>
        <address>Apache/2.2.16 (Debian) Server at athena.hidden.tld Port 8140</address>
        </body></html>
        Connection closed by foreign host.

    But when I try to connect to the same host and port with SSL:
        openssl s_client -connect puppet:8140
    it does not work:
        connect: No route to host
        connect:errno=113

    I am confused. At first it sounded like a firewall problem, but that can't be it, can it? Because a firewall would also prevent the telnet connection. As firewall I am using ferm on both servers. The systems are Debian Squeeze VM boxes.

    [edit 1] Even when I try to connect directly to the IP address:
        openssl s_client -connect 198.51.100.1:8140   # address exchanged
        connect: No route to host
        connect:errno=113
    Bringing down the firewalls on both hosts with service ferm stop also does not help. But when I do
        openssl s_client -connect localhost:8140
    on the server machine itself, it connects fine.

    [edit 2] If I connect to the IP with telnet, it also does not work:
        telnet 198.51.100.1 8140
        Trying 198.51.100.1...
        telnet: Unable to connect to remote host: No route to host
    The confusion might come from IPv6. I have IPv6 on all my hosts. It seems that telnet uses IPv6 by default, and this works. For example:
        telnet -6 puppet 8140
    works, but
        telnet -4 puppet 8140
    does not. So there seems to be a problem with the IPv4 route. openssl seems to only (or by default) use IPv4 and therefore fails, while telnet uses IPv6 and succeeds.
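    A few commands that would pin the IPv4-vs-IPv6 theory down, using only the names and the (exchanged) address already mentioned above:

        # What does the name resolve to over each address family?
        getent ahostsv4 puppet
        getent ahostsv6 puppet

        # Does the kernel actually have an IPv4 route to that address?
        ip -4 route get 198.51.100.1
        ip -6 route show

        # Same test telnet already demonstrated, per family:
        telnet -4 puppet 8140
        telnet -6 puppet 8140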


  • Postfix: Relay access denied

    - by Joseph Silvashy
    When I telnet to my server that's running Postfix and try to send an email:

        MAIL FROM:<[email protected]>   #=> 250 2.1.0 Ok
        RCPT TO:<[email protected]>     #=> 554 5.7.1 <[email protected]>: Relay access denied

    I couldn't really find the answer on the site or by looking at other users' questions/answers, and I'm not sure where to start. Ideas?

    Update: So basically, looking at the docs (http://www.postfix.org/SMTPD_ACCESS_README.html, section "Getting selective with SMTP access restriction lists"), I don't seem to have any of those directives in /etc/postfix/main.cf, like
        smtpd_client_restrictions = permit_mynetworks, reject
    or any of the other ones, so I'm quite confused. But really I'm going to have a Rails app connect to the server and send the emails, so I'm not sure how to handle it. Here is what my config file looks like:

        # See /usr/share/postfix/main.cf.dist for a commented, more complete version
        # Debian specific: Specifying a file name will cause the first
        # line of that file to be used as the name. The Debian default
        # is /etc/mailname.
        #myorigin = /etc/mailname
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no
        # appending .domain is the MUA's job.
        append_dot_mydomain = no
        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h
        readme_directory = no
        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
        # information on enabling SSL in the smtp client.
        myhostname = rerecipe-utils
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination = $myhostname, localhost.$mydomain, localhost, mail.rerecipe.com, rerecipe.com
        relayhost =
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        inet_protocols = all
        mynetworks = 127.0.0.0/8 204.232.207.0/24 10.177.64.0/19 [::1]/128 [fe80::%eth0]/64 [fe80::%eth1]/64

    Something to note is that relayhost is blank; this is the default configuration file that was created when I installed Postfix. When testing the connection with openssl I get this:

        ~% openssl s_client -connect mail.myhostname.com:25 -starttls smtp
        CONNECTED(00000003)
        depth=0 /CN=myhostname
        verify error:num=18:self signed certificate
        verify return:1
        depth=0 /CN=myhostname
        verify return:1
        ---
        Certificate chain
         0 s:/CN=myhostname
           i:/CN=myhostname
        ---
        Server certificate
        -----BEGIN CERTIFICATE-----
        MIIBqTCCARICCQDDxVr+420qvjANBgkqhkiG9w0BAQUFADAZMRcwFQYDVQQDEw5y
        ZXJlY2lwZS11dGlsczAeFw0xMDEwMTMwNjU1MTVaFw0yMDEwMTAwNjU1MTVaMBkx
        FzAVBgNVBAMTDnJlcmVjaXBlLXV0aWxzMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCB
        iQKBgQDODh2w4A1k0qiPNPhkrPj8sfkxpKPTk28AuZhgOEBYBLeHacTKNH0jXxPv
        P3TyhINijvvdDPzyuPJoTTliR2EHR/nL4DLhr5FzhV+PB4PsIFUER7arx+1sMjz6
        5l/Ubu1ppMzW9U0IFNbaPm2AiiGBQRCQN8L0bLUjzVzwoSRMOQIDAQABMA0GCSqG
        SIb3DQEBBQUAA4GBALi2vvk9TGKJubXYJbU0PKmVmsfzFK35yLqr0keiDBhK2Leg
        274sWxEH3ds8mUaRftuFlXb7RYAGNlVyTuMTY3CEcnqIsH7F2McCUTpjMzu/o1mZ
        O/B21CelKetBd1u79Gkrv2vWyN7Csft6uTx5NIGG2+pGi3r0gX2r0Hbu2K94
        -----END CERTIFICATE-----
        subject=/CN=myhostname
        issuer=/CN=myhostname
        ---
        No client certificate CA names sent
        ---
        SSL handshake has read 1203 bytes and written 360 bytes
        ---
        New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA
        Server public key is 1024 bit
        Compression: NONE
        Expansion: NONE
        SSL-Session:
            Protocol  : TLSv1
            Cipher    : DHE-RSA-AES256-SHA
            Session-ID: 1AA4B8BFAAA85DA9ED4755194C50311670E57C35B8C51F9C2749936DA11918E4
            Session-ID-ctx:
            Master-Key: 9B432F1DE9F3580DCC6208C76F96631DC5A4BC517BDBADD5F514414DCF34AC526C30687B96C5C4742E9583555A118232
            Key-Arg   : None
            Start Time: 1292985376
            Timeout   : 300 (sec)
            Verify return code: 18 (self signed certificate)
        ---
        250 DSN

    Oddly enough, when I try to send an email from the machine itself it does work:
        echo test | mail -s "test subject" [email protected]
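    For reference, the stanza the SMTPD_ACCESS_README is talking about usually ends up looking something like the sketch below. Since a Rails app will be submitting the mail, the usual choices are SASL authentication (permit_sasl_authenticated, which needs SASL configured separately) or listing the app server's address in mynetworks. This is a sketch under those assumptions, not a drop-in config:

        # /etc/postfix/main.cf -- sketch; run "postfix reload" after editing.
        # Relay only for local networks and authenticated clients; everyone else
        # may still deliver to the domains listed in $mydestination.
        smtpd_recipient_restrictions =
            permit_mynetworks,
            permit_sasl_authenticated,
            reject_unauth_destination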


  • Apache2 & .htaccess : Apache ignoring AccessFile

    - by Elyx0
    Hi there, here is my server configuration: Debian 32-bit / PHP 5 / Apache, Server version: Apache/2.2.3, Server built: Mar 22 2008 09:29:10.

    The AccessFileName settings:
        grep -ni AccessFileName *
        apache2.conf:134:AccessFileName .htaccess
        apache2.conf:667:AccessFileName .httpdoverride

    All the AllowOverride statements in my apache2/ folder:
        mods-available/userdir.conf:6:  AllowOverride Indexes AuthConfig Limit
        mods-available/userdir.conf:16: AllowOverride FileInfo AuthConfig Limit
        mods-enabled/userdir.conf:6:    AllowOverride Indexes AuthConfig Limit
        mods-enabled/userdir.conf:16:   AllowOverride FileInfo AuthConfig Limit
        sites-enabled/default:8:        AllowOverride All
        sites-enabled/default:14:       AllowOverride All
        sites-enabled/default:19:       AllowOverride All
        sites-enabled/default:24:       AllowOverride All
        sites-enabled/default:42:       AllowOverride All

    The sites-enabled/default file:
        <VirtualHost *>
            ServerAdmin [email protected]
            ServerName mysite.com
            ServerAlias mysite.com
            DocumentRoot /var/www/mysite.com/
            <Directory />
                Options FollowSymLinks
                AllowOverride All
                Order Deny,Allow
                Deny from all
            </Directory>
            <Directory /var/www/mysite.com/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            <Directory /var/www/mysite.com/test/>
                AllowOverride All
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride All
                Options ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog /var/log/apache2/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog /var/log/apache2/access.log combined
            ServerSignature Off

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride All
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    If I change any "Allow from all" to "Deny from all", it works wherever I put it. I've got one .htaccess at /mysite.com/.htaccess and one at /mysite.com/test/.htaccess, each containing:
        Order Deny,Allow
        Deny from all
    Neither of them works - I can still see my website. I've got mod_rewrite enabled, but I don't think it does anything here. I've tried almost everything. It works in my local environment (MAMP) but fails on my Debian server.
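    One quick way to tell whether Apache is reading those .htaccess files at all: put a deliberately invalid directive in one of them and request the page. A 500 error means the file is being parsed (so the rules themselves are the problem); an unchanged page means the file is never consulted (a different vhost/DocumentRoot answering the request, or AccessFileName/AllowOverride not applying where expected). A sketch, using the path from the config above:

        echo "ThisIsNotADirective" >> /var/www/mysite.com/.htaccess
        # request the page in a browser, note 500 vs. normal page, then undo:
        sed -i '$ d' /var/www/mysite.com/.htaccess
        # and check which vhost actually serves the request:
        apache2ctl -S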


  • DVD ROM is not working

    - by Cyril N.
    (Note: I don't know which StackExchange site this question belongs on; I'll thank the moderator who moves it to a more appropriate place, if there is an S.E. available for my question.)

    I have a DVD-RW drive that is listed in the BIOS and, when no disc is in, it is also present in "My Computer" on my Fedora 16. But when I put a disc in, the icon disappears from "My Computer" and I cannot do anything with the drive (like erasing a RW disc). I'd like to boot a Fedora 17 Live CD image; I burned it on another computer, but when I try to boot from it, nothing happens and I'm dropped back to GRUB on my hard disk.

    The command cdrecord -scanbus shows this:
        wodim: Warning: controller returns wrong size for CD capabilities page.
        wodim: Cannot get CD capabilities data.
        6,1,0 601) 'HD-DT%ST' 'DVD%RAM G@22NP20' '1&04' Removable CD-ROM

    And when I try to mount the disc manually, I get this error:
        mount: block device /dev/sr0 is write-protected, mounting read-only
        mount: /dev/sr0: can't read superblock

    Here's a paste of dmesg | grep sr0:
        [    5.161265] sr0: scsi-1 drive
        [    5.161621] sr 6:0:1:0: Attached scsi CD-ROM sr0
        [  834.545978] sr0: Hmm, seems the drive doesn't support multisession CD's
        [  841.731194] sr0: CDROM (ioctl) error, command: Get configuration 46 00 00 00 00 00 00 00 20 00
        [  842.021640] sr 6:0:1:0: [sr0] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        [  842.021652] sr 6:0:1:0: [sr0] Sense Key : Aborted Command [current]
        [  842.021662] sr 6:0:1:0: [sr0] Add. Sense: No additional sense information
        [  842.021672] sr 6:0:1:0: [sr0] CDB: Read(10): 28 00 00 00 00 00 00 00 02 00
        [  842.021688] end_request: I/O error, dev sr0, sector 0
        [  842.021697] Buffer I/O error on device sr0, logical block 0
        [  842.023715] sr 6:0:1:0: [sr0] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        [  843.048203] sr 6:0:1:0: [sr0] Sense Key : Aborted Command [current]
        [  843.048211] sr 6:0:1:0: [sr0] Add. Sense: No additional sense information
        [  843.048219] sr 6:0:1:0: [sr0] CDB: Read(10): 28 00 00 00 00 00 00 00 01 00
        [  843.048234] end_request: I/O error, dev sr0, sector 0
        [  843.048274] EXT4-fs (sr0): unable to read superblock
        [  843.063155] sr0: CDROM (ioctl) error, command: Get configuration 46 00 00 00 00 00 00 00 20 00
        [  843.075904] sr0: CDROM (ioctl) error, command: Get configuration 46 00 00 00 00 00 00 00 20 00
        [  843.220512] sr 6:0:1:0: [sr0] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        [  843.220522] sr 6:0:1:0: [sr0] Sense Key : Aborted Command [current]
        [  843.220530] sr 6:0:1:0: [sr0] Add. Sense: No additional sense information
        [  843.220538] sr 6:0:1:0: [sr0] CDB: Read(10): 28 00 00 00 00 00 00 00 01 00
        [  843.220553] end_request: I/O error, dev sr0, sector 0
        [  843.220609] FAT-fs (sr0): unable to read boot sector
    The lines from "Sense Key" (line 6) to "DRIVER_SENSE" (line 11) repeat a lot.

    I then swapped the DVD drive for a spare one I had, and the disc didn't boot either. I then changed the IDE cable, but still no success. What can I do to make it work? Thanks for your help.


  • List repositories from multiple projects in Trac using mod_python

    - by Steffen Eriksen
    I'm currently working on a customized web page that shows the available projects I have in Trac (1.0.1). I am using mod_python to connect to the Trac interface. I found a standard page for this, but it didn't show a listing of repositories. The page exposes some variables to link to the different projects, but I can't find variables for the different repositories inside the projects.

    I have set up the page following this: http://trac.edgewall.org/wiki/TracInterfaceCustomization (under "Site Appearance"). Short summary - editing ../conf.d/trac.conf:
        PythonOption TracEnvParentDir /parent/dir/of/projects
        PythonOption TracEnvIndexTemplate /path/to/template
    and making a template file I can edit at /path/to/template:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
          "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml"
              xmlns:py="http://genshi.edgewall.org/"
              xmlns:xi="http://www.w3.org/2001/XInclude">
          <head>
            <title>Available Projects</title>
          </head>
          <body>
            <h1>Available Projects</h1>
            <ul>
              <dl>
                <li py:for="project in projects" py:choose="">
                  <a py:when="project.href" href="$project.href"
                     title="$project.description">$project.name</a>
                  ## <dd> WANT TO ADD CODE HERE! </dd>
                  <py:otherwise>
                    <small>$project.name: <em>Error</em> <br />
                    ($project.description)</small>
                  </py:otherwise>
                </li>
              </dl>
            </ul>
          </body>
        </html>

    The code I want to add is something like:

        <dd py:for="repos in project.repository" py:choose="">
          <a py:when="repos.href" href="$repos.href">$repos.name</a>
        </dd>

    I can't figure out where to add the variables, or whether there already exist some variables I can use. After searching through the files it seemed like main.py had something to do with the variables (/usr/local/Trac-1.0.1/trac/web/main.py), but at first look it didn't seem easy to just add more variables.

    Is there a simple way to find the rest of the variables? How hard is it to add more variables? Or would it be easier to do this an alternative way? All I need is to link to the repositories dynamically.


  • HTTP Content-type header for cached files

    - by Brian
    Hello. Using Apache with mod_rewrite, when I load a .css or .js file and view the HTTP headers, the Content-Type is only set correctly the first time I load it - subsequent refreshes are missing Content-Type altogether, and it's creating some problems for me. Specifically, gzip is not compressing these files. I can get around this by appending a random query string value to the end of each filename, e.g.
        http://www.site.com/script.js?12345
    However, I don't want to have to do that, since caching is good and all I want is for the Content-Type to be present. I've tried using a RewriteRule to force the type, but that still didn't solve the problem. Any ideas? Thanks, Brian

    More details - HTTP headers WITHOUT the random query string value (http://localhost/script.js):

        GET /script.js HTTP/1.1
        Host: localhost
        User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
        Accept: */*
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Referer: http://localhost/
        Cookie: PHPSESSID=ke3p35v5qbus24che765p9jni5;
        If-Modified-Since: Thu, 29 Apr 2010 15:49:56 GMT
        If-None-Match: "3440e9-119ed-485621404f100"
        Cache-Control: max-age=0

        HTTP/1.1 304 Not Modified
        Date: Thu, 29 Apr 2010 20:19:44 GMT
        Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 PHP/5.3.1
        Connection: Keep-Alive
        Keep-Alive: timeout=5, max=100
        Etag: "3440e9-119ed-485621404f100"
        Vary: Accept-Encoding
        X-Pad: avoid browser bug

    HTTP headers WITH the random query string value (http://localhost/script.js?c947344de8278053f6edbb4365550b25):

        GET /script.js?c947344de8278053f6edbb4365550b25 HTTP/1.1
        Host: localhost
        User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
        Accept: */*
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Referer: http://localhost/
        Cookie: PHPSESSID=ke3p35v5qbus24che765p9jni5;

        HTTP/1.1 200 OK
        Date: Thu, 29 Apr 2010 20:14:40 GMT
        Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 PHP/5.3.1
        Last-Modified: Thu, 29 Apr 2010 15:49:56 GMT
        Etag: "3440e9-119ed-485621404f100"
        Accept-Ranges: bytes
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 24605
        Keep-Alive: timeout=5, max=100
        Connection: Keep-Alive
        Content-Type: application/javascript
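    One thing worth noting when reading the first trace: it is a 304 Not Modified response, which carries no body, so omitting Content-Type and Content-Encoding there is normal HTTP behaviour - the browser is expected to reuse the entity it cached from the earlier 200. If the goal is simply "correct type plus gzip whenever a full response is sent", a sketch along these lines (in the vhost or a .htaccess) is the usual shape; where exactly it goes in this particular server is an assumption:

        AddType application/javascript .js
        AddType text/css .css
        <IfModule mod_deflate.c>
            AddOutputFilterByType DEFLATE text/html text/css application/javascript
        </IfModule>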


  • Revolutionary brand powder packing machine price from affecting marketplace boom and put on uniform in addition to a lengthy service life

    - by user74606
    In mining in stone crushing, our machinery company's encounter becomes much more apparent. As a consequence of production capacity in between 600~800t/h of mining stone crusher, stone is mine Mobile Cone Crushing Plant Price 25~40 times, effectively solved the initially mining stone crusher operation because of low yield prices, no upkeep problems. Full chunk of mining stone crusher. Maximum particle size for crushing 1000x1200mm, an effective answer for the original side is mine stone provide, storing significant chunks of stone can not use complications in mines. Completed goods granularity is modest, only 2~15mm, an effective option for the original mine stone size, generally blocking chute production was an issue even the grinding machine. Two types of material mixed great uniformity, desulfurization of mining stone by adding weight considerably. Present quantity added is often reached 60%, effectively minimizing the cost of raw supplies. Electrical energy consumption has fallen. Dropped 1~2KWh/t tons of mining stone electrical energy consumption, annual electricity savings of one hundred,000 yuan. Efficient labor intensity of workers and also the atmosphere. Due to mine stone powder packing machine price a high degree of automation, with out human make contact with supplies, workers working circumstances enhanced significantly. Positive aspects, and along with mine for stone crushing, CS series cone Crusher has the following efficiency traits. CS series cone Crusher Chamber is divided into 3 unique designs, the user is usually chosen in accordance with the scenario on site crushing efficiency is high, uniform item size, grain shape, rolling mortar wall friction and put on uniform in addition to a extended service life of crushing cavity-. CS series cone Crusher utilizes a one of a kind dust-proof seal, sealing dependable, properly extend the service life of the lubricant replacement cycle and parts. CS series Sprial Sand washer price manufacture of important components to choose unique materials. Each and every stroke left rolling mortar wall of broken cone distances, by permitting a lot more products into the crushing cavity, as well as the formation of big discharge volume, speed of supplies by way of the crushing Chamber. This machine makes use of the principle of crushing cavity, also as unique laminated crushing, particle fragmentation, so that the completed product drastically improved the proportions of a cube, needle-shaped stones to lower particle levels extra evenly.


  • Web Service gets unavailable after several concurrent calls

    - by Roman
    We are testing the GoDaddy Virtual Data Center and have run into a very strange issue where our web site becomes unavailable. GoDaddy Support keeps saying the issue is in our web server settings, but looking at the results of our tests, I doubt it.

    TEST ENVIRONMENT
    Virtual Data Center with Windows hosted at GoDaddy.com. All servers run Windows Server 2008 R2 Datacenter, IIS 7.
    Server One with IP address 10.1.0.4
    Server Two with IP address 10.1.0.3
    Both servers are on a private network not visible from outside.
    Port Forward with IP address 50.62.13.174; the Port Forward is assigned to Server One.

    TEST DESCRIPTION
    JMeter is used as a client app to simulate 30 concurrent users sending 100 SOAP requests each. The interval between requests is 1 second. HTTP link used for testing: http://50.62.13.174/v2/webservices.asmx

    TEST ONE
    The test is run from a computer in our office. Almost immediately after JMeter starts running the test, the link above becomes unavailable in a browser. After test completion, the link is not available in a browser for about 5 more minutes. Remote Desktop keeps working, so we can connect to Server One remotely. About 5 minutes after test completion, the link becomes available in a browser again.

    TEST TWO
    The test is run from Server Two (which is part of our virtual data center). The test works very well, with no visible delays in processing. The link is available in a browser the whole time.

    TEST THREE
    The test is run from Server One using localhost. The result is the same as in TEST TWO - no issues.

    TEST FOUR
    We repeated TEST ONE from other computers that we have located in different countries, all with the same result as TEST ONE.

    CONCLUSION
    As the test works well from Server Two but does not work from outside our virtual data center, we feel there are issues with the network or its capacity. The whole behaviour looks as if our requests from outside get stuck somewhere before reaching our virtual data center. Has anybody had similar issues in the past? Is there a chance that something is wrong with our server settings?
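    If it helps to take JMeter out of the picture, roughly the same load shape (30 clients, 100 requests each, one per second) can be reproduced from any outside host with a small shell loop against the URL above. It sends plain GETs rather than SOAP POSTs, which is a simplification, but it is enough to show whether and when responses stop coming back:

        # Sketch: 30 parallel clients x 100 requests, 1 second apart.
        for c in $(seq 1 30); do
          ( for i in $(seq 1 100); do
              curl -s -o /dev/null -w "%{http_code} %{time_total}\n" \
                   "http://50.62.13.174/v2/webservices.asmx"
              sleep 1
            done ) &
        done
        wait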


  • PC monitors shut off and system hangs while playing 3D games, but sound continues - Diagnosis?

    - by Jon Schneider
    Two days ago, I started running into a problem with my Windows PC: The PC's two connected monitors simultaneously lose signal and go black (as though the PC had been powered off). The keyboard's Numlock, Capslock, and Scroll Lights will become "stuck" in their current positions, as though the PC is hung. (For example, the Numlock light on the keyboard remains lit regardless of me pressing the Numlock key repeatedly.) No keyboard input does anything. (Ctrl+Alt+Del, Ctrl+Shift+Esc, Ctrl+C, etc.) However -- Whatever sound/music the PC was playing continues to play, and the PC's fans continue running, so the PC hasn't powered itself off or rebooted itself. Opening up the case, the graphics card is pretty hot to the touch. I had this happen 3 times in one evening. In all cases, I was playing a game with 3D graphics when the problem occurred (Torchlight, Minecraft, Magic: The Gathering 2012, Avadon: The Black Fortress demo). I have yet to have the problem happen when I'm not playing a game. This system has been running stable for about 2.5 years prior to this. I didn't make any changes to the system prior to the problem starting to occur. System specs: OS: Windows 7 64-bit Processor: Intel Core 2 Duo E7200 Wolfdale 2.53GHz Video Card: XFX GeForce 9800 GT 512 MB Motherboard: Foxconn P45A-S LGA 775 Intel ATX RAM: Corsair 4 GB (2x 2GB) DDR2-800 (PC2 6400) Full specs: New PC 2008 Troubleshooting tried so far (the problem occurred again after taking each of these steps, one at a time): Updated the video drivers with the latest drivers from NVidia's site. Opened up the case and cleaned out the video card and processor fans (both were pretty dirty). Installed and ran temperature monitor software. The processor idles at about 50 degrees C, and goes up to about 63 degrees C while playing a game (seems on the warm side, but not excessively so?). The software wasn't able to report the temperature of the GPU -- not sure this particular GPU supports software temperature readout? My initial diagnosis is that maybe the GPU is on its last legs (given that it seems to be running pretty hot, and the problem only occurs while playing 3D games). Does this seem likely? Or is it likely that this problem is caused by the processor, RAM, or motherboard? Or could this be a software issue of some kind? Thanks for any advice!


  • Connecting multiple ColdFusion 10 instances to a single Apache 2.2 server

    - by Adam Cameron
    This is on Windows 7 Home Premium edition. I have got two ColdFusion 10 (updater 2) instances: "cfusion" (the default one), and "scratch". I have got a single instance of Apache 2.2 running. Within Apache, I have set up two virtual hosts, each of which needs to be served by a different ColdFusion instance. Each of the CF instances serves files fine via Tomcat's internal web server. Apache serves vanilla HTML files fine too. So both CF instances, and both virtual hosts separately work OK. I can get wsconfig.exe to connect either one of the CF instances to the Apache server, and serve CF files via Apache & that instance. However I cannot find a way of connecting the second CF instance to Apache as well, so that both CF instances are conected, each serving one of the virtual hosts. WSConfig doesn't seem to understand the notion of "multiple CF instances", and the changes it makes to the httpd.conf (via mod_jk.conf) does not seem to be implemented in such a way as to accommodate multiple CF instances talking to a single Apache instance, or multiple virtual hosts. I freely admit to not being confident enough with how mod_jk (or even really httpd.conf) works to be able to guess if I can change stuff to make it work. If I try to add the second CF instance using WSConfig, I just get a message "the web server is already configured for ColdFusion". Be that as it may... not the instance of ColdFusion I want to connect it to! If I remove the existing connector to whichever instance is already connected, I can then connect the other one no problems. Not that this helps, but it demonstrates that the CF instance can connect to Apache. This all used to be fairly straight fwd under older versions of CF and JRun :-( The only docs I have found are on the "Connect multiple Apache virtual hosts on a web server to a single ColdFusion server" page, but that specifically only deals with a single CF instance. There is no equivalent page for multiple CF instances. I'm kinda hoping I can move some of the mod_jk config into my virtual host entries in httpd-vhosts.conf (this is how it used to work for JRun), but I've no idea what to put where. I think I've covered all the necessary info here? If not, sing out and I'll add more. Thanks. PS: tried to specifically tag this as "ColdFusion-10" as the answer will be different from previous CF versions, but it won't let me cos my rep on this site is too low (odd how it doesn't consider my rep from other S/O sites...). If someone with sufficient rep can add it, that'd be cool: it's probably a valid tag to have. Ta.
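    For what it's worth, the JRun-era pattern still translates into mod_jk terms: one AJP worker per ColdFusion instance in workers.properties, and each virtual host bound to its worker with JkMount, rather than letting wsconfig own the whole mapping. The sketch below is exactly that and nothing more - the AJP ports and ServerNames are placeholders (the real port for each instance is in that instance's runtime/conf/server.xml, in the AJP/1.3 Connector), and the directive names should be checked against the mod_jk.conf that wsconfig already generated:

        # workers.properties -- sketch; ports are placeholders
        worker.list=cfusion,scratch
        worker.cfusion.type=ajp13
        worker.cfusion.host=localhost
        worker.cfusion.port=8012
        worker.scratch.type=ajp13
        worker.scratch.host=localhost
        worker.scratch.port=8013

        # httpd-vhosts.conf -- one JkMount set per virtual host
        <VirtualHost *:80>
            ServerName site-one.local
            JkMount /*.cfm cfusion
            JkMount /*.cfc cfusion
            JkMount /CFIDE/* cfusion
        </VirtualHost>
        <VirtualHost *:80>
            ServerName site-two.local
            JkMount /*.cfm scratch
            JkMount /*.cfc scratch
            JkMount /CFIDE/* scratch
        </VirtualHost>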


  • Sendmail smtp-auth issues

    - by SlackO
    I'm running into a problem with Sendmail while trying to implement SMTP AUTH. I'm running 8.14.5 and have saslauthd running under FreeBSD 7.0-R. I don't believe I have STARTTLS enabled (but I also compiled a version with it and have been testing that too - same problem); I'm just looking for basic auth, but am wondering if my configuration is incompatible with modern mail clients. I don't think I have any certs set up.

    It seems an older version of Microsoft Outlook Express works fine with SMTP AUTH, but Outlook 2010 won't work, and neither will Eudora (basic settings: no encryption, same username/password as the POP3 account). When trying to send mail, the server reports:
        550 571 Relaying Denied. Proper authentication required.
    Is there some config that I am missing? Why does it work with Outlook Express but not other e-mail clients?

    My site.config.m4 has:
        APPENDDEF(`confENVDEF', `-DSASL=2')
        APPENDDEF(`conf_sendmail_LIBS', `-lsasl2')
        dnl APPENDDEF(`confLIBDIRS', `-L/usr/local/lib/sasl2')
        APPENDDEF(`confLIBDIRS', `-L/usr/local/lib')
        APPENDDEF(`confINCDIRS', `-I/usr/local/include')

    My sendmail.mc has:
        define(`ConfAUTH_OPTIONS', `A')
        TRUST_AUTH_MECH(`LOGIN PLAIN')dnl
        define(`ConfAUTH_MECHANISMS', `LOGIN PLAIN')dnl

    My /usr/local/lib/sasl2/Sendmail.conf has:
        pwcheck_method: saslauthd

    When I restart sendmail, this shows up in the logs:
        Jun 16 12:36:24 x sm-mta[79090]: restarting /usr/sbin/sendmail due to signal
        Jun 16 12:36:24 x sm-mta[81145]: starting daemon (8.14.5): SMTP+queueing@00:30:00
        Jun 16 12:36:24 x sm-mta[81147]: STARTTLS=client, relay=mxgw1.mail.nationalnet.com., version=TLSv1/SSLv3, verify=FAIL, cipher=DHE-RSA-AES256-SHA, bits=256/256
        Jun 16 12:36:24 x sm-mta[81148]: STARTTLS=client, relay=mxgw1.mail.nationalnet.com., version=TLSv1/SSLv3, verify=FAIL, cipher=DHE-RSA-AES256-SHA, bits=256/256

    Testing on the command line:
        telnet localhost 587
        Trying 127.0.0.1...
        Connected to localhost.
        Escape character is '^]'.
        220 xxxt ESMTP Sendmail 8.14.5/8.14.5; Fri, 15 Jun 2012 18:28:03 -0500 (CDT)
        ehlo localhost
        250-xxxx Hello localhost [127.0.0.1], pleased to meet you
        250-ENHANCEDSTATUSCODES
        250-PIPELINING
        250-8BITMIME
        250-SIZE
        250-DSN
        250-AUTH GSSAPI DIGEST-MD5 CRAM-MD5 LOGIN PLAIN
        250-DELIVERBY
        250 HELP

    I am not using any certs or SSL right now - just trying to get basic auth to work. Does anyone have any ideas?
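    One way to take the mail clients out of the equation is to test AUTH by hand in the same telnet session shown above. The username and password below are placeholders; the token is simply \0user\0pass, base64-encoded:

        # Build an AUTH PLAIN token (placeholder credentials):
        printf '\0%s\0%s' 'someuser' 'somepass' | openssl base64

        # Then, inside "telnet localhost 587", after EHLO:
        #   AUTH PLAIN <token-from-above>
        # A 235 reply means saslauthd and sendmail are fine and the problem is
        # client-side; a 535 reply means the auth path itself is failing.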


  • Is Gmail Being Blocked by my ISP?

    - by james
    EDIT: I thought I pinpointed the problem. Just now I tried to go to the firefox addons page which uses https and gmail also uses https. So I thought I am unable to load https pages on this computer. So I went to a bank site which uses https but that loads just fine. Sigh.... I asked this over at superuser but they weren't able to help, so I was hoping the sysadmins here will be able to advise as to what's wrong. Although the issue here is with a PC and not a server it still deals with networking so I hope it's not too irrelevant. The Issue: I have a desktop on which I cannot access Gmail and also youtube sign in (I believe since youtube is owned by google they both use the same sign in system). On other computers that uses the same connection via a wireless router I can access both gmail and youtube sign in just fine. On this computer which doesn't have a wireless card and so I have to connect via Ethernet cable (connected to a USB converter since the Ethernet port doesn't work anymore) I can access all sites and services including things like aol and hotmail. But only when it comes to gmail, do I get complete and utter throttling. I even turned off my AV ad Firewall momentarily and no luck. The gmail log in page starts to load and by mid point it just stays there loading and loading and loading... never ends. I tried everything, I reset the modem and router multiple times. I reinstalled my operating system from a vista to a windows 7 hoping that a complete reinstall would solve the issue, but no luck. And yes, I am going to call my ISP but not to solve this issue, but to cancel them. I want to upgrade to cable from DSL anyway. I didn't mention my ISP because I'm not sure if that is within the rules (if it's okay some one let me know and I will). P.S. All this happened one day, before that gmail was perfectly accessible in this computer. I can't remember anything special happening on that day prior to this. The only thing I can think of is, my ISP or Google itself is blocking this computer based on it's mac address, but I don't know if that's even done. Additional info: PC: Windows 7 Ultimate 32 bit Connection Type: DSL Connecting Medium: Ethernet cable via USB converter I should mention I can access gmail and youtube just fine through a IP proxy service.


  • Cisco ASA SSL VPN options?

    - by JonH
    Disclaimer: I am not a network admin so I may be wrong here but I thought asking here would help. I'm a developer mainly on the .net framework as well as helping get a mobile intranet app working. Because this app is only allowed to be used on our network I can easily run this app on our wireless network connection within our building. All is fine and dandy but we'd also like to be able to run this mobile app at say a customer plant using VPN software. I thought surely this could be easy as we exclusively use Samsung s4 phones so I thought I'd download Cisco's Samsung any connect software to allow us to VPN...its right on the play store. Sure enough it doesn't work. I mention it to our network admin who says not possible since we have old technology that doesn't support SSL. He mentions we'd have to upgrade all of our hardware, the firewall, etc. to get this to work. We really need VPN on our phones not only for this app but other internal apps, etc. He did mention the following: We can’t upgrade the software on our ASA, because we don’t have enough memory for the new version.  (the asa is very old).  We can’t add more memory, so we would have to get a new firewall, which I have been told I cannot do. In addition he also mentioned: The Samsung AnyConnect client uses SSL to connect.  With the current (old) version of software that our firewall is running, the SSL connections are unreliable.  We need different hardware in order to upgrade our firewall, which we are unable to attain at this time.  This is the same reason that Windows 8 clients are not able to connect. I am curious hence me asking. vpns seem to be fairly simple to setup. What other options do I have aside from making this a public site or web service that consumes this data over the internet as this is a complete no no. What can we do to make this work without that much effort or cost.


  • eAccelerator settings for PHP/Centos/Apache

    - by bobbyh
    I have eAccelerator installed on a server running WordPress with PHP/Apache on CentOS. I am occasionally getting persistent "white pages", which presumably are PHP fatal errors (although these errors don't appear in my error_log). These "white pages" are sprinkled here and there throughout the site. They persist until I go to my eAccelerator control.php page and clear/clean/purge my caches, which suggests to me that I've configured eAccelerator improperly.

    Here are my current /etc/php.ini settings (descriptions quoted from http://eaccelerator.net/wiki/Settings):
        memory_limit = 128M
        eaccelerator.shm_size = "64" - "the amount of shared memory eAccelerator should allocate to cache PHP scripts"
        eaccelerator.shm_max = "0" - "the maximum size a user can put in shared memory with functions like eaccelerator_put ... The default value is "0" which disables the limit"
        eaccelerator.shm_ttl = "0" - "When eAccelerator doesn't have enough free shared memory to cache a new script it will remove all scripts from shared memory cache that haven't been accessed in at least shm_ttl seconds. By default this value is set to "0" which means that eAccelerator won't try to remove any old scripts from shared memory."
        eaccelerator.shm_prune_period = "0" - "When eAccelerator doesn't have enough free shared memory to cache a script it tries to remove old scripts if the previous try was made more then "shm_prune_period" seconds ago. Default value is "0" which means that eAccelerator won't try to remove any old script from shared memory."
        eaccelerator.keys = "shm_only" - "These settings control the places eAccelerator may cache user content. ... 'shm_only' cache[s] data in shared memory"

    On my phpinfo page it says:
        memory_limit 128M
        Version 0.9.5.3
        Caching Enabled true

    On my eAccelerator control.php page it says:
        64 MB of total RAM available
        Memory usage 77.70% (49.73MB / 64.00MB)
        27.6 MB is used by cached scripts in the PHP opcode cache (I added up the file sizes myself)
        22.1 MB is used by the cache keys, which are populated by the WordPress object cache

    My questions are:
    1. Is it true that there is only 36.4 MB of room in the eAccelerator cache for total "cache keys" (64 MB of total RAM minus whatever is taken by cached scripts, which is 27.6 MB at the moment)?
    2. What happens if my app tries to write more than 22.1 MB of cache keys to the eAccelerator memory cache? Does this cause eAccelerator to go crazy, like I've seen?
    3. If I change eaccelerator.shm_max to be equal to (say) 32 MB, would that avoid this problem? Do I also need to change shm_ttl and shm_prune_period to make eAccelerator respect the MB limit set by shm_max?

    Thanks! :-)


  • Can't get virtual desktops to show up on RDWeb for Server 2012 R2

    - by Scott Chamberlain
    I built a test lab using the Windows Server 2012 R2 Preview. The initial test lab has the following configuration (I have replaced our name with "OurCompanyName" because I would like it if Google searches for our name did not cause people to come to this site, please do the same in any responses) Physical hardware running Windows Server 2012 R2 Preview full GUI, acting as Hyper-V host (joined to the test domain as testVwHost.testVw.OurCompanyName.com) with the following VM's running on it VM running 2012 R2 Core acting as domain controller for the forest testVw.OurCompanyName.com (testDC.testVw.OurCompanyName.com) VM running 2012 R2 Core with nothing running on it joined to the test domain as testIIS.testVw.OurCompanyName.com A clean install of Windows 7, all that was done to it was all windows updates where loaded and sysprep /generalize /oobe /shutdown /mode:vm was run on it A clean install of Windows 8, all that was done to it was all windows updates where loaded and sysprep /generalize /oobe /shutdown /mode:vm was run on it I then ran "Add Roles and Features" from testVwHost and chose the "Remote Desktop Services Installation", "Standard Deployment", "Virtual machine-based desktop deployment". I choose testIIS for the roles "RD Connection Broker" and "RD Web Access" and testVwHost as "RD Virtualization Host" The Install of the roles went fine, I then went to Remote Desktop Services in server manager and wet to setup Deployment Properties. I set the certificate for all 3 roles to our certificate signed by a CA for *.OurCompanyName.com. I then created a new Virtual Desktop Collection for Windows 7 and Windows 8 and both where created without issue. On the Windows 7 pool I added RemoteApp to launch WordPad, For windows 8 I did not add any RemoteApp programs. Everything now appears to be fine from a setup perspective however if I go to https://testIIS.testVw.OurCompanyName.com/RDWeb and log in as the use Administrator (or any orher user) I don't see the virtual desktops I created nor the RemoteApp publishing of WordPad. I tried adding a licensing server, using testDC as the server but that made no difference. What step did I miss in setting this up that is causing this not to show up on RDWeb? If any additional information is needed pleas let me know. I have tried every possible thing I can think of and I am just groping around in the dark now. The virtual machines running on testVwHost The configuration screen for RD Services The Windows 7 Pool The Windows 8 Pool This is logged in as testVw\Administrator


  • ASA and cisco vs NSA sonic firewall

    - by Lbaker101
    Currently I’m trying to structure our network to fully support and be redundant with BGP/Multi homing. Our current company size is 40 employees but the major part of that is our Development department. We are a software company and continued connection to the internet is a requirement as 90% of work stops when the net goes down. The only thing hosted on site (that needs to remain up) is our exchange server. Right now i'm faced with 2 different directions and was wondering if I could get your opinions on this. We will have 2 ISPs that are both 20meg up/down and dedicated fiber (so 40megs combined). This is handed off as an Ethernet cable into our server room. ISP#1 first digital ISP#2 CenturyLink we currently have 2x ASA5505s but the 2nd one is not in use. It was there to be a failover and it just needs the security+ license to be matched with the primary device. But this depends on the network structure. I have been looking into the hardware that would be required to be fully redundant and I found that we will either of the following. 2x Cisco 2921+ series routers with failover licenses. They will go in front of the ASAs and either connects in a failover state or 1 ISP into each of the 2921 series routers and then 1 line into each of the ASAs (thus all 4 hardware components will be used actively). So 2x Cisco 2921+ series routers 2x Cisco ASA5505 firewalls The other route 2x SonicWalls NSA2400MX series. 1 primary and the secondary will be in a failover state. This will remove the ASAs from the network and be about 2k cheaper than the cisco route. This also brings down the points of failure because it’s just the 2x sonicwalls It will also allow us to scale all the way up to 200-400 users (depending on their configuration). This also makes so the Sonic walls. So the real question is with the added functionality ect of the sonicwall is there a point in paying so much more to stay the cisco route? Thanks!


  • DNS NS and domain clarification

    - by thejartender
    I am really trying to get my home web server up and I don't seem to be succeeding. The web server within my host system is running my web application and is viewable at the current ISP IP, 88.89.190.171, over the WAN, indicating that the web app is fine and that the router ports are forwarded.

    I have set up a DNS server on this system with a single name server in the network, and I manage to ping it with:
        ping ns.thejarbar.org
    I have registered this private name server at my current hosting provider. My domain (thejarbar.org) is obviously registered, and I have pointed it to my name server.

    My question here is whether it is simply a matter of waiting on propagation before I can ping my domain. Another way of asking this: does the fact that my name server is discoverable indicate that I have set it up correctly to be used? I have tested with dig and dig -x on my host and have A records for the name server. The server is not the authoritative server, so I am concerned that this may be the reason why my site is not discoverable. Is there anything else I may still need to do?

    I only have one NS currently, but should this succeed I will be purchasing a more stable secondary system to host my development applications. This is my best chance at getting work (freelance development, due to illness), and this I feel is the last step I need to succeed. Please note that this is temporarily a home server and I will most likely be using it as part of a professional setup very soon; I will likely have to repeat this question in a professional context in a few weeks, as nothing will be different other than the fact that I am going to have a server running elsewhere.

    I am using BIND 9 and Ubuntu 12.10, and my records are:

        $TTL 3D
        @       IN      SOA     ns.thejarbar.org. email. (
                                13112012
                                28800
                                3600
                                604800
                                38400 );
        thejarbar.org.  IN      A       10.0.0.42
        @               IN      NS      ns.thejarbar,org.
        yuccalaptop     IN      A       10.0.0.19
        ns              IN      A       10.0.0.42
        gw              IN      A       10.0.0.138
        www             IN      CNAME   thejarbar.org.

        $TTL 3D
        0.0.10.in-addr.arpa.    IN      SOA     ns.thejarbar.org. email. (
                                13112012
                                28800
                                3600
                                604800
                                38400 );
        0.0.10.in-addr.arpa.    IN      NS      ns.thejarbar.org.
        42      IN      PTR     thejarbar.org.
        19      IN      PTR     yuccalaptop.thejarbar.org.
        138     IN      PTR     gw.thejarbar.org.

    My localhost IP is 10.0.0.42; I wish for this to be my host and name server.
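    A few checks that separate "zone file problem" from "delegation/propagation problem", using only names that already appear above (the zone file path is a placeholder); note that outside clients can only make use of records that point at the public address, not the 10.0.0.x ones:

        # Does the zone parse, and does BIND load it?
        named-checkconf
        named-checkzone thejarbar.org /etc/bind/db.thejarbar.org

        # Does the name server answer authoritatively for the zone?
        dig @ns.thejarbar.org thejarbar.org SOA +norecurse

        # What does the outside world see, following the delegation from the root?
        dig +trace thejarbar.org A
        dig thejarbar.org NS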

