Search Results

Search found 17924 results on 717 pages for 'z order'.

  • Data recovery on a corrupted 3TB disk

    - by Mark K Cowan
    Short version: I probably need software to run a deep-scan recovery (ideally on Linux) to find files on an NTFS filesystem. The file data is intact, but the references are no longer present; this is analogous to recovering data from a "quick-formatted" partition. Hopefully there is a smarter way available than a deep scan, one which would recover filenames and possibly paths.

    Long version: I have a 3TB disk containing a load of backups. Windows 7 SP1 refused to detect the disk when plugged in directly via SATA, so I put it on a USB/SATA adaptor, which seemed to work at first. The SATA/USB adaptor probably does not support disks over 2.2TB, though. Windows first asked me if I wanted to 'format' the disk, then later showed me most of the contents, but some folders were inaccessible. I stupidly decided to run a CHKDSK on my backup disk, which made the folders accessible but also left them empty.

    I connected this disk via SATA to my main PC (Arch Linux) and tried:
        - testdisk
        - ntfsundelete
        - ntfsfix --no-action (to look for diagnostically relevant faults; the disk was "OK" though)
    to no avail, as the file references in the tables had presumably been zeroed out by CHKDSK rather than marked with a typical journalled deletion.

    If it is useful at all, the majority of the files that I want to recover are JPEG, Photoshop PSD, and MPEG-3/MPEG-4/AVI/MKV files. If worst comes to worst, I'll just design my own sector scanner and use some simple heuristic-driven analysis to recover raw binary blocks of data from the disk which appear to match the structures of the above file types. I am unfamiliar with the exact workings of NTFS but used to be proficient at recovering FAT32 systems with just a hex editor, so I can provide any useful diagnostic information if you let me know how to find it!

    My priorities, in ascending order of importance for choosing the accepted answer:
        - Restores directory structure
        - Recovers many filenames in addition to the file data
        - Is free / very cheap
        - Runs on Linux
        - Recovers a majority of file data
    The last point is the most important, but the more of the higher points you match the more rep you'll probably get :)
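
    As a rough illustration of the heuristic signature scan described above (a sketch only; /dev/sdb, the offset, and the extract size are placeholders, and GNU grep is assumed for the -P and -b flags):

        # List byte offsets of candidate JPEG headers (FF D8 FF) on the raw device
        LC_ALL=C grep -obUaP '\xFF\xD8\xFF' /dev/sdb | head
        # Pull out a region starting at one of those offsets for inspection
        # (bs=1 is slow but byte-accurate; 5242880 = 5 MiB, an arbitrary size)
        dd if=/dev/sdb of=candidate.jpg bs=1 skip=123456789 count=5242880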

  • howto configure proxy.conf for mod_proxy, apache2, jetty

    - by Kaustubh P
    Hello, this is how I have set up my environment at the moment: an apache2 instance on port 80, and a Jetty instance on the same server, on port 8090.

    Use case: when I visit foo.com, I should see the webapp, which is hosted on Jetty, port 8090. If I put foo.com/blog, I should see the WordPress blog, which is hosted on Apache. (I read howtos on the web, and installed it using AMP.)

    Below are my various configuration files.

    /etc/apache2/mods-enabled/proxy.conf:

        ProxyPass / http://foo.com:8090/        << this is the jetty server
        ProxyPass /blog http://foo.com/blog
        ProxyRequests On
        ProxyVia On
        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>
        ProxyPreserveHost On
        ProxyStatus On

    /etc/apache2/httpd.conf:

        LoadModule proxy_module /usr/lib/apache2/modules/mod_proxy.so
        LoadModule proxy_balancer_module /usr/lib/apache2/modules/mod_proxy_balancer.so
        LoadModule proxy_http_module /usr/lib/apache2/modules/mod_proxy_http.so
        LoadModule proxy_ajp_module /usr/lib/apache2/modules/mod_proxy_ajp.so

    I have not created any other files in sites-available or sites-enabled.

    Current situation: if I go to foo.com, I see the webapp. If I go to foo.com/blog, I see:

        HTTP ERROR 404
        Problem accessing /errors/404.html. Reason: NOT_FOUND
        powered by jetty://

    If I comment out the first ProxyPass line, then on foo.com I only see the homepage without CSS applied, i.e. only text, and going to foo.com/blog gives me this error:

        The proxy server received an invalid response from an upstream server.
        The proxy server could not handle the request GET /blog.
        Reason: Error reading from remote server

    I also cannot access /phpmyadmin, which gives the same 404 NOT_FOUND error as above. I am running Debian squeeze on an Amazon EC2 instance.

    Question: where am I going wrong? What changes should I make in proxy.conf (or other conf files) to be able to visit the blog?
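
    One arrangement that is often suggested for this kind of split (a sketch only, not a tested fix for the setup above): mod_proxy checks ProxyPass rules in configuration order, so the paths Apache should serve itself need to be excluded with the "!" form before the catch-all rule.

        # Serve /blog (and /phpmyadmin) locally, proxy everything else to Jetty
        ProxyPass /blog !
        ProxyPass /phpmyadmin !
        ProxyPass / http://localhost:8090/
        ProxyPassReverse / http://localhost:8090/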

  • "The zone can be scavenged after" keeps incrementing

    - by kce
    What are you trying to do? I'm trying to enable DNS scavenging on a DNS zone that has about a hundred stale DNS records.

    What have you tried in order to make it happen? I set up DNS scavenging per everyone's favorite TechNet blog post: Don't be afraid of DNS Scavenging. Just be patient. I first disabled scavenging on all of our domain controllers:

        DNSCmd . /ZoneResetScavengeServers contoso.com 192.168.1.1 192.168.1.2

    I then enabled automatic scavenging on the DNS zone, and enabled DNS scavenging on one of the domain controllers. I then found a few records that I expected to get deleted, with timestamps from a few years ago, and ensured that "Delete this record when it becomes stale" was checked and that the time stamp was actually set. Finally I reloaded the zone and waited 14 days (the sum of the Refresh + No-Refresh periods).

    What results did you expect? I expected to see a 2501 event in the DNS server logs noting the deletion of a bunch of DNS records.

    What actually happened? Nothing happened. The Zone Aging/Scavenging Properties showed that the zone could be scavenged after 6/12/2014 10:00:00 AM last week. No 2501/2502 events were recorded. All of the records with "aged" time stamps are still present. The date at which the zone can be scavenged after incremented another seven days to 6/18/2014 10:00:00 AM. As I understand it, until that date stays at least 14 days in the past, nothing will ever even be eligible for scavenging, let alone actually be scavenged. The only 2501 events recorded in the event logs are ones that I have triggered by right-clicking and selecting "Scavenge Stale Resource Records". They note that scavenging will try to run again in 168 hours, which was this morning. I have had DNS scavenging enabled for a few months and have waited patiently for something to happen. I have reloaded the zone multiple times (which resets this timestamp). What am I missing here?
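
    For reference, a few dnscmd queries that can confirm what the server and zone think their aging settings are (a sketch; contoso.com is the placeholder zone used above):

        :: Server-wide scavenging interval (0 means scavenging is disabled on this server)
        dnscmd /Info /ScavengingInterval
        :: Zone-level settings, including the aging flag and refresh/no-refresh intervals
        dnscmd /ZoneInfo contoso.com
        :: Ask this server to run a scavenging pass immediately
        dnscmd /StartScavenging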

  • Why is /usr/bin/env permission denied to rails server?

    - by Eric Hopkins
    I've just set up Rails on an Apache server running on Ubuntu, and when I try to go to the root page it gives this error:

        /usr/bin/env: bash: Permission denied

    env and all the directories in the path have permissions 755. I tried setting env to have permissions 777 but still got the same error. Rails is running as "nobody". Why is this happening? I don't know what else to try.

    In /etc/apache2/sites-available/api.conf:

        <VirtualHost *:80>
            ServerName api.thinknation.ca
            ServerAlias api.thinknation.ca
            DocumentRoot /var/www/api/public
            ErrorLog /var/www/logs/error.log
            CustomLog /var/www/logs/access.log combined
            RailsSpawnMethod smart
            <Directory /var/www/api/public>
                # This relaxes Apache security settings.
                AllowOverride all
                # MultiViews must be turned off.
                Options -MultiViews -Indexes
                # Uncomment this if you're on Apache >= 2.4:
                Order allow,deny
                Allow from all
                #Require all granted
            </Directory>
        </VirtualHost>

    From config/database.yml in my Rails directory (with sensitive user names and passwords omitted):

        default: &default
          adapter: mysql2
          encoding: utf8
          pool: 5
          username: root
          password:
          socket: /var/run/mysqld/mysqld.sock

        development:
          <<: *default
          database: api_development

        test:
          <<: *default
          database: api_test

        production:
          <<: *default
          url: <%= ENV['DATABASE_URL'] %>
          database: api
          username: ------------
          password: ------------

    Not sure what other details or files are relevant; I will add them if needed.
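
    That message reads as env failing to execute bash (the interpreter named after it), rather than env itself being blocked, so a few checks are worth a look; a sketch, using the "nobody" account mentioned above:

        # Show permissions on every path component, not just the final file
        namei -l /usr/bin/env
        namei -l /bin/bash
        # See whether the filesystem holding the interpreter is mounted noexec
        mount | grep -w noexec
        # Try to run bash the same way the app server would, as nobody
        sudo -u nobody /usr/bin/env bash -c 'echo ok'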

  • Google Chrome not loading web pages correctly unless multiple refreshes

    - by Brandon Wilson
    Webpages in Google Chrome do not load correctly from time to time. I can't reproduce it; it just happens. Sometimes it happens when I load the browser, other times it happens when I am just browsing. Just now I went to five different web sites, and 3 out of 5 of them did not load correctly. I have attached a photo of how Super User loaded the first time I loaded it. If I refresh it, it will load correctly. Facebook is bad like this. Sometimes Facebook will load correctly, but some of their back-end scripting may not load, so the page may not refresh automatically. Not sure what is going on. I have tried other browsers (Firefox and Internet Explorer) and they seem to be working correctly. Chrome seems to be acting up only on this computer. All my computers are running Windows 8, and I have removed Chrome completely off this computer and re-installed. I even disabled all extensions and cleared all the caches. I even tried running Chrome without being logged in. Not sure what else to do at this point. An example of superuser.com not loading correctly: when I refresh, the problem will go away until it happens again. Sometimes it takes two or three refreshes in order for it to load correctly.

  • Elevating UAC via .bat file?

    - by jslaker
    Pretty straightforward one that I'm having trouble finding an answer to. Serverfault previously helped me with finding a way to automate Windows updates without using WSUS. It's working fantastically, but to run it over the network, you have to first mount a shared drive. That's pretty simple on XP, since you just mount the drive and run the updater. On Vista and W7, though, this all has to be done with elevated privileges to work correctly. The UAC account can't see network drives mounted by the regular user, so in order to get everything working, I have to mount the share via net use from an escalated shell. I'd like to automate mounting this share and launching the updater via a simple .bat file. I could probably just instruct everybody to right-click and "Run as Administrator" on the .bat file, but I'd like to keep things as simple as possible and have the .bat automatically prompt the user to escalate their privileges. Since these computers don't belong to us, I can't count on anything like PowerShell being installed, so that rules out any solution along those lines, and I pretty much have to rely on things that would be included in an RTM Vista install. I'm hoping I'm mostly missing something obvious here. :)
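
    For illustration, a minimal sketch of the self-elevation idea (the share path and updater name are placeholders): it relaunches itself through a temporary VBScript so that UAC prompts the user, relying only on cscript, which ships with Vista RTM.

        @echo off
        rem If we are not elevated, "net session" fails; relaunch ourselves via UAC.
        net session >nul 2>&1
        if %errorlevel% neq 0 (
            echo Set UAC = CreateObject^("Shell.Application"^) > "%temp%\elevate.vbs"
            echo UAC.ShellExecute "%~f0", "", "", "runas", 1 >> "%temp%\elevate.vbs"
            cscript //nologo "%temp%\elevate.vbs"
            del "%temp%\elevate.vbs"
            exit /b
        )
        rem Now elevated: mount the share and run the updater (placeholder paths).
        net use Z: \\server\updates
        Z:\run_updates.bat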

  • How can I set Thunderbird's "Recipient" column to display my email address rather than a friendly name?

    - by Howiecamp
    After configuring a single, unified Inbox within Outlook 2007 to unify multiple email accounts, I found Thunderbird 3's Smart Folder feature. It works great, providing individual inboxes for each of your email accounts and a unified inbox which provides a unified, virtual view of those other inboxes. Thunderbird is smart enough that when I reply to an email addressed to a specific email account, the reply is "From" that email address. In order to know which inbound email went to which of my accounts, I added the "Recipient" column to the inbox Smart Folder. What's displayed in the Recipient column depends on how the sender/sender's email client addresses the email. If they send it to just "[email protected]" without specifying a friendly name, the Recipient column displays "[email protected]" and there's no ambiguity about which account the email was sent to. However, if the sender has me in their address book (likely stored with a friendly name), it will be addressed as "Howard Camp [[email protected]]" and then show in the Recipient column as "Howard Camp". The problem is that if someone emails me with a friendly name at another of my email accounts (e.g. "Howard Camp [[email protected]]"), the Recipient column will also display "Howard Camp" and I can't tell which account it went to until I open the message and/or look at the details. How can I configure Thunderbird to always display my email address rather than the sender-specified friendly name in the Recipient column?

  • Providing access to a Samba server for VPN clients

    - by Kamil Kisiel
    We have some Windows users that connect to our network via VPN from home. They need to be able to connect to our Samba server and access a mapped network drive just as they do when they are on our LAN. The complication is that VPN clients are placed on a subnet other than our office LAN, and behind a firewall. What's the easiest way for me to allow them to still connect to the network share? The solutions I've currently seen involve setting up a WINS server for name resolution purposes and then tunnelling a bunch of the NetBIOS stuff through the firewall. However, that means I'd have to set up the VPN DHCP server to hand out the WINS address, something I'm not even sure is possible on the Cisco hardware we have. I'm thinking there must be an easier way. Should I use an LMHOSTS file? Or just map by IP address? Also, I'm not terribly familiar with Windows networking, so which ports would I need to pass through my firewall in order to get the file sharing through?

  • Setting up HTTPS across multiple servers

    - by JohnyD
    I'm looking to offer our online services over HTTPS and I'm having a couple of problems understanding how to accomplish this. To access our services you must pass through our ISA firewall to a Win2000 server running IIS6. About half our services are located there, and the other half take you to a Win2003 server also running IIS6. So, in order to achieve this, must each server have the proper certificate installed: ISA, IIS6_1 and IIS6_2? Is there a separate configuration that must be made to our ISA firewall? The other problem is with the CA and knowing how many certificates I need. It's important to note that the domain name for our services on IIS6_1 is www.domainname.com but the domain name on IIS6_2 is services.domainname.com. I believe that this will require me to purchase more than one certificate. It looks as though we will be going with Thawte's SSL123 as it's a good name and it's fast to get. Will I need to purchase 2 certificates (one for www that will be installed on our ISA firewall as well as IIS6_1, and one for services.domainname.com on IIS6_2)? Or will I need to purchase 3, the extra one being used on our firewall server? Another side question is about SANs (subject alternative names). Is this basically adding sub-domains to your cert? So could I purchase one cert with one SAN covering both www and services? Thanks a lot for your help! Please let me know if I can provide any further information.

  • Managing a test iSCSI target server

    - by dyasny
    Hi all, I am using a RHEL server with a few hard drives, and tgtd as the iSCSI target software. I am looking for a way to allocate and deallocate space, and targets with that space, without restarting my system or harming other LUNs. Currently, all my HDDs are PVs in a single VG, and I lvcreate/lvremove as required, and then export the allocated LVs using a tgt script:

        /usr/sbin/tgtadm --lld iscsi --op new --mode target --tid=1 --targetname iqn.2001-04.com.lab.gss:300gb
        /usr/sbin/tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/mapper/iscsi_vg-iscsi_300Gb
        /usr/sbin/tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
        /usr/sbin/tgtadm --lld iscsi --op new --mode target --tid=2 --targetname iqn.2001-04.com.lab.gss:200gb
        /usr/sbin/tgtadm --lld iscsi --op new --mode logicalunit --tid 2 --lun 1 -b /dev/mapper/iscsi_vg-iscsi_200Gb
        /usr/sbin/tgtadm --lld iscsi --op bind --mode target --tid 2 -I ALL
        /usr/sbin/tgtadm --lld iscsi --op new --mode target --tid=3 --targetname iqn.2001-04.com.lab.gss:100gb
        /usr/sbin/tgtadm --lld iscsi --op new --mode logicalunit --tid 3 --lun 1 -b /dev/mapper/iscsi_vg-iscsi_100Gb
        /usr/sbin/tgtadm --lld iscsi --op bind --mode target --tid 3 -I ALL
        tgtadm --mode target --op show

    So in order to remove a LUN, I stop the tgtd service, lvremove the LV, and remove the entry from the iSCSI target script. When I add a LUN, I run lvcreate, and then add an entry to the script and run it. This is not quite optimal, since restarting the service is a bad idea while other LUNs are busy, so I am looking for a more scalable and safer way. Thanks
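
    For what it's worth, tgtadm also has a delete operation, so a single target can in principle be torn down without touching tgtd or the other tids; a sketch against the tid 2 target defined above:

        # Remove the LUN from target 2, then remove the target itself (tgtd keeps running)
        tgtadm --lld iscsi --op delete --mode logicalunit --tid 2 --lun 1
        tgtadm --lld iscsi --op delete --mode target --tid 2
        # Only then remove the backing LV
        lvremove /dev/iscsi_vg/iscsi_200Gb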

  • Can't include Javascript variable in PHP mysql_query call? [on hold]

    - by user198895
    I want the PHP mysql_query call to retrieve user values based on the Agency drop-down value but I can't get this to work. Am I unable to include the Javascript variable agency.value in PHP?

        <script type="text/javascript">
        var agency = document.getElementById("agency");
        var user = document.getElementById("user");
        agency.onchange = onchange; // change options when agency is changed

        function onchange() {
            <?php include 'dbConnect.php'; ?>
            <?php $q = mysql_query("select id as UserID, CONCAT(LastName, ', ' , FirstName) as UserName from users where Agency = " . ?>agency.value<?php . " order by UserName");?>
            option_html = "<option value=0 selected>- All Users -</option>";
            <?php while ($row1 = mysql_fetch_array($q)) {?>
                if (agency.value == 0 || agency.value == '<?php echo $row1[AgencyID]; ?>') {
                    option_html += "<option value=<?php echo $row1[UserID]; ?>><?php echo $row1[UserName]; ?></option>";
                }
            <?php } ?>
            user.innerHTML = option_html;
        }
        </script>

  • mpirun -np N, what if N is larger than my core number?

    - by Daniel
    Say I have a 4-core workstation; what would Linux (Ubuntu) do if I execute

        mpirun -np 9 XXX

    Q1. Will all 9 run together immediately, or will they run 4 at a time?
    Q2. I suppose that using 9 is not good because of the remainder of 1; will it confuse the computer? (I don't know whether it gets confused at all, or whether the "head" of the computer decides which core among the 4 cores will be used, or whether it is picked randomly.) Who decides which core to call?
    Q3. If my CPU is not bad, my RAM is okay and large enough, and my case is not very big, is it a good idea, in order to fully use my CPU and RAM, to do mpirun -np 8 XXX, or even mpirun -np 12 XXX?
    Q4. Who decides all of this efficiency optimization: Ubuntu, Linux, the motherboard, or the CPU?
    Your enlightenment would be really appreciated.

  • Is it possible to re-cab an Administrative Install Point?

    - by Nathaniel Bannister
    We have Acrobat 8 Pro at work, and our media was painfully out of date. Rather than install all of the machines at 8.0.0 and then do the 6 or 7 consecutive reboots Adobe expects you to be OK with, I decided I'd integrate the .msp files into the installer. After reading up on it, I figured out the exact patch order that Adobe required, extracted my CD to an administrative install point, and ran the patches against it:

        msiexec /a AcroPro.msi /p AcrobatUpd810_efgj_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd811_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd812_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd813_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd816_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd817_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd820_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd822_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd823_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd825_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd826_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"

    Now I have an AIP that is fully patched to 8.2.6 (tested working prior to attempting to CAB it), but it is absolutely huge (1.2 GB). What I would like to do is take the folders within the AIP and put them back into a cab file for the sake of convenience in transferring the files around. I tried the command:

        cscript "C:\Program Files\Microsoft SDKs\Windows\v7.0\Samples\sysmgmt\msi\scripts\WiMakCab.vbs" AcroPro.msi Data1 /L /C /S

    Per the guide I was using, this did produce the cab file I wanted; however, the resulting MSI fails to install with an error 2602. It's been a while since I've done something like this, and it's probably a glaring oversight on my part, but any insight would be much appreciated.

  • What is causing apache2 proxy error when forwarding to tomcat?

    - by Dark Star1
    I set up apache to proxy for tomcat but I am getting the following error when I target the page. I sometimes get a blank page or a 503:

        [Mon Dec 03 04:58:16 2012] [error] proxy: ap_get_scoreboard_lb(2) failed in child 29611 for worker proxy:reverse
        [Mon Dec 03 04:58:16 2012] [error] proxy: ap_get_scoreboard_lb(1) failed in child 29611 for worker https://localhost:8443/
        [Mon Dec 03 04:58:16 2012] [error] proxy: ap_get_scoreboard_lb(0) failed in child 29611 for worker http://localhost:8080/

    I have two vhosts configured on the vm as follows.

    [http host]

        <VirtualHost *:80>
            ServerName www.mysite.net
            ServerAlias mysite.net
            ProxyRequests Off
            ProxyPreserveHost On
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPass / http://localhost:8080/ retry=0
            ProxyPassReverse / http://localhost:8080/ retry=0
        </VirtualHost>

    [ssl vhost]

        <VirtualHost *:443>
            ServerName www.mysite.net
            ServerAlias mysite.net
            ErrorLog /var/log/apache2/error.log
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            ServerSignature On
            SSLEngine on
            SSLProxyEngine on
            SSLCertificateFile /etc/apache2/ssl/server.crt
            SSLCertificateKeyFile /etc/apache2/ssl/server.key
            ProxyRequests Off
            ProxyPreserveHost On
            ProxyPass / https://localhost:8443/ retry=0
            ProxyPassReverse / https://localhost:8443/ retry=0
        </VirtualHost>

    My system details are: Apache/2.2.22 (Ubuntu), mod_jk/1.2.32, mod_ssl/2.2.22, OpenSSL/1.0.1. mod_proxy_http is also enabled.

  • Proper 16:9 video size for non-HD 4:3 video (for youtube/vimeo)

    - by Xeoncross
    Since high-definition video came out on all the online sites, it has changed the default aspect ratio of the player from 4:3 to 16:9. This means that for people posting SD video, you have to resize some of your videos to get them to fit right. For example, NTSC DVD quality (aka 480i/p) is 720x480 pixels (width x height). However, low-end high definition (720i/p) is 1280x720.

    Resolution Chart

    Anyway, now that the video players are built for HD, you will find that uploading standard-quality videos will result in videos that are "letterboxed", which means they have extra black bars on the top and bottom (or sides). Correct me if I'm wrong, but in order to get a 720x480 video to fit a box that is designed for HD, the best practice would be to crop some of it off so that it fits as 720x404, since:

        16/9 = 1.78 (1.7777777777778)
        720/405 = 1.78
        405 x 1.78 = 720.9

    The same would stand for 640x480 (old TV quality) video, which would need to be 640x360, correct? I'm asking because I'm not sure about all this and whether this is the proper way to fix these letterboxing/display problems.

  • netsnmp - how to register string?

    - by user1495181
    I use net-snmp. I am trying to add my own MIBs (no need for a handler, just a MIB that I can get and set by an SNMP call), so I followed the scalar example. In order to add my own MIBs, I defined them in the MIB file and created an agent extension (see below). It works, so I now have an integer MIB. Now I want to add a string MIB, so I define the MIB, but I can't find a register API for strings like the one I have for the int, netsnmp_register_int_instance. I looked in the include files but didn't find a matching one.

    agent:

        #include <net-snmp/net-snmp-config.h>
        #include <net-snmp/net-snmp-includes.h>
        #include <net-snmp/agent/net-snmp-agent-includes.h>
        #include "monitor.h"

        static int int_init = 0; /* default value */

        void init_monitor(void)
        {
            oid open_connections_count_oid[] = { 1, 3, 6, 1, 4, 1, 8075, 1, 0 };
            netsnmp_register_int_instance("open_connections_count",
                                          open_connections_count_oid,
                                          OID_LENGTH(open_connections_count_oid),
                                          &int_init, NULL);
        }

  • Is it possible to mount a disk image, created with dd, to a directory on a mounted external usb hdd?

    - by Keeper Hood
    I have an image of my home (/dev/sda3) partition, which I created using the "dd" command:

        dd if=/dev/sda3 of=/path/to/disk.img

    I deleted the home partition via GParted in order to enlarge my /dev/root partition. Then I recreated the /dev/sda3 partition, which is smaller in size than the one I backed up to the image. I was wondering, since I have a 2TB external HDD, could it be possible to mount my backed-up image on the external HDD and then copy the files into the /home directory? Since the external HDD would already be in a "mounted state", I'm unsure whether this is a good idea, mounting on a mounted device. I'm running Slackware 13.37 (64-bit), used ext4 on all the partitions, and resized the root partition with the GParted live CD. I've tried:

        mount -t ext4 /path/to/disk.img /mng/image -o loop

    It gave me an fs error (wrong fs type, bad option, bad superblock on dev/loop/0). Then I did dmesg | tail, which outputs:

        EXT4-fs (loop0): bad geometry: block count 29009610 exceeds size of device (1679229 blocks)

    I have no idea what to do; I want to restore my /home data from the image I backed up.
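
    A couple of checks that may make sense of that dmesg line (a sketch; the paths are the placeholders used above). The superblock in the image claims 29009610 filesystem blocks while the loop device only provides 1679229, which suggests the image file itself is far smaller than the filesystem it was copied from, i.e. the dd copy may have been truncated; mounting a loop image that lives on an already-mounted external drive is not a problem in itself.

        # How big is the image file actually?
        ls -lh /path/to/disk.img
        # What size does the filesystem inside it think it is?
        dumpe2fs -h /path/to/disk.img | grep -i 'block count\|block size'
        # A read-only loop mount is safe to try once the sizes agree
        mkdir -p /mnt/image
        mount -o loop,ro /path/to/disk.img /mnt/image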

  • Boinc permissions problem on OS X

    - by Erik Vold
    I installed BOINC 6.10.21 on my OS X 10.5 machine today in order to upgrade from the 6.6 version that I was running. I am the admin user, and I was logged in as the admin user. As I was installing 6.10.21 I was asked if non-admin users should be allowed to use BOINC, and I said 'yes' to this. Then when I tried to open BOINC I got a message like the following:

        You currently are not authorized to manage the client. Either re-install and allow non-admin users or contact your administrator to add you to the 'boinc_master' user group.

    So I tried to reinstall first, and I was not asked if non-admin users should be allowed to use BOINC, so I retried a few times and got no different result. So I downloaded 6.10.43 and installed that, and again I was not asked if non-admin users should be allowed to use BOINC, and when I tried to run BOINC I got the same "You currently are not authorized to manage the client" message as above. So I did a Google search trying to figure out how to add my admin user to the boinc_master user group and found this, which suggested I run the following in Terminal:

        sudo dscl . -append /Groups/boinc_master GroupMembership <your user's short name> CR

    So I did this, and now I get the following error:

        BOINC ownership or permissions are not set properly; please reinstall BOINC (Error code -1200)

    So I reinstall, and I am never asked the question about allowing non-admin users again, and I still get this error message after every reinstall attempt. What should I do?
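
    One thing worth checking, as a sketch: the command quoted above appends the literal word "CR" (presumably meant as "press return" in the original instructions) as an extra member of boinc_master, and dscl can show and remove it; "erik" below is a placeholder for the admin account's short name:

        # See who is actually in the group
        dscl . -read /Groups/boinc_master GroupMembership
        # Remove the stray "CR" entry if it was added
        sudo dscl . -delete /Groups/boinc_master GroupMembership CR
        # Make sure the real admin short name is a member
        sudo dscl . -append /Groups/boinc_master GroupMembership erik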

  • Setting Up Apache as a Forward Proxy with Caching

    - by Karl
    I am trying to set up Apache as a forward proxy with caching, but it does not seem to be working correctly. Getting Apache working as a forward proxy was no problem, but no matter what I do it is not caching anything, to disk or to memory. I already checked to make sure nothing in the mods-enabled directory is conflicting with mod_cache (I ended up commenting it all out), and I also tried moving all of the caching-related directives to the configuration file for mod_cache. In addition I set up logging for caching requests, but nothing is being written to those logs. Below is my Apache config; any help would be greatly appreciated!!

        <VIRTUALHOST *:8080>
            ProxyRequests On
            ProxyVia On
            #ErrorLog "/var/log/apache2/proxy-error.log"
            #CustomLog "/var/log/apache2/proxy-access.log" common
            CustomLog "/var/log/apache2/cached-requests.log" common env=cache-hit
            CustomLog "/var/log/apache2/uncached-requests.log" common env=cache-miss
            CustomLog "/var/log/apache2/revalidated-requests.log" common env=cache-revalidate
            CustomLog "/var/log/apache2/invalidated-requests.log" common env=cache-invalidate
            LogFormat "%{cache-status}e ..."

            # This path must be the same as the one in /etc/default/apache2
            CacheRoot /var/cache/apache2/mod_disk_cache
            # This will also cache local documents. It usually makes more sense to
            # put this into the configuration for just one virtual host.
            CacheEnable disk /
            #CacheHeader on
            CacheDirLevels 3
            CacheDirLength 5

            ##<IfModule mod_mem_cache.c>
            #    CacheEnable mem /
            #    MCacheSize 4096
            #    MCacheMaxObjectCount 100
            #    MCacheMinObjectSize 1
            #    MCacheMaxObjectSize 2048
            #</IfModule>

            <Proxy *>
                Order deny,allow
                Deny from all
                Allow from x.x.x.x
                #IP above hidden for this post
                <filesMatch "\.(xml|txt|html|js|css)$">
                    ExpiresDefault A7200
                    Header append Cache-Control "proxy-revalidate"
                </filesMatch>
            </Proxy>
        </VIRTUALHOST>

    Thank you once again!
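
    One detail that may be relevant, offered as a suggestion rather than a tested fix: when Apache acts as a forward proxy, the mod_cache documentation has CacheEnable take a URL prefix rather than a local path, along the lines of:

        # Cache content fetched through the forward proxy (URL-prefix form from the mod_cache docs)
        CacheEnable disk http://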

  • How do I prevent or override a group policy on Windows 7?

    - by Kevin
    A few months ago my company was purchased by a large corporation. We recently switched our network over to the large corporate network, which has more restrictive requirements. One of these is the requirement to use a proxy server for Internet traffic. However, some of our internal servers are not recognized by the corporate DNS, so we need to provide the fully qualified domain name. For W7, we make changes to the Internet Properties for IE8 and Chrome to include our domain name as an exception to the proxy server (e.g., *.foobar.com). The problem is that a group policy that does not include our domain name is continually pushed out to my systems throughout the day. This requires me to make the appropriate changes to the Internet Properties several times a day in order to access our internal servers. Is there a way that I can prevent the group policy from being pushed to my systems, or detect when the group policy is pushed and override it? I am an administrator on all of my systems. I do have Firefox installed, which is not subject to the same group policy push, but I need to have IE8 and Chrome working.

  • Active Directory Restricted Group confusion

    - by pepoluan
    I am trying to implement Restricted Group policy for my company's AD infrastructure, namely standardizing the local "Administrators" group. The documentation (and various webpages) says that the "Members of this group" policy will wipe out the "Administrators" group. However, an experiment made me confused. I created 2 GPOs:

        - GPO-A replaces the Local Administrators with a list of domain users (e.g., "Alice" and "Bob")
        - GPO-B inserts a domain user (e.g., "Charlie", not part of GPO-A) into the Local Administrators

    Experiment 1: GPO-A gets applied first (link order 2). Everything happens as expected: GPO-A cleans out Local Admins and adds "Alice" & "Bob"; GPO-B adds "Charlie".

    Experiment 2: GPO-B is applied first. What happens: "Charlie" gets added to the Local Admins group (which also contains 2 local users); the local users on the PC get deleted, and "Alice" and "Bob" get added. Result: Local Admins contain "Alice", "Bob", and "Charlie".

    My confusion: in Experiment 2, I thought GPO-A would totally erase the Local Admins group, including users added by GPO-B (since GPO-A gets applied after GPO-B). As it happens, it only erases local users from the Local Admins, but keeps the domain users. So, is that the way it should be? Or am I doing something incorrectly?

  • trying to allow domain admins access in apache

    - by sharif
    I am trying to authenticate domain admins through Apache and it is not working. The error I get is as follows:

        [Mon Sep 24 14:54:45 2012] [debug] src/mod_auth_kerb.c(1432): [client 172.16.0.85] kerb_authenticate_user entered with user (NULL) and auth_type Kerberos
        [Mon Sep 24 14:54:45 2012] [debug] src/mod_auth_kerb.c(915): [client 172.16.0.85] Using HTTP/[email protected] as server principal for password verification
        [Mon Sep 24 14:54:45 2012] [debug] src/mod_auth_kerb.c(655): [client 172.16.0.85] Trying to get TGT for user [email protected]
        [Mon Sep 24 14:54:45 2012] [debug] src/mod_auth_kerb.c(569): [client 172.16.0.85] Trying to verify authenticity of KDC using principal HTTP/[email protected]
        [Mon Sep 24 14:54:45 2012] [debug] src/mod_auth_kerb.c(994): [client 172.16.0.85] kerb_authenticate_user_krb5pwd ret=0 [email protected] authtype=Basic
        [Mon Sep 24 14:54:45 2012] [debug] mod_authnz_ldap.c(561): [client 172.16.0.85] ldap authorize: Creating LDAP req structure
        [Mon Sep 24 14:54:45 2012] [debug] mod_authnz_ldap.c(573): [client 172.16.0.85] auth_ldap authorise: User DN not found, LDAP: ldap_simple_bind_s() failed

    Below is what I have in my httpd file:

        Alias /compass "/data/intranet/html/compass"
        <Directory "/data/intranet/html/compass">
            AuthType Kerberos
            AuthName KerberosLogin
            KrbServiceName HTTP/intranet.xxx.com
            KrbMethodNegotiate On
            KrbMethodK5Passwd On
            KrbAuthRealms xxx.COM
            Krb5KeyTab /etc/httpd/conf/intranet.keytab
            # require valid-user
            # Options Indexes MultiViews FollowSymLinks
            # AllowOverride All
            # Order allow,deny
            # Allow from all
            # SetOutputFilter DEFLATE

            # taken from http://blogs.freebsdish.org/tmclaugh/2010/07/15/mod_auth_kerb-ad-and-ldap-authorization/
            # download extra module and install
            # Strip the kerberos realm from the principle.
            # MapUsernameRule (.*)@(.*) "$1"

            AuthLDAPURL "ldap://echo.uk.xxx.com akhutan.usa.xxx.com/dc=xxx,dc=com?sAMAccountName"
            AuthLDAPBindDN cn=Administrator,ou=Users,dc=xxx,dc=com
            AuthLDAPBindPassword ***
            Require ldap-group cn=Domain Admins,ou=Users,dc=xxx,dc=com
        </Directory>

    I have followed this guide. I have downloaded and installed the tarball. When I try to uncomment MapUsernameRule I get a failure when restarting Apache:

        Reloading httpd: not reloading due to configuration syntax error

    I am using CentOS 5 64-bit. I have added the following line but I still get the syntax error:

        LoadModule mod_map_user modules/mod_map_user.so
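
    As an aside, LoadModule's first argument is the module's declared identifier rather than the file name, so for a third-party module the line usually looks more like the following (the identifier here is a guess for this particular module; check its documentation or source for the real name):

        LoadModule map_user_module modules/mod_map_user.so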

  • Is there a small business router that shows bandwidth usage graphs in the admin panel?

    - by Robert Drake
    I support a large number of public libraries that are having their networks upgraded in response to a grant application. These libraries are generally home to between 6 and 15 computers and have little or no tech services, either onsite or contracted remotely. In order to justify current and future purchases, a number of the libraries have requested routers that can provide bandwidth usage graphs that they can show to their managing boards. Is there a small business router that displays traffic graphs in the router administration web interface? The router needs to support DHCP and basic firewalling. No other features are required. Further, the reports just need to show overall trends. It is not necessary to show traffic by IP, by protocol/application, or by time of day. They just need an overall week-to-week, month-to-month trend line. I'm familiar with MRTG/PRTG/tools that collect SNMP data from the router, but the libraries don't have the expertise for the configuration. I've considered installing the Tomato firmware on some cheap home/home office routers, but if there's a commercial product that can be purchased, that would be significantly simpler. Also, the library boards would be much more likely to approve the purchase of a commercial product over a 'hacked' one. Any assistance would be appreciated.

  • I am trying to set up an Ubuntu server 12.04 on my machine [migrated]

    - by Jseb
    I am trying to set up a server on my home network which will eventually host Rails. I am not great with Linux servers, and I try to follow the prompts. I did successfully get to a black screen which then prompts me for a username and then a password to then do anything (assuming). However, here is what I tried to do. I kind of followed this tutorial: http://www.ubuntugeek.com/step-by-step-ubuntu-11-04-natty-lamp-server-setup.html, though my commands were not 100% like his, not in the same order, but the same idea. Then I wanted to install Ubuntu server with a GUI, so here are the commands I tried:

        sudo apt-get upgrade
        sudo apt-get install ubuntu-desktop

    The upgrade, however, gives me the following error:

        Err http...  InRelease
        W: Failed to fetch ht...

    so it's been ignored. If I try the desktop one I get:

        E: Unable to locate package ubuntu
        E: Unable to locate package desktop

    So I am assuming I am not connected to the internet, so I try the following command:

        sudo vi /etc/network/interfaces

    Here is the output it gives me (and I know my gateway on my laptop is 192.168.1.1):

        address: 192.168.1.148
        netmask: 255.255.255.0
        network: 192.168.1.0
        broadcasts: 192.168.1.255
        gateway: 192.168.1.1

    By the way, I do not know the command to get out of vi and save it.

        Err http://us.archive.ubuntu.com precises InRelease
        Err http://us.archive.ubuntu.com precises-updates InRelease
        Err http://us.archive.ubuntu.com precises-backports InRelease
        Reading package lists... Done
        W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/precise/InRelease
        W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/precise-updates/InRelease
        W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/precise-backport/InRelease

  • Load Testing a Security/Gateway Appliance

    - by Joel Coel
    In a couple weeks I will be load testing a security/gateway appliance. We're a small residential college, and that "residential" means the traffic moving through the appliance is a bit like the Wild West. We have everything from Facebook to World of Warcraft, BitTorrent to Netflix, Halo to YouTube... basically anything you might find in the home of a high-school or college-aged person. Somewhere in there some real academic work gets done as well. We rely on our current appliance for traffic shaping, antivirus, malware filtering, intrusion detection on our servers, logging and abuse reporting, and even some content filtering. All this puts a decent load on it when we have students around, and I'm concerned about the ability of the new candidate to keep up. On paper it should handle things, but I'm worried. Prior experience is that vendors greatly over-report what an appliance can handle. The product also includes a licensed session limit, and I'm also worried that just a few misbehaving students could unwittingly bring us to that limit and cause service disruptions. I need to know this will work for our campus in order to commit to it. Going a performance level higher in that product takes the pricing way out of line with what we expect and have done in the past. What I need is a good way to load test this guy. My problem is that our current level of summer traffic is less than one percent of what it will be when students come back just six weeks from now. Any ideas on how to really stress this thing and see what it can do, in a way that will give me some clear idea of how that will scale for our campus? For the curious, I'm looking at a WatchGuard 515, but it could be anything. If I were evaluating a competitor, I'd ask the same question.
