Search Results

Search found 10245 results on 410 pages for 'quick guide to irm'.

Page 311/410 | < Previous Page | 307 308 309 310 311 312 313 314 315 316 317 318  | Next Page >

  • nginx giving a 404 when using set in an if-block

    - by ba
    I've just started using nginx and I'm now trying to make it play nice with the WordPress plugin WP-SuperCache, which adds static copies of my blog posts. To serve the static file I need to make sure that certain cookies aren't set, that it's not a POST request, and that the cached/static file exists. I found this guide and it seems like a good fit. But I've noticed that as soon as I try to set something inside an if, my site starts giving 404s on a URL that isn't rewritten. The location block of the configuration:

        location /blog {
            index index.php;
            set $supercache_file '';
            set $supercache_ok 1;
            if ($request_method = POST) { set $supercache_ok 0; }
            if ($http_cookie ~* "(comment_author_|wordpress|wp-postpass_)") { set $supercache_ok '0'; }
            if ($supercache_ok = '1') { set $supercache_file '$document_root/blog/wp-content/cache/supercache/$http_host/$1/index.html.gz'; }
            if (-f $supercache_file) { rewrite ^(.*)$ $supercache_file break; }
            try_files $uri $uri/ @wordpress;
        }

    The above doesn't work. If I remove all the ifs above and add

        if ($http_host = 'mydomain.tld') { set $supercache_ok = 1; }

    then I get the exact same message in the error log, namely:

        2010/05/12 19:53:39 [error] 15977#0: *84 "/home/ba/www/domain.tld/blog/2010/05/blogpost/index.php" is not found (2: No such file or directory), client: <ip>, server: domain.tld, request: "GET /blog/2010/05/blogpost/ HTTP/1.1", host: "domain.tld", referrer: "http://domain.tld/blog/"

    Remove the if and everything works as it should. I'm stymied, no idea at all where I should start searching. =/

        ba@cell: ~> nginx -v
        nginx version: nginx/0.7.65
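
    A side note for anyone landing here from search: nginx's if-inside-location has well-documented pitfalls, and the usual workaround for WP-SuperCache setups is to skip if entirely and lean on try_files. A minimal sketch, assuming the plugin's default cache path and the same @wordpress fallback as above:

        location /blog {
            index index.php;
            # try the supercache copy first, then the normal paths, then WordPress
            try_files /blog/wp-content/cache/supercache/$http_host$uri/index.html $uri $uri/ @wordpress;
        }

    This sidesteps the set/if interaction that triggers the 404; the cookie and POST checks would still need to move into map blocks or be handled by PHP, so treat it as a starting point rather than a drop-in replacement.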

    Read the article

  • Managing SharePoint permissions via Active Directory?

    - by rgmatthes
    My company has thousands of employees organized thoroughly via Active Directory, and I have confidence in the accuracy of the Department and Title information displayed in the user profiles. I'm helping to put up a brand new SharePoint 2007 site, and I contacted IT about managing the site's permissions through AD groups. The goal is to have the site automatically assign read/write/contribute/whatever permissions based on the information in AD. For example, we could create an AD group called "Managers" that would contain anyone with the "Manager" title in their AD user profile. I would have SharePoint tap into this AD group to mass-assign permissions if I knew all managers would need a certain level of access. Then if a manager joins or leaves the company, the group is automatically updated (provided AD gets updated, of course). My IT rep called back and said it couldn't be done. This seems like a pretty straightforward business requirement, and one of the huge benefits of having Active Directory, but maybe I'm mistaken. Could anyone shed some light on this?

    A) Is it possible to use dynamically updated AD groups when assigning permissions via SharePoint? (Does anyone know of a guide I could show my doubtful IT rep?)

    B) Is there a "best practice" way to go about this? I've read some debate on whether SharePoint groups or AD groups are the way to go. My main concern is dynamic updating.

    C) If this isn't available out of the box, can someone recommend third-party software that will provide the functionality I'm looking for?

    A big thanks to anyone who can help me out!

    Read the article

  • I just got a virus 6 minutes ago. How?

    - by acidzombie24
    -Edit- For the people who say it isn't a virus: Norton does detect it as a virus, an icon was placed in my system tray, and rkpg.exe is in my C: drive, which was placed there 6 minutes ago, around the time my computer rebooted on its own, causing me to lose data. :@

    Situation: I'm on Windows XP, behind a Linksys router, and I don't have DMZ on, so nothing should be connecting to me. I had Firefox, MSN and Visual Studio open. With C# I programmed a quick application to scan some pages with Internet Explorer. The site it was scanning was deviantART (which is pretty trustworthy); I doubt any banners there would hold a virus. I went to a suspicious site called freetxt.com, but that was in Firefox and it didn't load the site. As an extra check I pinged it and got the message "Ping request could not find host freetxt.com."

    The virus seems to be called braviax. Right now it has brought up a message saying my computer may be infected. How on earth did it get in? I don't have uTorrent installed or any torrent or P2P applications. Nothing is installed on my computer that I haven't installed before, and I know the exact time it got in because I see rkpg.exe on my C: drive and my computer restarted on its own around the same time. For the previous 30 minutes, actually the previous hour, all I did was talk on MSN, not click any links (I went to freetxt on my own) and have that Internet Explorer thing running (which I programmed myself). How did it get in? I really doubt it came from a banner on deviantART and installed when I loaded the page with the webbrowser control, so something else may have happened. Are there any system defaults I should turn off? I have Remote Assistance off, but even if it were on, I shouldn't be infected, since the router isn't forwarding any ports?

    Read the article

  • My SSD stopped working: Windows 7 blue-screens right after it launches. How can I reset it for a new Windows install?

    - by HattoriHanzo
    As I mentioned in the title, my SSD suddenly started to fail: after launching, Windows 7 goes straight to completely idle and after a few minutes blue-screens and restarts. I have another HD with Windows XP on it, which works fine, and from there I can also see the SSD and access everything on it. Windows 7 on the SSD does work in safe mode, though I haven't managed to find out what causes the problem. Since I can still access the files and save them to another HD, I'm looking for the best way to wipe the drive and reinstall Windows 7. I have so far failed to find an easy-to-follow (or even easy-to-understand) guide; different sources on the internet recommend different ways of doing this, and some guides are simply full of terms I have no clue about. It's an OCZ Vertex 2 120GB. I'd very much appreciate advice on what I should do, preferably in a way that lets me regain the best possible performance as well. It doesn't matter if I don't understand the science behind it as long as I can follow the steps. Thanks!
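
    For anyone in the same spot, the usual wipe-and-reinstall route once the data is copied off looks like this (a sketch only; the disk number below is hypothetical, so check the "list disk" output carefully before running "clean", which destroys everything on the selected disk):

        diskpart
        list disk        # identify the Vertex 2 by its ~111 GB size
        select disk 1    # hypothetical number - pick the SSD, not the data drive
        clean            # wipe the partition table
        exit

    Booting the Windows 7 installer afterwards and letting it create the partitions gives proper alignment; running the vendor's secure-erase tool first (OCZ offered a toolbox for the Vertex line) additionally restores factory write performance.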

    Read the article

  • Linux will not activate wireless after the device has been re-enabled

    - by XHR
    Using an Eee 900A netbook by Asus. By pressing Fn + F2 I can disable or enable the wireless chip on the netbook; a blue LED indicates the status. I've been able to connect to wireless networks just fine with this netbook. However, if the wireless chip ever becomes disabled, I have to reboot to get my network connection back. This generally happens when suspending: for some reason the LED will be off and I have to hit Fn + F2 for it to light up again. Even after doing so, though, Linux will not reconnect to the network; it simply changes the wireless status from "wireless is disabled" to "device not ready". Even worse, I've recently had issues with the chip not being enabled at boot, thus making it nearly impossible to get connected. I've searched around online but haven't found much of anything useful on this. It happens on all kinds of different distros, including Ubuntu 9.10 Netbook, EeeBuntu 4 beta, Jolicloud and Ubuntu 10.04 Netbook.

    Edit: I noticed this question is getting a lot of views. To give a quick update, I never did resolve this issue with the given distros. However, I'm currently running Ubuntu 10.10 netbook edition and the issue has gone away.
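
    A quick check that often explains the "device not ready" state on rfkill-switch hardware like the Eee line (a sketch; the rfkill userspace tool ships with or is packaged for the distros listed):

        rfkill list               # shows each radio and whether it is soft- or hard-blocked
        sudo rfkill unblock wifi  # clears a soft block left behind after suspend/resume

    The Fn+F2 switch toggles the hard block, but a soft block left set by the suspend scripts survives the keypress, which matches the LED-on-but-still-not-ready symptom described above.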

    Read the article

  • pfSense 2.0 traffic priority - set full priority for a single host

    - by Waxhead
    I have a network with several computers, and since I have very limited bandwidth I would like to prioritize traffic almost like a CPU scheduler prioritizes processes. Example:

    Computer A: Used for web stuff: YouTube, downloads, news, email, etc.
    Computer B: Transferring files over HTTP
    Computer C: Transferring files over FTP, rsync, whatever

    What I would like to do is give A up to, for example, 90% of the available bandwidth if A requires it. The leftover 10% is divided between B and C (5% each if both are busy). If A is not utilizing all its bandwidth then of course B and C should share the full bandwidth (50% each as long as both are maxing out their connections). All computers are on the same network (192.168.1.1 - 192.168.1.10, for example). I'd appreciate it if anyone could shed some light on how I should set up my network to achieve this. To be honest, I actually need a step-by-step guide on how to set this up.

    Network setup: [ADSL modem (configured in bridge mode, 1500kbps/300kbps)] <- [pfSense 2.0] <- [switch] <- [Computers A, B, C ... etc.]

    Read the article

  • OpenVPN server will not redirect traffic

    - by skerit
    I set up an OpenVPN server on my VPS using this guide: http://vpsnoc.com/blog/how-to-install-openvpn-on-a-debianubuntu-vps-instantly/ and I can connect to it without problems. Connect, that is, because no traffic is being redirected: when I try to load a webpage while connected to the VPN, I just get an error. This is the config file it generated:

        dev tun
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        ca ca.crt
        cert server.crt
        key server.key
        dh dh1024.pem
        push "route 10.8.0.0 255.255.255.0"
        push "redirect-gateway"
        comp-lzo
        keepalive 10 60
        ping-timer-rem
        persist-tun
        persist-key
        group daemon
        daemon

    This is my iptables.conf:

        # Generated by iptables-save v1.4.4 on Sat May 7 13:09:44 2011
        *raw
        :PREROUTING ACCEPT [37938267:10998335127]
        :OUTPUT ACCEPT [35616847:14165347907]
        COMMIT
        # Completed on Sat May 7 13:09:44 2011
        # Generated by iptables-save v1.4.4 on Sat May 7 13:09:44 2011
        *nat
        :PREROUTING ACCEPT [794948:91051460]
        :POSTROUTING ACCEPT [1603974:108147033]
        :OUTPUT ACCEPT [1603974:108147033]
        -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
        -A POSTROUTING -s 10.8.0.0/24 -o eth1 -j MASQUERADE
        -A POSTROUTING -s 10.8.0.0/24 -o venet0 -j MASQUERADE
        COMMIT
        # Completed on Sat May 7 13:09:44 2011
        # Generated by iptables-save v1.4.4 on Sat May 7 13:09:44 2011
        *mangle
        :PREROUTING ACCEPT [37938267:10998335127]
        :INPUT ACCEPT [37677226:10960834925]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [35616847:14165347907]
        :POSTROUTING ACCEPT [35680187:14169930490]
        COMMIT
        # Completed on Sat May 7 13:09:44 2011
        # Generated by iptables-save v1.4.4 on Sat May 7 13:09:44 2011
        *filter
        :INPUT ACCEPT [37677226:10960834925]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [35616848:14165347947]
        -A INPUT -i eth0 -j LOG --log-prefix "BANDWIDTH_IN:" --log-level 7
        -A FORWARD -o eth0 -j LOG --log-prefix "BANDWIDTH_OUT:" --log-level 7
        -A FORWARD -i eth0 -j LOG --log-prefix "BANDWIDTH_IN:" --log-level 7
        -A OUTPUT -o eth0 -j LOG --log-prefix "BANDWIDTH_OUT:" --log-level 7
        COMMIT
        # Completed on Sat May 7 13:09:44 2011
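
    One detail worth double-checking that guides of this sort sometimes gloss over (a sketch; the paths are the standard Debian/Ubuntu ones and the DNS address is just an example): kernel IP forwarding must be enabled for the MASQUERADE rules to do anything, and once the client's default route moves into the tunnel, it also needs a reachable DNS server pushed to it:

        # enable forwarding now, and persist it across reboots
        sysctl -w net.ipv4.ip_forward=1
        echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf

        # in the OpenVPN server config, push a resolver to clients
        push "dhcp-option DNS 8.8.8.8"

    It is also worth confirming which interface actually carries the default route (on OpenVZ VPSes it is often venet0 rather than eth0) and keeping only the matching MASQUERADE rule.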

    Read the article

  • Apache suddenly very slow on http and faster on https

    - by hsnm
    Background: I have Apache 2 running on Ubuntu. Usage is low, mostly a web service URL accessed from mobile apps. It was working fine until I installed SSL certificates; I now have both http and https. When I access the server over https I get a fairly quick response (though probably not as fast as before). Over http it's very slow.

    What I tried: Following this post, I curled localhost from the host itself and it still takes a long time, meaning there is no routing issue. The server runs on an Amazon EC2 instance and is managed by me only. Also: I see that Apache, once running, creates the maximum number of processes it is allowed to, which was not the case before. I lowered MaxClients to 20 and I think I'm getting faster responses, but it still takes over a minute and I always have MaxClients Apache processes. dmesg returns many lines like:

        [ 1953.655703] TCP: Possible SYN flooding on port 80. Sending cookies.

    When I run netstat I get many entries in SYN_RECV. Possibly a DDoS attack? From EC2's monitoring diagrams I see a pattern of high "Maximum Network In (Bytes)" since 2 days ago. By the way, the server is still being tested; the actual traffic is very low and not consistent. I tried this solution to limit incoming connections using iptables, still no luck, but I'm trying.

    Question: What could be the problem? Is this a DDoS attack?
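
    Given the SYN_RECV pile-up, two quick mitigations worth trying (a sketch; the connection limit shown is an illustrative value, not a tuned one):

        # make sure SYN cookies are actually enabled
        sysctl -w net.ipv4.tcp_syncookies=1

        # cap concurrent connections per source IP on port 80
        iptables -A INPUT -p tcp --dport 80 --syn -m connlimit --connlimit-above 20 -j DROP

    connlimit counts concurrent connections from one source, so it only helps against floods from a handful of addresses; for widely spoofed sources, filtering upstream of the instance (EC2 security groups, or a load balancer in front) is the stronger lever.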

    Read the article

  • Jailkit not locking down SFTP, working for SSH

    - by doublesharp
    I installed jailkit on my CentOS 5.8 server and configured it according to the online guides that I found. These are the commands that were executed as root:

        mkdir /var/jail
        jk_init -j /var/jail extshellplusnet
        jk_init -j /var/jail sftp
        adduser testuser; passwd testuser
        jk_jailuser -j /var/jail testuser

    I then edited /var/jail/etc/passwd to change the login shell for testuser to /bin/bash to give them access to a full bash shell via SSH. Next I edited /var/jail/etc/jailkit/jk_lsh.ini to look like the following (not sure if this is correct):

        [testuser]
        paths= /usr/bin, /usr/lib/
        executables= /usr/bin/scp, /usr/lib/openssh/sftp-server, /usr/bin/sftp

    testuser is able to connect via SSH and is correctly limited to the chroot jail directory, and is also able to log in via SFTP; however, over SFTP the entire file system is visible and can be traversed.

    SSH output:

        > ssh testuser@server
        Password:
        Last login: Sat Oct 20 03:26:19 2012 from x.x.x.x
        bash-3.2$ pwd
        /home/testuser

    SFTP output:

        > sftp testuser@server
        Password:
        Connected to server.
        sftp> pwd
        Remote working directory: /var/jail/home/testuser

    What can be done to lock down SFTP access to the jail? FWIW, I mostly used this as a guide: http://digitalpatch.blogspot.com.ar/2010/03/openssh-daemon-hardening-part-3-setup.html
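
    One pattern that matches these symptoms: if the host's /etc/ssh/sshd_config runs the SFTP subsystem in-process (Subsystem sftp internal-sftp), the file transfer never passes through the user's jk_chrootsh login shell, so SFTP bypasses the jail while interactive SSH stays locked down. Two hedged options for /etc/ssh/sshd_config (a sketch, not verified on this exact box; the sftp-server path varies by distro - Debian uses /usr/lib/openssh/sftp-server, CentOS /usr/libexec/openssh/sftp-server):

        # Option 1: run sftp-server as an external binary, so it is launched
        # through the user's (jailed) shell like any other command
        Subsystem sftp /usr/libexec/openssh/sftp-server

        # Option 2: have sshd itself chroot this user's SFTP sessions
        Match User testuser
            ChrootDirectory /var/jail
            ForceCommand internal-sftp

    Either change needs an sshd restart. Note that ChrootDirectory requires OpenSSH 4.9 or newer, so on a stock CentOS 5 sshd only option 1 may be available; option 2 additionally requires /var/jail and every directory above it to be root-owned and not group-writable.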

    Read the article

  • Simple, centralized user management on a small LAN - NIS or LDAP?

    - by einpoklum
    I'm setting up a small LAN for my team. It will, for all intents and purposes, not be connected to any external networks. I would like it to have centralized control of user accounts (at least, I think I'd like that; I'm also considering using Puppet, so theoretically I could just push /etc/passwd changes, or something). The number of machines is fixed, but not very small. Mostly they're 'attached' to a single user, but sometimes people work remotely on someone else's box, and there are a couple of servers. I've read this question, but my scenario is much simpler (even simpler than in this question) and I'd like to do something (relatively) quick, with not much hassle, but not a dirty, totally insecure hack. Is NIS relevant for my scenario? If not, what's the most hassle-free way to set up LDAP (or LDAP+Kerberos) to achieve the same?

    Notes: I have no experience with setting up either NIS or LDAP. We use Debian-flavored Linux distributions, mainly Kubuntu 12.04 (not my choice, but that's the way it is).
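
    For the Debian/Kubuntu clients described here, the lowest-friction LDAP client stack is usually nslcd (a sketch of the client side only, assuming an OpenLDAP server is already reachable; the slapd server setup is a separate step):

        sudo apt-get install libnss-ldapd libpam-ldapd nslcd
        # debconf prompts for the LDAP URI (e.g. ldap://server.lan/) and the
        # search base, and offers to add 'ldap' to the passwd/group/shadow
        # lines in /etc/nsswitch.conf for you

    NIS would also cover this scenario and is simpler still, but it hands out password hashes to anyone who can query the server on the LAN, which is worth weighing even on a network that is nominally isolated.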

    Read the article

  • Possible to have different SSLCACertificateFiles under different Locations in Apache? (client-side SSL certs)

    - by Mikko Ohtamaa
    I am setting up Apache to do smartcard authentication. The smartcard login is based on client-side SSL certificates handled by an OS driver. I currently have just one smartcard provider, but in the future there will potentially be several of them. I am not sure how Apache 2.2 handles client-side certificates per Location. I did some quick testing and it seemed that only the last SSLCACertificateFile directive was effective, which doesn't sound right. Is it possible to have a different SSLCACertificateFile per Location in Apache (2.2, 2.4) as described below, or does the SSL protocol somehow limit you to one SSLCACertificateFile per IP? Example of the potential config, showing how I wish to handle several SSLCACertificateFiles on the same server to allow users to log in with different smartcard providers:

        <VirtualHost 127.0.0.1:443>
            # Real men use mod_proxy
            DocumentRoot "/nowhere"
            ServerName local-apache
            ServerAdmin [email protected]

            SSLEngine on
            SSLOptions +StdEnvVars +ExportCertData

            # Server-side HTTPS configuration
            SSLCertificateFile /etc/apache2/certificate-test/server.crt
            SSLCertificateKeyFile /etc/apache2/certificate-test/server.key

            # Normal SSL site traffic does not require verify client
            SSLVerifyClient none
            SSLVerifyDepth 999

            # Provider 1
            <Location /@@smartcard-login>
                SSLVerifyClient require
                SSLCACertificateFile /etc/apache2/certificate-test/ca.crt

                # Apache does not natively pass forward headers
                # created by SSLOptions +StdEnvVars,
                # so we pass them forward to Python using RequestHeader
                # from mod_headers
                RequestHeader set X-Client-DN %{SSL_CLIENT_S_DN}e
                RequestHeader set X-Client-Verify %{SSL_CLIENT_VERIFY}e
            </Location>

            # Provider 2
            <Location /@@smartcard-login-provider-2>
                # For real
                SSLVerifyClient require
                SSLCACertificateFile /etc/apache2/certificate-test/provider2.crt

                # Apache does not natively pass forward headers
                # created by SSLOptions +StdEnvVars,
                # so we pass them forward to Python using RequestHeader
                # from mod_headers
                RequestHeader set X-Client-DN %{SSL_CLIENT_S_DN}e
                RequestHeader set X-Client-Verify %{SSL_CLIENT_VERIFY}e
            </Location>

            # Connect to Plone ZEO client1 running on fg
            ProxyPass / http://localhost:8080/VirtualHostBase/https/local-apache:443/folder_sits/sitsngta/VirtualHostRoot/
            ProxyPassReverse / http://localhost:8080/VirtualHostBase/https/local-apache:443/folder_sits/sitsngta/VirtualHostRoot/
        </VirtualHost>

    Read the article

  • Are ZFS snapshots + S3 a viable backup system for several VMs and general fileserver storage?

    - by AllanA
    I've been tasked with setting up a backup system for my small office (around 12 people). Most of our production stuff is on the AWS cloud, so what I need to back up are some small office/development files (under 100G right now), plus our operational VMs and development machines, which round out to a bit under 1T. I just need something reliable, convenient, and straightforward. I'm comfortable with Linux, FreeBSD, and to some extent Solaris 10, so I'm leaning toward a full server rather than an appliance system a la Openfiler or FreeNAS.

    What I'm contemplating is a small fileserver for general storage and nightly backups of the virtual machines, followed by an offsite backup to Amazon's S3 storage service. It'd be the usual incremental backups nightly and a full backup weekly. My question is whether using ZFS snapshots, both locally and dumped to S3 via 'zfs send [-i]', is a viable backup tool. Or should I stick to using Duplicity, or some other method entirely? ZFS snapshots on the internal fileserver/backup machine sound like a perfect way to provide quick and convenient data recovery, so I'm likely to go with that for local redundancy. (If you folks see scenarios where relying on ZFS snapshots would be worse than a more traditional archiving backup, feel free to convince me.) But are snapshots flexible enough to lean on for recovery from the loss of my backup server? Or am I better off with something more traditional? (Feel free to recommend free or commercial backup solutions you favor.)
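
    The send/receive flow being contemplated looks roughly like this (a sketch; the pool and dataset names are made up, and the S3 upload is whichever tool ends up being used):

        # nightly incremental against yesterday's snapshot
        zfs snapshot tank/office@2012-06-18
        zfs send -i tank/office@2012-06-17 tank/office@2012-06-18 | gzip > /backup/office-incr.zfs.gz

        # weekly full stream
        zfs send tank/office@2012-06-18 | gzip > /backup/office-full.zfs.gz

    One caveat to weigh before leaning on this for disaster recovery: a stored 'zfs send' stream is all-or-nothing, since a single corrupted byte makes the whole stream fail to receive. That fragility is a common argument for a file-level tool like Duplicity on the offsite leg, with snapshots kept for fast local restores.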

    Read the article

  • Without an internet connection my computer loads programs slowly. With an internet connection they load quickly

    - by Peter pete
    G'day. I have a laptop with 8 GB of RAM and a Samsung 830 SSD, with about 2/5 of its 256 GB free. Win7 64-bit. The laptop is a Toshiba T130. The other day I noticed that it took a long time to load a program, for example the Python interpreter; at the time the computer didn't have access to the home network. In both cases the computer boots quickly, in about 40 seconds. Without internet, opening the Python interpreter, or notepad2.exe, or any program individually takes around 10 seconds. With an internet connection (through WiFi), opening the same programs takes about 2 seconds. In both cases, opening a program a second time after a cold boot becomes SSD-instant-quick. I don't have any network-mapped drives. I've tried with AV off (Avast antivirus), same effect. I've installed all MS updates. I've googled and googled and have found nothing of help. I've run the Samsung SSD Magician to "optimise" the SSD and that didn't help. What can I do to determine and fix this issue?

    Read the article

  • My new SSD is causing issues. How can I solve them?

    - by Allan
    Computer specs:

        1 TB hard disk
        120 GB Intel 520 SSD
        8 GB DDR3 RAM
        AMD Phenom II X4 955 3.2 GHz
        DK DFI Lanparty FX7900 M3H3 motherboard
        ASUS ATI Radeon HD6970 2 GB

    I bought a new SSD (Intel 520, 120 GB) and wanted to use it as my system disc. I removed the other hard drive and installed the SSD with the newest firmware, and then Windows 7. I updated Windows 7 with no problems and then put back my old hard drive. I formatted that old hard drive just to clean up at the same time. So at this stage everything was perfect: my new SSD was set as Master 0 Primary, the system boots from it, and I have a 1 TB empty hard drive I can use for whatever I want. So far no errors at all.

    Now here is the problem. I installed a few games, and every time I tried to play, the computer would say Windows must restart because the DCOM Server Process Launcher service terminated, or that Windows must now restart because the Plug and Play service terminated unexpectedly. Most commonly this error is caused by a rootkit virus; well, I have tried formatting my entire computer and running every antivirus I could find, so that shouldn't be it. I've also read somewhere that it might happen when there are hardware issues. That, on the other hand, would make sense, as I just put in a new SSD. I don't expect you to know this error; I haven't found anyone who knew it yet. Maybe you can guide me through what might have gone wrong when I put in the SSD?

    What have I checked regarding the SSD?

        It displays the right name when the computer starts up
        It has the newest firmware
        'sfc /scannow' told me everything was fine

    I don't know what to do from here. Everything seems to work great with the drive, but when I start playing games my computer crashes.

    Read the article

  • Command or tool to display list of connections to a Windows file share

    - by BizTalkMama
    Is there a Windows command or tool that can tell me what users or computers are connected to a Windows file share? Here's why I'm looking for this: I've run into issues in the past where our deployment team has deployed BizTalk applications to one of our environments using the wrong bindings, leaving us with two receive locations pointing to the same file share (i.e. both dev and test servers point to the dev receive location URI). When this occurs, the two environments in question tend to take turns processing the files received, meaning that if I am attempting to debug something in one environment and the other environment has picked the file up, it looks as if my test file has disappeared into thin air. We have several different environments, plus individual developer machines, and I'd rather not have to check each individually to find the culprit. I'm looking for a quick way to detect what locations are connected to the share once I notice my test files vanishing. If I can determine which connections are invalid, I can go directly to the person responsible for that environment and avoid the time it takes to randomly ask around. And if the connections appear to be correct, I can go directly to troubleshooting where in the process the message gets lost. Any suggestions?
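
    Run on the server that hosts the share, the classic built-ins answer exactly this (a sketch; both need an elevated prompt):

        net session    # every machine/user with an open session against this server
        net file       # every shared file currently open, and who has it open

    The same data is visible in Computer Management under Shared Folders > Sessions / Open Files, which is handy when the vanishing-file race is in progress and you want to watch connections live.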

    Read the article

  • Suspected Corrupted Windows 7 MBR?

    - by AridDecay
    So, this may not be the correct place to put up my question, but I'll give it a shot. I'm having an issue repairing a computer that was brought to me with the described issue of "not turning on". Later, I found that it would come up with the error "No boot sector found on internal hard drive." I assumed it was an MBR issue due to a virus or a Windows update being cut short. I booted into my trusty recovery environment, ran bootrec.exe /FIXMBR and restarted -- no luck. After multiple attempts to get the MBR sorted out, including creating a new boot sector, I started to think that the hard drive was possibly starting to cave in on itself, so I booted into a Linux live CD and went to check the SMART data. Odd: it says it's inaccessible. That seems strange to me, considering it's a newer (two years old or so) Windows 7 computer, and all new hard drives have SMART. So I checked the BIOS. No mention of SMART anywhere. Great. As a last-ditch effort I switched the hard drive mode in the BIOS from AHCI to ATA (God knows why, I was getting frustrated). Voila! It actually attempts to boot, gets halfway through the little Windows animation, does an incredibly quick (half a second) BSOD, and shuts down. Does anyone have ideas on what's going on here? I'm at my wits' end.
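
    The AHCI-to-ATA detail is a strong hint: Windows 7 throws a 0x7B STOP (often too fast to read) when the BIOS is set to AHCI but Windows' AHCI driver is disabled. A sketch of the standard check, using regedit from the recovery environment with the installed SYSTEM hive loaded (the key path below is the stock Windows 7 one):

        HKLM\SYSTEM\CurrentControlSet\services\msahci
            Start = 0    ; 0 = load at boot; 3 means the driver is disabled

    With Start forced back to 0, the BIOS can go back to AHCI; given the original "no boot sector" complaint, a chkdsk /r pass from the recovery console is still worth the time.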

    Read the article

  • Create custom launchers in GNOME 3

    - by hochl
    I'm using Debian testing, and I was switched to GNOME 3 by the Debian update yesterday. I'm not very comfortable with the UI. I wanted to customize everything like I had it in GNOME 2, but I simply couldn't find any way to change preferences like I'm used to. I've dug around some, but none of the answers I could find helped me achieve my goals. So please, if anyone knows the solution to any of this, I'd be thankful:

    1) I want several launchers that launch terminals, with different arguments and different coloring/titles (see the launcher sketch below). I have searched everywhere and there seems to be no menu, no right-click, nothing that is standard in any UI I know. How can I create several launchers in the bar on the left side that launch the same application, just with different parameters? With GNOME 2 this was a piece of cake.

    2) I want to switch between different terminals using Alt-Tab. Right now, I always just get back to the same, already-opened terminal. When I open two terminals, say by running xterm & for the second one, I still get one Terminal entry with Alt-Tab, and I have to navigate with the cursor keys or mouse wheel to select one of the two xterms. Instead, I want the quick-launch terminal icon in the bar on the left side of the screen to open a new terminal, and I want to navigate through them like on KDE/GNOME 2/Windows/any reasonable UI. Can this be done?

    3) Is there a trick to make Bluetooth devices work like in GNOME 2? Right now my BT keyboard won't pair anymore, which, as you can imagine, makes me pretty angry.

    And, if all else fails: 4) How can I switch back to GNOME 2 again? :-)

    Honestly, who designed this? What were they smoking? I feel like I'm not allowed to do anything except start one of the applications that has an icon, and only with the default parameters. That can't be true, right? I feel massively restrained by this stuff :(
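
    For question 1, GNOME 3 builds its launcher list from .desktop files, so one file per terminal variant dropped into ~/.local/share/applications can then be pinned to the dash. A minimal sketch (the file name, colors and titles are just examples):

        # ~/.local/share/applications/xterm-red.desktop
        [Desktop Entry]
        Type=Application
        Name=XTerm (red)
        Exec=xterm -bg black -fg red -title red-term
        Icon=utilities-terminal

    Each variant needs its own file with a distinct Name. How GNOME Shell groups windows under Alt-Tab is governed by the window class, so giving each variant a distinct class (xterm -class red-term, plus a matching StartupWMClass=red-term in the file) also helps with question 2.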

    Read the article

  • How to quickly check if two columns in Excel are equivalent in value?

    - by mindless.panda
    I am interested in taking two columns and getting a quick answer on whether they are equivalent in value or not. Let me show you what I mean: it's trivial to make a third column (EQUAL) that does a simple compare for each pair of cells in the two columns. It's also trivial to use conditional formatting on one of the two, checking its value against the other. The problem is that both of these methods require scanning the third column or the colors of one of the columns. Often I am doing this for columns that are very, very long, where visual verification would take too long, and I don't trust my eyes anyway. I could use a pivot table to summarize the EQUAL column and see if any FALSE entries occur. I could also enable filtering, click the filter on EQUAL, and see what entries are shown. Again, all of these methods are time-consuming for what seems to be such a simple computational task. What I'm interested in finding out is whether there is a single-cell formula that answers the question. I attempted one above in the screenshot, but clearly it doesn't do what I expected, since A10 does not equal B10. Does anyone know of one that works, or some other method that accomplishes this?
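
    A single-cell formula that answers it directly (a sketch; the ranges are examples, size them to the real data):

        =SUMPRODUCT(--(A1:A1000<>B1:B1000))=0

    SUMPRODUCT counts the mismatched pairs, so the whole expression is TRUE only when every pair matches. Like the = operator it is case-insensitive; for a case-sensitive check, =SUMPRODUCT(--NOT(EXACT(A1:A1000,B1:B1000)))=0 is the usual variant.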

    Read the article

  • Postgresql Data Aggregation over WAN Securely

    - by Zach
    Hey guys, I need some advice on how to proceed with this situation. I have several PostgreSQL boxes (50+) deployed throughout various locations and data centers, and a beefy PostgreSQL box set up at a homebase location. All of the deployed boxes have identical database layouts. I'm looking for a solution that would allow for a few things. I realize some of these options overlap and some might contain mutually exclusive solutions; however, I'm interested to hear your thoughts. :)

    1. Remotely query the deployed boxes and pull the results back to the homebase box for processing.
    2. Nightly (remote) "sync" or dump of the deployed boxes' databases to a master database on the homebase box.
    3. Remotely push a table entry to all of the deployed boxes from the homebase box.
    4. Ensure security of data in transit, and of the remotely deployed boxes.

    Up to this point I've been floating on a homebrew multithreaded Python/Perl system that SSHes into these boxes remotely (they are ACLed off to the homebase server) and pulls (or pushes) the raw query results over the SSH connection. I haven't even touched #2 (remote syncing), as I know that would get nasty really quickly. I'm interested in any ideas for a more elegant solution that can scale up and stick to my FreeBSD/Linux environment.
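
    For item 1 specifically, the dblink contrib module covers ad-hoc remote pulls without the homebrew SSH layer (a sketch; the connection string, table and columns are placeholders, and the link itself should still ride an SSH tunnel or SSL):

        -- on the homebase box, with the dblink contrib module installed
        SELECT *
        FROM dblink('host=10.0.0.5 dbname=appdb user=report sslmode=require',
                    'SELECT site_id, metric, value FROM daily_stats')
             AS t(site_id int, metric text, value numeric);

    For items 2 and 3, trigger-based replication systems such as Slony-I or Bucardo are the usual fit for fan-in (many deployed boxes into one master) and fan-out (pushing a table entry everywhere) respectively.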

    Read the article

  • gitweb refusing to blame

    - by Slipp D. Thompson
    I'm attempting to get gitweb (git 1.8.4.2, via git instaweb) in a project dir on my Debian server to offer blame views.

    In my /etc/gitweb.conf:

        … # default logo, favicon, etc. settings
        $feature{'blame'}{'default'} = [1];
        $feature{'pickaxe'}{'default'} = [1];
        $feature{'snapshot'}{'default'} = ['tgz', 'txz', 'zip'];
        $feature{'highlight'}{'default'} = [1];
        $feature{'pathinfo'}{'default'} = [1];

    In my global config file:

        [gitweb]
            blame = true
            snapshot = tgz, txz, zip
            patches = 256
            avatar = gravatar
        [instaweb]
            local = false
            httpd = apache2 -f
            port = 4321

    In my project's .git/config file:

        [gitweb]
            blame = true

    And yet, when I try to load a git blame view (by hand-modifying the URL to http://myserversip:4321/?p=.git;a=blame;f=Tests/InchCoordProxyTests.m;h=b4b2…;hb=53b4, since blame action links don't show up), I get "Blame view not allowed". Doing a quick search for "Blame view not allowed" in the gitweb.cgi source reveals plainly that the gitweb_check_feature('blame') conditional is failing. What am I doing wrong? Or is there a way to verbosely print out why gitweb is doing what it's doing (e.g. which config files were read, which settings were loaded from each file, etc.)?
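
    One likely culprit, given where gitweb_check_feature fails: a per-repository [gitweb] setting is honored only when the feature's 'override' flag is turned on in the gitweb config; 'default' alone just sets the global default. It is also worth confirming which config file the instaweb-launched gitweb actually reads, since git instaweb generates its own configuration under the repository's .git/gitweb/ directory, which can shadow /etc/gitweb.conf. A sketch of the override form:

        # in the gitweb.conf that is actually loaded
        $feature{'blame'}{'default'} = [1];
        $feature{'blame'}{'override'} = 1;

    gitweb also honors the GITWEB_CONFIG environment variable, which gives a quick way to force a known config file while debugging.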

    Read the article

  • Tuning up a MySQL server

    - by NinjaCat
    I inherited a MySQL server and have started by running the MySQLTuner.pl script. I am not a MySQL expert, but I can see that there is definitely a mess here. I'm not looking to go after every single thing that needs fixing and tuning, but I do want to grab the major, low-hanging fruit. Total memory on the system is 512MB. Yes, I know it's low, but it's what we have for the time being. Here's what the script had to say:

        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Enable the slow query log to troubleshoot bad queries
            When making adjustments, make tmp_table_size/max_heap_table_size equal
            Reduce your SELECT DISTINCT queries without LIMIT clauses
            Increase table_cache gradually to avoid file descriptor limits
            Your applications are not closing MySQL connections properly
        Variables to adjust:
            query_cache_limit (> 1M, or use smaller result sets)
            tmp_table_size (> 16M)
            max_heap_table_size (> 16M)
            table_cache (> 64)
            innodb_buffer_pool_size (>= 326M)

    For the variables it recommends I adjust, I don't even see most of them in the my.cnf file:

        [client]
        port = 3306
        socket = /var/run/mysqld/mysqld.sock

        [mysqld_safe]
        socket = /var/run/mysqld/mysqld.sock
        nice = 0

        [mysqld]
        innodb_buffer_pool_size = 220M
        innodb_flush_log_at_trx_commit = 2
        innodb_file_per_table = 1
        innodb_thread_concurrency = 32
        skip-locking
        big-tables
        max_connections = 50
        innodb_lock_wait_timeout = 600
        slave_transaction_retries = 10
        innodb_table_locks = 0
        innodb_additional_mem_pool_size = 20M
        user = mysql
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir = /tmp
        skip-external-locking
        bind-address = localhost
        key_buffer = 16M
        max_allowed_packet = 16M
        thread_stack = 192K
        thread_cache_size = 4
        myisam-recover = BACKUP
        query_cache_limit = 1M
        query_cache_size = 16M
        log_error = /var/log/mysql/error.log
        expire_logs_days = 10
        max_binlog_size = 100M
        skip-locking
        innodb_file_per_table = 1
        big-tables

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 16M

        [mysql]

        [isamchk]
        key_buffer = 16M

        !includedir /etc/mysql/conf.d/
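
    As a hedged starting point for the flagged variables on a 512MB box (these are conservative guesses to iterate on, not tuned values; innodb_buffer_pool_size is deliberately left at 220M rather than raised toward the tuner's 326M suggestion, because that suggestion ignores how little RAM the machine actually has):

        [mysqld]
        tmp_table_size      = 32M
        max_heap_table_size = 32M
        table_cache         = 128
        query_cache_limit   = 2M

    Changing one variable at a time and re-running MySQLTuner after at least 24 hours of uptime keeps the feedback loop honest; on this much RAM, pushing the InnoDB buffer pool higher risks swapping, which would hurt more than any warning on the list.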

    Read the article

  • How to use LVM on Rackspace Cloud

    - by batrick
    Dear all, I am trying to set up a simple but effective solution to back up my Rackspace cloud servers. These servers each run Subversion, Trac, and some database-backed custom PHP applications. My idea is to set up LVM and mount a volume under, say, /srv. In this volume, I keep the data from all applications. Instead of caring about how to back up each app in a different way (svn hotcopy, trac-admin hotcopy, huge mess for MySQL), I simply take an LVM snapshot and back this one up to Cloud Files using the excellent cloudcity script (http://github.com/jspringman/cloudcity/blob/master/cloudcity). The advantage of this solution is that it is quick and easy, and LVM allows making decent backups. As more apps are added, the backup script should not need to change much. The downside, and the main point of my question here, is that I am not sure how to get LVM working on the Rackspace cloud, because there is only one root volume and no service like Amazon's EBS. I was thinking it may be possible to create a large empty file and use this as a "physical volume". Has anybody done anything like this before? Or do you know why it can never work? It would be great to hear from you. Thanks, batrick
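
    Mechanically the large-empty-file idea does work: a file-backed loop device is a valid LVM physical volume. A sketch, with made-up sizes and names (whether snapshot performance on a loop device is acceptable is the real open question, not the syntax):

        dd if=/dev/zero of=/srv-pool.img bs=1M count=20480   # 20 GB backing file
        losetup /dev/loop0 /srv-pool.img
        pvcreate /dev/loop0
        vgcreate vg_srv /dev/loop0
        lvcreate -n lv_srv -L 15G vg_srv    # leave headroom for snapshots
        mkfs.ext3 /dev/vg_srv/lv_srv
        mount /dev/vg_srv/lv_srv /srv

    The loop device has to be re-attached with losetup before the volume group can be activated after a reboot, so this needs a small boot-time hook (rc.local or an init script) rather than a plain fstab entry.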

    Read the article

  • Trying to retrieve data with a thermaltake blacX enclosure: Windows 7 believes the drive to be "uninitialized"

    - by Peeter Joot
    I have a laptop that won't boot. It appears to be a power problem: the laptop turns itself off within about 10 seconds of pressing the power button, with the power lights coming on temporarily and no display, with or without an external monitor. I've followed the Dell troubleshooting guide, which suggested reseating the memory modules and the hard drives, but that didn't help. Before having the laptop serviced, I wanted to get some data off the hard drive, so I bought a Thermaltake BlacX enclosure, intending to use it both to retrieve the drive data and later as external storage. Following the instructions (insert cables, insert drive, power on) goes fine, and Windows 7 on another laptop installs the device driver software. However, no drive letter shows up in 'Computer'. Under Computer > Manage > Storage I see the drive is there, along with an option to "initialize" it. The Windows "initialize" dialog gives me the option to pick between "MBR" and "GPT" partitioning, which sounds like a good way to destroy the data on the drive. I'm thinking that I've purchased the wrong device for the job (or that my old drive is damaged). The old drive to recover info from is a Western Digital 500GB/7200rpm SATA drive, if that is relevant. Both the original laptop and the one I'm using for recovery are running Windows 7. Does anybody have experience with using a BlacX enclosure to recover data off an already-formatted drive?

    Read the article

  • Disk partitioning on CentOS

    - by FlourishDNA
    I am setting up a server to host two WordPress sites with a combined size of around 70GB. I have already installed CentOS as the OS and I would like to partition the disk. Is there any tool that can help me, or can someone guide me through the process, as I am not an expert in shell commands? Here is some output that might help.

    OS: CentOS release 6.3

    fdisk -l:

        Disk /dev/xvdb: 214.7 GB, 214748364800 bytes
        255 heads, 63 sectors/track, 26108 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000b91e0

           Device Boot      Start         End      Blocks   Id  System

        Disk /dev/xvda: 21.5 GB, 21474836480 bytes
        255 heads, 63 sectors/track, 2610 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000e542c

           Device Boot      Start         End      Blocks   Id  System
        /dev/xvda1   *           1          64      512000   83  Linux
        Partition 1 does not end on cylinder boundary.
        /dev/xvda2              64        2611    20458496   8e  Linux LVM

        Disk /dev/mapper/vg_flourish-lv_root: 16.7 GB, 16718495744 bytes
        255 heads, 63 sectors/track, 2032 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/mapper/vg_flourish-lv_swap: 4227 MB, 4227858432 bytes
        255 heads, 63 sectors/track, 514 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

    df:

        Filesystem                           1K-blocks    Used Available Use% Mounted on
        /dev/mapper/vg_flourish-lv_root       16070076  758184  14495560   5% /
        tmpfs                                   958500       0    958500   0% /dev/shm
        /dev/xvda1                              495844   31926    438318   7% /boot

    df -h:

        Filesystem                           Size  Used Avail Use% Mounted on
        /dev/mapper/vg_flourish-lv_root       16G  741M   14G   5% /
        tmpfs                                937M     0  937M   0% /dev/shm
        /dev/xvda1                           485M   32M  429M   7% /boot

    Thanks
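
    Since /dev/xvdb (the empty 200GB disk) has no partitions and the system already boots off LVM on /dev/xvda, one low-risk layout is to turn the whole second disk into a new volume group and carve a volume for the web data (a sketch; the names and sizes are examples, and it assumes /dev/xvdb really is unused):

        pvcreate /dev/xvdb
        vgcreate vg_data /dev/xvdb
        lvcreate -n lv_www -L 100G vg_data
        mkfs -t ext4 /dev/vg_data/lv_www
        mkdir -p /var/www
        echo '/dev/vg_data/lv_www /var/www ext4 defaults 0 2' >> /etc/fstab
        mount /var/www

    Leaving the rest of vg_data unallocated keeps room to grow this volume (or a future one) online later with lvextend followed by resize2fs.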

    Read the article
