Search Results

Search found 25836 results on 1034 pages for 'solution evangelist'.


  • maximum number of connections Squid

    - by Isaac
    I have a Squid proxy server that controls all internet traffic for my network. I need a way to stop users from downloading big files (say 50 MB). I have already blocked some well-known ports (e.g. torrent), but downloads are still possible over the HTTP port, and obviously I cannot block port 80! A simple approach is to limit the maximum number of simultaneous connections per IP (e.g. 3 connections), which is possible in Squid with this config:

        acl ACCOUNTSDEPT src 192.168.5.0/24
        acl limitusercon maxconn 3
        http_access deny ACCOUNTSDEPT limitusercon

    But this has a really bad impact on web browsing, because any modern browser fetches different parts of a website over several simultaneous connections to speed things up. With a hard connection cap, the browser fails to fetch some parts and the page renders only partially - some images/frames are missing. So, can we limit the number of persistent connections instead? I think this policy would work: cap the number of connections that stay alive for more than 10 seconds, but leave the number of simultaneous connections per IP unlimited. How can I implement such a policy in Squid, and with which config?

    UPDATE: artifex and Tom Newton suggested a bandwidth-limiting approach to fight downloaders. But bandwidth limiting in Squid has a shortcoming: it is static and cannot change dynamically, so a user gets the same limited bandwidth no matter how many people are using the internet (maybe nobody!). Also, it does not actually stop people from downloading - they can still download, just more slowly. If instead we could terminate persistent connections (or any connection that stays alive longer than a given time), downloading big files would become almost impossible (though there is always some way!).
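
    Since the actual goal is blocking big downloads rather than capping connections, a size-based cutoff may be worth sketching. Squid can refuse HTTP replies above a given size (a rough sketch in Squid 3.x syntax - the directive's exact form differs between 2.x and 3.x, so check your version's squid.conf.documented, and clients that issue range requests can still fetch large files piecewise):

        # refuse HTTP replies larger than 50 MB for the accounts subnet
        acl ACCOUNTSDEPT src 192.168.5.0/24
        reply_body_max_size 50 MB ACCOUNTSDEPT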

    Read the article

  • DAS vs SAN storage for serving 2 to 4 nodes

    - by Luke404
    We currently have 4 Linux nodes with local storage, arranged in two active/passive pairs with storage mirrored using DRBD, running virtual machines (actually using Xen Hypervisor) for typical hosting workloads (mail, web, a couple VPS, etc.). We're approaching the (presumed) maximum IOPS of those servers, and we're planning to migrate to an external storage solution with two active nodes, with capacity for up to four active nodes. Since we're an all-Dell shop I've done some research and found the MD3200 / MD3200i products should be the ones we're looking for. We are pretty sure we won't be attaching more than 4 hosts on a single storage and I'm wondering if there is any clear advantage for one or the other. In theory I should be able to attach 4 SAS hosts to a single MD3200 (single links on a single controller MD3200, or dual redundant SAS links from each host to a dual-controller MD3200), or 4 iSCSI hosts to a single MD3200i (directly on its 4 GigE ports without any switch, again with dual links for the dual controller option). Both setups should let us implement live VM migration since all hosts can access all the LUNs at the same time, and also some shared filesystem like GFS2 or OCFS2. Also, both setups should allow full redundancy of the whole system (assuming dual controllers in the storage). One difference I can see is that the DAS solution is actually limited to 4 hosts while the iSCSI one should be able to grow to more hosts (adding two GigE switches to the mix). One point for the iSCSI solution is that it would allow us to start out with our current nodes and upgrade them at a later time (we can't add other SAS controllers, but they already have 4 GigE ports each). With the right (iSCSI|SAS) controllers I should be able to connect diskless nodes and boot them off the external storage which I think is a good thing (get rid of any local storage). On the other hand, I would have thought the SAS one to be cheaper but it seems like an MD3200 actually costs a little less than an MD3200i (?) (please note: I've used Dell gear in my examples since that's what we're looking for but I assume the same goes with other vendors) I would like to know if my assumptions above are correct, and if I'm missing any important difference between the two setups.

    Read the article

  • Kickstart: Serve dynamic kickstart images via a CGI or PHP script?

    - by Stefan Lasiewski
    I'd like to kickstart a couple dozen RHEL6/SL6 servers. However, some of these servers are different, and I don't want to create a new ks.cfg file for each class of server. Are there any products that can generate a Kickstart file dynamically, on the fly, from a template? For example, I could append a line like this to the KERNEL entry:

        APPEND ks=http://192.168.1.100/cgi-bin/ks.cgi

    Then the script ks.cgi could determine which host this is (via the MAC address) and print out Kickstart options appropriate for that host. I could optionally override some options by passing parameters to the script, like this:

        APPEND ks=http://192.168.1.100/cgi-bin/ks.cgi?NODETYPE=production&IP=192.168.2.80

    After we kickstart a server, we activate Cfengine/Puppet on it and manage it using our favorite configuration management product. We're experimenting with xCAT, but it is proving too cumbersome. I've looked into Cobbler, but I'm not sure it does this.

    Update: a roll-your-own solution is discussed in the O'Reilly book "Managing RPM-Based Systems with Kickstart and Yum", Chapter 3, "Customizing Your Kickstart Install - Dynamic ks.cfg", which echoes some of the comments in this thread:

        To implement such a tool is beyond the scope of this Short Cut, but I can walk through the high-level design. Any such solution would mix a data store (the things that change) with a templating solution (the things that don't change). The data store would hold the per-machine data, such as the IP address and hostname. You would also need a unique identifier, perhaps the hostname, such that you could pick up a given machine's data. The data store could be a flat file, XML data, or a relational database such as PostgreSQL or MySQL. In turn, to invoke the system, you pass a machine's unique identifier as a URL parameter. For example:

            boot: linux ks=http://your.kickstart.server/gen_config?host-server25

        In this example, the CGI (or servlet, or whatever) generates a ks.cfg for the machine server25.

    But where, oh where, is the code for ks.cgi?
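
    For what it's worth, a minimal sketch of what such a ks.cgi could look like. This is purely illustrative: the class table, template contents and repository URL are assumptions, not a real inventory, and a production version would key on the MAC address or IP rather than a query parameter.

        #!/usr/bin/env python
        # Minimal sketch of a ks.cgi (hypothetical). It fills a kickstart template
        # from a small per-class table keyed by the NODETYPE query parameter; the
        # table contents, URLs and kickstart directives are illustrative only.
        import cgi

        CLASSES = {
            "production": {"packages": "@base\n@web-server"},
            "test":       {"packages": "@core"},
        }

        TEMPLATE = """install
        url --url=http://192.168.1.100/sl6/x86_64/os/
        lang en_US.UTF-8
        keyboard us
        network --bootproto=dhcp
        rootpw --iscrypted $1$changeme$xxxxxxxxxxxxxxxxxxxxxx
        timezone --utc UTC
        bootloader --location=mbr
        clearpart --all --initlabel
        autopart
        %packages
        {packages}
        %end
        """

        def main():
            form = cgi.FieldStorage()
            cls = CLASSES.get(form.getfirst("NODETYPE", "test"), CLASSES["test"])
            # HTTP header, then the rendered kickstart
            print("Content-Type: text/plain\n")
            print(TEMPLATE.format(**cls))

        if __name__ == "__main__":
            main()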

    Read the article

  • Is this way of using Excel 2007 Pivot table for BI scalable ?

    - by Sim
    Hi all,

    Background:

    - We need to consolidate sales data from across the country for analysis.
    - Our internet connectivity, IT expertise and IT investment are not very strong, so a full BI solution is out of the question.
    - I tried several SaaS BI solutions (GoodData, Zoho Reports) and while they're good, they don't seem to fully support what we need.
    - We're looking at about 2 million records for every 2 months.

    My current approach:

    - Each of our 10 sites currently gathers data from all of its branches and consolidates it into one Excel file with a pivot table and embedded source data.
    - At HQ, I will ask the 10 sites to send back those Excel files periodically.
    - We will import those Excel files into our MSSQL server.
    - There will be a master Excel file with the same pivot table (as the ones from the site Excel files), with the MSSQL server as its data source.

    More details: for testing, I currently use MSSQL 2008 Express on my laptop. So far I have imported our transactions for the past 2 months and there are 2 million+ rows in one table in MSSQL (we use just one table, corresponding to our common pivot table structure); the database size is about 600 MB. The master Excel file is under 10 MB without the source data; including the source data increases it to 60 MB (so I suppose Office 2007 automatically compresses the data?). I tried working the pivot table (dragging and dropping fields) and performance so far is OK (my laptop specs: C2D T7200, 3 GB RAM, Windows XP).

    So my questions are:

    - If we're looking at a full year of transactions (roughly 15 million rows in MSSQL 2008 Express, 3.6 GB in size), is there any issue with 15 million rows in one table in SQL Express?
    - Is there any performance issue with the pivot table at that size? Can it still embed the source data? (I googled but didn't find the maximum size of source data Excel 2007 can embed.)
    - Any other suggestions on how we can do this better? Given that we can't afford a full BI solution, is there any lightweight/budget/SaaS BI tool you can recommend?

    Thanks

    Read the article

  • Multiple Devices connecting to VPN on CentOS server

    - by jfreak53
    I am looking for advice on what VPN software would suit multiple OSes and devices. I currently have 15 systems to connect to a VPN. I was using Hamachi from LogMeIn, but their lack of Android support really upsets me, and their limited support for Linux is also a let-down. 90% of my systems are Ubuntu 11+ and only 2 are Windows XP, but I also have a few people, maybe 3, who need to connect from Android devices. This is where Hamachi has let me down, and I want to move to my own VPN solution. The server would be a simple VPS running CentOS, so I need VPN software that lets all of those clients connect to a Linux-based server.

    I wanted to go with OpenVPN, but my understanding is that on every OS you have to install their client software to connect to the VPN. Ubuntu supports VPNs out of the gate, but OpenVPN requires extra software to be installed, which I'd like to avoid if I can; the same goes for Windows and for Android. Plus, Android mostly requires rooted devices for OpenVPN, at least from what I've read. I was looking at L2TP, but I'm not sure how easy it is to get Ubuntu systems connected with it, as I haven't found much on the subject, let alone Windows XP machines; I do know Android connects to it out of the gate. I don't know much about L2TP, but from what I've read it's a pain to get running on CentOS. The last option is some sort of PPTP software, but I've never read anything about it and don't know whether all of these systems are compatible with it.

    What would be your solution for these devices and multiple OSes? OpenVPN seems to be where I'm heading; I just don't like that it always requires client software and rooted Android devices. Any solutions for this, and installation suggestions? Would a different server OS, like Ubuntu, make another type of VPN easier?

    Read the article

  • How to connect a USB GDI printer to Linux over a D-Link print server?

    - by jpe
    The setup is the following:

        +------------+       +-----------------+         +----------+
        | HP LJ P1005|--USB--| D-Link DPR-1020 |---LAN---| PC Linux |
        +------------+       +-----------------+    |    +----------+
                                                    |    +------------+
                                                    +----| PC Windows |
                                                         +------------+

    The HP LJ P1005 is one of those GDI printers that requires the printer driver to do most of the work for it and is therefore a bit "special". The D-Link DPR-1020 is a print server with an Ethernet and a USB port that actually supports printing to challenged (read: GDI) printers using a utility called PS-Link. What the utility basically does is mirror a USB port over the network to the print server, so that the printer driver and the printer are both happy to talk to each other. The PCs are notebooks that come and go, i.e. they are not there all the time.

    Is there an equivalent of the D-Link PS-Link utility for Linux that could mirror a USB port over the network for a Linux host? And can that solution be used with the D-Link DPR-1020? If not, then I basically wasted the money on the print server, because the goal was to share a small printer among a couple of users with diverse operating systems in an office. The print server specs say that it supports Linux and the LJ P1005, but the catch-22 appears to be the solution used for GDI printers...

    It should be noted that it is possible to print from Linux to the LJ P1005 directly over USB. So far, sharing meant reconnecting the USB cable to the appropriate computer to print; now one of the desks has been moved, so the cable no longer reaches. Searching the net did not yield anything useful. Please do not suggest solutions involving a Windows machine (virtual or not); my question is whether a solution involving only a Linux machine exists.

    Read the article

  • Software/hardware to build video streaming server?

    - by Sasha Yanovets
    I am looking for a video streaming server solution - something like an online TV server - with the ability to make live broadcasts on the internet. What software could you recommend for that? What kind of hardware should it run on, and is there anything special to look for? I am looking for a solution that can be scaled up to at least 1000 simultaneous online users with good video resolution. I think a general answer on what direction to choose is useful, but here are more details on my specific case. I am starting almost from scratch: we have some video content that we've produced, but it is not delivered over the internet yet, and we are not tied to any particular vendor for now. We want to stream 24 hours a day as three 8-hour blocks, with the content changing every day, and we want the ability to make regular live broadcasts. I guess we will need several streaming quality options (low ~56 kb/s, mid ~273 kb/s). Some terms are simply foreign to me (like "play-truncation rate"), so if you could point out which parameters we should be aware of, that would be great. The uplink to the internet is yet to be determined; we plan to start with something and scale up along the way. If you already run some kind of media streaming server, describing its configuration here (hardware, OS, software) and the peak number of concurrent users it serves would help people approaching this task.
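
    One open-source direction, purely as a sketch (it assumes nginx built with the third-party nginx-rtmp-module): ingest the live encoder over RTMP and repackage it as HLS, which ordinary HTTP servers or a CDN can then scale out to the ~1000 viewers.

        rtmp {
            server {
                listen 1935;
                application live {
                    live on;
                    # repackage the incoming stream as HLS segments for HTTP delivery
                    hls on;
                    hls_path /tmp/hls;
                    hls_fragment 5s;
                }
            }
        }
        # serve /tmp/hls from an ordinary http { server { ... } } block, or push the
        # segments to a CDN; multiple bitrates can be offered as separate applications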

    Read the article

  • Lots of Apache processes are running and CPU usage is always above 70%

    - by Barkat Ullah
    I am running a Plesk panel on a 1and1 server. I have 120 sites running, all using the Pligg CMS, and each site gets about 600 visitors per day. The server details: HDD 1000 GB, RAM 16 GB, 6-core processor. I always see a lot of Apache processes in my # top view, so the server seems overloaded. If I could reduce the number of Apache processes, I think the server would be OK, but I don't know why so many are running. Please see the link below for a screenshot of my # top view: http://dl.dropbox.com/u/26967109/%23Top-2.jpg Sometimes I saw too many connection errors in my Plesk control panel, so I added the line below to my [mysqld] section: set-variable=max_connections=416 But that didn't solve it. I have also added MaxClients and ServerLimit 416 to /etc/httpd/conf/httpd.conf, still with no luck. I have been researching for more than 7 days without finding a solution. During peak hours my sites take too long to load, but off-peak they are fine. Please help me find the actual problem.
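
    For reference, the usual tuning knobs for this symptom are the prefork MPM limits and KeepAlive. A rough httpd.conf sketch (it assumes the prefork MPM; the numbers are placeholders to be sized from available RAM divided by the per-process RSS seen in top, not recommendations):

        <IfModule prefork.c>
            StartServers          10
            MinSpareServers       10
            MaxSpareServers       20
            ServerLimit          200
            MaxClients           200
            MaxRequestsPerChild 4000
        </IfModule>
        # short keep-alives stop idle clients from pinning Apache processes
        KeepAlive On
        KeepAliveTimeout 3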

    Read the article

  • Exchange 2003: recover mailbox when migrated to 2007

    - by lyngsie
    We have migrated almost all of our mailboxes from Exchange 2003 to 2007, so both versions are still up and co-existing in our domain. One mailbox is missing some mail, and I tried to recover it from an Exchange 2003 backup using ExMerge as usual, but it throws an error:

        Error opening message store (MSEMS). Verify that the Microsoft Exchange
        Information Store service is running and that you have the correct
        permissions to log on. (0x8004011d)

    This is apparently expected when a mailbox has been migrated to 2007 and you try to recover a 2003 backup, as stated here: http://www.petri.co.il/restoring-exchange-2003-mailboxes-exchange-2007-exmerge.htm We then tried the solution given in the article (copying the user's "homeMDB" value and pasting it into "msExchOrigMDB" on the Recovery Storage Group object in ADSI Edit), but the error unfortunately persists. So my question is: how can I extract a .pst from a mailbox in the Recovery Storage Group while both Exchange 2003 and 2007 are running? I assume this co-existence is why the solution in the article didn't work. If it can't be done with the regular tools (ExMerge, eseutil, etc.), which third-party tool should we choose - preferably the cheapest, as we only need this one mailbox recovered.

    Read the article

  • Does a VPN requirement kill the concept of having a Web Application in the Cloud?

    - by Christian
    Recently I posted a question on SO, but so far I have no answers, and I wonder if I'm asking the wrong question. This is the problem: we need to design an application that offers a public HTTP web service, but at the same time it must consume some services over a VPN connection from another, existing company. There is no alternative to using a VPN connection to reach those services. We want to host our application on cloud infrastructure like Heroku or Amazon EC2, but there is no direct way to access the other company's VPN services from there. The solution I'm considering, but don't like, is to have a separate server that exposes the services from that VPN - but that means setting up another server, which I would prefer to avoid. If that turns out to be the solution, can I use an Amazon EC2 instance to connect to a VPN? Is that the right way to think about it? I have no experience with VPNs, tunnels or that kind of networking. I would really appreciate an alternative solution, or even just a comment.

    Read the article

  • Is there any way to distribute x264 encoding jobs across multiple computers (to increase the encoding speed)?

    - by Breakthrough
    Does anyone know of a current, active solution to encoding x264 videos across many computers (via the network) to increase encoding FPS? Brownie points for cross-platform and open source, but just so you all know, I usually use Windows. Programs that I have heard of, and why I do not believe they are suitable: x264farm: Not actively developed. Good interface, but does not support two-pass encoding, and fails with newer x264 builds. ELDER: Again, not actively developed, but my issue was that it didn't work with new x264 builds, and it was very difficult to configure (read: randomly stopped working). While I don't absolutely need a program which is being actively developed, I would like one that supports two-pass encoding, and works with new(er) x264 builds. Additional information: So far, I've offered (and awarded!) two separate bounties on this question since I first posted it over two years ago, and I still haven't found a solution to this problem. What I'm looking for basically is a simple program to allow me to encode x264 videos using the processing power of multiple computers connected over a LAN. Furthermore, it would be nice if it worked with new(er) x264 builds, and supported two-pass encoding. If at any time someone has an updated answer, or a new solution to this problem, please post it and it will be given some consideration.

    Read the article

  • How to change .htaccess file to work right in localhost?

    - by Manolo Salsas
    I have this snippet in my .htaccess file to prevent users from hotlinking the server's images:

        RewriteEngine On
        RewriteCond %{HTTP_REFERER} ^$ [OR]
        RewriteCond %{HTTP_REFERER} !^http://(www.)?itransformer.es/.*$ [NC]
        RewriteRule \.(gif|jpe?g|png|wbmp)$ http://itransformer.es [R,L]

    Of course it does not work on my localhost, and I don't know how to achieve that. My guess is that I should replace the domain name with some kind of wildcard. Any idea?

    Update: I finally found the answer thanks to @Chris's solution:

        RewriteCond %{HTTP_REFERER} ^$ [OR]
        RewriteCond %{HTTP_REFERER} ^https?://%{HTTP_HOST}/.*/usuarios/.*$ [NC]
        RewriteRule \.(gif|jpe?g|png|wbmp)$ http://%{HTTP_HOST} [R=301,L]

    The /usuarios/ directory is there because I only want to deny direct access to files inside that directory.

    Update 2: For some reason it stopped working again. I now think I have found a better solution:

        RewriteCond %{REQUEST_FILENAME} .*/usuarios/.*$ [NC]
        RewriteRule \.(gif|jpe?g|png|wbmp)$ http://%{HTTP_HOST} [R=301,L]

    I say better because what I want to deny is direct access to a file (image).

    Update 3: Well, after a while I discovered the above wasn't exactly what I wanted either, so this is the definitive version:

        RewriteCond %{HTTP_REFERER} ^$ [OR]
        RewriteCond %{HTTP_REFERER} !^https?://itransformer.*$ [NC]
        RewriteRule /usuarios/.*\.(gif|jpe?g|png|wbmp)$ - [R=404,L]

    Just two doubts. First, if I change the above to

        RewriteCond %{HTTP_REFERER} ^$ [OR]
        RewriteCond %{HTTP_REFERER} !^https?://%{HTTP_HOST}.*$ [NC]
        RewriteRule /usuarios/.*\.(gif|jpe?g|png|wbmp)$ - [R=404,L]

    it doesn't work. I don't understand why, because %{HTTP_HOST} is equal to itransformer on my localhost, so it should work. The second doubt is why the default 404 page is shown instead of my custom page (which is shown for all other 404 responses).
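
    On the first doubt: in Apache 2.2, mod_rewrite only expands server variables in the test string (the middle argument of RewriteCond), not inside the condition pattern, so %{HTTP_HOST} on the right-hand side is matched as literal text. One hedged workaround sketch is to put both values into the test string and compare them with an internal backreference (join characters and the optional www. group are assumptions to adapt):

        RewriteCond %{HTTP_REFERER} ^$ [OR]
        RewriteCond %{HTTP_REFERER}@@%{HTTP_HOST} !^https?://(www\.)?([^/]+)/.*@@\2$ [NC]
        RewriteRule /usuarios/.*\.(gif|jpe?g|png|wbmp)$ - [R=404,L]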

    Read the article

  • nginx redirect proxy

    - by andrew
    I have a web app running on an nginx server on the local IP 192.168.0.30:80. I have this in my /etc/hosts:

        127.0.0.1 w.myapp.in

    If someone accesses the app through the "w" subdomain, it shows a WebDAV interface; otherwise it runs normally (for example, http://myapp.in goes into the app and http://w.myapp.in goes into the WebDAV interface - this is handled inside the app, nginx has nothing to do with it). Because I don't have DNS or anything like that, users must access the app by IP. The problem is that someone who wants the WebDAV interface cannot reach the app by subdomain - unless they add a line to their local hosts file, which is not a solution.

    A possible solution: if it's possible to set up the nginx server so that a request to http://192.168.0.30 (port 80) goes into the app as usual, but a request to, say, http://192.168.0.30:81 (another defined port) is redirected internally to w.myapp.in, then the app would see the subdomain. Given the app, can this be done? If yes, what should I put in the nginx config file? And if you can think of a better solution, I'm open to anything.
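
    The internal-redirect idea can be sketched directly in nginx: a second server block on port 81 that proxies to the app on port 80 while forcing the Host header the app checks for. This is a sketch; the upstream address and header value are taken from the question and may need adjusting.

        server {
            listen 81;
            location / {
                proxy_pass http://127.0.0.1:80;
                # present the request to the app as if it arrived on the "w" subdomain
                proxy_set_header Host w.myapp.in;
            }
        }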

    Read the article

  • Synchronize two directories on linux pc

    - by Gab
    I need a distributed filesystem (or a synchronization tool) capable of keeping a directory synchronized across 4 PCs. My requirements are:

    - offline access (data must be available offline on each PC)
    - preserve execution rights: some files are marked executable on a Linux partition, and this flag should be replicated
    - efficient sync strategy: some of my files are 20 GB and are changed quite often, but only in very small parts (VirtualBox images); delta transfers are welcome
    - efficient handling of space: no per-file history, and files shouldn't be copied to temp directories "just in case you break it"
    - it must propagate deletions of files
    - modifications can happen on any of the 4 PCs and should be propagated when the other PCs are connected

    Other details of my setup: sync is over a LAN, the total amount of data to be synced is around 180 GB in some tens of thousands of files, and changes are small but can happen in big files. At the moment I'm only interested in a Linux-only solution, and conflicts either don't happen or can be solved with "last one wins". I haven't found any good solution. I've been trying:

    - unison: the only one working at the moment, but during the hashing phase it hangs my PC for some minutes, with the disk light steady on
    - SparkleShare: doesn't handle large files nicely and keeps a history of all your changes that grows indefinitely; they promise it will be fixed in the next releases, but at the moment it still doesn't fit my needs
    - ownCloud: keeps a history of each file I change
    - Coda? (help! I couldn't set it up correctly!)
    - git-annex assistant: turns all your files into symlinks and marks the original file read-only ("just in case you make a mistake while you modify it"!); before you edit a file you have to issue a special command, "git annex unlock", which creates a local copy of the file, and you have to remember to lock it again if you want it synchronized

    What should I try next?

    Read the article

  • Allowing access to company files across the internet

    - by Renaud Bompuis
    The premise: I've been tasked with finding a solution to the following scenario:

    - our main file server is a Linux machine
    - on the LAN, users simply access the files using SMB
    - each user has an account on the file server and his/her own access rights
    - user accounts are simple passwd/group security accounts, not NIS/LDAP

    The problem: we want to give users (or at least some of them, say those belonging to a particular group) the ability to access the files over the internet while travelling. Ideally I'd like a seamless solution - maybe something that allows the user to access a mapped drive would be ideal. A web-oriented solution is also fine, but it should present files in a way that is familiar to users, in an explorer-like fashion for instance. Security is a must, of course: users would be expected to log in, and the connection to the server should be encrypted. Does anyone have pointers to neat solutions? Any experiences? Edit: the client machines are Windows only.
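
    One commonly used direction, sketched under stated assumptions (Apache with mod_dav and mod_ssl placed in front of the same file tree; hostnames and paths are placeholders): Windows clients can map a WebDAV share over HTTPS as a network drive, although the built-in WebDAV client has quirks, and note that Apache reads files as its own Unix user, so the per-user passwd/group rights would have to be re-expressed in the Apache config.

        <VirtualHost *:443>
            ServerName files.example.com
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/files.crt
            SSLCertificateKeyFile /etc/ssl/private/files.key
            Alias /files /srv/files
            <Location /files>
                Dav On
                SSLRequireSSL
                AuthType Basic
                AuthName "Company files"
                AuthUserFile /etc/apache2/dav.passwd
                Require valid-user
            </Location>
        </VirtualHost>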

    Read the article

  • Reverse proxy for mailserver (SMTP + HTTP for web client)

    - by ba
    I'm looking at doing some reverse proxy work for a mail server and its corresponding web client. Both run on the same machine, and this is not a server with a high load. :) The solution I've discussed with friends is to keep the mail server/web client on our internal network and put a reverse proxy in the DMZ that handles both the SMTP and the web client HTTP traffic to the mail server on the internal network. From what I understand this is the recommended secure setup? For the SMTP part of the proxy, I've been thinking of using Postfix, which would receive mail, run Spamhaus lookups and similar anti-spam measures, and, if everything checks out, pass the mail on to the mail server on the inside. The internal mail server would send all outgoing mail to the proxy, which would then send it out to the internet. For the web client part I'm not sure which software to run on the proxy machine - I've been thinking about Squid, but that's basically just because I know Squid is an HTTP proxy. The web client traffic will go out over SSL. Reading around here on Server Fault I've seen other people use Apache with mod_proxy + mod_security for similar situations. Am I thinking about this correctly? What software would you use, and with which modules? Thanks in advance for the help! :)
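
    A minimal sketch of the two halves on the DMZ host, under stated assumptions (Apache with mod_proxy and mod_ssl for the web client, Postfix as the SMTP relay; all hostnames and the domain are placeholders):

        # HTTP half: Apache virtual host that forwards the web client to the inside box
        <VirtualHost *:443>
            ServerName webmail.example.com
            SSLEngine on
            SSLProxyEngine on
            ProxyPass        / https://mail.internal/
            ProxyPassReverse / https://mail.internal/
        </VirtualHost>

        # SMTP half: Postfix main.cf entries that accept mail for the domain and,
        # after the anti-spam checks, hand it to the inside server
        relay_domains  = example.com
        transport_maps = hash:/etc/postfix/transport
        # /etc/postfix/transport:   example.com   smtp:[mail.internal]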

    Read the article

  • Windows disk change monitoring for malware analysis

    - by SuperDuck
    Not sure if this question belongs here, because it has some relation to Server Fault (system backups) and to Stack Overflow (software analysis). I'm looking for a solution to monitor disk changes on a Windows system and selectively revert them. It should be able to handle live files like registry hives, so it may need to be an offline backup tool. It shouldn't silently skip files that the current admin user has no permissions on (files with no permission entries or owned by the 'system' user). Registry change tracking would be a bonus but is not a requirement. I use virtual machines for malware analysis, and there isn't even a solution for listing file changes inside disk snapshot files (delta VMDKs). I currently use Ashampoo for monitoring changes. Though it's the best among similar tools, it's not good software and hasn't really evolved across the many 'platinum' and 'deluxe' versions released over the last 10 years (it even used non-resizable windows until the latest version). The real problem is that it misses some disk/registry changes. Perhaps it only compares modification dates and misses a change when the dates are preserved, so I think the solution should compare files by hash, or at least by file size. There is plenty of backup software out there and I'm sure one of them can handle this, offline or online.
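
    Since date-based comparison is the weak point identified above, here is a rough sketch of hash-based change detection between two directory snapshots. It is illustrative only: the snapshot-file argument is an assumption, and live/locked files such as registry hives would still need an offline or VSS-based copy to be readable.

        #!/usr/bin/env python
        # Sketch: walk a tree, hash every file, and diff against a saved snapshot.
        import hashlib
        import json
        import os
        import sys

        def snapshot(root):
            state = {}
            for dirpath, _, files in os.walk(root):
                for name in files:
                    path = os.path.join(dirpath, name)
                    try:
                        digest = hashlib.sha256()
                        with open(path, "rb") as f:
                            for chunk in iter(lambda: f.read(1 << 20), b""):
                                digest.update(chunk)
                        state[path] = [os.path.getsize(path), digest.hexdigest()]
                    except (IOError, OSError):
                        # do not silently skip unreadable files - record them instead
                        state[path] = ["unreadable", None]
            return state

        def diff(old, new):
            for path in sorted(set(old) | set(new)):
                if path not in old:
                    print("added    " + path)
                elif path not in new:
                    print("deleted  " + path)
                elif old[path] != new[path]:
                    print("changed  " + path)

        if __name__ == "__main__":
            root, state_file = sys.argv[1], sys.argv[2]
            current = snapshot(root)
            if os.path.exists(state_file):
                with open(state_file) as f:
                    diff(json.load(f), current)
            with open(state_file, "w") as f:
                json.dump(current, f)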

    Read the article

  • Accessing a webpage folder with .htaccess in it via apache webdav?

    - by pingo
    I have set up WebDAV access so that an external user can upload the content of his web page to his folder on my server, which Apache serves to the web. This way he can update his web page via WebDAV. The problem is that the user requires a .htaccess file, and of course the .htaccess breaks WebDAV, probably because it overrides settings (new files can no longer be uploaded via WebDAV once the .htaccess specified below exists). I am running Apache 2.2.17 and this is my WebDAV config:

        Alias /folderDAV "d:/wamp/www/somewebsite/"
        <Location /folderDAV>
            Order Allow,Deny
            Allow from all
            Dav On
            AuthType Digest
            AuthName DAV-upload
            AuthUserFile "D:/wamp/passtore/user.passwd"
            AuthDigestProvider file
            require valid-user
        </Location>

    This config is part of my naive attempt at fixing the problem: the idea was to define an alias to the web page folder where WebDAV would be enabled, and then set AllowOverride to None so that the .htaccess would have no effect. Of course I then found out that the AllowOverride directive is not valid inside <Location />. The .htaccess file looks like this:

        # opencart settings
        Options +FollowSymlinks
        Options -Indexes
        <FilesMatch "\.(tpl|ini)">
            Order deny,allow
            Deny from all
        </FilesMatch>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)\?*$ index.php?_route_=$1 [L,QSA]
        ErrorDocument 403 /403.html
        deny from 1.1.1.1/19
        allow from 2.2.2.2

    What would be the solution here? I would like the web page to be accessible from the web, but at the same time be able to access and modify it via Apache's WebDAV (with digest auth). How would I do that? If possible, I would also like a solution that lets the .htaccess stay in place, so the user keeps the power to set up access rules for his own web page.

    Read the article

  • Make a drive from one machine appear as a physical disk in another machine.

    - by Roberto Sebestyen
    I want to take a physical disk (or part of a disk) in one machine (call it machine-A) and make it available in another machine (machine-B). But I don't want to map a network drive - I want it to appear in machine-B as a physical drive, even though it is not one. The reason I want to do this is that I want the ability to create shares on that drive in machine-B. Since I cannot do that on mapped drives, I need some utility that fools machine-B into thinking it is a physical drive and treating it as such. Both of these machines are running Windows Server 2003. I have heard about NFS - it sounds like it could be the solution to my problem, but isn't that a Linux/Unix protocol? What tools can I use to make this happen? Are there any open source solutions? I don't care what the solution is, as long as it achieves the end result, though preferably open source. Thanks for reading, guys and gals!

    Read the article

  • Large scale file replication with an option to "unsubscribe" from a replicated file on a given machine

    - by Alexander Gladysh
    I have 100+ GB of files per day incoming on one machine. (File sizes are arbitrary and can be adjusted as needed.) I have several other machines that do some work on these files. I need to reliably deliver each incoming file to the worker machines, and a worker machine should be able to free its HDD of a file once it is done working with it. It is preferable that a file be uploaded to a worker only once, processed in place, and then deleted, without being copied anywhere else - to keep the already high HDD load down. (The worker itself also requires quite a bit of bandwidth.) Please advise a solution that is not based on Java. None of the existing replication solutions I've seen can do the "free the HDD of the file once processed" part - but maybe I'm missing something... A preferable solution should work with files (from the point of view of our business logic code) and not require the business logic to connect to a queue or anything similar. (Internally the solution may use whatever technology it needs - except Java.)

    Read the article

  • Combat server downtime by duplicating server and re-routing when main server is down

    - by Wasim
    I have a CentOS server which at times either crashes or gets hit by DDoS attacks. At the moment I have an off-site backup which holds 1.7 TB of data. I'm currently paying as much for the backup as I am for the server, and I'm looking for advice from experienced people on how best to proceed. Would it be a viable solution to ditch the off-site backup and instead purchase an additional server that is an exact duplicate of the first, so that if the first server goes down, users are re-routed to the second server without even noticing the first is down? This would give me an automatic backup of the first server (albeit not off-site) and remove the need for the expensive off-site backup. Is this a true replacement for a pricey backup, or is an off-site backup absolutely necessary? And how would I go about doing it? (It's obviously fairly complex, so links to some reading material or just the terminology for the procedure would be great.) I appreciate the help and advice.

    Read the article

  • SQL Server Replication Backup

    - by user18039
    Hi,

    We have a new system that runs on SQL Server 2008 R2 64-bit. There is a primary online transaction processing (OLTP) database that accepts a high volume of updates from several thousand point-of-sale systems at stores around the country. To protect this vital function, I have decided to introduce a dedicated reporting database server, from which multiple users will run some pretty complex reports. I realise there were a number of choices, but I decided to use transactional replication as the mechanism for copying the data from the OLTP database to the new reporting database - one-way replication. The solution has worked well in testing. I'm now being asked what changes need to be made to the backup policy to cover the architectural changes. I have read pages such as MSDN's "Strategies for Backing Up and Restoring Snapshot and Transactional Replication", but I think these are overkill for my solution. In fact, my current thinking is that we simply need to continue making backups of the OLTP data and logs. If the reporting database or any of the replication system databases (e.g. distribution) fail, it's no big deal - we can clear everything down and re-create the replication. I realise that taking a complete snapshot of the OLTP database would be time-consuming (approx. 5 hours), but I'd be more relaxed about that than about trying to restore backups of the various data and log files in the correct sequence. My view is that the complex strategies set out in the MSDN article are only the way to go for a more complex replication topology than mine, e.g. multiple subscribers with two-way replication. Would you agree? I'd be grateful for any advice. Many thanks, Rob.

    Read the article

  • Vim: auto-comment in new line

    - by padde
    Vim automatically inserts a comment leader when I start a new line from a commented-out line, because I have set formatoptions=tcroql. For example (the cursor is *):

        // this is a comment*

    and after hitting <Enter> (insert mode) or o (normal mode) I am left with:

        // this is a comment
        // *

    This feature is very handy when writing long multi-line comments, but often I just want a single-line comment. If I want to end the comment series I have several options:

    - hit <Esc>S
    - hit <BS> three times

    Both of these take three keystrokes, so together with the <Enter> that is four keystrokes for a new line, which I think is too much. Ideally, I would like to just hit <Enter> a second time and be left with:

        // this is a comment
        *

    It is important that the solution also works with different indentation levels, i.e.:

        int main(void) {
            // this is a comment*
        }

    hit <Enter>:

        int main(void) {
            // this is a comment
            // *
        }

    hit <Enter>:

        int main(void) {
            // this is a comment
            *
        }

    I think I have seen this feature in some text editor a few years ago but I do not recall which one it was. Is anyone aware of a solution that will do this for me in Vim? Pointers in the right direction on how to roll my own solution are also very welcome.
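
    A rough sketch of a roll-your-own mapping (it assumes C-style // comments; the pattern would need adjusting for other filetypes). It makes a second <Enter> clear a line that contains nothing but a comment leader - effectively the <Esc>S shortcut from the list above, triggered automatically:

        " if the current line is only a comment leader, <CR> wipes it via <Esc>S
        " (which keeps the indent); otherwise <CR> behaves normally
        inoremap <expr> <CR> getline('.') =~ '^\s*//\s*$' ? "\<Esc>S" : "\<CR>"
        " alternative: disable the continuation feature for a buffer entirely with
        "   setlocal formatoptions-=r formatoptions-=o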

    Read the article

  • DNAT from localhost (127.0.0.1)

    - by pts
    I'd like to set up a TCP DNAT from 127.0.0.1, port 4242, to 11.22.33.44, port 5353, on Linux 3.x (currently 3.2.52, but I can upgrade if needed). The simple DNAT rule setup doesn't seem to work: telnet 127.0.0.1 4242 hangs for a minute at "Trying 127.0.0.1..." and then times out. Maybe the kernel is discarding the returning packets (e.g. SYN+ACK) because it considers them martian. I don't need an explanation of why the simple solution doesn't work - I need a solution, even if it's complicated (e.g. it involves creating many rules). I could set up a usual DNAT from another local IP address outside the 127.0.0.0/8 network, but here I need 127.0.0.1 as the destination address. I know I could set up a user-level port-forwarding process, but I'm after a solution that can be set up using iptables alone and doesn't need helper processes. I've been googling this for an hour; it has been asked multiple times, but I couldn't find any working solution. There are also many questions about DNAT *to* 127.0.0.1, but I don't need that - I need the opposite.
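
    For reference, the recipe that usually gets cited for this case, as an unverified sketch: route_localnet requires kernel >= 3.6, so an upgrade from 3.2 would be needed, and eth0 stands in for whatever interface actually reaches 11.22.33.44.

        # allow 127/8 addresses to be routed via the real interface
        sysctl -w net.ipv4.conf.eth0.route_localnet=1
        # DNAT locally generated packets aimed at 127.0.0.1:4242 to the remote host
        iptables -t nat -A OUTPUT -p tcp -d 127.0.0.1 --dport 4242 \
            -j DNAT --to-destination 11.22.33.44:5353
        # rewrite the 127.0.0.1 source address so replies can come back
        iptables -t nat -A POSTROUTING -p tcp -d 11.22.33.44 --dport 5353 \
            -j MASQUERADE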

    Read the article
