Search Results

Search found 16731 results on 670 pages for 'memory limit'.

Page 484/670

  • Exchange 2003 -- Mailbox Management not deleting ALL messages aged 30 days or older...

    - by tcv
    I've recently created a Mailbox Management task within Exchange 2003 that, every night, looks at the contents of the Deleted Items folder within a particular mailbox and deletes mail that's 30 days or older. The scheduled task ran on its own last night and I have confirmed that messages within the right mailbox and the right folder were, in fact, processed. Many mails were deleted ... but not every email older than 30 days. In fact, the selection seems rather random. Last night, 3/10/2010 was the 30-day watermark. Mails from 3/10/2010 were deleted, sure enough, but not all of them. Mails older than 3/10/2010 were deleted as well, but, again, not all of them. The only criterion I have on the management task -- aside from the single mailbox and single folder scopes -- is the age criterion. The size criterion is set to Any, meaning I don't care about the size; I care about the age. It's made me wonder whether there is some sort of limit on how many mails can be processed. The schedule is set for between 12am and 1am every night. Any hints appreciated.

    Read the article

  • Raid 5 with hot spare or RAID 10 with no hot spare?

    - by Boden
    Yes, this is one of those "do my job for me" questions, so have some pity :) I'm at the limit of what I can do with the number of hard drives in this server without spending a substantial amount of money. I have four drives left to configure, and I can either set them up as a RAID 5 with a dedicated hot spare, or a RAID 10 with no hot spare. The usable size will be the same either way, and the RAID 5 will offer enough performance. I'm RAID 5 shy, but I also don't like the idea of running without a hot spare. I'm not so concerned about degraded performance as about the amount of time the system is without adequate redundancy. The server and drives are under a 13x5, 4-hour-response contract (although I happen to know that the nearest service provider is at least 2-3 hours away by car in the winter). I should note that the server also has two RAID 1 arrays which would also be protected by the hot spare. Why don't they make drive cages with 9 bays! Heh.

    Read the article

  • Using NFS for scalable PHP/MySQL web application

    - by Jeroen Moons
    Here's the situation: I have a PHP/MySQL web application that accepts user uploads (PDF files). From these PDF files' pages, a preview image is made on the fly and presented to the web app's users. Some PDFs might be on the large side; most will be under 50 MB, but some extreme cases could be as large as a few hundred MB. A little waiting for the preview image of a large PDF is acceptable, but no more than a minute, let's say. Everything is running on one server for now, but soon the app will hit the server's limits on both storage and processing power. My idea to solve the problem: have one or more PDF processing servers, as needed, and one or more file storage servers. Both types of servers are mounted via NFS on the server the actual app runs on. The app could then use Gearman to delegate PDF processing tasks to the processing servers. A processing server can mount the storage server, read the file stored there, process it, and write its output back to that server. The servers I'm talking about will be Amazon EC2 instances. The web app returns a link to the resulting PDF preview image on the storage server that was used, which can then be used on the front end to show the image to the user. My question: I have zero experience with apps that use multiple servers. Is this idea viable, or is there a better way to do it? Is an NFS setup fast and reliable enough for this situation?
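    For what it's worth, a minimal sketch of what mounting the storage server from a processing node could look like; the hostname and paths below are hypothetical, not part of the original setup:

        # Assumption: an NFS export /srv/pdfstore on a host called storage01.
        sudo apt-get install -y nfs-common               # NFS client tools (Ubuntu/Debian)
        sudo mkdir -p /mnt/pdfstore
        sudo mount -t nfs storage01:/srv/pdfstore /mnt/pdfstore
        # To persist across reboots, add a line like this to /etc/fstab:
        # storage01:/srv/pdfstore  /mnt/pdfstore  nfs  defaults,noatime  0  0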

    Read the article

  • Which Revision Control Software to use for Personal Dropbox?

    - by wag2639
    I want to set up a sync repository that would be similar to Dropbox. Goals/requirements: free (open source very preferable); Linux host (probably Ubuntu); Windows/Mac/Linux clients; potential for multiple users with limited access (optional); preferably easy, doesn't necessarily need to be automatic; revision control very preferable. Basically, I want to be able to use multiple computers, possibly with different OSes, and be able to access, use, and sync files across all of them. I also want to have a local copy of the repository for when I'm not connected to the network (if I'm working on a laptop, I want to keep a local repository to record revisions and merge later with the "master" repository). For example, I'm editing a few pictures on my laptop during the day outside of my network, but when I get home, I would like to sync the changes, including incremental changes, with my desktop at home. I would also like my roommates to be able to access and use this repository too, but with access to certain files limited. For example, I may want to use this to back up financial records but wouldn't want them to have access to those files. I'm a programmer and familiar with SVN, but I know that wouldn't be the most appropriate choice since it doesn't handle binaries well and doesn't keep a local repository. I know better choices exist but I don't really know them well enough to choose the best one.
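    As one illustration only (git is a candidate the post doesn't name, and this sketch ignores the per-file access-control requirement), the distributed-VCS route would look roughly like this; the host and paths are made up:

        # On the Ubuntu host: create a central bare repository.
        ssh user@homeserver 'git init --bare /srv/repos/files.git'
        # On each client: clone it, work locally (full offline history), sync when online.
        git clone user@homeserver:/srv/repos/files.git ~/files
        cd ~/files
        git add . && git commit -m "local changes"
        git pull --rebase && git push    # merge with the "master" copy later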

    Read the article

  • What is the best cloud technology to use for MongoDB/GridFS database servers

    - by Nerian
    We are going to launch a service that will require between 1 and 2 GB of file storage per paid user. I am going to use GridFS for storing files. GridFS is a MongoDB feature that allows storing large files in the database. I am pondering the different options for hosting the database, but since I am inexperienced at deployment and it is my first time with MongoDB, I need your experience. Criteria: I want to spend my time developing my core business, that is, my own application. I am a Ruby on Rails developer. I do not like to mess with server configuration; hence, I would like a fully managed hosting solution, but I would like to know about any other option if you think it is worth it. It should be able to scale, cloud style, pay as you go. The lower the price, the better. So far I know of these services: https://mongohq.com/pricing https://mongomachine.com/pricing https://mongolab.com/about/pricing/ http://cloudcontrol.com/add-ons/mongodb/ They seem to be OK for common needs, that is, no file storage. But I am going to use GridFS, so the size matters, and these services seem to scale quite poorly in price. MongoHQ: the largest plan's max storage is 20 GB, which seems like very little for GridFS. MongoMachine: flat price, $2.50 per GB; I didn't find the limit. Seems like a good price compared to the others. MongoLab: 3.984 GB max, which I don't think I will hit, so perfect; $8 per GB, quite costly. CloudControl: the largest plan is 20 GB; the custom service starts at €250 plus some unspecified charge per GB. What is your experience with these services? Any downtimes? Other possibilities? Edit: added the meaning of GridFS.
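    For context on what storing files in GridFS looks like in practice, a minimal sketch using the mongofiles tool that ships with MongoDB; the database name and file are placeholders, and a reachable mongod on localhost is assumed:

        # Put a PDF into GridFS, list what's stored, and pull it back out.
        mongofiles --host localhost --db filestore put sample.pdf
        mongofiles --host localhost --db filestore list
        mongofiles --host localhost --db filestore get sample.pdf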

    Read the article

  • Why does Windows 7 have three system partitions?

    - by Ben
    I am using Windows 7, and I wanted to make a system image (using Windows 7's built-in tool), but Windows 7 marked three partitions as System (the 100 MB partition + C (the install partition) + D (my partition for my files; all programs are installed on C)). I don't want to back up my D partition, but that is not really the point: I don't want Windows messing with my other partitions and marking them as system. Is there a way to limit Windows 7 to just partition C (the install partition)? If there is no way to stop Windows from making other partitions system, can I at least delete the files that make partition D a system partition? PS: all three partitions are on one physical disk; partitions on other disks aren't treated as System. FACTS: desktop PC, no OEM partitions, and I personally have installed Windows 7 (many times) on the C partition. Why is my D partition marked as a system partition when I try to create a system image (using the Windows 7 Ultimate built-in tool), even though Windows (and all the software) is installed on the C partition? Is there a way to make D a "normal", non-system partition? Here is a picture of what it looks like when I try to create a system image. Once again, why is D also a system partition?

    Read the article

  • Under what conditions will sendmail try to immediately resend a message instead of waiting for the standard requeue interval?

    - by Mike B
    CentOS 5.8 | Sendmail 8.14.4 I used to think that if Sendmail experienced a temporary (400-class) error during delivery, it would place the message in a deferred queue (e.g. /var/spool/mqueue) and retry an hour later. For the most part, that appears to be the case. But every now and then, I'll notice log entries like this (emails/domains renamed to protect the innocent :-) ):

        Dec 5 01:43:03 foobox-out sendmail[11078]: qBE3l7js123022: to=<[email protected]>, delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=124588, relay=exbox.foo.com. [10.10.10.10], dsn=4.0.0, stat=Deferred: 421 4.3.2 The maximum number of concurrent connections has exceeded a limit, closing transmission channel
        Dec 5 01:53:34 foobox-out sendmail[12763]: qBE3l7js123022: to=<[email protected]>, delay=00:10:31, xdelay=00:00:00, mailer=relay, pri=214588, relay=exbox.foo.com., dsn=4.0.0, stat=Deferred: 452 4.3.1 Insufficient system resources
        Dec 5 02:53:35 foobox-out sendmail[23255]: qBE3l7js123022: to=<[email protected]>, delay=01:10:32, xdelay=00:00:01, mailer=relay, pri=304588, relay=exbox.foo.com. [10.10.10.10], dsn=2.0.0, stat=Sent (<[email protected]> Queued mail for delivery)

    Why did Sendmail try again just 10 minutes after the first attempt and then wait another hour before trying again? If this is expected behavior, what scenarios cause this faster requeue interval to occur?
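    As a hedged aside (paths are the usual CentOS/sendmail defaults and may differ on a given box), these are the places one would normally look to explain the retry timing:

        # How often the queue runner wakes up (the -q interval on the running daemon):
        ps axww | grep '[s]endmail.*-q'
        # Queue-related knobs in the config; MinQueueAge can delay retries,
        # Timeout.queuereturn is how long a message may stay deferred before bouncing:
        grep -E 'O (MinQueueAge|Timeout\.queuereturn)' /etc/mail/sendmail.cf
        # What is sitting in the deferred queue right now:
        mailq | head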

    Read the article

  • Implementing emailing (bulk & event based) features for my website.

    - by Kabeer
    Hello. For my upcoming social networking website, I am looking for suggestions on the best way to implement emailing. Here are some of my requirements and constraints. Requirements: should be able to send emails based on events (new registrations, password changes, etc.), promotions (advertisements based on user consent), bulk mails (newsletters), reminders (profile updates), etc. (I hope I've got the point across); should be able to process faults (incorrect email address, mailbox full, etc.); user-initiated invites (inviting friends to connect). Constraints: as of now I am looking at GoDaddy for hosting; subsequently I may move to, perhaps, the Amazon cloud. GoDaddy seems to be excruciatingly conservative (not always a bad thing) when it comes to the ability to send email. My tests on GoDaddy so far have been discouraging: there is a limit to the number of emails I can send, and sometimes if an email carries special characters it throws strange errors, such as claiming there was a virus-infected attachment (even though I hadn't attached a thing). The replies from GoDaddy support have been equally funny. My intent is not to portray GoDaddy as wrong, but I am looking for a workaround that frees me from the constraints above. I am looking for a mechanism / service that is either free or very cost-effective. I wonder how other sites address this. Mine is a .NET / Windows-based application.

    Read the article

  • Recover data from Dynamic Disk (MBR) bigger than 2TB

    - by Helder
    Here is the situation: a Promise FastTrak TX4310 array with 3 disks (750 GB each) in RAID 5. This comes to around 1500 GB of data. Last week I had the idea of expanding the RAID with an additional 750 GB disk. This would bring the volume to around 2250 GB. I plugged in the disk and used the WebPAM software to do the RAID expansion. However, I didn't account for the MBR 2TB limit, as I didn't remember that the disk was using MBR instead of GPT, and I didn't check prior to the expansion. After a couple of days of expansion, today when I got home, the disk in the Windows disk manager showed the message "Invalid disk", and when I try to activate it, it says "The operation is not allowed on the Invalid pack". From what I can figure, the logical volume on the RAID expanded and passed that info up to the Windows layer, and I ended up with a "larger than 2TB" MBR disk. I'm hoping that somehow I can still recover some data from this, and I was wondering if I can "rewrite" the MBR structure back to the 1500 GB partition size, so I can access the partition in Windows. Right now I'm running an "Analyse" with TestDisk, as I hope the program will pick up the old 1500 GB structure and allow me to somehow revert back to it. I think that even though the logical drive in the RAID is bigger than 2TB, I can somehow correct the MBR to show the 1500 GB partition again. I had a similar problem once, and I was able to recover the data using a similar method. What do you guys think? Is it a dead end? Am I totally screwed because of the extra RAID layer that I'm not accounting for? Or is there another way to approach this? Thanks all!

    Read the article

  • How can I use wildcards in an Nginx map directive?

    - by Ian Clelland
    I am trying to use Nginx to serve cached files produced by a web application, and I have spotted a potential problem: the URL space is wide and will exceed the ext3 limit of 32000 subdirectories. I would like to break up the subdirectories, making, say, a two-level filesystem cache. So, where I currently cache a file at /var/cache/www/arbitrary_directory_name/index.html, I would store it instead at something like /var/cache/www/a/r/arbitrary_directory_name/index.html. My trouble is that I can't get try_files, or even rewrite, to make that mapping. My searching on the subject leads me to believe that I need to do something like this (heavily abbreviated):

        http {
            map $request_uri $prefix {
                /aa*  a/a;
                /ab*  a/b;
                /ac*  a/c;
                ...
                /zz*  z/z;
            }
            location / {
                try_files /var/cache/www/$prefix/$request_uri/index.html @fallback;
                # or
                # if (-f /var/cache/www/$prefix/$request_uri/index.html) {
                #     rewrite ^(.*)$ /var/cache/www/$prefix/$1/index.html;
                # }
            }
        }

    But I can't get the /aa* pattern to match the incoming URI. Without the *, it will match an exact URI, but I can't get it to match just the first two characters. The Nginx documentation suggests that wildcards should be allowed, but I can't see a way to get them to work. Is there a way to do this? Am I missing something simple? Or am I going about this the wrong way?

    Read the article

  • How to know whether MongoDB is running in 64-bit or 32-bit mode

    - by Jim Thio
    My programmer installed MongoDB, and then somehow it doesn't work. I run:

        C:\mongod\bin>mongod
        mongod --help for help and startup options
        Sat Aug 11 22:57:50
        Sat Aug 11 22:57:50 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
        Sat Aug 11 22:57:50
        Sat Aug 11 22:57:50 [initandlisten] MongoDB starting : pid=3800 port=27017 dbpath=/data/db 32-bit host=haryantoi5
        Sat Aug 11 22:57:50 [initandlisten]
        Sat Aug 11 22:57:50 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
        Sat Aug 11 22:57:50 [initandlisten] **       see http://blog.mongodb.org/post/137788967/32-bit-limitations
        Sat Aug 11 22:57:50 [initandlisten] **       with --journal, the limit is lower
        Sat Aug 11 22:57:50 [initandlisten]
        Sat Aug 11 22:57:50 [initandlisten] db version v2.0.7-rc1, pdfile version 4.5
        Sat Aug 11 22:57:50 [initandlisten] git version: 9efe4cce272373b52b96de1309c1fbf0c984305f
        Sat Aug 11 22:57:50 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=0, build=6002, platform=2, service_pack='Service Pack 2') BOOST_LIB_VERSION=1_42
        Sat Aug 11 22:57:50 [initandlisten] options: {}
        **************
        Unclean shutdown detected.
        Please visit http://dochub.mongodb.org/core/repair for recovery instructions.
        *************
        Sat Aug 11 22:57:50 [initandlisten] exception in initAndListen: 12596 old lock file, terminating
        Sat Aug 11 22:57:50 dbexit:
        Sat Aug 11 22:57:50 [initandlisten] shutdown: going to close listening sockets...
        Sat Aug 11 22:57:50 [initandlisten] shutdown: going to flush diaglog...
        Sat Aug 11 22:57:50 [initandlisten] shutdown: going to close sockets...
        Sat Aug 11 22:57:50 [initandlisten] shutdown: waiting for fs preallocator...
        Sat Aug 11 22:57:50 [initandlisten] shutdown: closing all files...
        Sat Aug 11 22:57:50 [initandlisten] closeAllFiles() finished
        Sat Aug 11 22:57:50 dbexit: really exiting now

    It seems that mongod is running as a 32-bit build. I have a 64-bit computer and I want to run MongoDB in a 64-bit environment. How do I do so?
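    As a hedged side note (not from the original post): once a mongod is actually up and reachable, the mongo shell can report which build it is; the buildInfo document includes a "bits" field of 32 or 64.

        # Assumes the mongo shell is on the PATH and a mongod is listening on localhost:27017.
        mongo --quiet --eval "printjson(db.serverBuildInfo())"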

    Read the article

  • In a shell script, check the version of an installed package and make a decision based on the output

    - by DJDarkViper
    I'm looking to write a cross-distro / cross-version shell script that makes sure a specific version of PHP is installed. Example: Ubuntu 12.04 has 5.3, Ubuntu 13.10 has 5.5, Debian 7 has 5.4. I need this script, when run on a distro that has an old version of PHP, to update the repo to point to a package for 5.4, and, if the distro has too new a version, to downgrade to 5.4 appropriately. I'm still not entirely clear on the technical limits of what you can do with the shell, and I'll be perfectly frank that I'm still not totally used to the existing tools. The best I can think of at the moment is php -v | grep "PHP 5", but that returns a bunch of potentially changeable granular characters (PHP 5.4.4-14+deb7u5 (cli) (built: Oct 3 2013 09:24:58)), and I'm not sure what to pipe into after this to extract just the characters I'm interested in. I'm not sure if I'm being totally clear or how to ask this. Basically, in an automated shell script for Linux distros, how do I extract the PHP version (preferably just the PHP version number) and make a decision based on that output? EDIT: This line ended up doing pretty well: php -v | grep "PHP 5" | sed 's/.*PHP \([^-]*\).*/\1/' | cut -c 1-3 -- a bit long in the tooth, but it gives me "5.3", "5.4", and "5.5", which is exactly what I need to work with.
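    Building on the pipeline from the edit above, a minimal sketch of the "make a decision" part could look like this; the branch bodies are placeholders, not the poster's actual repo commands:

        #!/bin/sh
        # Grab just the major.minor version (e.g. "5.4") from the first line of `php -v`.
        ver=$(php -v 2>/dev/null | head -n 1 | sed 's/^PHP \([0-9]*\.[0-9]*\).*/\1/')
        case "$ver" in
            5.4) echo "PHP 5.4 already installed; nothing to do." ;;
            5.3) echo "Old PHP ($ver): would add a 5.4 repo and upgrade here." ;;
            5.5) echo "New PHP ($ver): would pin/downgrade to 5.4 here." ;;
            *)   echo "PHP missing or unrecognised version: '$ver'" >&2; exit 1 ;;
        esac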

    Read the article

  • Looking for some IIS redirect help/ideas

    - by CoreyT
    Right now we have a site with a LOT of static ASP pages, such as www.site.com/123.asp. This is due to how our current site's CMS builds its pages by default. I don't have an exact count, but we have roughly 6000 ASP files in the site right now. We are in the middle of a redesign and restructuring of the site, and are looking to migrate to SEO-friendly URLs. The problem we're facing is: what do we do to redirect the old pages to the new friendly URLs? I know how to do redirects; that is not the issue here. The questions I'm coming up with are listed below. 1 - Is there a limit to the number of redirects in IIS? 2 - Would having even a few thousand redirects affect IIS performance? 3 - My understanding is that we would not be passing along page rank to the new URLs; is that true? (Not a major question; I can ask on SEO forums if nobody here is sure.) 4 - Would using something like the IIS URL Rewrite 2 module for IIS 7 help us out, or would I still need to define several thousand unique redirects in it? Our server is currently running Server 2003; however, in the redesign I would be open to migrating to Server 2008 R2 if there is a good case for it (e.g. the URL Rewrite module). Thanks for any guidance or help. I have been looking for a good way to do this for a while now and keep coming up with things that sound problematic and bad (such as having 6000 redirects).

    Read the article

  • How to download large files when the download size is restricted?

    - by Rahul
    In my office, the network admin has restricted downloads to a maximum size of 1.8 MB for any file. This applies to subordinates' accounts only; there are no restrictions on my manager's PC. Is there any way to download files from my PC by using my manager's IP address? I just tried using his IP on my PC, but I had the same problem. Earlier, I was given access to our Linux server from my PC using PuTTY. I used to download large files onto the server and then transfer them from the server to my machine using FireFTP, and that transfer worked perfectly fine. But now I don't have any access to the server. So can I download large files using FireFTP from my own PC? I'm using a Windows XP machine. Please suggest a solution using any possible combination. Thanks.

    Read the article

  • Windows Vista/7 dropping Mac Server share points

    - by Hooligancat
    My Windows Vista and Windows 7 clients are having problems maintaining access to SMB shares on a Mac server. The initial connection to the server appears to be OK, as the Windows clients can see all of the server's share points. However, a client will randomly drop a couple of the share points, although it can still see the server. For example, if I have the following share points on the Mac server: Share A, Share B, Share C, Share D, Share E. The Windows client can see these shares most of the time and can access them most of the time, but randomly a couple of the shares will just get dropped or go missing from the Windows client's view, so I end up with something like: Share B, Share D, Share E. All the share points are established in the same way with the same permission settings. My Mac OS X Server is set up with the following for SMB: SMB sharing enabled; standalone server; workgroup of CORPORATE; Allow Guest Access = YES; client connections limit = 100; authentication: NTLMv2 & Kerberos and NTLM; code page is Latin US (437); this is a workgroup master browser; WINS registration is set to Enable WINS server (tried with the setting off); enable virtual share points for homes = YES. I noticed in my SMB file service log that the clients appear to connect OK, but I get the following error, which implies a reset by either the server or the client: /SourceCache/samba/samba-187.9/samba/source/lib/util_sock.c:read_data(534) read_data: read failure for 4 bytes to client 192.168.0.99. = Connection reset by peer. I am a bit stumped as to which direction to turn to try to get this resolved. Continued attempts to access the server from the client will reconnect to the share points, but they inevitably get dropped again in the near future. Any and all help much appreciated.

    Read the article

  • Windows/IIS Hosting :: How much is too much?

    - by bsisupport
    I have 4 Windows 2003 servers running IIS 6. These servers host a bunch of unique web sites (in that they are all different in build, architecture, etc.). The code behind these sites ranges from straight HTML to classic ASP to 1.1/2.0/3.x flavors of .NET. Some (most) of the sites use a SQL backend, which is hosted on one or two different servers, not the IIS servers themselves. There is no virtualization on these servers and no load balancing for these particular sites. The problem I'm running into is coming up with some baseline metrics, basically a "baseline score", to know when a web server has reached its hosting limit. Today, some basic information about each server is used: how much bandwidth the server pumps out, hard drive space availability, and basic (very basic) RAM & CPU utilization (what it looks like at peak traffic times). I would be grateful if those of you who are 1000x smarter than I am could indulge me with your methods of managing IIS environments, whether performance monitoring specifics, "score" determination like I'm attempting, or the obvious combination of both. Thanks in advance.

    Read the article

  • Play framework 2.2 using Upstart 1.5 (Ubuntu 12.04)

    - by Leon Radley
    I'm trying to get Play 2.2 working with upstart. I've been running Play 2.x with upstart since its release and it's never been a problem, but since the release of 2.2 and the change to the sbt native packager (http://www.scala-sbt.org/sbt-native-packager/), Play doesn't want to start any more. Here's the config I'm using:

        description "PlayFramework 2.2"
        version "2.2"

        env APP=myapp
        env USER=myuser
        env GROUP=www-data
        env HOME=/home/myuser/app
        env PORT=9000
        env ADDRESS=127.0.0.1
        env CONFIG=production.conf
        env JAVAOPTS="-J-Xms128M -J-Xmx512m -J-server"

        start on runlevel [2345]
        stop on runlevel [06]

        respawn
        respawn limit 10 5
        expect daemon

        # If you want the upstart script to build play with sbt
        pre-start script
            chdir $HOME
            sbt clean compile stage -mem $SBTMEM
        end script

        exec start-stop-daemon --pidfile ${HOME}/RUNNING_PID --chuid $USER:$GROUP --exec ${HOME}/bin/${APP} --background --start -- -Dconfig.resource=$CONFIG -Dhttp.address=$ADDRESS -Dhttp.port=$PORT $JAVAOPTS

    I've changed JAVAOPTS to include the -J- prefix, and I've also changed the path to use the new start script located in the bin/ dir. I've read that upstart 1.4 has setuid and setgid. I've tried removing the start-stop-daemon, but I haven't got that working either. Any suggestions would be appreciated.
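    As a hedged troubleshooting aside (the job name "myapp" is an assumption matching the config above), the usual first checks on Ubuntu 12.04's upstart are along these lines:

        initctl version                              # confirm which upstart is actually running
        init-checkconf /etc/init/myapp.conf          # syntax-check the job file, if init-checkconf is installed
        sudo start myapp && initctl status myapp     # start it and see what state upstart thinks it is in
        sudo tail -n 50 /var/log/upstart/myapp.log   # job stdout/stderr is captured here on upstart 1.4+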

    Read the article

  • How to rewrite '%25' in a URL

    - by nn4l
    My website software replaces space characters with '+' characters in the URL; a proper link would look like 'http://www.schirmacher.de/display/INFO/How+to+reattach+a+disk+to+XenServer', for example. Some websites link to that article, but somehow their embedded editor can't handle the encoding, so what I see in the httpd log files is actually GET /display/INFO/How%2525252bto%2525252breattach%2525252ba%2525252bdisk%2525252bto%2525252bXenServer which of course leads to a 404 error. It seems that the '+' character is encoded as '%2b' and then the '%' character is encoded as '%25' - several times. Since there are many such references to different pages from different websites, I would like to rewrite the URL so that visitors get the correct page. Here's my attempt, which does not work: RewriteRule ^(.*)%25(.*)$ $1%$2 [R=301] What it is supposed to do is take everything before the %25 string and everything after it, concatenate those strings with a '%' in between, and then redirect. With the example input URL, the rule should rewrite to /display/INFO/How%25252bto%2525252breattach%2525252ba%2525252bdisk%2525252bto%2525252bXenServer followed by a redirect, then it should rewrite to /display/INFO/How%252bto%2525252breattach%2525252ba%2525252bdisk%2525252bto%2525252bXenServer and again to /display/INFO/How%2bto%2525252breattach%2525252ba%2525252bdisk%2525252bto%2525252bXenServer and so on. Finally, after a lot of redirects, I should be left with /display/INFO/How%2bto%2breattach%2ba%2bdisk%2bto%2bXenServer, which is a valid URL equivalent to /display/INFO/How+to+reattach+a+disk+to+XenServer. My problem is that the expression does not match at all, so it does not even replace a single occurrence of %25. I understand that there is a limit on the number of redirects and that I should really use the [N] flag; however, I don't even get the first step right.

    Read the article

  • VPS goes slow at more than 20 users online at the same time

    - by hachiari
    I have a 512 MB VPS (burstable to 1 GB). Somehow, the site goes slow when there are about 10 users, and becomes impossible to load with 20 users online at the same time. I wonder what the problem could be. The bandwidth connection of the VPS is 1 Gbps. Here are some settings on my VPS:

        KeepAlive Off
        <IfModule prefork.c>
            StartServers          7
            MinSpareServers       7
            MaxSpareServers      10
            ServerLimit          64
            MaxClients           64
            MaxRequestsPerChild   0
        </IfModule>

    my.cnf settings: calculated max memory 300 MB. Output from UnixBench:

        INDEX VALUES
        TEST                                      BASELINE      RESULT    INDEX
        Dhrystone 2 using register variables      376783.7  13429727.4    356.4
        Double-Precision Whetstone                    83.1      1137.5    136.9
        Execl Throughput                             188.3      1637.4     87.0
        File Copy 1024 bufsize 2000 maxblocks       2672.0    148868.0    557.1
        File Copy 256 bufsize 500 maxblocks         1077.0     79430.0    737.5
        File Read 4096 bufsize 8000 maxblocks      15382.0   1410009.0    916.7
        Pipe Throughput                           111814.6   4419722.0    395.3
        Pipe-based Context Switching               15448.6    561505.1    363.5
        Process Creation                             569.3     10272.7    180.4
        Shell Scripts (8 concurrent)                  44.8       514.3    114.8
        System Call Overhead                      114433.5   3537373.8    309.1
        =========
        FINAL SCORE                                                       295.0

    I am afraid that the VPS company limits the number of connections to the VPS... is that possible? The server is in Japan, but the site has global traffic (some of the traffic is from countries with low-speed connections). Could this be the problem? This is a serious problem :( my site just can't grow if this keeps happening... please tell me if you have any idea. Thank you, Bryant
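    As a hedged aside, with numbers like these (MaxClients 64 on roughly 512 MB of RAM) a quick sanity check is how much memory each Apache child actually uses; the process name below may be httpd or apache2 depending on the distro:

        # Average resident memory per Apache child, in MB:
        ps -o rss= -C apache2 | awk '{ sum += $1; n++ } END { if (n) printf "%d procs, avg %.1f MB each\n", n, sum/n/1024 }'
        # Rough rule of thumb: MaxClients <= (RAM left for Apache) / (avg MB per child).
        # With PHP in the mix, 64 children can easily exceed 512 MB and push the box into swap.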

    Read the article

  • Ubuntu Natty 11.04: turning the wireless switch off switches it off permanently!

    - by ZiGi
    I'm using an HP Pavilion dv2000. I turned the wifi switch off by mistake; the LED turned orange and the wifi got disconnected. Now when I turn the switch back on, the LED stays orange and the wifi still isn't functional. This happened before, and I found a fix that worked by searching Google: it was done via terminal commands and I didn't have to download anything, but I can't find the solution anymore! wlan0 shows up when I use iwconfig:

        :~$ iwconfig
        # BLA BLA BLA
        # ...
        wlan0   IEEE 802.11abg  ESSID:off/any
                Mode:Managed  Access Point: Not-Associated  Tx-Power=off
                Retry long limit:7  RTS thr:off  Fragment thr:off
                Power Management:off

    More results:

        :~$ sudo ifconfig wlan0 up
        SIOCSIFFLAGS: Operation not possible due to RF-kill
        :~$ rfkill list all
        1: phy0: WirelessLAN
                Soft blocked: yes
                Hard blocked: yes
        :~$ sudo rfkill unblock all
        :~$ rfkill list all
        1: phy0: WirelessLAN
                Soft blocked: no
                Hard blocked: yes
        :~$ sudo ifconfig wlan0 up
        SIOCSIFFLAGS: Operation not possible due to RF-kill

    It's still hard blocked, even though the switch is turned on, and it gives the same result either way. A pointer to a page with a working solution would be a much appreciated answer!
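    A hedged suggestion only, not from the original post: on many HP laptops the hardware switch state is reported through the hp-wmi platform driver, and reloading it after putting the physical switch in the "on" position is a commonly suggested thing to try; whether the dv2000 actually uses this driver is an assumption.

        # Reload the HP platform driver so the rfkill hard-block state is re-read, then check again.
        sudo modprobe -r hp-wmi && sudo modprobe hp-wmi
        rfkill list all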

    Read the article

  • How to prevent eMule from jamming up the router?

    - by the searcher
    Usually, some time after eMule is started, I find that the router is jammed, so the internet connection on that computer stops working, or it seems to be waiting for some port to be freed up before it can connect to a website. This sometimes affects even other PCs or Macs using the same router. Is there a way to prevent eMule from hogging too many resources or ports? I see that under Options -> Connection there are "Max Sources/File" and "Connection Limits - Maximum Connections" settings. Right now I have set them to really low numbers: the first to 120 and the second to 200. But what are good numbers to fill in there so that eMule can work well without jamming up the router or using up the network resources of the PC or Mac? Or could it be that the number of files that are "Waiting" is too high and uses up too much resource? (If so, can eMule automatically limit the number to 10 or 20 to prevent using too much resource?) (This happened before on a Linksys router, a Netgear router, and the AT&T U-verse router.)

    Read the article

  • OpenVPN on port 53 bypasses access restrictions (finding similar ports)

    - by user181216
    Wifi scenario: I'm using wifi in a hostel that has a Cyberoam firewall, and all the computers use that access point. The access point has the following configuration: default gateway 192.168.100.1, primary DNS server 192.168.100.1. When I try to open a website, the Cyberoam firewall redirects the page to a login page (with correct login information we can browse the internet, otherwise not), and it also enforces website access and bandwidth limitations. At some point I heard about pd-proxy, which finds an open port and tunnels through it (usually UDP 53). Using pd-proxy with UDP port 53, I can browse the internet without logging in, and even the bandwidth limit is bypassed! There is also other software, OpenVPN: by connecting to an OpenVPN server through UDP port 53, I can browse the internet without even logging in to the Cyberoam. Both pieces of software use port 53, especially OpenVPN with port 53. Now, I have a VPS on which I can install an OpenVPN server and connect through it to browse the internet. I know why this is happening: pinging some website (e.g. google.com) returns its IP address, which means DNS queries are allowed without login. But the problem is that a DNS service is already running on the VPS on port 53, and as far as I can tell port 53 is the only one I can use to bypass the limitations, so I cannot run the OpenVPN service on my VPS on port 53. So how do I scan the wifi network for ports like 53 through which traffic can be tunneled, so that I can figure out the magic port and start an OpenVPN service on the VPS on that same port? (I want to scan for similar tunnel-friendly ports like 53 on the Cyberoam, not scan for services running on ports.) Improvements to the question with retags and edits are always welcome. NOTE: all of this is for educational purposes only; I'm curious about network-related knowledge.

    Read the article

  • Postfix + Exchange + ActiveDirectory; How to mix them

    - by itwb
    My client has many sub-offices and one head office. The head office has a domain name: business.com. All users in the many sub-offices need to have a head office email address: [email protected]. Anyone not in the head office will need their email forwarded to an external email address. All users in the head office will have their email delivered to Microsoft Exchange. Users are listed in Active Directory under two different OUs: HeadOffice or SubOffice. Is this something that can be configured? I've done some googling, but I can't find any examples or businesses set up this way. Edit: Postfix will accept all email and will need to determine whether to forward a message to an external account or have it delivered to MS Exchange. I've done some reading about MS Exchange and learned that you can 'mail-enable' contacts for forwarding, but I don't know whether each AD account requires an Exchange CAL. The end goal is to forward email for sub-office users to external accounts, or accept email for the head office. Maybe I don't need to worry about Postfix to perform this task at all... From http://www.windowsitpro.com/article/exchange-server-2010/exchange-server-licensing-some-of-your-questions-answered: "What about client access licenses (CALs)? You need one CAL per user who will connect to Exchange. Although it might not be 100 percent precise, I prefer to think of it as one CAL per mailbox; there are exceptions for users outside your organization, automated tools that use mailboxes, and so on. Exchange doesn't enforce this limit, so it's on you to ensure that you have the correct number of CALs for the set of clients you support."

    Read the article

  • How do I prevent a tar pipe from causing swapping?

    - by Jeff Shattock
    I have a rather large filesystem that I need to transfer from one Linux server to another. I figured the best way to do this was via a tar/netcat pipe arrangement, something like: tar c . | pv | nc blah blah blah. And it works great, the network stays fairly saturated, life is good... until the source machine starts swapping. The files are on a RAID on the source system, so the read speed is much faster than the write speed on the other end. Since the destination machine hasn't picked up the data yet, the source machine needs to stick it somewhere, so into RAM it goes, until there is no more free RAM. It then starts swapping, which is horribly painful since that machine has its OS installed on a somewhat slow CF card. Both machines have 4 GB of physical RAM and run 64-bit Ubuntu 9.04 server, with a GigE link between them. How do I prevent this swapping? Can I put a "speed limit" on the tar or netcat process so that the transfer speed doesn't overwhelm the write throughput on the destination end? The man pages didn't list anything, but there might be something I'm overlooking.
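    As a hedged aside, pv itself can act as that speed limit in exactly this kind of pipe via its -L rate-limit option; the rate and destination below are placeholders, not values from the original setup:

        # Cap the sender at roughly the destination's sustained write speed
        # (30 MB/s here is only an example figure) so data can't pile up on the source.
        tar c . | pv -L 30m | nc DEST_HOST 9999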

    Read the article
