Search Results

Search found 6311 results on 253 pages for 'limit clause'.

Page 99 of 253

  • Is it possible to do DNS-based ACLs on a Cisco ASA?

    - by pickles
    Short of using static IP addresses, is it possible to have a Cisco ASA use a DNS name rather than an IP address in an ACL? For instance, if I want to limit a host in the DMZ to accessing only one particular web service, but that web service might be globally load balanced, behind DynDNS, or hosted in the cloud, how can the ACL be expressed so that it doesn't pin a fixed IP address and the admin doesn't have to keep opening and closing ACL entries as the service's addresses change?
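
    A minimal configuration sketch, assuming ASA software 8.4(2) or later, where FQDN network objects can be referenced directly in an ACL (the object name, interface names, and addresses below are placeholders):

        ! DNS lookups must be enabled for FQDN objects to resolve
        dns domain-lookup outside
        dns server-group DefaultDNS
         name-server 198.51.100.53

        object network obj-webservice
         fqdn webservice.example.com

        access-list DMZ_IN extended permit tcp host 172.16.1.10 object obj-webservice eq https
        access-group DMZ_IN in interface dmz

    The ASA re-resolves the FQDN periodically based on the record's TTL, so one caveat is that very short TTLs or round-robin records can leave brief windows where the address the ASA holds lags behind what the client resolves.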

    Read the article

  • Apache maximum request number 256?

    - by victor hugo
    I have a very good server running an Apache instance with mod_jk proxying requests to an application server. I'm doing a load test, and although I'm sending over 600 requests, the status page keeps showing: 256 requests currently being processed, 0 idle workers. I'm using the prefork MPM:

        <IfModule prefork.c>
            ServerLimit           2048
            StartServers             5
            MinSpareServers          5
            MaxSpareServers         10
            MaxClients            1000
            MaxRequestsPerChild      0
        </IfModule>

    Is there a compiled-in limit of 256 requests in Apache, or what am I missing?
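
    For what it's worth, prefork's compiled-in default for ServerLimit is 256, and Apache ignores changes to ServerLimit on a restart, so two quick sanity checks are confirming which MPM the running binary actually uses and doing a full stop/start rather than a graceful restart; a diagnostic sketch, with binary and control-script names that vary by distribution:

        # which MPM is this httpd built with or loading?
        httpd -V | grep -i mpm
        httpd -l                      # lists statically compiled modules (e.g. prefork.c)

        # ServerLimit changes are ignored on restart; do a full stop/start
        apachectl stop
        apachectl start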

    Read the article

  • LS command for torrent files

    - by amir-beygi
    Hi all. I have a directory full of torrent files and I have to download all of them, but I have a disk quota on my remote server and the file sizes vary a lot (100MB~8GB); if I add all of the torrent files at once, none of them will finish downloading. So I need a command that lists each torrent and its total size, so I can pick which ones to add to the download list later. NOTE: remote server is Ubuntu 9.10 Linux, accessed over SSH. In other words, I need a command like torrentls that outputs something like:

        file1.torrent 1111MB
        file2.torrent  222MB
        file3.torrent 3333MB
        file4.torrent  444MB
        file5.torrent 5555MB
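
    A rough sketch of one way to get that listing, assuming the transmission-cli package (which provides transmission-show) is available on the server; the exact label it prints for the total size can differ between versions, so the grep pattern may need adjusting:

        #!/bin/sh
        # print each .torrent alongside the payload size reported by transmission-show
        for t in *.torrent; do
            size=$(transmission-show "$t" | grep -i 'total size' | sed 's/.*: *//')
            printf '%-30s %s\n' "$t" "$size"
        done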

    Read the article

  • How do I disable location services system wide?

    - by Daisetsu
    Google has an API that can determine someone's location based on the names of the Wi-Fi routers the user's computer can see. You will notice this if you go to Google Maps and your browser asks whether you would like to share location data. I am wondering if there is any way to disable this with a system-wide setting rather than in each browser separately (Chrome can do this too). Is there any way I can limit which applications can see the list of wireless routers visible to my machine?

    Read the article

  • Are there any Multifunction printer/scanners with duplex and long document scanning?

    - by zimmer62
    Do any of the $200-or-less multifunction printer/scanners support duplex scanning, and possibly long-page scanning? Features I'm looking for in a scanner: duplex scanning (scans both sides), an ADF (document feeder that takes a stack of documents), long-page scanning (legal documents or very long receipts), and good quality for pictures. The printer side isn't as important; in fact, if there were a good scanner that did the above well, I could do without the printer. I suppose $200 isn't a hard limit, just what I'm aiming for.

    Read the article

  • RHEL5: Can't create sparse file bigger than 256GB in tmpfs

    - by John Kugelman
    /var/log/lastlog gets written to when you log in. The size of this file is based off of the largest UID in the system. The larger the maximum UID, the larger this file is. Thankfully it's a sparse file so the size on disk is much smaller than the size ls reports (ls -s reports the size on disk).

    On our system we're authenticating against an Active Directory server, and the UIDs users are assigned end up being really, really large. Like, say, UID 900,000,000 for the first AD user, 900,000,001 for the second, etc. That's strange but should be okay. It results in /var/log/lastlog being huuuuuge, though--once an AD user logs in lastlog shows up as 280GB. Its real size is still small, thankfully.

    This works fine when /var/log/lastlog is stored on the hard drive on an ext3 filesystem. It breaks, however, if lastlog is stored in a tmpfs filesystem. Then it appears that the max file size for any file on the tmpfs is 256GB, so the sessreg program errors out trying to write to lastlog. Where is this 256GB limit coming from, and how can I increase it?

    As a simple test for creating large sparse files I've been doing:

        dd if=/dev/zero of=sparse-file bs=1 count=1 seek=300GB

    I've tried Googling for "tmpfs max file size", "256GB filesystem limit", "linux max file size", things like that. I haven't been able to find much. The only mention of 256GB I can find is that ext3 filesystems with 2KB blocks are limited to 256GB files. But our hard drives are formatted with 4K blocks so that doesn't seem to be it--not to mention this is happening in a tmpfs mounted ON TOP of the hard drive, so the ext3 partition shouldn't be a factor.

    This is all happening on a 64-bit Red Hat Enterprise Linux 5.4 system. Interestingly, on my personal development machine, which is a 32-bit Fedora Core 6 box, I can create 300GB+ files in tmpfs filesystems no problem. On the RHEL 5.4 systems it is no go.
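
    A quick way to reproduce the difference, since tmpfs is typically already mounted at /dev/shm (a sketch; the paths are placeholders, and du shows blocks actually allocated while ls shows the apparent size):

        # create a ~300GB sparse file on tmpfs and on the ext3 volume
        dd if=/dev/zero of=/dev/shm/sparse-test bs=1 count=1 seek=300GB
        dd if=/dev/zero of=/var/tmp/sparse-test bs=1 count=1 seek=300GB

        # apparent size vs. blocks actually used
        ls -lh /dev/shm/sparse-test /var/tmp/sparse-test
        du -sh /dev/shm/sparse-test /var/tmp/sparse-test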

    Read the article

  • Apache keeps crashing because it can't create worker threads

    - by Dina Abu-khader
    Hello. I'm getting a lot of these in our error log: (11)Resource temporarily unavailable: apr_thread_create: unable to create worker thread, and (110)Connection timed out: proxy: HTTP: attempt to connect to 127.0.0.1:80 (*) failed. The worker MPM parameters in httpd.conf are as follows:

        StartServers             8
        ServerLimit            128
        MaxClients            2048
        MinSpareThreads         25
        MaxSpareThreads         75
        ThreadsPerChild         32
        MaxRequestsPerChild  10000

    I have changed the stack size in limits.conf but it's still not helping. Can anyone please help me?
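
    The (11)Resource temporarily unavailable error from apr_thread_create usually points at a per-process or per-user resource limit being hit (on Linux, threads count toward the "max user processes" limit), so it's worth checking the limits the Apache user actually runs under; a diagnostic sketch, assuming the daemon runs as the apache user (adjust the user name and shell as needed):

        # limits as seen by the apache user
        su -s /bin/sh apache -c 'ulimit -u; ulimit -s; ulimit -v'

        # limits of the running parent httpd process, on kernels that expose them
        cat /proc/$(pgrep -o httpd)/limits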

    Read the article

  • Is there a max thread per mongrel?

    - by Blankman
    I don't know much about Ruby, much less what is involved in hosting a Ruby on Rails web app. But I recall hearing someone say that they have to run multiple Mongrels because of a limit of 50 threads? Is this true (or something similar)? Why does it have this limitation?
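
    For context, classic Mongrel handles only one Rails request at a time (Rails of that era wasn't thread-safe, so Mongrel serializes dispatch behind a mutex), which is why people run several Mongrel processes behind a proxy rather than one large multithreaded server. A sketch using the mongrel_cluster gem, with the port range, instance count, and application path as placeholder values:

        # write config/mongrel_cluster.yml for 3 mongrels on ports 8000-8002
        mongrel_rails cluster::configure -e production -p 8000 -N 3 \
            -a 127.0.0.1 -c /var/www/myapp

        # start and stop the whole cluster
        mongrel_rails cluster::start
        mongrel_rails cluster::stop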

    Read the article

  • WiFi N Gigabit Router

    - by SLaks
    I'm looking for a WiFi N Gigabit router. I'll be using an iPod Touch 2G and a 2009 17" MacBook Pro over WiFi, and I need at least 4 gigabit ethernet ports plus a WAN port. What do people recommend? In particular, are there any compatibility issues between different WiFi N drafts that would limit the speed of the MacBook, and is it worth getting dual-band?

    Read the article

  • Only allow ssh connections to a specific domain

    - by Jared
    Hi, I have a server set up with several domains and subdomains. I'd like to limit SSH and SFTP access so a user can only connect to xxx.domain1.com, but I'm not sure where this is configured. Connecting via any other domain/subdomain on the server should be refused. Thanks, J
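
    One thing to keep in mind is that SSH has no equivalent of an HTTP Host header, so sshd never learns which hostname the client typed; the usual workaround is to give xxx.domain1.com its own IP address and bind sshd to that address only. A minimal sketch of the relevant sshd_config directive, with a documentation-range address standing in for the real one:

        # /etc/ssh/sshd_config
        # listen only on the address that xxx.domain1.com resolves to
        ListenAddress 203.0.113.10
        Port 22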

    Read the article

  • Tying down a cloud by virtualizing everything and then locking VMs to real hardware as necessary

    - by tudor
    I'm looking for a cloud software solution that:

    - can run on both server and desktop machines;
    - virtualizes hardware and has the option of exposing each real machine to the cloud;
    - allows a VM to be "locked" to a set of real hardware capabilities and stay there until moved (e.g. a user's "real" desktop);
    - allows a VM to link to some types of devices elsewhere (e.g. USB/serial via ethernet); and
    - is geography-aware, to control movement of VMs between real networks.

    I'm aware that this may be the holy grail of virtualization, and I've searched a lot. Some solutions appear to meet some criteria but not others. Most cloud implementations appear to ignore real hardware, for example. I realise that this may be solved by using three different implementations in combination:

    1. A standard cloud server farm.
    2. A bare-metal network backup utility with PXE boot.
    3. VNC and/or VDI. (VNC obviously would require the real hardware to be running.)

    This combination, however, has some serious drawbacks that I'd like to solve by treating it as one system. My explanation follows...

    I have a network of real servers and desktops in multiple locations. I've virtualized servers before using VirtualBox and that's worked quite well. I've even connected USB devices to VMs on servers. I would like to virtualize the desktops in all my offices to facilitate movement of desktops, remote access (e.g. VDI) and bare-metal backups. However, I know that there are problems with this. For example, some desktops have specific hardware (e.g. 3D graphics cards, USB devices, etc.) that limits their mobility. Geographic constraints also limit movement, in that VMs can be moved easily within offices, but transferring between offices is not always preferable.

    What I would like to find is a system that can virtualize everything from bare metal easily, by maintaining an abstraction layer on each client and server machine that exposes the available hardware and runs as a cloud. Then certain VMs would be "locked" to specific hardware (so that, e.g., the VM runs only on its own desktop). This would be required for situations where speed is important (e.g. 3D graphics pass-through). In addition, abstracted low-speed devices (e.g. USB) could be piped from real hardware to a VM in the cloud. This is important since, if a VM is taken down, another VM can connect to the real hardware for minimum downtime.

    Read the article

  • What can I do to speed up my HTC Hero?

    - by Nick Bolton
    When I first got my HTC Hero phone it was very fast. But after a couple of weeks it's become slow. Typically (on any device), I've noticed that this is caused by having too much running at once... Is there a way I can limit the number of applications that are open? I think there's a task manager application in the market, so I'll try this. Does anyone have any further suggestions?

    Read the article

  • Remote IIS Administration - "Not enough storage available to process this command"

    - by Hainesy
    I'm trying to do remote administration of IIS in C#.NET using the System.Web.Administration tools. Everything works fine on a test server (Windows 2008); however, when I try it against our live server (Windows 2003) it fails with the message: System.Runtime.InteropServices.COMException : Not enough storage is available to process this command. (Exception from HRESULT: 0x80070008). The server itself has plenty of memory free, so I believe this is some kind of memory limit in the RPC layer itself. http://support.microsoft.com/kb/890425 Is there any way around this?

    Read the article

  • What are the rules for port numbers?

    - by Jake
    Hi, I mean the port used for connecting, like the SSH port, the nginx port, etc. I'm not clear on how ports work. So far I have only seen port numbers of no more than 5 characters (port xxxxx). So, when choosing a port number, what are the rules, and what is the limit on its length? Is 5 characters the maximum? Thanks.

    Read the article

  • How to deal with hundreds or thousands of virtual hosts

    - by anton
    I am curious to know how services such as Heroku manage thousands of virtual hosts - i.e. if you create a web site/app and put it up on one of these services, you get your own virtual host name - foo.heroku.com etc. (the same applies to many other sites that have vanity URLs). I know that with various web servers and proxies you can configure as many virtual hosts as you want - but there must be some upper limit to this? Do they programmatically add virtual hosts - perhaps spreading the load? Or are there other solutions?
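
    For a sense of how this can scale without thousands of per-site configuration files, one common pattern is a single wildcard virtual host that maps the requested subdomain onto a directory (or onto a backend lookup); a sketch in nginx syntax, assuming a recent nginx built with PCRE, and with the domain and paths as placeholder values:

        # one server block serves every *.example.com subdomain
        server {
            listen 80;
            server_name ~^(?<app>[^.]+)\.example\.com$;
            root /var/www/apps/$app/public;
        }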

    Read the article

  • Default uTorrent upload limits?

    - by jasondavis
    I am using uTorrent for my torrents, and when I open a new torrent in uTorrent I must manually right-click the item, click Bandwidth Allocation, and set it to High; then I must repeat the first two steps, click Set upload limit, and choose the desired amount, which for me is 1 kB/s. Doing this for every single torrent is very annoying. Is there a way to set these as the default values in uTorrent?

    Read the article

  • 4 GB of DDR2 vs 2 GB of DDR3

    - by metal gear solid
    I'm going to purchase a new PC. Due to my budget limit I can buy either 2 x 2GB = 4GB of DDR2 or a single 2GB stick of DDR3. Will 2GB of DDR3 give almost the same performance as 4GB of DDR2? In the future I will upgrade the RAM to 8GB. Which option would be better for me now, and why?

    Read the article

  • Is there an alternative to Outlook rules?

    - by oo
    I have asked a number of questions about Outlook rules, and no matter how short I make the names and how efficient I make the rules, I ultimately still hit the 32KB limit at about 40 rules. Is there any alternative way to do this job, since Outlook rules just don't seem scalable enough to keep up with the way people have been emailing over the past 10 years?

    Read the article

  • How to run commands when a Mac goes idle and when it resumes

    - by tig
    I want to run a script when my Mac goes idle (for example after 5 minutes, or at screen-saver start time) and another when it resumes from the idle state. I know that I could write a daemon using NSDistributedNotificationCenter with com.apple.screenIsLocked and com.apple.screenIsUnlocked, but I'm hoping there is already a solution that doesn't require writing a new daemon. I need this, for example, to turn Transmission's speed limit on and off (it is sometimes hard to work while it is hashing/downloading at full speed).
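
    If rolling your own does turn out to be necessary, the idle time itself is easy to read from IOKit; a small shell sketch (the 300-second threshold and the echo actions are placeholders - the actions could be whatever toggles Transmission's speed limit):

        #!/bin/sh
        # HIDIdleTime is reported in nanoseconds since the last keyboard/mouse event
        idle_ns=$(ioreg -c IOHIDSystem | awk '/HIDIdleTime/ {print $NF; exit}')
        idle_s=$((idle_ns / 1000000000))

        if [ "$idle_s" -ge 300 ]; then
            echo "idle for ${idle_s}s - enable the speed limit here"
        else
            echo "active - disable the speed limit here"
        fi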

    Read the article
