Search Results

Search found 3240 results on 130 pages for 'groupwise maximum'.


  • What is preventing my computer from going idle?

    - by brianberns
    When I first boot my Windows 7 computer, it will go idle if I stop using it - first the screensaver comes on, then the computer goes to sleep after a certain amount of time. This is the expected behavior. However, after I've used the computer for a while without rebooting (about a day or so), I've noticed that it stops going idle - the screensaver won't come on, and the computer won't sleep, no matter how long it sits unused. I've confirmed via GetLastInputInfo that the idle timer is increasing as expected. However, it looks like something is interfering with the results from CallNtPowerInformation: every 14 or 16 seconds, the TimeRemaining value jumps back up to its maximum value when I query SystemPowerInformation. I've used the Sysinternals Process Monitor to detect any unusual events that might be triggering this reset, but came up empty. Does anyone know exactly what the possible causes of TimeRemaining resetting to its maximum value are? I'm fairly sure it's not my mouse, keyboard, or network sending spurious events, because I've disabled each one and the problem continues to occur. Spurious input would also reset the GetLastInputInfo timer, which is not happening. I'm looking for something that affects SystemPowerInformation TimeRemaining but does not affect GetLastInputInfo. Thanks.
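
    For anyone wanting to reproduce this, here is a minimal, hypothetical Python/ctypes sketch (Windows only) that polls both counters and flags the resets; it assumes the commonly documented layout of the undocumented SYSTEM_POWER_INFORMATION structure:

        # Poll both idle counters and flag the moments where
        # SYSTEM_POWER_INFORMATION.TimeRemaining jumps back up while the
        # last-input timer keeps aging.
        import ctypes, time
        from ctypes import wintypes

        class LASTINPUTINFO(ctypes.Structure):
            _fields_ = [("cbSize", wintypes.UINT), ("dwTime", wintypes.DWORD)]

        class SYSTEM_POWER_INFORMATION(ctypes.Structure):
            _fields_ = [("MaxIdlenessAllowed", wintypes.ULONG),
                        ("Idleness", wintypes.ULONG),
                        ("TimeRemaining", wintypes.ULONG),  # seconds until idle action
                        ("CoolingMode", ctypes.c_ubyte)]

        SystemPowerInformation = 12  # POWER_INFORMATION_LEVEL value
        user32 = ctypes.windll.user32
        kernel32 = ctypes.windll.kernel32
        powrprof = ctypes.windll.powrprof

        prev = None
        while True:
            lii = LASTINPUTINFO(ctypes.sizeof(LASTINPUTINFO))
            user32.GetLastInputInfo(ctypes.byref(lii))
            spi = SYSTEM_POWER_INFORMATION()
            powrprof.CallNtPowerInformation(SystemPowerInformation, None, 0,
                                            ctypes.byref(spi), ctypes.sizeof(spi))
            input_idle_ms = kernel32.GetTickCount() - lii.dwTime
            if prev is not None and spi.TimeRemaining > prev:
                print(f"TimeRemaining reset: {prev} -> {spi.TimeRemaining} "
                      f"(input idle {input_idle_ms} ms)")
            prev = spi.TimeRemaining
            time.sleep(2)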


  • How to Change the Kerberos Default Ticket Lifetime

    - by user40497
    Our KDC servers are running either Ubuntu Dapper (2.6.15-28) or Hardy (2.6.24-19). The Kerberos software is the MIT implementation of Kerberos 5. By default, a Kerberos ticket lasts for 10 hours. However, we'd like to increase it a bit (e.g. to 14 hours) to suit our needs better. I have done the following, but the ticket lifetime still stays at 10 hours:

    1) On all the KDC servers, set the following parameter under [realms] in /etc/krb5kdc/kdc.conf and restarted the KDC daemon:

        max_life = 14h 0m 0s

    2) Via "kadmin", changed the "maxlife" for a test principal via "modprinc -maxlife 14hours ". "getprinc " shows that the maximum ticket life is indeed 14 hours:

        Maximum ticket life: 0 days 14:00:00

    3) On a Kerberos client machine, set the following parameters under [libdefaults], [realms], [domain_realm], and [login] in /etc/krb5.conf (everywhere, basically, since nothing I tried had worked):

        ticket_lifetime = 13hrs
        default_lifetime = 13hrs

    With the above settings, I would expect the ticket lifetime to be capped at 13 hours. When I do "k5start -l 14h -t ", I see that the end time on the "renew until" line is now 14 hours after the starting time:

        Valid starting     Expires            Service principal
        04/13/10 16:42:05  04/14/10 02:42:05  krbtgt/@
            renew until 04/14/10 06:42:03

    "-l 13h" would make the end time on the "renew until" line 13 hours after the starting time. However, the ticket still expires after 10 hours (04/13 16:42:05 - 04/14 02:42:05). Am I not changing the right configuration file(s)/parameter(s), not specifying the right option when obtaining a Kerberos ticket, or something else? Any feedback is greatly appreciated. Thank you!


  • Is this memory compatible with this motherboard???

    - by ClarkeyBoy
    Hi, I have a Foxconn P35AP-S as seen here. I need to get some more RAM, since I only have one 2GB stick. The current one is 1066MHz. I would like to get the memory listed here: www.scan.co.uk/Products/6GB-(3x2GB)-Corsair-XMS3-Classic-DDR3-PC3-10666-(1333)-Non-ECC-Unbuffered-CAS-7-7-7-20-165V - that is, 6GB of Corsair 1333MHz memory. According to the motherboard website it is able to take 1333MHz, but it says oc** next to it (which means "achieved when overclocked"). So my question is: are they still compatible without overclocking, or does the motherboard require overclocking to be compatible? If it requires overclocking (which I have no idea how to do), can anyone recommend any other memory (in the region of 6GB) which the motherboard is compatible with? I'd rather it were from Scan, but to be honest it doesn't need to be. Many thanks in advance. Regards, Richard

    Edit: I just realised that the motherboard has a maximum capacity of 4GB of RAM. Scrap the RAM given above; I'd like to go for something like that but only 4GB.

    Edit: Scrap that last edit - it's only if I go for DDR3 that I need to take this into account. DDR2 has a maximum of 8GB.


  • Windows 7 machine, can't connect remotely until after ping

    - by rjohnston
    I have a Windows 7 (Home Premium) machine that doubles as a media centre and subversion server. There are a couple of problems with this setup when connecting to the server from an XP (SP3) machine. Firstly, the machine won't respond to its machine name until after its IP address has been pinged. Here's an example:

        Microsoft Windows XP [Version 5.1.2600]
        (C) Copyright 1985-2001 Microsoft Corp.

        C:\Documents and Settings\Rob>ping damascus
        Ping request could not find host damascus. Please check the name and try again.

        C:\Documents and Settings\Rob>ping 192.168.1.17
        Pinging 192.168.1.17 with 32 bytes of data:
        Reply from 192.168.1.17: bytes=32 time=2ms TTL=128
        ...
        Ping statistics for 192.168.1.17:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 1ms, Maximum = 2ms, Average = 1ms

        C:\Documents and Settings\Rob>ping damascus
        Pinging damascus [192.168.1.17] with 32 bytes of data:
        Reply from 192.168.1.17: bytes=32 time<1ms TTL=128
        ....
        Ping statistics for 192.168.1.17:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 0ms, Maximum = 1ms, Average = 0ms

        C:\Documents and Settings\Rob>

    Likewise, subversion commands with either the machine name or IP address will fail until the machine's IP address is pinged. Occasionally, the machine won't respond to pings on its IP address; it'll just come back with "Request timed out". The svn server is VisualSVN, if that helps. Any ideas?
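
    To separate name resolution from svn behaviour, here is a small hedged test sketch you could run on the XP client (the hostname and address are the ones from the transcript above):

        # Try resolving the server name before and after pinging its address;
        # if the first lookup fails and the second succeeds, the problem is
        # name resolution caching, not subversion or VisualSVN itself.
        import socket, subprocess, time

        def try_resolve(name="damascus"):
            try:
                print(name, "->", socket.gethostbyname(name))
            except socket.gaierror as err:
                print(name, "-> lookup failed:", err)

        try_resolve()
        subprocess.call(["ping", "-n", "1", "192.168.1.17"])  # Windows ping syntax
        time.sleep(1)
        try_resolve()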


  • Getting 2560x1600 out of an ATI Radeon HD 4670 on Windows 7

    - by Alexey
    Greetings, I've got a Dell Studio XPS 1640 laptop (with an ATI Radeon HD 4670 graphics card) running Windows 7, and I just bought a Dell 3007HPC 30-inch monitor for it. I'm trying to figure out how to get the full 2560x1600 experience out of this setup. Here's what I've done so far:

    - Plugged in using an HDMI cable and an HDMI-to-DVI-D converter on the monitor side.
    - Opened up Screen Resolution. The maximum supported setting is 1920x1080. Tried that (several times) - sometimes it doesn't work at all (blank screen); other times, it only shows the first 1280x800 pixels on the bigger screen.
    - Tried using the Catalyst Control Center - played with various settings there, but couldn't get the screen to show anything interesting.
    - Tried using PowerStrip to set a custom resolution; again, no luck.
    - Spoke to a Dell Preferred Custom Support guy for about an hour before giving up. He remote-accessed my computer and told me that (1) the maximum supported resolution for the XPS 1640 is 1920x1080, and (2) "it seems to be working from where he sees it, must be a connection issue".

    None of this has helped. Does anybody have ideas? Should I be using a different cable setup? Am I using PowerStrip wrong?


  • Win 7: apps crash, then explorer crashes, then services fail, then boom

    - by snorfys
    Periodically, every 2-3 days, one of my systems will go haywire: every app will crash, search will fail via the start menu, and then explorer will fail. Restarting explorer via Task Manager will cause it to fail again, then it'll BSOD and restart. The event log for when this happens goes something like this every time:

        ERROR: Session "ReadyBoot" stopped due to the following error: 0xC0000188 (supposedly not a problem)
        WARNING: The maximum file size for session "ReadyBoot" has been reached... (forget where I found out, but also 'not a problem')
        ERROR: Session "Circular Kernel Context Logger" stopped due to the following error: 0xC0000188 (again, supposedly not a problem)
        WARNING: The maximum file size for session "Circular Kernel Context Logger" has been reached...
        ERROR: Faulting application name: Explorer.EXE, version: 6.1.7600.16450, time stamp:...
        ERROR: Faulting application name: explorer.exe, version: 6.1.7600.16450, time stamp:...
        ERROR: Faulting application name: svchost.exe_iphlpsvc, version: 6.1.7600.16385, time stamp:...
        ERROR: The Service Name service terminated unexpectedly. It has done this 1 time(s)

    That last one happens a number of times, but with a different service name each time. Then finally we have:

        ERROR: The Service Control Manager tried to take a corrective action (Restart the service) after the unexpected termination of the Server service, but this action failed with the following error: An instance of the service is already running.

    After that, I have my BSOD and logs complaining that Windows started up without shutting down. It's a new machine:

        Intel i3 530
        4GB RAM (ran memtest for 4 hrs, no problems)
        320GB WD / 250GB Seagate HDDs (happened on fresh installs on 2 separate HDDs)
        Win7 Pro/Ultimate x64 (wife's copy of Pro, my copy of Ult, no change)
        Fresh install + driver and Windows updates (happened without updates as well)

    I'm at a bit of a loss as to what I can look at next, especially since it'll work like a charm for 2-3 days and then it's hooped for a night (I'm on it now in fact - no problems).


  • Traffic shaping on Linux with HTB: weird results

    - by DADGAD
    I'm trying to set up some simple bandwidth throttling on a Linux server, and I'm running into what seems to be very weird behaviour despite a seemingly trivial config. I want to shape traffic going to a specific client IP (10.41.240.240) to a hard maximum of 75Kbit/s. Here's how I set up the shaping:

        # tc qdisc add dev eth1 root handle 1: htb default 1 r2q 1
        # tc class add dev eth1 parent 1: classid 1:1 htb rate 75Kbit
        # tc class add dev eth1 parent 1:1 classid 1:10 htb rate 75kbit
        # tc filter add dev eth1 parent 1:0 protocol ip prio 1 u32 match ip dst 10.41.240.240 flowid 1:10

    To test, I start a file download over HTTP from the said client machine and measure the resulting speed by looking at KB/s in Firefox. Now, the behaviour is rather puzzling: the download starts at about 10KByte/s and proceeds to pick up speed until it stabilizes at about 75KBytes/s (kilobytes, not kilobits as configured!). Then, if I start several parallel downloads of that very same file, each download stabilizes at about 45KBytes/s; the combined speed of those downloads thus greatly exceeds the configured maximum. Here's what I get when probing tc for debug info:

        [root@kup-gw-02 /]# tc -s qdisc show dev eth1
        qdisc htb 1: r2q 1 default 1 direct_packets_stat 1
         Sent 17475717 bytes 1334 pkt (dropped 0, overlimits 2782 requeues 0)
         rate 0bit 0pps backlog 0b 12p requeues 0

        [root@kup-gw-02 /]# tc -s class show dev eth1
        class htb 1:1 root rate 75000bit ceil 75000bit burst 1608b cburst 1608b
         Sent 14369397 bytes 1124 pkt (dropped 0, overlimits 0 requeues 0)
         rate 577896bit 5pps backlog 0b 0p requeues 0
         lended: 1 borrowed: 0 giants: 1938
         tokens: -205561 ctokens: -205561

        class htb 1:10 parent 1:1 prio 0 rate 75000bit ceil 75000bit burst 1608b cburst 1608b
         Sent 14529077 bytes 1134 pkt (dropped 0, overlimits 0 requeues 0)
         rate 589888bit 5pps backlog 0b 11p requeues 0
         lended: 1123 borrowed: 0 giants: 1938
         tokens: -205561 ctokens: -205561

    What I can't for the life of me understand is this: how come I get a "rate 589888bit 5pps" with a config of "rate 75000bit ceil 75000bit"? Why does the effective rate get so much higher than the configured rate? What am I doing wrong? Why is it behaving the way it is? Please help, I'm stumped. Thanks guys.


  • What does this diagnostic output mean?

    - by ChrisF
    I recently had a fault with my broadband connection. It turned out to be a fault with the ISP's or telco's equipment. My ISP posted this diagnostic, but while I understand it in general, I'd like to know more about the details. I'm assuming that ATM means Asynchronous Transfer Mode and PPP means Point to Point Protocol; it was the latter that my router was indicating as the fault.

        xDSL Status Test Summary
        Sync Status: Circuit In Sync

        General Information
        NTE Status:
        NTE Power Status: Unknown
        Bypass Status:

                         Upstream DSL Link   Downstream DSL Link
        Loop Loss:       9.0                 17.0
        SNR Margin:      25                  15
        Errored Seconds: 0                   0
        HEC Errors:      0
        Cell Count:      0                   0
        Speed:           448                 8128

        TAM Status: Successfully executed operation
        Network Test: Sub-Test Results

        Layer  Name                            Value        Status
        Modem                                               pass
               Transmitter Power (Upstream)    12.4 dBm
               Transmitter Power (Downstream)  8.8 dBm
               Upstream psd                    -38 dBm/Hz
               Downstream psd                  -51 dBm/Hz
        DSL                                                 pass
               Equipment Vendor Name           TSTC
               Equipment Vendor Id             n/a
               Equipment Vendor Revision       n/a
               Training Time                   8 s
               Num Syncs                       1
               Upstream bit rate               448 kbps
               Downstream bit rate             8128 kbps
               Upstream maximum bit rate       1108 kbps
               Downstream maximum bit rate     11744 kbps
               Upstream Attenuation            3.5 dB
               Downstream Attenuation          0.0 dB
               Upstream Noise Margin           20.0 dB
               Downstream Noise Margin         19.0 dB
               Local CRC Errors                0
               Remote CRC Errors               0
               Up Data Path                    interleaved
               Down Data Path                  interleaved
               Standard Used                   G_DMT
        INP    INP Upstream Symbols            n/a
               INP Upstream Delay              4 ms
               INP Upstream Depth              4
               INP Downstream Symbols          n/a
               INP Downstream Delay            5 ms
               INP Downstream Depth            32
        ATM    Reason: No ATM cells received                fail
               Number of cells transmitted     30
               Number of cells received        0
               Number of near end HEC errors   0
               Number of far end HEC errors    n/a
        PPP    Reason: No response from peer                fail
               PAP authentication              not tested
               CHAP authentication             not tested

    (I'm not sure that Super User is the best place to ask this, but two people have suggested I ask it here, so here I am.)


  • HD video editing system with TrueCrypt

    - by Rob
    I'm looking to do hi-def video editing and transcoding on an unencrypted standard partition, with TrueCrypt on the system partition for sensitive data. I'm aiming to keep certain data private but still have performance where needed. Goals:

    - Maximum, unimpacted performance possible for hi-def video editing; encryption of video is not required
    - Encrypt the system partition, using TrueCrypt, for web/email privacy, etc. in the event of loss

    In other words, I want to selectively encrypt the hard drive - i.e. make the system partition encrypted but not impact the original maximum performance that would be available to me for hi-def/HD video editing. The thinking is to use an unencrypted partition for the video and set up the video applications to point at that. Assuming that they use that partition only for their workspace and not the encrypted system partition, I should expect not to see any performance hit. Would I be correct? I guess it might depend on the application - whether the app is hard-wired to use the system partition for temporary storage during editing and transcoding, or whether it has to be installed on the C: system partition at all. So some real data on how various apps behave in this respect would be useful, e.g. Adobe, CyberLink, Nero, etc. I have an Intel i7 quad-core (8 threads) 1.6GHz (up to 2.8GHz with turbo boost), 4GB RAM, 7200rpm SATA, nvidia HP laptop. I've read the excellent posting about the general performance impact of TrueCrypt, but the benchmarks weren't specific enough for my needs, where I'm dealing with HD video and using a non-encrypted partition to maintain max performance.


  • Why does redis report a limit of 1024 files even after updating limits.conf?

    - by esilver
    I see this error at the top of my redis.log file:

        Current maximum open files is 1024. maxclients has been reduced to 4064 to compensate for low ulimit.

    I have followed these steps to the letter (and rebooted). Moreover, I see this when I run ulimit:

        ubuntu@ip-XX-XXX-XXX-XXX:~$ ulimit -n
        65535

    Is this error specious? If not, what other steps do I need to perform? I am running redis 2.8.13 (tip of the tree) on Ubuntu LTS 14.04.1 (again, tip of the tree). Here is the user info:

        ubuntu@ip-XX-XXX-XXX-XXX:~$ ps aux | grep redis
        root      1027  0.0  0.0   66328    2112 ?  Ss  20:30  0:00 sudo -u ubuntu /usr/local/bin/redis-server /etc/redis/redis.conf
        ubuntu    1107 19.2 48.8 7629152 7531552 ?  Sl  20:30  2:21 /usr/local/bin/redis-server *:6379

    The server is therefore running as ubuntu. Here is my limits.conf file without comments:

        ubuntu@ip-XX-XXX-XXX-XXX:~$ cat /etc/security/limits.conf | sed '/^#/d;/^$/d'
        ubuntu soft nofile 65535
        ubuntu hard nofile 65535
        root soft nofile 65535
        root hard nofile 65535

    And here is the output of sysctl fs.file-max:

        ubuntu@ip-XX-XXX-XXX-XXX:~$ sysctl -a | grep fs.file-max
        sysctl: permission denied on key 'fs.protected_hardlinks'
        sysctl: permission denied on key 'fs.protected_symlinks'
        fs.file-max = 1528687
        sysctl: permission denied on key 'kernel.cad_pid'
        sysctl: permission denied on key 'kernel.usermodehelper.bset'
        sysctl: permission denied on key 'kernel.usermodehelper.inheritable'
        sysctl: permission denied on key 'net.ipv4.tcp_fastopen_key'

    As sudo:

        ubuntu@ip-10-102-154-226:~$ sudo sysctl -a | grep fs.file-max
        fs.file-max = 1528687

    Also, I see this error at the top of the redis.log file; I'm not sure if it's related. It makes sense that the ubuntu user isn't allowed to change max open files, but given the high ulimits I have tried to set, it shouldn't need to:

        [1050] 23 Aug 21:00:43.572 # You requested maxclients of 10000 requiring at least 10032 max file descriptors.
        [1050] 23 Aug 21:00:43.572 # Redis can't set maximum open files to 10032 because of OS error: Operation not permitted.
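
    One avenue worth ruling out (an assumption about the setup, not a confirmed diagnosis): /etc/security/limits.conf is applied by pam_limits during login, so a redis-server launched from an init script at boot can still inherit the default 1024 even though interactive shells report 65535. A quick sketch to read what the kernel actually granted the running process:

        # Print the 'Max open files' limit of the live redis-server process,
        # which is what redis sees when it computes maxclients -- this is the
        # daemon's real limit, as opposed to the shell's `ulimit -n`.
        import subprocess

        pid = subprocess.check_output(["pidof", "redis-server"]).split()[0].decode()
        with open(f"/proc/{pid}/limits") as limits:
            for line in limits:
                if line.startswith("Max open files"):
                    print(line.rstrip())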


  • MySQL consuming all system memory on INSERT ... SELECT

    - by siete
    The mysql daemon is getting killed because Linux is running out of memory:

        Oct 24 07:41:23 <hostname> kernel: [82297.673701] Out of memory: kill process 13816 (mysqld) score 1839626 or a child

    There is a link with some workaround on this. It only happens when executing an INSERT ... SELECT query with a very large result set. The MySQLTuner script says that the theoretical maximum memory is less than 8GB, but top and munin show that it is exceeding all available RAM and swap:

        [--] Total buffers: 560.0M global + 72.2M per thread (100 max threads)
        [OK] Maximum possible memory usage: 7.6G (43% of installed RAM)

    I have tried to tune some options, with no results. These are the relevant ones:

        skip-locking
        max_connections = 100
        key_buffer_size = 512M
        max_allowed_packet = 32M
        table_open_cache = 2000
        open_files_limit = 3000
        sort_buffer_size = 16M
        read_buffer_size = 16M
        read_rnd_buffer_size = 8M
        myisam_sort_buffer_size = 64M
        thread_cache_size = 4
        query_cache_size = 16M
        query_cache_limit = 2M
        thread_concurrency = 4
        join_buffer_size = 32M
        tmp_table_size = 32M
        max_heap_table_size = 32M
        query_cache_limit = 8M
        bulk_insert_buffer_size = 64M
        myisam_max_sort_file_size = 50GB
        myisam_mmap_size = 10GB

    And here is a system summary:

        OS: Linux Debian "Squeeze" 6.0.8 (upgraded yesterday)
        RAM: 18GB
        Swap: 18GB
        MySQL: 5.1.72-2 (official Debian release)

    At this moment, upgrading or changing the OS or MySQL version is not possible. Is there any option that can help that I have missed? Sorry for my English, and thank you in advance!

    Edit: I'm only using MyISAM tables, and cannot change to InnoDB.


  • Difference between CurrentClockSpeed and MaxClockSpeed

    - by Ben
    Rationale for why this belongs on Server Fault rather than Stack Overflow: I already have my program which gets the value; I am asking about the value returned and what it means. I have an in-house program which audits our company PCs, and one of the things it checks is the speed of the processor. To do this, it queries the Win32_Processor WMI class and gets the value of CurrentClockSpeed. We were playing with the data today and found an anomaly where some of the speeds were being reported incorrectly (for example, CurrentClockSpeed said 1.0GHz, whereas the CPU name said Intel(R) Core(TM)2 CPU T5600 @ 1.83GHz [confirmed it is in fact 1.83GHz]). I did a bit of digging on the internet and found this blog post which might explain what is going on. My initial thought was to change the program to get the value of MaxClockSpeed instead of CurrentClockSpeed, but Microsoft's documentation doesn't clearly define what this will return. What I mean by that is: will it return the actual maximum speed (say, if the CPU were overclocked) at which the machine would not normally be running, or will it return what I expect, which is the maximum speed under normal (not overclocked) conditions?
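
    For comparing the two values side by side on a suspect machine, here is a hypothetical sketch using the third-party wmi package (pip install wmi) rather than the in-house program's own code:

        # Print both Win32_Processor clock fields for the local machine.
        # Requires the third-party 'wmi' package; Windows only.
        import wmi

        for cpu in wmi.WMI().Win32_Processor():
            print(cpu.Name)
            print(f"  CurrentClockSpeed: {cpu.CurrentClockSpeed} MHz")
            print(f"  MaxClockSpeed:     {cpu.MaxClockSpeed} MHz")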


  • Does IIS Sometimes Allocate More Worker Processes Than Configured?

    - by Paul Williams
    We have an IIS 7.5 web service on Windows Server 2008 that handles WCF requests from C# clients. This service is configured with Maximum Worker Processes = 1, so it is not a web garden. IIS is set up to recycle itself at the same time every day (3 AM). I am trying to debug gnarly connection issues, so I wanted to be sure the application pool was not recycling itself. I configured the pool to log an event when it recycles. To my surprise, I see the following entries in the System event log:

        Level: Information
        Date/Time: 3/23/2012 3:00:00 AM - Source: WAS - Event ID: 5076
        A worker process with process id of '6636' serving application pool 'MyAppPool' has requested a recycle because it reached its scheduled recycle time.

        Level: Information
        Date/Time: 3/23/2012 2:59:39 AM - Source: WAS - Event ID: 5076
        A worker process with process id of '9364' serving application pool 'MyAppPool' has requested a recycle because it reached its scheduled recycle time.

    IIS is correctly recycling the application pool at 3 AM. However, I do not understand why I would get two recycle events in the log within a few seconds of each other. The maximum number of processes is 1. Does IIS sometimes allocate multiple processes for an application pool that is specified as having 1 process?

    -- edit --
    I connected at about 4 PM today and only saw one w3wp.exe process. There are no other event log entries that would indicate a crash.


  • Configuring wsgi for a simple Python based site

    - by jbbarnes
    I have an Ubuntu 10.04 server that already has Apache and wsgi working. I also have a Python script that works just fine using the make_server command:

        if __name__ == '__main__':
            from wsgiref.simple_server import make_server
            srv = make_server('', 8080, display_status)
            srv.serve_forever()

    Now I would like to have the page always active without having to run the script manually. I looked at what Moin is doing. I found these lines in apache2.conf:

        WSGIScriptAlias /wiki /usr/local/share/moin/moin.wsgi
        WSGIDaemonProcess moin user=www-data group=www-data processes=5 threads=10 maximum-requests=1000 umask=0007
        WSGIProcessGroup moin

    And moin.wsgi is as listed:

        import sys, os
        sys.path.insert(0, '/usr/local/share/moin')

        from MoinMoin.web.serving import make_application
        application = make_application(shared=True)

    QUESTION: Can I create a similar section in apache2.conf pointing to another wsgi file? Like this:

        WSGIScriptAlias /status /mypath/status.wsgi
        WSGIDaemonProcess status user=www-data group=www-data processes=5 threads=10 maximum-requests=1000 umask=0007
        WSGIProcessGroup status

    And if so, what is required to convert my simple_server script into a daemonized process? Most of the information I find about wsgi is related to using it with frameworks like Django. I haven't found a simple howto detailing how to make this work. Thanks.
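
    By analogy with moin.wsgi above, here is a plausible sketch of what /mypath/status.wsgi could contain - under mod_wsgi, Apache imports the file and calls whatever is bound to the module-level name application, so the make_server/serve_forever block simply goes away and the WSGIDaemonProcess line does the daemonizing. The module and function names here are assumptions based on the snippet above:

        # status.wsgi -- hypothetical adaptation of the simple_server script.
        # mod_wsgi looks for a module-level callable named 'application';
        # no make_server()/serve_forever() is needed because Apache owns
        # the process lifecycle.
        import sys
        sys.path.insert(0, '/mypath')  # make the existing code importable

        # assumes display_status lives in a module named status_app.py
        from status_app import display_status

        application = display_status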


  • Improve wireless performance

    - by djechelon
    Hello, I have a Trust Speedshare Turbo Pro router, which is running on channel 6. I've found that the wireless signal (and network performance) drops dramatically on my PDA (I can barely attach to the network, even if I set the PDA's energy settings to maximum wireless performance) when I so much as leave my room, and I don't have shielded walls or anything like that. I can't even stream an SD video from my desktop (connected via LAN) to my laptop over WiFi, while via LAN it works fine. I've read that changing the router's channel could improve performance by reducing interference, and I found that almost all wireless networks around here run on channels 6 and 11. I went to my router's settings page to change the channel, but found that the combo box only allows me to select 6! I'm not sure, but I may have been able to change the channel in the past, though not to all of the available channels. A few minutes ago I tried a firmware upgrade, but it didn't solve my problem. My questions are:

    - Is it possible that my router is somehow locked to its channel? I bought it on my own; I didn't receive it from my ISP.
    - Apart from boosting the antenna power to the maximum (which, by the way, increases the EM radiation my family's and my bodies absorb 24/7 and is a little more environment-unfriendly), do you have any tips on getting high-quality transmission up to 5 metres from the antenna?

    Thank you


  • Web Hosting: Any web host that supports more than 50,000 files?

    - by Devner
    Hi all, for my PHP & MySQL based application, I am trying to buy website hosting from a host that does not limit the number of files I carry in my hosting account. Almost all the hosts I've looked at have a common limit of 50,000 files (some call it 50,000 nodes); the rest (to the extent of my search) are not even close. I have gone through the various websites, Googled a lot of information, and spoken with the customer service of the hosting companies, and they said that they have a limit of 50,000 files - that's why they call it the LIMIT. Now, my application is a kind of social networking website where people can upload various files of varying size. So if 50,000 users were to join the website and upload one file each, the limit of 50,000 would be reached very easily, and my 50,001st customer would start facing file upload problems (and so would my account). So I would like to know if there are any website hosting services that do not levy such restrictions. In summary, I need the following:

    - No maximum file limit (more than 50,000 files in the account).
    - No maximum file upload limit in the server settings (10MB, 12MB, 15MB, 20MB, etc.).
    - Ability to upload files of various types (zip, flv, jpg, png, etc.).
    - Ability to stream audio and video (live audio & video not necessary).
    - Access to .htaccess.
    - Access to php.ini, my.cnf or my.ini (this would be a plus).
    - Supports SSL.
    - Provides dedicated hosting (& IP) as well.
    - Monthly payments without contracts are a plus.

    If you know of any such website hosting services, please post a reply (a link to the same will be appreciated). Thank you.


  • Apache suddenly very slow on http and faster on https

    - by hsnm
    Background: I have Apache 2 running on Ubuntu. There is low usage on it, and it is mostly accessed for a web service URL from mobile apps. It was working fine until I installed SSL certificates; I now have both http and https. When I access the server using https, I get a fairly quick response (though probably not as fast as before). When I use http, it's very slow. What I've tried:

    - From this post: I curl localhost from the host itself and it takes some time, meaning there is no routing issue. The server runs on an Amazon EC2 instance and is managed by me only.
    - I see that Apache, once running, creates the maximum number of processes it is allowed to, which was not the case before. I lowered MaxClients to 20 and I think I'm getting faster responses, but it still takes over a minute and I always have the full MaxClients number of Apache processes.
    - dmesg returns many lines like: [ 1953.655703] TCP: Possible SYN flooding on port 80. Sending cookies.
    - When I run netstat I get many entries in SYN_RECV. Possibly a DDoS attack?
    - From EC2's monitoring graphs I see a pattern of high "Maximum Network In (Bytes)" since 2 days ago. By the way, the server is still being tested; the actual traffic is very low and not consistent.
    - I tried to go with this solution to limit incoming connections using iptables - still no luck, but I'm trying.

    Question: What could be the problem? Is this a DDoS attack?
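
    Given the dmesg warning, one quick check (a sketch, not a verdict on whether this is really an attack) is to count half-open connections per peer address: a handful of addresses owning hundreds of SYN_RECV entries looks like a SYN flood, while a broad spread points more at a capacity or configuration problem.

        # Count half-open (SYN-RECV) connections per source address by
        # parsing `ss -n state syn-recv` from iproute2.
        import collections, subprocess

        out = subprocess.check_output(["ss", "-n", "state", "syn-recv"], text=True)
        peers = collections.Counter(
            line.split()[-1].rsplit(":", 1)[0]          # peer addr, port stripped
            for line in out.splitlines()[1:] if line.strip()
        )
        for addr, count in peers.most_common(10):
            print(count, addr)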


  • MySQLTuner and query_cache_size dilemma

    - by wbad
    On a busy MySQL server, MySQLTuner 1.2.0 always recommends increasing query_cache_size, no matter how much I raise the value (I tried up to 512MB). On the other hand, it warns that: "Increasing the query_cache size over 128M may reduce performance". Here are the latest results:

        >> MySQLTuner 1.2.0 - Major Hayden <[email protected]>
        >> Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >> Run with '--help' for additional options and output filtering

        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.5.25-1~dotdeb.0-log
        [OK] Operating on 64-bit architecture

        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in InnoDB tables: 6G (Tables: 195)
        [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
        [!!] Total fragmented tables: 51

        -------- Security Recommendations --------------------------------------------
        [OK] All database users have passwords assigned

        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 1d 19h 17m 8s (254M q [1K qps], 5M conn, TX: 139B, RX: 32B)
        [--] Reads / Writes: 89% / 11%
        [--] Total buffers: 24.2G global + 92.2M per thread (1200 max threads)
        [!!] Maximum possible memory usage: 132.2G (139% of installed RAM)
        [OK] Slow queries: 0% (2K/254M)
        [OK] Highest usage of available connections: 32% (391/1200)
        [OK] Key buffer size / total MyISAM indexes: 128.0M/92.0K
        [OK] Key buffer hit rate: 100.0% (8B cached / 0 reads)
        [OK] Query cache efficiency: 79.9% (181M cached / 226M selects)
        [!!] Query cache prunes per day: 1033203
        [OK] Sorts requiring temporary tables: 0% (341 temp sorts / 4M sorts)
        [OK] Temporary tables created on disk: 14% (760K on disk / 5M total)
        [OK] Thread cache hit rate: 99% (676 created / 5M connections)
        [OK] Table cache hit rate: 22% (1K open / 8K opened)
        [OK] Open file limit used: 0% (49/13K)
        [OK] Table locks acquired immediately: 99% (64M immediate / 64M locks)
        [OK] InnoDB data size / buffer pool: 6.1G/19.5G

        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            Reduce your overall MySQL memory footprint for system stability
            Increasing the query_cache size over 128M may reduce performance
        Variables to adjust:
            *** MySQL's maximum memory usage is dangerously high ***
            *** Add RAM before increasing MySQL buffer variables ***
            query_cache_size (> 192M) [see warning above]

    The server has 76GB of RAM and dual E5-2650s. The load is usually below 2. I'd appreciate your hints on how to interpret the recommendation and optimize the database config.
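
    For what it's worth, the alarming 132.2G line is straightforward arithmetic from the report itself - global buffers plus per-thread buffers times the 1200 allowed connections - so that warning is really about max_connections and the per-thread buffers rather than the query cache. Recomputed from the report's own numbers:

        # MySQLTuner's worst case: global buffers + per-thread buffers * threads.
        global_buffers_gib = 24.2
        per_thread_gib = 92.2 / 1024      # 92.2 MiB per thread
        max_threads = 1200

        worst_case = global_buffers_gib + per_thread_gib * max_threads
        print(f"{worst_case:.1f} GiB")    # ~132.2 GiB vs 76 GiB installed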


  • What does the 'Burst Rate' stat mean in HDTune?

    - by UpTheCreek
    I recently upgraded my laptop's very slow hard drive to a Seagate Momentus 7200. Everything is working fine, but I'm a bit confused by these benchmark results: the burst rate is significantly less than the maximum transfer rate, and not much higher than the normal minimum (if you ignore the spikes). What's going on here? The HDTune website defines Burst Rate as: "...the highest speed (in megabytes per second) at which data can be transferred from the drive interface (IDE or SCSI for example) to the operating system." Which begs some questions: e.g. if this is the highest, then how did the benchmarking tool record the 103MB/sec maximum? And if this really is the true maximum, then where is the bottleneck? The laptop's SATA interface is on an Intel 82801GBM southbridge controller. When I check in Device Manager, I see that its driver is iaStor.sys from 2005. Maybe that's the issue? I'll look for a newer version, but any insights would be appreciated. Thanks


  • What are the advantages of registered memory?

    - by odd parity
    I'm browsing for a few low-end servers for a startup, and I'm a bit confused about the different memory types. The advantage of ECC is clear - single-bit error correction. When it comes to registered memory, things seem more vague, especially in systems that support both registered and unbuffered memory. A Google search mostly finds copies of the Wikipedia article, which states that registered memory chips "...place less electrical load on the memory controller and allow single systems to remain stable with more memory modules than they would have otherwise". However, I can't find any quantification of this. What I'm wondering about is:

    - Is registered memory an improvement over unbuffered when it comes to soft error rate, or is it purely about the maximum number of modules supported?
    - If yes, at what point (number of modules or GB of memory) do these improvements start to become noticeable?

    For a specific example, the HP ProLiant DL120 G6 server manual states that the maximum supported memory configuration is 16GB unbuffered (4x4GB) or 12GB registered (6x2GB). In this case I'd rather have the extra 4GB of memory if the reliability difference is negligible.


  • Which RAM is faster (or, is Crucial's Memory Advisor giving non-optimal advice)?

    - by adpe
    In general, if a PC's motherboard is only specified for RAM up to a given core speed x, will that PC be faster with:

    - RAM of latency y capable of running at a maximum core speed >x, or
    - RAM of latency <y capable of running at a maximum core speed of exactly x?

    I would have thought the latter, but Crucial's Memory Advisor tool advises the former. So, which of us is correct - me, or the machine? Here is a concrete example: I wish to upgrade a Toshiba Satellite Pro L300-155 laptop from its current 1GB of RAM to 2GB of Crucial RAM. The laptop's specifications are given here. I see from those specifications that the laptop is designed for DDR2-667 RAM. Crucial sells two compatible 2GB kits, priced exactly the same as each other: DDR2-667, CL=5; and DDR2-800, CL=6. It seems to me that the first kit would run slightly faster on the L300-155 than the second, because both will presumably be capped at the DDR2-667 core speed (see laptop specs), but the second kit has more latency. However, Crucial's Memory Advisor tool recommends the second kit.
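
    One way to sanity-check the machine's advice is to turn CAS latency into nanoseconds - CAS cycles divided by the memory clock (half the DDR2 data rate). At their rated speeds the two kits come out identical, and if the DDR2-800 part can run CL=5 once it is clocked down to 667 (common for faster-binned modules, but module-specific - an assumption here), the tool would reasonably treat it as no worse:

        # First-word latency in ns = CAS cycles / memory clock.
        # DDR2's memory clock is half the data rate (667 MT/s -> ~333 MHz).
        for name, data_rate_mhz, cas in [("DDR2-667 CL5", 667, 5),
                                         ("DDR2-800 CL6", 800, 6)]:
            clock_mhz = data_rate_mhz / 2.0
            print(f"{name}: {cas / clock_mhz * 1000:.1f} ns")
        # Both print ~15.0 ns.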


  • Throughput and why do ISPs sell too much bandwidth?

    - by jonescb
    I hope the question makes sense the way I've worded it. :) I've been wondering: maximum theoretical TCP throughput is bounded by RWIN/RTT (window size / round-trip time) - Source 1 and Source 2. So if a major city only 100 miles away gives me a ping of 50ms, and I have the default 64KB TCP window size, then my maximum throughput will be about 10.5Mbit/s (one 64KB window per 50ms round trip). Everything further away would give me a higher ping and therefore lower throughput. Is there any reason to buy something like FiOS with a 50Mb/s or greater connection? Will you ever be able to reach that kind of speed? I know you can increase the TCP window size to increase throughput, but it has to be done at both ends, which is a deal-breaker because you can't control the server. I'm assuming other network protocols like UDP aren't quite as affected by latency as TCP is, but how much of overall network traffic does non-TCP make up versus TCP? Am I just misguided about how throughput works? But if the above is correct, then why should a consumer like me buy way more bandwidth than can realistically be used? Maybe the only reason is downloading multiple things at once, or one thing from multiple servers/peers?
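
    Recomputing the ceiling in the question (just the arithmetic, not a claim about any particular ISP):

        # TCP throughput ceiling = window / round-trip time.
        rwin_bytes = 64 * 1024   # default 64 KiB receive window
        rtt_seconds = 0.050      # 50 ms ping

        mbit_per_s = rwin_bytes / rtt_seconds * 8 / 1e6
        print(f"{mbit_per_s:.1f} Mbit/s")   # ~10.5 Mbit/s

    By the same formula, filling a 50Mb/s link at a 50ms RTT needs a window of roughly 312KB (50e6/8 x 0.05), which is where TCP window scaling (RFC 1323, enabled by default in modern stacks) comes in.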


  • Using multiple USB webcams in Linux

    - by rachelderp
    Running more than one USB webcam in Debian/Linux results in the following error:

        libv4l2: error turning on stream: No space left on device
        VIDIOC_STREAMON: No space left on device

    What initially seemed to be a programming issue in OpenCV turned into a quest for a mysterious hardware/software problem after the same errors were produced by running cheese and xawtv. Apparently it's caused by webcams requesting all the available bandwidth on the USB host controller. With that in mind, I decided to run wireshark and capinfos to see just how much bandwidth a single camera used:

        4 megabits per second at 320x240
        14 megabits per second at 640x480
        32 megabits per second at 1920x1080

    Interesting! That might explain why two cameras at 320x240 work but any higher resolution fails. It's as if my USB controller is only operating at USB 1 speeds, yet lsusb shows both webcams belonging to a device which supposedly supports 480 megabits per second. One proposed solution is forcing the webcams to calculate their bandwidth usage instead of requesting their maximum, by running the following commands:

        sudo rmmod uvcvideo
        sudo modprobe uvcvideo quirks=128

    Unfortunately that made no difference, so I decided to try another solution. A post on StackOverflow suggested telling my webcams to use a lower FPS or a compressed video format like MJPEG, but after running v4lctl list it doesn't appear that either of my webcams supports changing their video mode. And that's where I'm stuck. Why would two webcams operating well below the maximum speed of USB 2 produce this error?

    ps: It's not a disk space issue; df displays no change when the webcams are started.
    pps: If it makes a difference, here's the output of lsusb
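
    One interpretation of the numbers above (a hypothesis, not a confirmed diagnosis): the wireshark figures measure compressed frames actually sent, but a UVC driver reserves isochronous bandwidth for the format's worst case, and USB 2.0 only lets periodic transfers claim roughly 80% of the 480 Mbit/s bus. Raw YUYV at an assumed 30 fps looks very different from the measured rates:

        # Back-of-the-envelope raw bandwidth, assuming uncompressed YUYV
        # (2 bytes/pixel) at 30 fps -- the worst case a camera may reserve
        # on the bus even if it actually ships smaller frames.
        for w, h in [(320, 240), (640, 480), (1920, 1080)]:
            mbit_s = w * h * 2 * 30 * 8 / 1e6
            print(f"{w}x{h}: {mbit_s:.0f} Mbit/s reserved")

    Two 640x480 reservations (~295 Mbit/s) already crowd the ~384 Mbit/s periodic budget once protocol overhead is added, which would match two low-resolution cameras working while anything higher fails.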


  • Shell script to block proftpd failed attempts

    - by Saif
    Hello, I want to filter and block failed attempts to access my proftpd server. Here is an example line from the /var/log/secure file:

        Jan 2 18:38:25 server1 proftpd[17847]: spy1.XYZ.com (93.218.93.95[93.218.93.95]) - Maximum login attempts (3) exceeded

    There are several lines like this, and I would like to block any IP that makes two such attempts. Here's the script I'm trying to run to block those IPs:

        tail -1000 /var/log/secure | awk '/proftpd/ && /Maximum login/ {
            if (/attempts/) try[$7]++;
            else try[$11]++;
        }
        END { for (h in try) if (try[h] > 4) print h; }' | while read ip
        do
            /sbin/iptables -L -n | grep $ip > /dev/null
            if [ $? -eq 0 ] ; then
                # echo "already denied ip: [$ip]" ;
                true
            else
                logger -p authpriv.notice "*** Blocking ProFTPD attempt from: $ip"
                /sbin/iptables -I INPUT -s $ip -j DROP
            fi
        done

    How can I select just the IP with awk? With the current script it selects the whole "(93.218.93.95[93.218.93.95])" token, but I only want the IP.
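
    On the awk part of the question: in that log line, $7 is the entire "(93.218.93.95[93.218.93.95])" token, so the address has to be cut out of the field rather than used as one - for example with match($7, /[0-9.]+/) and substr($7, RSTART, RLENGTH). A quick Python sketch to sanity-check the pattern first (sample line taken from the post):

        # Pull the first dotted quad out of a proftpd "Maximum login attempts"
        # line; the same character class works with awk's match()/substr().
        import re

        line = ("Jan 2 18:38:25 server1 proftpd[17847]: spy1.XYZ.com "
                "(93.218.93.95[93.218.93.95]) - Maximum login attempts (3) exceeded")
        m = re.search(r"\((\d{1,3}(?:\.\d{1,3}){3})\[", line)
        if m:
            print(m.group(1))   # -> 93.218.93.95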


  • Is 40+ logons per user on Exchange 2003 normal?

    - by cbsch
    Hello! We've had a problem at work where users sometimes randomly can't connect to Exchange. I found out that it's because they reached the limit of 32 concurrent logons. I increased the maximum allowed connections by adding the key "Maximum Allowed Sessions Per User" in HKLM\SYSTEM\CurrentControlSet\Services\MSExchangeIS\ParametersSystem, but I'm not sure this is a real fix. Looking at the logons, some users have as many as 15 logons with the exact same logon time. I know for sure that Outlook 2007 does this, as I was watching the logons while a user connected with Outlook after a restart of the Exchange service. Every user also has an iPhone connected to Exchange; I don't know if these cause the same thing. Is this normal? Could there be a bug in the software? (The Outlook 2007 installs have nothing configured except the user account - pure vanilla installs.) The users are mobile, and with Outlook generating up to 15 connections every time it connects - and I've read (no sources, sorry) that Outlook doesn't time out connections before 2 hours - I might have to set this number really high to prevent it from being a problem.

