Search Results

Search found 10018 results on 401 pages for 'sample rate'.

  • Can Internet data be used by malware when the PC is off?

    - by Val
    I have noticed over the last month that my off-peak data has been used at a rate of approximately 350 MB per hour. This has meant that I have gone over my quota and been slowed to 256k by my ISP. No one in the house is using the connection at that time (2am-8am is my ISP's off-peak window). My PC and other wireless devices (iPad and iPhone) are turned off. I have changed the wireless password on my modem 3 times and it is now 30 digits long, so I don't think someone else is using my wireless access between 2am and 8am. My ISP has suggested that I may have malware/spyware on my computer. Sorry for my ignorance, but can malware still run if the PC is off? I did look at my modem's log and traced an IP address to Amazon Simple Storage Service (S3). Could this service possibly be the culprit? I am not too tech savvy, so any assistance is appreciated. I have run a barrage of spyware-cleaning software, e.g. Malwarebytes, Spybot, etc. Cheers, Val

  • Notebook display problem (multiplication)

    - by SubniC
    Hi, I'm having some trouble with the display of an LG E500 notebook. For no known reason, the display starts showing the screen divided into eight parts, each part showing the display image as if I had a matrix of eight displays :) I thought it could be some kind of refresh-rate or driver problem, but it happens at boot-up and in the BIOS as well. I completely disassembled the computer yesterday and checked all the wires and connectors, looking for something broken or unconnected, but without luck... You can see a picture here of the problem (sorry for the low quality, but I think it illustrates the problem). EDIT 1: I uploaded a new picture; here you can see the problem better :) There are three horizontal lines visible between the windows. You can see the grey line from the first moment you turn on the computer, and then the duplicated screens show up as if they were arranged over a grid (over the horizontal lines...). I hope that makes sense. Do you know what could be happening, or can you tell me what you would do? Thank you very much for your help.

  • Can a USB/IDE/SATA adapter be flaky?

    - by Ward
    I use USB/IDE/SATA converters a lot, and on the two that I have now I sometimes get errors copying files to drives. It only happens when I'm copying big files to the drive (big can mean as little as 100 MB; I think it happens more often with bigger files, 300 MB or more): the copy will fail and I'll get one or more error messages about "Delayed write failed." But if I disconnect the drive and re-connect it, I'll usually be able to continue. (The file that was being copied will be corrupt, but otherwise the drive is fine.) I just noticed a new type of flakiness: the data transfer rate can vary widely. I copied one set of files (5 x 300 MB) and it took over 10 minutes, then I copied another set (approximately the same sizes) and it took less than a minute. I haven't done systematic testing; the other things I'm doing on my laptop at the same time might have some impact, and I haven't cross-checked the two adapters and the 3 hard drives I'm working with to see if there's a pattern. I'm mainly wondering if anyone else has seen anything like this.
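    A minimal way to make the testing systematic (a hedged sketch for a Linux box or live CD; /mnt/usb is a hypothetical mount point for the drive under test):

      # Time three identical 300 MB writes per adapter/drive combination;
      # conv=fsync makes dd wait until the data actually hits the disk,
      # and dd reports the throughput of each run on completion.
      for i in 1 2 3; do
          sync
          dd if=/dev/zero of=/mnt/usb/testfile bs=1M count=300 conv=fsync
          rm /mnt/usb/testfile
      done

    Repeating this for each adapter/drive pairing would show whether the slow runs follow one adapter, one drive, or neither.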

  • SSD for swap on Ubuntu server

    - by grs
    Currently I am reading SSD reviews and I wonder how much exactly I would benefit if I moved the 24 GB swap from a 7200 rpm HDD to an SSD. Has anyone implemented swap space on an SSD? Is this generally a good idea? On a side note: I read that ext4 performs much better if the journal is on an SSD. Anyone with such a setup? Thanks! Edit: Here I will answer the questions posted: Occasionally, relatively rarely, I am hitting the swap. I know what swap is for and that it is better to get more RAM. When the server begins to swap, its performance degrades (no surprise). The idea is, when a few memory-hungry processes are running, to improve overall system performance at that time by using an SSD for swap instead of slower rotational media. In the end, I want to be able to log in faster and check the server state during swapping, instead of waiting at the login prompt. And from what I see, SSD is cheaper per GB than RAM. Would I get better server performance during swapping (as rare as it is) using an SSD compared to an HDD? Where would 10k or 15k rpm HDDs rate in this scenario? Thank you all for your quick and prompt answers!
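    For reference, moving swap to an SSD needs nothing exotic; a minimal sketch, assuming the SSD swap partition is /dev/sdb2 and the old HDD swap is /dev/sda2 (both device names are hypothetical):

      sudo mkswap /dev/sdb2           # initialize swap on the SSD partition
      sudo swapon -p 10 /dev/sdb2     # higher priority: the kernel uses it first
      sudo swapon -p 5 /dev/sda2      # keep the HDD swap as overflow (optional)
      swapon -s                       # verify both areas and their priorities
      vmstat 1                        # watch the si/so columns while the memory hogs run

    The same layout would go in /etc/fstab (with pri=10 and pri=5 mount options) to survive a reboot.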

  • computer freezes but music continues

    - by Danny
    Recently I have had a problem where my computer freezes completely, but if I happen to be streaming Pandora in a tab, the audio continues playing. If I wait about 2-5 minutes it will eventually come back and start working normally. I also noticed that during the unresponsive period the HDD activity light stays lit the whole time, not flashing. I've run memtest86+ and a diagnostic from Western Digital for my HDD model, and neither reported any errors. The specs for my computer are:

      1 x ASRock H55M/USB3 R2.0 LGA 1156 Intel H55 HDMI USB 3.0 Micro ATX Intel Motherboard
      1 x CORSAIR Enthusiast Series CMPSU-550VX 550W ATX12V V2.2 SLI Ready CrossFire Ready 80 PLUS Certified Active PFC Compatible with Core i7 Power Supply
      1 x Intel Core i3-540 Clarkdale 3.06GHz LGA 1156 73W Dual-Core Desktop Processor Intel HD Graphics BX80616I3540
      1 x G.SKILL 4GB (2 x 2GB) 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800) Dual Channel Kit Desktop Memory Model F3-12800CL9D-4GBNQ
      1 x ASUS PCE-N13 PCI Express 150/300Mbps Transfer/Receive Rate Wireless Adapter
      1 x EVGA 01G-P3-1556-KR GeForce GTX 550 Ti (Fermi) FPB 1GB 192-bit GDDR5 PCI Express 2.0 x16 HDCP Ready SLI Support Video Card

    I can't imagine what would be causing these problems.

  • xrandr fails when 3rd monitor has higher resolution

    - by Pi3cH
    I tried many combinations of the xrandr command under Lubuntu 12.04 to set up my three monitors:

      DVI (DELL 1), left, detected as HDMI1
      HDMI (LG E2290), middle, detected as HDMI2
      VGA (DELL 2), right, detected as VGA1

    I can get a display with a fixed 1280x1024 on all the monitors. But once I set up 1280x1024 + 1920x1080 + 1280x1024, I get a blank screen on all the monitors. Sometimes it throws a CRTC fail error instead of blanking out. Anyone have similar issues? Any solutions/workarounds?
    P.S. I can set up two monitors using 1280x1024 and 1920x1080.
    P.P.S. HDMI2 requires at least 1920x1080 to display a sharp picture.
    Outputs (it seems the graphics card supports up to 8192x8192):

      xrandr
      Screen 0: minimum 320 x 200, current 3840 x 1024, maximum 8192 x 8192
      VGA1 connected 1280x1024+2560+0 (normal left inverted right x axis y axis) 338mm x 270mm
         1280x1024      60.0*+   75.0
         1152x864       75.0
         1024x768       75.1     60.0
         800x600        75.0     60.3
         640x480        75.0     60.0
         720x400        70.1
      HDMI1 connected 1280x1024+0+0 (normal left inverted right x axis y axis) 376mm x 301mm
         1280x1024      60.0*+   75.0
         1152x864       75.0
         1024x768       75.1     60.0
         800x600        75.0     60.3
         640x480        75.0     60.0
         720x400        70.1
      HDMI2 connected 1280x1024+1280+0 (normal left inverted right x axis y axis) 477mm x 268mm
         1920x1080      60.0 +
         1680x1050      60.0
         1280x1024      75.0     60.0*
         1152x864       75.0
         1024x768       75.1     60.0
         800x600        75.0     60.3
         640x480        75.0     60.0
         720x400        70.1
      DP1 disconnected (normal left inverted right x axis y axis)
      DP2 disconnected (normal left inverted right x axis y axis)

    3 CRTCs (0,1 for VGA; 0,1,2 for the HDMI outputs):

      xrandr --verbose
      VGA1 connected 1280x1024+2560+0 (0x47) normal (normal left inverted right x axis y axis) 338mm x 270mm
              Identifier: 0x42
              Timestamp:  51324
              Subpixel:   unknown
              Gamma:      1.0:1.0:1.0
              Brightness: 1.0
              Clones:
              CRTC:       0
              CRTCs:      0 1
      ...
      HDMI1 connected 1280x1024+0+0 (0x47) normal (normal left inverted right x axis y axis) 376mm x 301mm
              Identifier: 0x43
              Timestamp:  51324
              Subpixel:   unknown
              Gamma:      1.0:1.0:1.0
              Brightness: 1.0
              Clones:
              CRTC:       1
              CRTCs:      0 1 2
      ...
      HDMI2 connected 1280x1024+1280+0 (0x47) normal (normal left inverted right x axis y axis) 477mm x 268mm
              Identifier: 0x44
              Timestamp:  51324
              Subpixel:   unknown
              Gamma:      1.0:1.0:1.0
              Brightness: 1.0
              Clones:
              CRTC:       2
              CRTCs:      0 1 2

    This command fails:

      xrandr --output VGA1 --auto --output HDMI2 --auto --left-of VGA1 --output HDMI1 --auto --left-of HDMI2

    This command passes:

      xrandr --output VGA1 --auto --output HDMI2 --mode 1280x1024 --rate 60.0 --left-of VGA1 --output HDMI1 --auto --left-of HDMI2

    Graphics card:

      VGA compatible controller: Intel Corporation Ivy Bridge Graphics Controller (rev 09)
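    One way to narrow this down is to bring the outputs up one at a time instead of in a single call; a hedged sketch based on the layout above (the positions assume HDMI2 runs at 1920x1080 in the middle):

      xrandr --output HDMI1 --mode 1280x1024 --pos 0x0
      xrandr --output HDMI2 --mode 1920x1080 --pos 1280x0
      xrandr --output VGA1  --mode 1280x1024 --pos 3200x0

    Whichever call first triggers the CRTC error points at the output/mode combination the hardware cannot drive alongside the others.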

  • Why is my rsync so slow compared to pure cp or even scp?

    - by nfm
    I'm transferring files from Linux to Windows 7 via a mounted share (the share is mounted on Linux from the Windows machine). I'm copying lots of data (nearly a TB) from the old to the new machine within my LAN, and unfortunately I only have 100 MBit. Naturally I blindly used rsync, but after a day I wondered why it felt so slow. Enabling the progress meter showed me a transfer rate of about 2 MBit/s. So I took a reasonably big file (800 MB) and tracked the transfer timing:

      cp:     05:33
      scp(*): 06:33
      rsync:  21:51

    *) scp via localhost to the same Linux machine, directly onto the share; completely useless, but it provided a progress meter.

    The tests were as simple as (cp|scp|rsync) <source> <destination>, with no special arguments except host/port for scp. I even tried the -W switch for rsync but cancelled after ten minutes. rsync is 3.0.3 running on Lenny. Being able to interrupt the copy process at any time and resume is what led me to rsync, but now I think I seriously need to reconsider this requirement. How is such a big difference possible?
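    For what it's worth, when the destination is a mounted share, rsync runs in local-copy mode: it writes each file to a temporary copy on the destination and renames it, and it checksums everything it moves, both of which hurt on CIFS. A hedged sketch (the mount point is a placeholder):

      rsync -av --whole-file --inplace /source/ /mnt/windows-share/dest/

    --whole-file should already be the default for local-path copies, which may explain why -W changed little; --inplace at least skips the temp-file-plus-rename step on the share.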

  • Software/hardware to build video streaming server?

    - by Sasha Yanovets
    I am looking for a video streaming server solution, something like an online TV server, with the ability to make live broadcasts on the internet. What software could you recommend for that? What kind of hardware should it run on, and is there anything special about it? I am looking for a solution that can scale up to at least 1000 simultaneous users online with good video resolution. A general answer on what direction to choose is welcome, but here are more details on my specific case: we are starting almost from scratch. We have some video content we've produced, but it is not delivered over the internet yet, and we are not tied to any particular vendor for now. We want to stream 24 hours a day: three 8-hour blocks, with a change of content every day, plus the ability to make regular live broadcasts. I guess we will need several streaming-quality options (low ~56 kb/s, mid ~273 kb/s). Some terms are just foreign to me (like play-truncation rate); if you could point out which parameters we should be aware of, that would be great. The uplink to the internet is to be determined; we plan to start with something and scale up along the way. If you already run some kind of media streaming server, just describe its configuration here (hardware, OS, software) and the peak number of concurrent users it serves. I think it could help people approaching this task.
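    As a sketch of the encoding side only (hedged: the input name, URL, and application path are placeholders, and older ffmpeg builds may need different AAC options), pushing one mid-quality rendition to an RTMP streaming server could look like:

      ffmpeg -re -i input_source \
          -c:v libx264 -b:v 273k -c:a aac -b:a 64k \
          -f flv rtmp://streaming.example.com/live/channel1

    One such process per quality level (~56 kb/s, ~273 kb/s) would feed the distribution server, which is the piece that actually has to scale to the 1000 viewers.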

  • iptables drops some packets on port 80 and I don't know the cause

    - by Janning
    Hi, we are running a firewall with iptables on our Debian Lenny system. I'll show only the relevant entries:

      Chain INPUT (policy DROP 0 packets, 0 bytes)
      target   prot opt in  out  source     destination
      ACCEPT   all  --  lo  *    0.0.0.0/0  0.0.0.0/0
      ACCEPT   all  --  *   *    0.0.0.0/0  0.0.0.0/0  state RELATED,ESTABLISHED
      ACCEPT   tcp  --  *   *    0.0.0.0/0  0.0.0.0/0  tcp dpt:80 state NEW

      Chain OUTPUT (policy DROP 0 packets, 0 bytes)
      target   prot opt in  out  source     destination
      ACCEPT   all  --  *   lo   0.0.0.0/0  0.0.0.0/0
      ACCEPT   all  --  *   *    0.0.0.0/0  0.0.0.0/0  state RELATED,ESTABLISHED
      LOGDROP  all  --  *   *    0.0.0.0/0  0.0.0.0/0

    Some packets get dropped each day with log messages like this:

      Feb 5 15:11:02 host1 kernel: [104332.409003] dropped IN= OUT=eth0 SRC= DST= LEN=1420 TOS=0x00 PREC=0x00 TTL=64 ID=18576 DF PROTO=TCP SPT=80 DPT=59327 WINDOW=54 RES=0x00 ACK URGP=0

    (For privacy reasons I replaced the source and destination IP addresses with placeholders.) This is no reason for concern, but I just want to understand what's happening: the web server tries to send a packet to the client, but the firewall somehow concludes that this packet is "UNRELATED" to any prior traffic. I have set the kernel parameter ip_conntrack_max to a value high enough to be sure all connections are tracked by the iptables state module:

      sysctl -w net.ipv4.netfilter.ip_conntrack_max=524288

    What's funny about it is that I get one connection drop roughly every 20 minutes:

      06:34:54 dropped IN=
      06:52:10 dropped IN=
      07:10:48 dropped IN=
      07:30:55 dropped IN=
      07:51:29 dropped IN=
      08:10:47 dropped IN=
      08:31:00 dropped IN=
      08:50:52 dropped IN=
      09:10:50 dropped IN=
      09:30:52 dropped IN=
      09:50:49 dropped IN=
      10:11:00 dropped IN=
      10:30:50 dropped IN=
      10:50:56 dropped IN=
      11:10:53 dropped IN=
      11:31:00 dropped IN=
      11:50:49 dropped IN=
      12:10:49 dropped IN=
      12:30:50 dropped IN=
      12:50:51 dropped IN=
      13:10:49 dropped IN=
      13:30:57 dropped IN=
      13:51:01 dropped IN=
      14:11:12 dropped IN=
      14:31:32 dropped IN=
      14:50:59 dropped IN=
      15:11:02 dropped IN=

    That's from today, but on other days it looks the same (sometimes the rate varies). What might be the reason? Any help is greatly appreciated. Kind regards, Janning
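    A pattern like this often turns out to be late or out-of-window ACKs that conntrack classifies as INVALID rather than ESTABLISHED. A hedged way to test that theory on Lenny (the exact sysctl name varies between ip_conntrack and nf_conntrack builds):

      # treat out-of-window packets as trackable instead of INVALID
      sysctl -w net.ipv4.netfilter.ip_conntrack_tcp_be_liberal=1
      # log INVALID packets explicitly so they stand out from other drops
      iptables -I OUTPUT -m state --state INVALID -j LOG --log-prefix "INVALID-OUT: "

    If the 20-minute drops show up with the INVALID prefix, the firewall is working as designed and it is the rule set, not conntrack sizing, that needs adjusting.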

  • How to recover deleted files on ext3 fs

    - by Mike
    I have a drive which was using the ext3 filesystem. I am told that about 10 GB of data was deleted off the drive (probably via rm). The drive is currently mounted read-only to preserve all data. Does anyone know of a method to restore some or all of the data? If it helps, the OS was Fedora, and I've been told that the data is mostly ASCII Fortran source code and MATLAB files. Conclusion: I finally managed to get the data back, and by the simplest means ever! After weeks of trying and failing to bring back much of any data, I brought someone in today to take a look at it and offer suggestions; he simply cd'd to the directory and everything was there! It was never lost in the first place!!! Needless to say I feel really dumb right now, but I learned quite a lot from this whole fiasco. At any rate, while I was looking through data-forensics solutions, I found that Autopsy, or more specifically The Sleuth Kit, was the most helpful, so I will accept that as the final answer. I would also like to note, for anyone who comes across this later, that the most up-voted (currently) answer by sekenre was also helpful and I learned a lot, but ultimately it did not help with the type (very many, and some very large) of files I was dealing with. So thanks to all of you who provided suggestions, and I wish you all the best!
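    For the record, the Sleuth Kit side of this is short; a minimal sketch, assuming the filesystem is /dev/sdb1 and inode 12345 is one of the deleted entries (both hypothetical):

      fls -rd /dev/sdb1                  # recursively list deleted entries with their inode numbers
      icat /dev/sdb1 12345 > recovered   # dump the contents of that inode to a file

    One caveat: ext3 zeroes a file's block pointers on deletion, so this reliably recovers names but often not contents; content recovery usually means carving with a tool like foremost instead.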

  • What to do about old MacBook

    - by John
    I have a white MacBook running Mac OS X 10.5.8. I accidentally dropped it, and about an inch of the right side of the screen is blacked out; I can't see the clock and the Trash because of it. I checked how much it would cost to repair, and the repair is a flat rate of $300. I thought that was kind of expensive, like a third of the cost of the MacBook itself. Accidentally dropping a notebook is something that happens sooner or later, so I think getting a Lenovo would be best. I ended up getting a 15-inch MacBook Pro, but do you think it's worth the money to fix my old MacBook? I even saw some used MacBooks on eBay for about $400. Anyway, I don't know what to do with it. I do like its screen and keyboard better than the MacBook Pro's, because the screen is less glarey and the keyboard is crisper and more fun to type on, and it's handier when I'm using it while lying down.

  • pdns-recursor allocates resources to non-existing queries

    - by azzid
    I've got a lab server running pdns-recursor. I set it up to experiment with rate limiting, so it has been resolving requests openly from the whole internet for weeks. My idea was that sooner or later it would get abused, giving me a real use case to experiment with. To keep track of usage I set up Nagios to monitor the number of concurrent queries to the server. Today Nagios notified me that my specified limit had been reached. I logged in to start trimming away the malicious queries I was expecting; however, when I started looking, I couldn't see the expected traffic. What I found is that even though the server registers over 20 concurrent queries, I see no requests in the logs. The following commands describe the situation well:

      $ sudo rec_control get concurrent-queries; sudo rec_control top-remotes
      22
      Over last 0 queries:

    How can there be 22 concurrent queries when the server has 0 queries registered? EDIT: Figured it out! To get top-remotes working I needed to set:

      #################################
      # remotes-ringbuffer-entries  maximum number of packets to store statistics for
      #
      remotes-ringbuffer-entries=100000

    It defaults to 0, storing no information on which to base top-remotes statistics.
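    The change takes a daemon restart to apply; a quick verification sketch (the init-script name may differ per install):

      sudo /etc/init.d/pdns-recursor restart
      sudo rec_control get concurrent-queries
      sudo rec_control top-remotes          # should now report over a non-zero query window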

  • BackupExec 12 + RALUS - VERY slow backups

    - by LVDave
    We use Backup Exec 12 and the Remote Agent for Linux/Unix Servers (RALUS) to back up a large RHEL5 system. For various reasons we need to do a daily working-set job. These working-set jobs run abysmally slowly. The link between the target machine and the BE server is gigabit, and any other type of job runs at 1-3 GB/min. The working-set jobs start out at perhaps 40 MB/min, and over the course of the backup the rate slowly drops so low that the job-rate display in "current jobs" goes blank. Since we usually only capture changed files for one day, the job is small, finishes overnight, and we don't worry about the slowness; but we had some issues with the backup server and missed about 6 days of fairly heavy work on the Linux box, so this working-set job will be a doozy. We have support with Symantec, and I've pestered them a lot about this; they've had me run RALUS in debug mode and send them that log plus a VXgather from the BE host, and they had no fix or workaround. To give an idea: the working-set job in question has been running for the last 3.5 hours and has backed up just under 10 MEGAbytes. I'm posting this here to see if anybody in the "real world" has seen this and/or has any idea what might be causing these abysmally slow jobs, since Symantec seems to be clueless.

  • Does having TRIM enabled affect other hard drives on a computer (and how do you know when Windows is using it)?

    - by Breakthrough
    I recently purchased a solid-state drive (an OCZ Vertex 2, 80 GB) to use for my primary operating-system partition, alongside three other SATA hard drives of assorted sizes. I successfully installed Windows 7 Professional onto the SSD (works awesome: great response time and transfer rate) and used the other three HDDs for data storage. I was browsing through the Bible of OCZ SSDs and noticed the following in Section 60-76, Tweaks and TRIM:

      Q. How do I know if TRIM is enabled on my OCZ SSD?
      A. In Windows 7, go to Start, Run, cmd, and type the following:

        fsutil.exe behavior query DisableDeleteNotify

      It should respond with DisableDeleteNotify=0 if TRIM support is ready and active. If it's not, then type:

        fsutil.exe behavior set DisableDeleteNotify 0

    After a bit of searching on Google, I found similar results elsewhere: set DisableDeleteNotify to 0, which makes sense, since for TRIM to work the solid-state drive needs to be notified when deletes occur (for the garbage collector), unlike a normal hard drive. When I run the query on fsutil, I get the following result:

      DisableDeleteNotify = 48

    Following the instructions I found, I set this to 0 instead of 48. However, I am beginning to wonder: is this all the proof I really need that the OS is using TRIM? Also, since this applies globally to the computer, is TRIM data being sent to the other hard drives connected to the computer? And if so, would this cause any degradation in disk performance?

  • What does SQL Server's BACKUPIO wait type mean?

    - by solublefish
    I'm using SQL Server 2008 ("R1"), with some maintenance plans that back up my databases to a network share. Some of my backup jobs show long waits of type BACKUPIO. Of course it looks like an I/O-subsystem limitation, but I'm skeptical: perfmon I/O stats on the production (source) server are well within normal trends for that server, the destination server shows a sustained 7 MB/s write rate (which seems incredibly low, even for a slow disk), and the network link is gigabit Ethernet and nowhere near saturated. The few docs I've turned up about BACKUPIO indicate that it's not specifically a wait on I/O, surprisingly enough. This MSFT doc says it's abnormal unless you're using a tape drive, which I'm not, but it doesn't say (or I don't understand) exactly what resource is missing: http://www.docstoc.com/docs/24580659/Performance-Tuning-in-SQL-Server-2005 And this piece says it's not related to I/O performance at all: http://www.informit.com/articles/article.aspx?p=686168&seqNum=5 "Note that BACKUPIO and IO_AUDIT_MUTEX are not related to IO performance." Anyway, does anyone know what BACKUPIO actually means and/or what I can do to diagnose or eliminate it?
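    One way to pin down what the job actually waits on is to snapshot the cumulative wait stats just before and after a backup run; a hedged sketch (instance name and auth flags are placeholders):

      sqlcmd -S . -E -Q "SELECT wait_type, waiting_tasks_count, wait_time_ms FROM sys.dm_os_wait_stats WHERE wait_type IN ('BACKUPIO','BACKUPBUFFER','ASYNC_IO_COMPLETION')"

    If BACKUPBUFFER grows alongside BACKUPIO, the backup is starved for buffers rather than raw disk throughput, which the BUFFERCOUNT and MAXTRANSFERSIZE options of BACKUP DATABASE can address.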

  • nginx with fail2ban and mod_security

    - by Mahesh
    I forgot to update my fail2ban config for nginx; I just moved to nginx from Apache. Today I got a lot of calls from a single IP. The IP:

      tried to access login pages with POST and GET methods
      tried to use nginx as a proxy (GET http://...)
      searched the images, js, and css folders
      tried to inject -d url_allow_fopen=1 and similar things

    Most of the calls ended in 404. I got approximately 50 requests per second from that IP, so I updated my nginx config like this:

      http {
          limit_req_zone $binary_remote_addr zone=app:10m rate=5r/s;
          ...
          server {
              ...
              location / {
                  limit_req zone=app burst=50;
              }

    Will that avoid too many connections per second now? I have updated my fail2ban jail.local to support nginx, but I am confused by nginx-noscript.conf:

      [Definition]
      failregex = ^<HOST> -.*GET.*(\.php|\.asp|\.exe|\.pl|\.cgi|\scgi)
      ignoreregex =

    I am serving PHP with nginx. I checked Apache's noscript.conf, which has the .php extension in it too. I tested the above settings before restarting fail2ban and got thousands of IPs matched; I removed php and nothing matched. Do I need \.php| in nginx-noscript.conf? Does using mod_security and fail2ban together cause any problems? While searching today I learned that mod_security is available for nginx too, so I am planning to use it as well.
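    Testing the filter offline before enabling it avoids banning legitimate visitors; a sketch, assuming standard paths:

      fail2ban-regex /var/log/nginx/access.log /etc/fail2ban/filter.d/nginx-noscript.conf

    Since the site legitimately serves .php URLs, keeping \.php in that failregex will match ordinary traffic (hence the thousands of matched IPs), so the extension list should contain only things the server never serves.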

  • Frequency of RAM

    - by Vignesh Palani
    I have a very old system, an HP Compaq dx2080, which had 1 GB of RAM. I recently bought an EVM DDR2 1 GB RAM module with a clock rate of 667 MHz. I dual-boot Windows 7 and 8. When I installed the module, Windows 7 was still using the older 1 GB: System Properties showed 2 GB available and 1 GB usable. I searched around, found that I can raise the maximum in msconfig, and set it to 2048; still, it was using only 1 GB. When I switched to Windows 8, it was using the 2 GB. Now, for my question: my system only supports 533 MHz and 667 MHz RAM, yet in the BIOS the new module shows as 800 MHz. I rechecked using Speccy and CPU-Z, and they showed different values from each other. The RAM is labeled 667 MHz, no mistake in that. Am I missing something? Please help. And can I continue using it? My point again: there are only two slots.

  • Apache https is slow

    - by raucous12
    Hey, I've set Apache up to use SSL with a self-signed certificate. With HTTPS and KeepAlive on, I can get over 3000 requests per second; with HTTPS and KeepAlive off, I can only get 13 requests per second. I know there is supposed to be a bit of overhead, but this seems abnormal. Can anyone suggest how I might go about debugging this? Here is the ab log for HTTPS:

      Server Software:        Apache/2.2.3
      Server Hostname:        127.0.0.1
      Server Port:            443
      SSL/TLS Protocol:       TLSv1/SSLv3,DHE-RSA-AES256-SHA,4096,256
      Document Path:          /hello.html
      Document Length:        29 bytes
      Concurrency Level:      5
      Time taken for tests:   30.49425 seconds
      Complete requests:      411
      Failed requests:        0
      Write errors:           0
      Total transferred:      119601 bytes
      HTML transferred:       11919 bytes
      Requests per second:    13.68 [#/sec] (mean)
      Time per request:       365.565 [ms] (mean)
      Time per request:       73.113 [ms] (mean, across all concurrent requests)
      Transfer rate:          3.86 [Kbytes/sec] received

      Connection Times (ms)
                    min  mean[+/-sd] median   max
      Connect:      190  347   74.3    333    716
      Processing:     0   14   24.0      1    166
      Waiting:        0   11   21.6      0    165
      Total:        191  361   80.8    345    716

      Percentage of the requests served within a certain time (ms)
        50%    345
        66%    377
        75%    408
        80%    421
        90%    468
        95%    521
        98%    578
        99%    596
       100%    716 (longest request)
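    With KeepAlive off, every one of those requests pays a full DHE-RSA handshake against a 4096-bit key, which is CPU-expensive; the ~347 ms mean connect time in the log points the same way. A quick way to confirm the handshake is the bottleneck (both modes ship with OpenSSL):

      openssl s_time -connect 127.0.0.1:443 -new   -time 10   # full handshake per connection
      openssl s_time -connect 127.0.0.1:443 -reuse -time 10   # resumed sessions

    If the -reuse figure is dramatically higher, enabling SSL session caching in Apache (and/or a smaller key or non-DHE cipher) is where to look.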

  • How can I prevent a DDOS attack on Amazon EC2?

    - by cwd
    One of the servers I use is hosted on the Amazon EC2 cloud. Every few months we appear to have a DDoS attack on this server, which slows it down incredibly. After around 30 minutes, and sometimes a reboot later, everything is back to normal. Amazon has security groups and a firewall, but what else should I have in place on an EC2 server to mitigate or prevent an attack? From similar questions I've learned to:

      limit the rate of requests per minute (or second) from a particular IP address via something like iptables (or maybe UFW?) - see the sketch below;
      have enough resources to survive such an attack, or build the web application so it is elastic, sits behind an elastic load balancer, and can quickly scale up to meet such high demand;
      if using MySQL, set up connections so that they run sequentially, so slow queries won't bog down the system.

    What else am I missing? I would love information about specific tools and configuration options (again, using Linux here), and/or anything that is specific to Amazon EC2. P.S.: Notes about monitoring for DDoS would also be welcome - perhaps with Nagios? ;)
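    As a concrete example of the first point, a hedged per-IP rate-limit sketch with iptables' recent match (the thresholds are placeholders to tune):

      iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --set --name HTTP
      iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --update --seconds 60 --hitcount 20 --name HTTP -j DROP

    This drops new connections from any source that opens more than 20 in a minute, while established traffic continues unaffected.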

  • Watchguard Firebox "split" fibre optic line into 2 interfaces

    - by fRAiLtY-
    We have a requirement on our Watchguard Firebox XTM505 to split our incoming external interface, in this case a dedicated 100/100 fibre-optic leased line. We use the line in our office of approximately 30 machines, but we also resell it to an external company who use it to provide wireless internet solutions to the public. The current infrastructure is as follows: data comes in on the leased line to a Juniper SRX210 managed by the ISP, then one cable runs into an unmanaged Netgear switch, from which one cable goes to our firewall and office network and one to our external provider's core router (managed by them). We have been informed that the unmanaged switch in that position poses a security risk, and that a good option would be to have our Watchguard firewall perform the split: separating our office onto a trusted interface and "passing through" the external line to their managed router. The Watchguard is allegedly capable of doing this and also of rate-limiting the interfaces, i.e. 20 Mbps for the trusted interface and 80 Mbps for the pass-through; however, Watchguard technical support don't seem to understand what we're trying to achieve. Can anyone advise whether this is possible on a Watchguard device, and how? Or is there perhaps a better way of achieving this, maybe with a managed switch instead of the unmanaged one? Cheers

  • Ubuntu 11.04 fresh install - "Input signal out of range" or "Mode not supported..."

    - by Dennis
    I recently installed Ubuntu 11.04 from a CD .iso, and installation went fine. Upon completion I rebooted, and after a second or two I got a black screen with the message "Input signal out of range". And there it sits... I've read a few things about how this could be related to screen resolution, refresh rate, etc. For the heck of it I tried a different monitor. The result is the same, but the message provides some clues: "Mode not supported - H:92.7kHz, V:58.3Hz" (the latter is indeed Hz, not kHz). So my thought is that I should be able to boot the 11.04 install disc in "Try out Ubuntu" mode, then find and edit whatever file the install created, putting in the correct values. The problem is, I am not too sure what I am supposed to edit. I looked at the xorg.conf file, but it is so minimal at this point that I am not sure it is where I want to go. By the way, the monitor is an I-Inc iX191a. Anyone have any ideas on how to get around this?
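    One hedged approach from the "Try out Ubuntu" session: mount the installed root partition and give X explicit sync ranges so it stops picking a mode the monitor rejects. The partition name and the ranges below are assumptions (typical for a 19" 1280x1024 panel; the iX191a manual has the real values):

      sudo mount /dev/sda1 /mnt         # installed root partition (assumption)
      sudo nano /mnt/etc/X11/xorg.conf  # add the sections below, then reboot

      Section "Monitor"
          Identifier  "Monitor0"
          HorizSync   30.0 - 81.0
          VertRefresh 56.0 - 75.0
      EndSection
      Section "Screen"
          Identifier "Screen0"
          Monitor    "Monitor0"
          SubSection "Display"
              Modes "1280x1024"
          EndSubSection
      EndSection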

  • Linux iptables / conntrack performance issue

    - by tim
    I have a test setup in the lab with 4 machines:

      2 old P4 machines (t1, t2)
      1 Xeon 5420 DP 2.5 GHz, 8 GB RAM (t3), Intel e1000
      1 Xeon 5420 DP 2.5 GHz, 8 GB RAM (t4), Intel e1000

    to test Linux firewall performance, since we were bitten by a number of SYN-flood attacks in the last months. All machines run Ubuntu 12.04 64-bit. t1, t2, and t3 are interconnected through a 1 GB/s switch; t4 is connected to t3 via an extra interface. So t3 simulates the firewall, t4 is the target, and t1 and t2 play the attackers, generating a packet storm (192.168.4.199 is t4):

      hping3 -I eth1 --rand-source --syn --flood 192.168.4.199 -p 80

    t4 drops all incoming packets to avoid confusion with gateways, performance issues of t4, etc. I watch the packet stats in iptraf. I have configured the firewall (t3) as follows: stock 3.2.0-31-generic #50-Ubuntu SMP kernel, rhash_entries=33554432 as a kernel parameter, and sysctl settings:

      net.ipv4.ip_forward = 1
      net.ipv4.route.gc_elasticity = 2
      net.ipv4.route.gc_timeout = 1
      net.ipv4.route.gc_interval = 5
      net.ipv4.route.gc_min_interval_ms = 500
      net.ipv4.route.gc_thresh = 2000000
      net.ipv4.route.max_size = 20000000

    (I have tweaked a lot to keep t3 running while t1+t2 send as many packets as possible.) The results of this effort are somewhat odd:

      t1+t2 manage to send about 200k packets/s each; t4 in the best case sees around 200k packets/s in total, so half of the packets are lost.
      t3 is nearly unusable on the console while packets are flowing through it (high numbers of soft IRQs).
      the route-cache garbage collector is nowhere near predictable, and in the default settings it is overwhelmed by very few packets/s (<50k packets/s).
      activating stateful iptables rules makes the packet rate arriving at t4 drop to around 100k packets/s, effectively losing more than 75% of the packets.

    And this - here is my main concern - with two old P4 machines sending as many packets as they can, which means nearly anyone on the net should be capable of this. So here goes my question: did I overlook some important point in the config or in my test setup? Are there any alternatives for building a firewall system, especially on SMP systems?
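    Two hedged knobs that usually dominate in this scenario: take the flood path out of connection tracking entirely, and size the conntrack table and hash for whatever remains tracked (192.168.4.199 is t4 from the setup above; the numbers are examples):

      # the raw table runs before conntrack, so these flows cost no state at all
      iptables -t raw -A PREROUTING -d 192.168.4.199 -p tcp --dport 80 -j NOTRACK
      iptables -t raw -A PREROUTING -s 192.168.4.199 -p tcp --sport 80 -j NOTRACK
      # for the remaining traffic, a bigger table plus a matching hash size
      sysctl -w net.netfilter.nf_conntrack_max=1048576
      echo 262144 > /sys/module/nf_conntrack/parameters/hashsize

    With --rand-source floods, each spoofed source otherwise costs a conntrack entry, so exempting the flood target is usually worth far more than GC tuning.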

  • Splitting an HTTP request into multiple byte-range requests

    - by redpola
    I have arrived at the unusual situation of having two completely independent internet connections to my home. This has the advantage of redundancy, etc., but the drawback is that each connection maxes out at about 6 Mb/s. Each individual outbound HTTP request is directed by my "intelligent gateway" (TP-LINK ER6120) out over one or the other connection for its lifetime. This works fine for complex web pages and utilises both external connections well. However, a single-HTTP-request download is limited to the maximum rate of one of the two connections. So I'm thinking: surely I can set up some kind of proxy server to direct all my HTTP requests to. For each incoming HTTP request, the proxy server would issue multiple byte-range requests for the desired data and manage the reassembly and delivery of that data to the client. I can see this has some overhead, and also some edge cases where there will be blocking problems waiting for data. I also imagine webmasters of single servers would rather I didn't hit them with 8 byte-range requests instead of one. How can I achieve this HTTP-request deconstruction/reconstruction? Or am I just barking mad?
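    The core trick is easy to prototype with curl before building a full proxy; a minimal sketch, assuming a roughly 1 GB file whose two halves go out over the two uplinks (the URL is a placeholder):

      url=http://example.com/big.iso
      curl -s -r 0-524287999 -o part1 "$url" &   # first half, byte range 0..499 MiB
      curl -s -r 524288000-  -o part2 "$url" &   # open-ended range for the rest
      wait
      cat part1 part2 > big.iso

    Segmented download managers (aria2c with its -s/-x options, for instance) already implement exactly this, which may be a shortcut to writing a transparent proxy of your own.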

  • Small maximum number of connections on a Linux router

    - by Eugene
    I have a Linux box acting as a router, with no iptables or other firewall and no networking applications running on it, just pure routing. I've put it in a test environment that generates many TCP connections, each having a unique source and destination IP, and those connections go through this router. I'm observing that the number of connections successfully created rises to approximately 500, and then no more connections can be created for several minutes; then another 100 connections can be created, there is another pause, and so on. If 10 connections are created for each source-destination pair, the maximum goes up about 10 times, so the problem is probably tied to the number of distinct IPs. As the traffic is simply routed, this shouldn't involve the number of file descriptors, iptables connection tracking, or the other things often proposed in similar cases. The box has plenty of free RAM and CPU, and both NICs are gigabit. The kernel is 2.6.32. I've already tried increasing net.core.*mem_max, net.core.netdev_max_backlog, and txqueuelen on both NICs, with no effect at all. What else should I check? Is there some rate limit in the kernel itself?
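    One kernel limit that matches this burst-then-stall pattern is the ARP/neighbour table: with many unique directly reachable IPs, its default ceiling (gc_thresh3=1024) fills up and new flows stall until garbage collection runs. A hedged sketch to test that theory:

      dmesg | grep -i 'neighbour table overflow'     # confirms the diagnosis if present
      sysctl -w net.ipv4.neigh.default.gc_thresh1=4096
      sysctl -w net.ipv4.neigh.default.gc_thresh2=8192
      sysctl -w net.ipv4.neigh.default.gc_thresh3=16384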

  • What are the advantages of registered memory?

    - by odd parity
    I'm browsing for a few low-end servers for a startup, and I'm a bit confused about the different memory types. The advantage of ECC is clear: single-bit error correction. When it comes to registered memory it seems more vague, especially in systems that support both registered and unbuffered memory. A Google search mostly finds copies of the Wikipedia article, which states that registered memory chips "...place less electrical load on the memory controller and allow single systems to remain stable with more memory modules than they would have otherwise". However, I can't find any quantification of this. What I'm wondering about is:

      Is registered memory an improvement over unbuffered when it comes to soft error rate, or is it purely about the maximum number of modules supported?
      If yes, at what point (number of modules or GB of memory) do these improvements start to become noticeable?

    For a specific example, the HP ProLiant DL120 G6 server manual states that the maximum supported memory configuration is 16 GB unbuffered (4 x 4 GB) or 12 GB registered (6 x 2 GB). In this case I'd rather have the extra 4 GB of memory if the reliability difference is negligible.
