Search Results

Search found 3911 results on 157 pages for 'research papers'.

Page 117/157

  • The RTL8111/8168B NIC under Linux and the r8168 driver

    - by nik
    So I've got one of the infamous Realtek RTL8168 ethernet NICs, which has some problems under Linux. After some research, I found out I had to use the r8168 driver for this card (and not the r8169 driver, which still loads when nothing else is available), which I did. So now everything works fine... sort of. My download and upload rates are less than half of what I should get. When I test (with e.g. speedtest) I get something like 20M (often 15M) down and 30M up, but if I test under Windows (everything is otherwise identical: same ethernet cable, same connection, at the same time of day (well, 5 min apart)...), I get 50M up/down, which is what I expect. Where can it come from? Here's some info:

        ~ # lspci
        [...]
        06:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 06)

        ~ # modinfo r8168
        filename:       /lib/modules/3.2.1-gentoo-r2/net/r8168.ko
        version:        8.027.00-NAPI
        license:        GPL
        description:    RealTek RTL-8168 Gigabit Ethernet driver
        author:         Realtek and the Linux r8168 crew <[email protected]>
        srcversion:     0A6E9F1D4E8E51DE4B6BEE3
        alias:          pci:v00001186d00004300sv00001186sd00004B10bc*sc*i*
        alias:          pci:v000010ECd00008168sv*sd*bc*sc*i*
        depends:
        vermagic:       3.2.1-gentoo-r2 SMP mod_unload
        [...]

        ~ # mii-tool -v
        eth0: negotiated 100baseTx-HD, link ok
          product info: vendor 00:07:32, model 17 rev 4
          basic mode:   autonegotiation enabled
          basic status: autonegotiation complete, link ok
          capabilities: 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD
          advertising:  100baseTx-HD 10baseT-FD 10baseT-HD flow-control
          link partner: 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD
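
    One detail worth checking in the output above: mii-tool predates gigabit ethernet and often misreads gigabit PHYs, but as printed it claims the link negotiated 100baseTx-HD, which by itself would cap throughput well below the expected rates. A minimal cross-check, assuming ethtool is installed:

        # Confirm what the PHY actually negotiated; ethtool understands gigabit
        ~ # ethtool eth0 | grep -E 'Speed|Duplex|Auto-negotiation'

        # If it really is stuck at 100 Mbit half duplex, restart autonegotiation
        ~ # ethtool -r eth0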

  • Use one NIC to create multiple interfaces for Linux KVM

    - by Phanto
    I am working on a thesis research project, and I am having some difficulty figuring out how to make one NIC spawn several "bridge" interfaces such that each KVM VM can be seen on the local network. I am very new to KVM and am still exploring what it can do. Below is the scenario that I am attempting to build (on a CentOS/RHEL 6 system):

    The Linux KVM host has 1 NIC (eth0) connected to a switch. I want to create multiple "bridge" or equivalent interfaces, spawned off of eth0, that would provide a unique IP for each VM. This is so that each VM can communicate with other hosts on the network, and other hosts on the network can communicate with the VM. IMPORTANT: I would like iptables on the KVM host to be able to manipulate/control/restrict the traffic that would be sent on those "bridge" interfaces. I would like to create a minimum of three VMs, each using its own unique "bridge" interface.

    I have previously made a br0 interface off of eth0, but unfortunately I am unable to add any more to it. It appears that you can only bridge one interface to the NIC, and I would like to bridge many to one. Would a tap device be able to do this? If so, how would it be set up? Effectively, I am attempting to replicate what can easily be created with VirtualBox on Windows, where each VM is given a "bridged" interface and can live on the network. I want to achieve this very same thing with Linux KVM. Thank you.

    EDIT: To be more descriptive, I want to achieve the layout shown on this page: http://en.gentoo-wiki.com/wiki/KVM#Networking_2

        LAN --- eth0 --- br0 (HOST, 192.168.1.12)
                          |--- tap0 --- nic0 on KVM GUEST1 (192.168.1.13)
                          `--- tap1 --- nic0 on KVM GUEST2 (192.168.1.14)
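
    A minimal sketch of that layout, assuming bridge-utils and tunctl are available on CentOS 6 (the qemu user and tap names are placeholders):

        brctl addbr br0
        brctl addif br0 eth0                 # uplink; br0, not eth0, carries the host IP
        tunctl -t tap0 -u qemu               # one persistent tap per guest
        tunctl -t tap1 -u qemu
        brctl addif br0 tap0
        brctl addif br0 tap1
        ip link set br0 up; ip link set tap0 up; ip link set tap1 up
        # let iptables see bridged frames, then filter per guest on its tap:
        sysctl -w net.bridge.bridge-nf-call-iptables=1
        iptables -A FORWARD -m physdev --physdev-in tap0 -j DROP

    The taps all share one bridge, so each guest gets its own interface (and its own iptables hook via the physdev match) without needing one bridge per VM.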

  • How to set up tunneling of specific domains through another server

    - by Peter Smit
    I am working at a university as a research assistant. Often I would like to connect from home to university resources over http or ssh, but they are blocked from outside access. Therefore, there is a front-end ssh server we can ssh into, and from there on to other hosts. For http access they advise setting up an ssh tunnel like this:

        ssh -L 1234:proxyserver.university.fi:8080 publicsshserver.university.fi

    and pointing the proxy settings of your browser at port 1234. All nice and working, but I would not like to send all my other internet traffic over this proxy server, and every time I want to connect to the university I have to do these steps again. What would I like?

    - An ssh tunnel set up every time I log in to my computer. I have a certificate, so no passwords are needed.
    - A way to always redirect some wildcard domains through the ssh server first, so that when I type intra.university.fi in my browser, the request transparently goes through the tunnel. The same when I want to ssh into another resource within the university.

    Is this possible? For the http part I think maybe I should set up my own local transparent proxy to have this easily done. How about the ssh part?
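
    One possible sketch for the ssh part, using only OpenSSH's client config (~/.ssh/config); the hostnames are the ones from the question, and the SOCKS port is arbitrary:

        # DynamicForward gives the browser a SOCKS proxy instead of one fixed -L port
        Host publicsshserver.university.fi
            DynamicForward 1234

        # any later "ssh host.university.fi" hops through the front end automatically;
        # the negated pattern keeps the front end itself from recursing
        Host !publicsshserver.university.fi *.university.fi
            ProxyCommand ssh -W %h:%p publicsshserver.university.fi

    The browser can then be limited to university domains with a proxy auto-config (PAC) file that returns SOCKS5 127.0.0.1:1234 only for *.university.fi and DIRECT for everything else.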

  • Finding ALL currently used IP addresses of Website

    - by Patrick R
    What steps would you take to discover all (or close to all) IP addresses currently used by a website? How would you be as exhaustive as possible without calling a website admin and asking for the list of IP addresses? ;)

    nslookup works, but results will vary based on the DNS server queried. whois is another good tool. dig: not bad. Let's use Facebook as the example. I'm blocking that site for the majority of our company's users, but some are approved for "research". I cannot easily use OpenDNS because we all appear to come from the same request IP address; I could change that, but I don't want to add more VLANs than I already have. I could also block with something like

        regex facebook1 "facebook\.com"

    (I'm running a Cisco firewall), but that's pretty easy to sidestep. All that being said, I'm asking specifically about finding the IP addresses for a domain, and not about other methods by which I can block a domain name.
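
    A sketch of how far the DNS tools alone can be pushed, plus a routing-registry query that enumerates a site's announced netblocks (the resolver IPs are arbitrary; AS32934 is Facebook's ASN):

        # Ask several resolvers, since large sites answer differently per query
        for ns in 8.8.8.8 208.67.222.222; do
            dig +short facebook.com @$ns
            dig +short www.facebook.com @$ns
        done | sort -u

        # Prefixes registered to the site's ASN - closer to "all" than DNS alone
        whois -h whois.radb.net -- '-i origin AS32934' | grep '^route:'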

  • Dynamic ARP Entries turning into Static ARP entries

    - by Zach
    I recently acquired a client that has a strange ARP caching issue on one of their servers. The server will eventually start turning its dynamic ARP entries into static ARP entries. This causes problems, because when a machine that has a static ARP entry on this server receives a new IP via DHCP, the server is no longer able to communicate with it. Clearing the ARP cache resolves the issue, and the server is fine for about a week; then it slowly starts turning ARP entries into static ARP entries again. I haven't narrowed down when it starts or how quickly it progresses, but slowly you start seeing 1 static ARP entry, then 5, then 10.

    The server in question is Windows Server 2003 SP2. It is a DC, DHCP, and DNS server. I've checked the DHCP scope options and there's nothing in there that would indicate anything to do with static ARP entries. The only difference between this DNS server and our other DNS server is that 'Dynamically update DNS A and PTR records for DHCP clients that do not request updates' is checked on the problematic server. I've done a bit of research about this, and it seems it may happen if any PXE-type services are running; from what I can tell, nothing is running a PXE server. I'm a bit lost, as I have never seen dynamic ARP entries turn into static ARP entries. Right now my workaround is a scheduled task that runs every 24 hours to clear the ARP cache (arp -d *). I would like to not rely on this scheduled task. Has anybody seen this before, or have any suggestions on how to troubleshoot it?
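
    A small refinement to that workaround, sketched as the scheduled batch job: logging the cache before flushing it at least records which entries had gone static and when (the log path is a placeholder):

        @echo off
        rem arp-flush.bat - run every 24 hours as the scheduled task
        echo ---- %date% %time% ---- >> C:\logs\arp-history.txt
        arp -a >> C:\logs\arp-history.txt
        arp -d *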

  • Setting "Run WWW service in IIS 5.0 isolation mode" does not persist in IIS 6

    - by Saul Dolgin
    Our IIS server was recently patched with the latest Microsoft security updates, and since then I am unable to enable the "Run WWW service in IIS 5.0 isolation mode" setting. This setting was enabled prior to patching and somehow changed during the updates. I have tried both the IIS Manager console and the adsutil.vbs approach to change it. Either way, after resetting IIS for the change to take effect, when I go to verify that the isolation mode setting is enabled (true), I find it reverts back to being disabled (false).

    Now... the patches have already been rolled back; however, the setting still does not persist when I enable it. While I research the patches that were applied to see if there is a known issue (or perhaps a change in this setting's behavior), I was hoping someone else might have come across the same problem. Any help towards a workaround would be greatly appreciated!

        >cscript adsutil.vbs set W3SVC/IIs5IsolationModeEnabled TRUE
        IIs5IsolationModeEnabled        : (BOOLEAN) True

        >iisreset

        Attempting stop...
        Internet services successfully stopped
        Attempting start...
        Internet services successfully restarted

        >cscript adsutil.vbs get W3SVC/IIs5IsolationModeEnabled
        IIs5IsolationModeEnabled        : (BOOLEAN) False
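
    One hedged way to narrow this down: in IIS 6 the setting lives in the metabase, so checking whether the value ever reaches disk separates "never saved" from "reverted on restart" (paths are the IIS 6 defaults):

        >findstr /i "IIs5IsolationModeEnabled" %windir%\system32\inetsrv\MetaBase.xml

        REM if metabase history is enabled, each rewrite leaves a copy here
        >dir %windir%\system32\inetsrv\History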

  • Disable XP disk check using FAT32

    - by mike xie
    Right now I'm using Windows XP and Macintosh on my MacBook Pro via Boot Camp. Sometimes my XP crashes, and when I restart it has to go through a disk check. It says I can skip the check by pushing a key, but that has never worked for me. I did a bit of research online on how to disable the disk check and found

        chkntfs /x c:

    but when I tried this in my cmd it said the disk is in FAT32 format. I then tried to convert my C: drive from FAT32 to NTFS using

        convert c: /FS:NTFS

    but when I tried this it asked for my C: drive's volume label; I tried to type C: and Bootcamp but couldn't really get past it. I later saw someone suggest this registry file (save it as .reg and execute it):

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager]
        "AutoChkTimeOut"=dword:0000000

        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager]
        "BootExecute"=hex(7):61,00,75,00,74,00,6f,00,63,00,68,00,65,00,63,00,6b,00,20,\
          00,61,00,75,00,74,00,6f,00,63,00,68,00,6b,00,20,00,2a,00,00,00,00,00

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon]
        "SFCScan"=dword:00000000

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\MyComputer\cleanuppath]
        @=hex(2):25,00,53,00,79,00,73,00,74,00,65,00,6d,00,52,00,6f,00,6f,00,74,00,25,\
          00,5c,00,73,00,79,00,73,00,74,00,65,00,6d,00,33,00,32,00,5c,00,63,00,6c,00,\
          65,00,61,00,6e,00,6d,00,67,00,72,00,2e,00,65,00,78,00,65,00,20,00,2f,00,44,\
          00,20,00,25,00,63,00,00,00

    I have just tried running it but am not really sure if it did anything (my laptop hasn't crashed yet :) ). Firstly, can someone tell me how to check if that script worked? Secondly, if it didn't work, does anyone have a solution for these problems? Is there another way to disable the disk check, or another way for me to change my FAT32 volume to NTFS?
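
    On the volume-label prompt: convert will not proceed until the current label is entered exactly, and the vol command shows it. A sketch (the label BOOTCAMP is only the usual Boot Camp default, so check the vol output first):

        >vol c:
         Volume in drive C is BOOTCAMP

        >convert c: /FS:NTFS

    Once the volume is NTFS, the chkntfs /x c: command above should be accepted.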

  • Small Business HP Virtualisation and iSCSI SAN Options

    - by Robin Day
    We are a small business that hosts our core product on a number of HP servers. Our core production setup is:

    - 1x HP DL380, high powered, for a SQL Server database
    - 1x HP DL360, mid powered, for our core application server
    - 6x HP DL320, low powered, for our front ends

    We run our training/testing/support systems on a similar setup; the servers are just older and less powerful. Unfortunately this is now causing us issues, as the system has grown beyond the capabilities of these older servers. Upgrading these servers would be expensive, and we believe that virtualisation is probably the way to go for the future. Locally we run a number of test/dev environments on ESXi using direct storage on a couple of high powered DL360s, and these are performing fairly well.

    We're thinking that instead of replacing all of our test servers we can implement an iSCSI SAN and one or two high powered hosts, hoping that when it comes time to replace our live servers we can just expand the virtual environment to cope. So my question is: can anyone offer any advice on some suitable options?

    We have generally always been extremely happy with HP servers; all of our kit is currently HP, so our preference would be to stick with HP. However, I'm always happy to hear about other options. I'm hoping that initially a budget of around 15-25k (GBP) would be suitable; this could potentially be increased if I had confidence that the system would pave the way for a cost-effective upgrade of our live systems in the future as well.

    I am new to SANs, and my only real experience is playing with OpenFiler on some old desktops. I think iSCSI should be suitable, but I've not done any research into how SQL Server may perform. I've had a browse through HP's sites and see plenty of information about EVA, MSA, LeftHand, etc. However, from looking at all that, I don't see which options would be best, and more importantly I don't know exactly what I would need to buy. Any help, links, or opinions would be much appreciated. Thanks

  • ADSL to T1, Is it worth it for us?

    - by Jack Hickerson
    The company I work for has roughly 45-55 simultaneous users (local and remote/VPN) logged in at a given time. We currently subscribe to an ADSL connection, but we have been experiencing slower upload/download speeds as our number of users increases. So I have a few questions with regard to upgrading our connection to a T1 line.

    I am aware that the number of channels on a T1 line is much greater than that of our current ADSL connection, but I have heard that the number of active users on a T1 line should be no greater than ~30 for optimal performance. I would think this statement depends on what each user is using the connection for, and could change depending on that variable. That being said, I have tried to break down how the line would be used in our organization based on our major departments:

    - Sales (~60% of total users): everyday surfing, email, research, occasional streaming media
    - Marketing (~15% of total users): heavy reliance on uploading/downloading, streaming media, file sharing
    - Other (~25% of total users): email, rare use of any connection-intensive activities

    I have considered keeping the ADSL for our local users and dedicating the T1 to our remote users (or vice versa), but the cost is significantly higher than what we had hoped for. All factors being equal (number of users, frequency of downloads/uploads from our current activities), would you expect a significant performance increase in making the transition from our current ADSL line to a T1 line? What are your thoughts or recommendations?
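
    For scale, a back-of-envelope worked example (assumes a classic 1.544 Mbps symmetric T1 and, purely for illustration, an 8 Mbps/1 Mbps ADSL profile):

        # T1, everyone active at once: 1544 kbps / 50 users ~= 31 kbps each
        # ADSL downstream:             8000 kbps / 50 users ~= 160 kbps each
        # ADSL upstream:               1000 kbps / 50 users ~= 20 kbps each
        # i.e. a single T1 mainly buys symmetry and consistent latency,
        # not raw downstream capacity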

  • Hosting WCF over Internet

    - by karthik
    I am pretty new to exposing WCF services hosted on IIS over the internet. I will be deploying a WCF service on IIS (6 or 7) and would like to expose this service over the internet. It will be hosted in a corporate network behind a firewall, and I want the service to be accessible over the internet (it should be able to pass through the firewall). I did some research on this, and some of the pointers I got are:

    1. I could use wsHttpBinding or netTcpBinding (the client is intended to be a .NET client). Which of the bindings is preferable?
    2. To get through the corporate firewall, I came across the idea of a DMZ server. What is its purpose, and do I really need to use one?
    3. I will be passing some files between the client and server, and the client needs to know the progress of the processing on the server and the end result.

    I know this is a very broad question to ask, but could anyone give me pointers on where to start and what approach to take for this problem? Any help will be appreciated. Thanks, Karthik
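
    For point 1, a minimal sketch of the firewall-friendly choice: wsHttpBinding rides on plain HTTP/HTTPS, so it usually needs no extra ports opened. Service and contract names here are placeholders:

        <!-- web.config on the IIS-hosted service -->
        <system.serviceModel>
          <services>
            <service name="MyCompany.FileService">
              <endpoint address=""
                        binding="wsHttpBinding"
                        contract="MyCompany.IFileService" />
            </service>
          </services>
        </system.serviceModel>

    netTcpBinding is faster on the wire but uses raw TCP ports that corporate firewalls commonly block, which is the usual trade-off between the two.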

  • Campus VLAN Segmentation - By OS?

    - by Moduspwnens
    We've been thinking through re-arranging our network and VLAN configuration. Here's the situation. We already have our servers, VoIP phones, and printers on their own VLANs, but our problem lies with end user devices. There are just too many to lump on the same VLAN without being hammered with broadcasts! Our current segmentation strategy has them split into VLANs like this:

    - Student iPads
    - Staff iPads
    - Student MacBooks
    - Staff MacBooks
    - Gaming devices
    - Staff (Other)
    - Student (Other)

    *Note that our network has many more iPads and MacBooks than most.

    Since the primary reason we're splitting them is just to put them in smaller groups, this has been working for us (for the most part). However, this required our staff to maintain access control lists (MAC addresses) of all devices belonging in these groups. It also has the unfortunate side effect of illogically grouping broadcast traffic. For example, using this setup, students on opposite ends of campus using iPads will share broadcasts, but two devices belonging to the same user (in the same room) will likely be on completely separate VLANs.

    I feel like there must be a better way of doing this. I've done a lot of research and I'm having trouble finding instances of this kind of segmentation being recommended. The feedback on the most relevant SO question seems to point toward VLAN segmentation by building/physical location. I feel like that makes sense because logically, at least among miscellaneous end users, broadcasts will typically be intended for nearby devices.

    Are there other campuses/large-scale networks out there segmenting VLANs based on end-system OS? Is this a typical configuration? Would VLAN segmentation based on physical location (or some other criteria) be more effective?

    EDIT: I've been told that we will soon be able to dynamically determine device OS without maintaining access lists, although I'm not sure how much that affects the answers to the questions.

  • Forwarding HTTP Request with Direct Server Return

    - by Daniel Crabtree
    I have servers spread across several data centers, each storing different files. I want users to be able to access the files on all servers through a single domain and have the individual servers return the files directly to the users. The following shows a simple example:

    1) The user's browser requests http://www.example.com/files/file1.zip
    2) The request goes to server A, based on the DNS A record for example.com.
    3) Server A analyzes the request and works out that /files/file1.zip is stored on server B.
    4) Server A forwards the request to server B.
    5) Server B returns file1.zip directly to the user without going through server A.

    Note: steps 4 and 5 must be transparent to the user and cannot involve sending a redirect to the user, as that would violate the requirement of a single domain.

    From my research, what I want to achieve is called "Direct Server Return", and it is a common setup for load balancing. It is also sometimes called a half reverse proxy. For step 4, it sounds like I need to do MAC address translation and then pass the request back onto the network, and for servers outside the network of server A, tunneling will be required. For step 5, I simply need to configure server B as per the real servers in a load balancing setup; namely, server B should have server A's IP address on the loopback interface, and it should not answer any ARP requests for that IP address.

    My problem is how to actually achieve step 4. I have found plenty of hardware and software that can do this for simple load balancing at layer 4, but these solutions fall short and cannot handle the kind of custom routing I require. It seems like I will need to roll my own solution. Ideally, I would like to do the routing/forwarding at the web server level, i.e. in PHP or C# / ASP.net. However, I am open to doing it at a lower level such as Apache or IIS, or at an even lower level, i.e. a custom proxy service in front of everything. Thanks.
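
    The step-5 side is well documented in the LVS-DR world; a sketch for a Linux server B, assuming the shared public IP is 203.0.113.10 (a placeholder):

        # put the service IP on loopback so server B accepts the forwarded frames
        ip addr add 203.0.113.10/32 dev lo

        # and make sure server B never answers ARP for it (server A owns the IP)
        sysctl -w net.ipv4.conf.all.arp_ignore=1
        sysctl -w net.ipv4.conf.all.arp_announce=2

    Step 4 is then the part that needs an LVS-style director (MAC rewriting on the local segment) or, across data centers, an IP-in-IP tunnel to each real server.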

  • iPhone Lag Terrible - SLOW - What's going on with the iPhone OS?

    - by Sam Schutte
    I've had my iPhone 3G for about a year now, and it seems like at least once a month it gets bogged down and gets slower and slower - horrible lag when typing, and going back to the home screen or opening an app can take 20 seconds. Has anyone else run into this and found "the" solution? What you always read on other boards is to reboot the handset (hold down home and the power button), but that doesn't improve anything for me. I've reinstalled the OS like 5 times now, and I'm getting pretty sick of doing it so often. And I don't really buy that it's a hardware issue, since it works fine for weeks after a fresh install. Anyone have a solution, or an idea of what specific actions cause this kind of evident data corruption (OS corruption?) and slowness?

    Note - I'm looking for specific things here. That is, has anyone done the research to see exactly what on the phone operating system is getting messed up to cause this lag (which is discussed all over the internet, with no working solutions)? I don't own a Mac, so I can't delve into the guts of the iPhone very well to see what's up with it...

    Some additional info:

    - Reboots (hold down power/home) and "sleeps" (slide off) do nothing. Only fresh re-installs help.
    - I only have about 15 apps installed - sometimes you see the advice to uninstall apps if you have too many; I'd hope that 15 isn't too many, and even when I've had none installed, it still gets hung up after a period of time.
    - This phone is not jailbroken, and it is running the 3.0.1 release.

  • Make case-sensitive SMB share case-insensitive

    - by fungs
    I am running a legacy XP app that I would like to move onto a network share. It is very simple and works in theory, but the server providing the share is Linux-based (which I cannot configure), and the software does not work correctly because it seems to be programmed case-insensitively. From my research, network shares behave like the filesystem they use underneath; this is normal. Unfortunately I cannot fix the software myself.

    Is there any way to turn the case-sensitivity into case-insensitivity for a Windows network drive on the client side? I found two approaches. First, something like icasefile (http://wnd.katei.fi/icasefile/) that wraps around the program and intercepts the file I/O; this is for UNIX only. Secondly, a proxy virtual file system (e.g. something using Dokan). Unfortunately I couldn't find any suitable fs; the only possibility would be to put a case-insensitive filesystem on an image file and put this on the share using, for example, ImDisk (http://www.ltr-data.se/opencode.html/#ImDisk). Do you have any better ideas?

  • 10GbE SFP+ crossover cable required? Is there such a thing?

    - by dc-patos
    To preface, this is my first experience with 10GbE networking, and I have encountered an issue for which research does not seem to turn up a documented solution...

    I have two servers (an older DL580 G5 and a DL380 G5), each with an HP NC522SFP 10GbE dual SFP+ port adapter. I have purchased copper "passive" direct attach cables (which look like twinax), which seem to work well when I connect them to the SFP+ ports on my Dell 5524 switch. However, if I directly connect the two servers with the same cable, the link doesn't come up. I am running WS2012 Standard on each server. My intention is to use one of these servers as a home-brew SAN, and I would like to enable multiple 10GbE paths for iSCSI traffic.

    My question(s):

    - Can I connect the two adapters to each other, as I would with other, less speedy generations of ethernet?
    - If I can, do I require a crossover cable, or some other type of SFP+ cable, to do this?

    My 10GbE SFP+ switch ports are at a premium, but server-to-server connections are doable in small numbers for me, and I would really like the multiple paths this would give me. Is there a simple solution?

  • How to pipe internet radio into a tuner?

    - by JW
    UPDATE: Thanks everyone for the ideas! This was an area I knew very little about, but now I can talk about it with a little more expertise. Much appreciated!

    Visited my dad this weekend, and he wants to pipe some internet radio he's found down to a tuner quite a distance away in the house. He uses computers for only very basic things: e-mail, getting the Post crossword, checking Yahoo!, checking recipes, etc. There's currently one computer in the house (no router). My initial suggestion (without any research whatsoever) was to get a wireless router and a netbook for downstairs near the tuner, but he initially wasn't too keen on having another computer down there.

    Anyway, is there any computer hardware that could magically pipe the audio output from the computer down to one set of (RCA) audio inputs on the tuner? Wireless isn't necessary, but it probably would be easier. Anyway, thanks for your suggestions!

    UPDATE: Thanks everyone! Voted up all of your suggestions now that I have 15 rep. Much appreciated.

  • Collect temperature and fan speed with munin from Windows 7 PC?

    - by mfn
    Hi, I'm quite fond of munin and use it at home as well to monitor my PCs. What was super-duper easy under Linux is pretty much unsolvable for me under Windows: I'd like to monitor CPU and motherboard temperatures as well as fan speed. On Linux I'm using lm-sensors, and the plugin for munin was basically there. I already access some information from my Windows machine via SNMP (disk space, CPU usage, memory usage); the graphs are simple, as is the information exposed via SNMP, but they do their job.

    But when it comes to temperature and fan speed, I'm running against a wall. My research so far suggests that Windows does not, out of the box, provide the ability to retrieve temperature/fan-speed data. Third-party applications are necessary which have the know-how to communicate with the motherboard chips. The best I came up with is that SpeedFan exposes a shared-memory interface, and there exists a library which hooks into the Windows SNMP facility and bridges over to SpeedFan's shared-memory interface; it's called SFSNMP (site currently down). Unfortunately the library doesn't work; there's a bug report about it open at SpeedFan, but it's currently not moving (although the SFSNMP author is active there).

    So, unless that's going to work any time soon, are there any alternatives? I'm not fond of buying software to get this feature, given that I take it as granted that my system exposes the information needed to properly monitor it, but anyway, don't just not answer because of this.

  • CentOS running Apache Tomcat keeps getting "java.net.SocketException: Too many open files"

    - by Gerard Moroney
    We're running Apache Tomcat 7.0.41 on CentOS 6 with java version "1.7.0_21". We were getting a lot of "too many open files" errors, so I did some research. The consensus was that it was to do with the number of open files. So I did the following:

    1. Increased max files in /etc/security/limits.conf:

        soft nofile 100000
        hard nofile 100000

    2. Rebooted the server.
    3. Checked the limits were valid for the user which was to run the process:

        [app_admin@xxx ~]$ ulimit -Hn
        100000
        [app_admin@xxx ~]$ ulimit -Sn
        100000

    4. Monitored open files on the server using the lsof command.

    What I observed was that when the total open files reached circa 13000, and Tomcat had around 4500 open files, the error reappeared. I am confused. I thought this would have resolved the problem, but clearly I don't fully understand the root cause, nor how to set the parameter correctly. To (maybe) help: I have not modified the server.xml file for Tomcat (although I'm tempted). I don't want to start fiddling with that and make things worse. I'm more than happy to share any more information if someone can give me some hints on where to start looking.
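
    One hedged check worth adding to that list: limits.conf applies to PAM login sessions, so a Tomcat started from an init script may never see the new limits even though an interactive shell does. The running process reports its own limits directly (the pgrep pattern is an assumption about how Tomcat appears in the process list):

        TOMCAT_PID=$(pgrep -f org.apache.catalina.startup.Bootstrap | head -1)
        grep 'open files' /proc/$TOMCAT_PID/limits   # the limit the JVM actually has
        ls /proc/$TOMCAT_PID/fd | wc -l              # FDs it currently holds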

  • Performance: Nginx SSL slowness or just SSL slowness in general?

    - by Mauvis Ledford
    I have an Amazon Web Services setup with an Apache instance behind Nginx, with Nginx handling SSL and serving everything but the .php pages. In my ApacheBench tests I'm seeing this for my most expensive API call (which caches via Memcached):

    - 100 concurrent calls to the API call (http): 115ms (median), 260ms (max)
    - 100 concurrent calls to the API call (https): 6.1s (median), 11.9s (max)

    I've done a bit of research, disabled the most expensive SSL ciphers, and enabled SSL caching (I know it doesn't help in this particular test). Can you tell me why my SSL is taking so long? I've set up a massive EC2 server with 8 CPUs, and even applying consistent load to it only brings it up to 50% total CPU. I have 8 Nginx workers set and a bunch of Apache processes. Currently this whole setup is on one EC2 box, but I plan to split it up and load balance it. There have been a few questions on this topic, but none of those answers (disable expensive ciphers, cache SSL sessions) seem to do anything. Sample results below:

        $ ab -k -n 100 -c 100 https://URL
        This is ApacheBench, Version 2.3 <$Revision: 655654 $>
        Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
        Licensed to The Apache Software Foundation, http://www.apache.org/

        Benchmarking URL.com (be patient).....done

        Server Software:        nginx/1.0.15
        Server Hostname:        URL.com
        Server Port:            443
        SSL/TLS Protocol:       TLSv1/SSLv3,AES256-SHA,2048,256

        Document Path:          /PATH
        Document Length:        73142 bytes

        Concurrency Level:      100
        Time taken for tests:   12.204 seconds
        Complete requests:      100
        Failed requests:        0
        Write errors:           0
        Keep-Alive requests:    0
        Total transferred:      7351097 bytes
        HTML transferred:       7314200 bytes
        Requests per second:    8.19 [#/sec] (mean)
        Time per request:       12203.589 [ms] (mean)
        Time per request:       122.036 [ms] (mean, across all concurrent requests)
        Transfer rate:          588.25 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:       65  168   64.1    162     268
        Processing:   385 6096 3438.6   6199   11928
        Waiting:      379 6091 3438.5   6194   11923
        Total:        449 6264 3476.4   6323   12196

        Percentage of the requests served within a certain time (ms)
          50%   6323
          66%   8244
          75%   9321
          80%   9919
          90%  11119
          95%  11720
          98%  12076
          99%  12196
         100%  12196 (longest request)
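
    For reference, the nginx directives that usually matter here, as a sketch (directive names are real nginx options; the values are starting points to tune, not drop-in answers):

        ssl_session_cache   shared:SSL:10m;   # one session cache shared by all workers
        ssl_session_timeout 10m;
        keepalive_timeout   65;               # lets ab -k actually reuse connections

    Worth noting about this particular benchmark: with -n 100 -c 100, every request arrives on a brand-new connection, so each one pays a full RSA handshake; session caching only helps clients that come back.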

  • Websockets Server with Fault-Tolerance and Durable Message Store

    - by smitchell360
    I am starting to experiment with websockets. Does anyone know of a websockets server (open source or paid) that provides a durable store of the websocket "channel"? All of the examples that I have found do not address durability: if a websockets server goes down, all "channel" data is lost. Services such as Pusher do not really discuss whether they address the durability issue (and I have not received a response from tech support yet). Happy to roll my own, but would rather not reinvent the wheel.

    EDIT: I'm not looking for websockets 101 information. That is readily available and understood. I'm looking for a server (open source or paid) that supports websockets and has a durable store for the websocket data so that, in the event that a server fails, a new server can take over where the original one left off. Two main purposes:

    1. Support failover scenarios contemplated by the websockets Network Working Group (http://tools.ietf.org/html/draft-ibc-websocket-dns-srv-02#section-5.1), most importantly so that missed messages are sent when a client connects to a failover server.
    2. Support scenarios where new subscribers must receive all past messages that were published.

    Of course this can be handled at the application layer... but that is not what I am looking for.

    EDIT: So, after some research, the following installed options seem to be the most robust:

    - Kaazing
    - Migratory (http://migratory.ro)

    Hosted services that seem "real":

    - Pusher (great API but no history feature yet)
    - PubNub (has history)

    All of the above services have graceful fallback to other communication methods if websockets are not available. I was not able to find any open source that provided "out of the box" clustering, fail-over, and a durable message store to play back history. There are some projects that may serve as good starting points, but not exactly what I am looking for.

  • Best tool for monitoring backups, etc. and trending statistics from that data

    - by Randy Syring
    I have done some research on Nagios, OpenNMS, and Zenoss, but am not confident that I have found what I am looking for. The main driving force for me right now is being able to monitor backups. This includes MySQL, MSSQL, and eventually some file system backups. We have a tool that wraps the backup process for these different systems and collects statistics, items like:

    - number of databases backed up
    - size of db backup file
    - size of db backup file compressed
    - time to make backup
    - time to zip file

    I want to be able to:

    A) have notifications if the jobs are not run according to schedule
    B) set thresholds on the statistics which would trigger notifications
    C) trend and graph the statistics

    I am planning on sending this information to the monitoring application through an HTTP POST; or, the monitoring application could pull it from a log file as well. However, we will have other processes with other "arbitrary" (from the monitoring system's perspective) statistics that we will want to monitor and trend, so flexibility is very important. The tool or tools should also be able to do general monitoring and trending of network interfaces, server load, etc. Once we get the backup monitoring in place, we will want to include those items as well. Thanks.

  • Production monitoring for EC2 instances

    - by Janine
    I'm setting up my first production instance on EC2 and want to make sure I have all necessary monitoring in place. There are three different types of things I want to monitor:

    1. Is the instance running? EC2 instances can be terminated without warning if the underlying hardware fails, and as far as I know they aren't automatically restarted. So if not, start it back up.
    2. Is UNIX running properly? This is the usual stuff about CPU load, disk space, etc.
    3. Is the website responding? If not, restart it.

    I initially set up Nagios on a physical server outside the cloud, but it is really only helpful for item 2. It can tell me if the instance is gone or if the website is not responding, but as far as I can tell it can't execute any commands to fix the situation. My Googling on this subject has yielded a plethora of options - Cacti, Monit, God, Ganglia, and probably more I'm forgetting now. I don't have time to research them all. I am aware of Amazon's CloudWatch, but it doesn't seem to do anything that my Nagios installation doesn't already do. If you already have something like this in place, can you please share what has worked well for you?
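
    For what it's worth, Nagios can run repair commands through its event handler mechanism; a sketch of one, assuming the old ec2-api-tools are installed on the Nagios box with credentials configured (the instance ID is a placeholder, and the argument order matches however the event_handler command is defined):

        #!/bin/sh
        # ec2-restart-handler.sh - wired to a Nagios host as an event_handler
        if [ "$1" = "DOWN" ] && [ "$2" = "HARD" ]; then
            ec2-start-instances i-0123abcd
        fi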

  • Pros and Cons of a proxy/gateway server

    - by Curtis
    I'm working with a web app that uses two machines, a BSD server and a Windows 2000 server. When someone goes to our website, they are connected to the BSD server, which, using Apache's proxy module, relays the requests and responses between them and the web server on the Windows machine. The idea (designed and deployed about 9 years ago) was that it is more secure for outside people to connect to the BSD server than to the Windows server running the web app. The BSD server is a bare-bones install with all unnecessary services and applications removed.

    These servers are about to be replaced, and the big question is: is a cut-down, bare-bones server necessary for security in this setup? From my research online I don't see anyone else running a setup like this (I don't see anyone questioning it, at least). If they have a server between the user and the web app server(s), it is caching, compressing, and/or load balancing. Is there anything I'm overlooking by letting people connect directly from the internet** to a Windows 2008 R2 server that's running the web application?

    ** there's a good hardware firewall between us and the internet, with only minimal ports open

    Thank you.
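
    For reference, the relaying role described above boils down to a couple of mod_proxy directives on the front-end box; a sketch, with the internal hostname as a placeholder:

        # httpd.conf on the front end - forward everything, rewrite redirects back
        ProxyPass        / http://internal-win-app/
        ProxyPassReverse / http://internal-win-app/
        ProxyRequests    Off   # reverse proxy only; never an open forward proxy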

  • Is there an SSL equivalent to an ssh agent?

    - by Matthew J Morrison
    Here is my situation: there are a number of developers who all need to be able to install Ruby gems and Python eggs from a remote source. Currently, we have a server inside our firewall that hosts the gems and eggs. We now want the ability to install things hosted on that server from outside of our firewall. Since some of the gems and eggs that we host are proprietary, I would like to lock access to that machine down somewhat, as unobtrusively as possible for the developers.

    My first thought was using something like ssh keys, so I spent some time looking at SSL mutual authentication. I was able to get everything set up and working correctly, testing with curl, but the unfortunate thing was that I had to pass extra arguments to curl so it knows about the certificate, key, and certificate authority. I was wondering if there is anything like the ssh agent that I can set up to provide that information automatically, so that I can push the certificates and keys to the developers' machines and the developers don't have to log in or provide keys each time they try to install something. Another thing that I want to avoid is having to modify the 'gem' command and the 'pip' command to provide keys when they make the http connection.

    Any other suggestions that may solve this problem (not related to SSL mutual auth) are also welcome.

    EDIT: I've been continuing to research this and I came across stunnel. I think this may be what I'm looking for; any feedback regarding stunnel would also be great!
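
    The manual curl test described above looks something like this, and curl at least can be made automatic via its per-user config file (all paths and hostnames are placeholders):

        # what 'testing with curl' has to pass on every call:
        curl --cert client.pem --key client-key.pem --cacert ca.pem \
             https://gems.internal.example/

        # ~/.curlrc supplies the same options to every curl invocation:
        cert   = /home/dev/.certs/client.pem
        key    = /home/dev/.certs/client-key.pem
        cacert = /home/dev/.certs/ca.pem

    gem and pip would still each need their own configuration, which is exactly the per-tool fiddling the question hopes to avoid - part of why stunnel (one local tunnel that holds the client cert, with the tools pointed at localhost) looks like an attractive fit.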

  • Nexus One WiFi connection problems.

    - by sunocky
    I have two new Nexus Ones for a research project, and for the project I need to keep a server running on the phones. But I soon found out that both phones have intermittent WiFi connection problems at my home. A phone can connect to my WiFi network, but will drop off after some random interval; in order to reconnect, I may need to reboot my router, or the phone will say "Obtaining IP address" and then "Unsuccessful". I also own a G1 with firmware version 1.6; it has no such connection problems. Well, to my surprise, the two Nexus Ones work fine on the WiFi network at my workplace, which is a WEP-type connection. By the way, it is a WPA-type connection at my home. Does anyone know what the problem is with the Nexus One? Any suggestions on what I should do if I want to keep the WiFi connection alive all the time at my home? Thanks very much!
