Search Results

Search found 16560 results on 663 pages for 'high tech resources'.


  • Redundant OpenVPN connections with advanced Linux routing over an unreliable network

    - by konrad
    I am currently living in a country that blocks many websites and has unreliable network connections to the outside world. I have two OpenVPN endpoints (say: vpn1 and vpn2) on Linux servers that I use to circumvent the firewall, and I have full access to these servers. This works quite well, except for the high packet loss on my VPN connections. The packet loss varies between 1% and 30% depending on the time of day and shows little correlation with anything; most of the time it appears random.

    I am thinking about setting up a home router (also on Linux) that maintains OpenVPN connections to both endpoints and sends all packets twice, once to each endpoint. vpn2 would forward all packets from home on to vpn1. Return traffic would be sent both directly from vpn1 to home and also through vpn2.

        +------------+
        |    home    |
        +------------+
           |       |
           | OpenVPN |
           |  links  |
           |       |
        ~~~~~~~~~~~~~~~~~~ unreliable connection
           |       |
        +----------+   +----------+
        |   vpn1   |---|   vpn2   |
        +----------+   +----------+
              |
        +------------+
        | HTTP proxy |
        +------------+
              |
          (internet)

    For clarity: all packets between home and the HTTP proxy will be duplicated and sent over different paths, to increase the chance that at least one of them arrives. If both arrive, the second one can be silently discarded. Bandwidth usage is not an issue on either the home side or the endpoint side. vpn1 and vpn2 are close to each other (3 ms ping) and have a reliable connection between them.

    Any pointers on how this could be achieved using the advanced routing policies available in Linux?
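
    One building block that fits this design is the iptables TEE target (xt_TEE, kernel 2.6.35+), which clones each packet towards a second gateway while the original still follows the normal routing table. A minimal sketch, assuming tun0/tun1 are the tunnels to vpn1/vpn2, 10.8.0.1/10.9.0.1 are the remote tunnel addresses, and the proxy listens on 8080 (all interface names, addresses and ports here are hypothetical):

        # clone every packet about to leave via the vpn1 tunnel,
        # and send the copy towards vpn2 as well
        iptables -t mangle -A POSTROUTING -o tun0 -j TEE --gateway 10.9.0.1

        # keep the original copy pinned to the vpn1 tunnel with policy routing
        ip route add default via 10.8.0.1 dev tun0 table 100
        ip rule add fwmark 0x1 table 100
        iptables -t mangle -A OUTPUT -p tcp --dport 8080 -j MARK --set-mark 0x1

    Deduplication is the harder half: for TCP the receiving stack already discards duplicate segments, so the scheme may work unchanged for proxied HTTP, but UDP flows would need the far end to drop duplicates itself.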

    Read the article

  • Remote desktop connection to network printer

    - by andand
    I'm trying to print a document from a remote WinXP machine to a network printer I use on a local Win7 machine, via Remote Desktop. The network printer does not appear in the list of printers available on the WinXP box.

    In more detail: the local machine runs Windows 7 (no admin rights) and connects to a network printer managed by a print server (i.e. not via a local TCP/IP port). I have access to a Windows XP host on a separate network, which I reach using Remote Desktop. I would like print requests from the remote XP box to be forwarded to the network printer I use on the Windows 7 machine. The XP machine cannot access the print server I use on the Win7 machine, nor can it create a TCP/IP port to connect directly to the printer (network configuration issues). After consulting KB312135 I confirmed that the "Printers" option is selected in the Remote Desktop client under the Local Resources tab, yet the network printer still does not appear in the list of available printers on the XP box. Is this a lost cause, or is there something else I haven't managed to locate yet?

    Read the article

  • cPanel web servers mounting home partition to a NAS or SAN

    - by Scott
    I currently have 2 cPanel web servers that are little 1RU dual-CPU quad-core Xeons. They have plenty of resources for processing and handling web requests, never exceed 10% CPU usage, and have plenty of RAM. The problem is that they both have RAID 1 160 GB SAS hard drives that are 75% full and growing by the day. I didn't think disk usage would climb so high, but due to the nature of the sites hosted, it has become an issue.

    The easy fix would be to upgrade the hard drives to something bigger (probably not of the SAS variety), but I am thinking of keeping the current machines as "processing servers" and buying a central "storage server" with about 12 TB of storage. The /home/ partition on each of the 1RU servers would then be mounted from a NAS or SAN export on this central storage server. My questions are:

    - Has anyone got a cPanel setup where /home/ is mounted from a NAS or SAN elsewhere? If so, can you provide details of what you did and how it went? :)
    - Any recommendations on networking? Is gigabit ethernet enough? Is TCP/IP going to be a noticeable performance problem? Has anyone used a TOE (TCP offload engine)?
    - Has anyone benchmarked, or had performance issues with, SAN versus NAS?

    Any help greatly appreciated. Scott
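
    For the NAS variant, a minimal sketch of what the mount could look like over NFS, assuming the storage server exports /export/home at 192.168.10.5 (hostname, path and options are assumptions to be tuned, not a recommendation):

        # /etc/fstab on each web server -- hard-mount so I/O blocks and retries
        # on a network blip instead of returning errors to cPanel
        192.168.10.5:/export/home  /home  nfs  hard,intr,rsize=32768,wsize=32768,noatime  0  0

    One cPanel-specific wrinkle worth testing up front is disk quotas, since quota enforcement behaves differently on NFS than on a local filesystem.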

    Read the article

  • Find out which task is generating a lot of context switches on linux

    - by Gaks
    According to vmstat, my Linux server (2x Core2 Duo 2.5 GHz) is constantly doing around 20k context switches per second:

        # vmstat 3
        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
         r  b   swpd   free   buff  cache    si   so    bi    bo   in    cs us sy id wa
         2  0   7292 249472  82340 2291972    0    0     0     0    0     0  7 13 79  0
         0  0   7292 251808  82344 2291968    0    0     0   184   24 20090  1  1 99  0
         0  0   7292 251876  82344 2291968    0    0     0    83   17 20157  1  0 99  0
         0  0   7292 251876  82344 2291968    0    0     0    73   12 20116  1  0 99  0

    ... but uptime shows a small load (load average: 0.01, 0.02, 0.01) and top doesn't show any process with high %CPU usage. How do I find out what exactly is generating those context switches? Which process or thread? I tried to analyze pidstat output:

        # pidstat -w 10 1
        12:39:13      PID   cswch/s nvcswch/s  Command
        12:39:23        1      0.20      0.00  init
        12:39:23        4      0.20      0.00  ksoftirqd/0
        12:39:23        7      1.60      0.00  events/0
        12:39:23        8      1.50      0.00  events/1
        12:39:23       89      0.50      0.00  kblockd/0
        12:39:23       90      0.30      0.00  kblockd/1
        12:39:23      995      0.40      0.00  kirqd
        12:39:23      997      0.60      0.00  kjournald
        12:39:23     1146      0.20      0.00  svscan
        12:39:23     2162      5.00      0.00  kjournald
        12:39:23     2526      0.20      2.00  postgres
        12:39:23     2530      1.00      0.30  postgres
        12:39:23     2534      5.00      3.20  postgres
        12:39:23     2536      1.40      1.70  postgres
        12:39:23    12061     10.59      0.90  postgres
        12:39:23    14442      1.50      2.20  postgres
        12:39:23    15416      0.20      0.00  monitor
        12:39:23    17289      0.10      0.00  syslogd
        12:39:23    21776      0.40      0.30  postgres
        12:39:23    23638      0.10      0.00  screen
        12:39:23    25153      1.00      0.00  sshd
        12:39:23    25185     86.61      0.00  daemon1
        12:39:23    25190     12.19     35.86  postgres
        12:39:23    25295      2.00      0.00  screen
        12:39:23    25743      9.99      0.00  daemon2
        12:39:23    25747      1.10      3.00  postgres
        12:39:23    26968      5.09      0.80  postgres
        12:39:23    26969      5.00      0.00  postgres
        12:39:23    26970      1.10      0.20  postgres
        12:39:23    26971     17.98      1.80  postgres
        12:39:23    27607      0.90      0.40  postgres
        12:39:23    29338      4.30      0.00  screen
        12:39:23    31247      4.10     23.58  postgres
        12:39:23    31249     82.92     34.77  postgres
        12:39:23    31484      0.20      0.00  pdflush
        12:39:23    32097      0.10      0.00  pidstat

    It looks like some PostgreSQL tasks are doing dozens of context switches per second, but that doesn't come anywhere near summing to 20k. Any idea how to dig a little deeper for an answer?
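
    Two ways to dig deeper, sketched on the assumption that reasonably recent sysstat and perf builds are available on this box:

        # per-thread breakdown: a process's threads often hide the real culprit,
        # since pidstat without -t only shows per-process totals
        pidstat -wt 10 1

        # count context-switch events system-wide for ten seconds,
        # then attribute them per task
        perf record -e context-switches -a -- sleep 10
        perf report

    Short-lived processes that start and exit between pidstat samples also never show up in its output, which can account for part of the gap between the per-process sum and vmstat's cs column.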

    Read the article

  • How can I regain control over a "stuck" scroll bar?

    - by jonsca
    I normally have a lot of tabs open in the browser at one time, which usually doesn't cause any problems. Occasionally, though, a page will have fully rendered, the scroll bar sits at the top of its track on the right, but placing the mouse pointer on the scroll bar does not let me grab it (and the arrow at the bottom of the scroll bar doesn't work either). This happens to me in both Firefox and Chrome, both latest versions. There is normally no load on the CPU, and while my memory usage can be high (especially with Chrome), I still usually have 25% free.

    I suspect this is one of those "for your own good" features, to prevent scrolling down a page that may still have elements loading and/or a stuck script, but is there a setting in Firefox or Chrome (or perhaps Windows in general) that will "release" the scroll bar even though the browser is busy? It may be very "2002" of me, but I wouldn't mind being able to scroll along gradually, even if the page hasn't had time to fully render all of the images yet. Does such an advanced setting exist "under the hood"?

    Read the article

  • Setting up subdomain to respond on :443 with apache2

    - by compucuke
    I read through some guides on this, and I believe it is possible to have Apache serve a subdomain over SSL. I have domain.com responding on 80, and I do not need domain.com responding on 443. Rather, the only use I have for SSL is the subdomain sub.domain.com. So my site should be:

        http://domain.com
        http://www.domain.com
        https://sub.domain.com
        https://www.sub.domain.com

    My CNAME records are as follows:

        sub.domain.com      xxx.xx.xx.xxx
        *.sub.domain.com    xxx.xx.xx.xxx

    The A record exists, but should not matter for the example. I set up a separate config file in sites-enabled for sub.domain.com:

        NameVirtualHost xxx.xx.xx.xxx:443
        <VirtualHost xxx.xx.xx.xxx:443>
            SSLEngine on
            SSLStrictSNIVHostCheck on
            SSLProtocol -ALL +SSLv3 +TLSv1
            SSLCipherSuite ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:-MEDIUM
            ServerAlias sub.domain.com
            DocumentRoot /usr/local/www/ssl/documents/
            SSLCertificateFile /root/sub.domain.com.crt
            SSLCertificateKeyFile /root/sub.domain.com.key
            Alias /robots.txt /usr/local/www/ssl/documents/robots.txt
            Alias /favicon.ico /usr/local/www/ssl/documents/favicon.ico
            Alias /js/libs /usr/local/www/ssl/documents/js/libs
            Alias /media/ /usr/local/www/documents/media/
            Alias /img/ /usr/local/www/ssl/documents/img/
            Alias /css/ /usr/local/www/ssl/documents/css/
            <Directory /usr/local/www/ssl/documents/>
                Order allow,deny
                Allow from all
            </Directory>
            WSGIDaemonProcess sub.domain.com processes=2 threads=7 display-name=%{GROUP}
            WSGIProcessGroup sub.domain.com
            WSGIScriptAlias / /usr/local/www/wsgi-scripts/script.wsgi
            <Directory /usr/local/www/wsgi-scripts>
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Now, it is important to mention that https://domain.com responds with what script.wsgi above serves, while https://sub.domain.com does not respond at all: checking it produces a 105 error. That is a DNS error, but I am convinced the DNS does not have a problem with the CNAME records; they just point to my IP. Am I trying to do something Apache cannot do?
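
    One detail that stands out in the config above: the :443 vhost declares a ServerAlias but no ServerName, and since it is also the only (and therefore default) vhost on that address and port, any HTTPS request reaching the IP, including https://domain.com, lands in it. A minimal sketch of the change I would try first (assuming nothing else relies on the implicit name):

        <VirtualHost xxx.xx.xx.xxx:443>
            # give the vhost an explicit primary name so Host/SNI matching works
            ServerName sub.domain.com
            ServerAlias www.sub.domain.com
            ...
        </VirtualHost>

    The 105 error itself (net::ERR_NAME_NOT_RESOLVED in Chrome) is raised before Apache is ever contacted, so it is also worth confirming from the failing client that `host sub.domain.com` actually resolves there, independent of what the authoritative records say.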

    Read the article

  • How can I get an AdWords ad to show up for a specific term ASAP?

    - by Eric
    I have a very specific situation... I have a client who has a site, backed by a celebrity, selling a common product... so imagine my site is all about "Martha Stewart used cars" (that's not it, but you get the idea). My client wants their site to show up ASAP in Google search results. While I'm waiting for organic search to kick in, recognize my site, index it properly, etc., I want to buy some AdWords for keywords like "Martha Stewart used cars" and "Martha Stewart used car" and so forth, and have the ads show up on the first page of search results.

    I've done this. The problem is that many, many other advertisers have set up ads on the keyword "used cars", so my Martha-specific ads are never shown. Even when I bid specifically on the keyword phrase "Martha Stewart used cars" and enter exactly that into Google, it doesn't show my ad.

    SO MY QUESTION... how/what can I do to get my ads to show... or really, can I do anything else to get my client's site onto the results page? (I'm not interested in anything black-hat or illegal; I'm just trying to throw some resources at this situation so that folks looking SPECIFICALLY for "Martha Stewart used cars" will get to the site quickly.) Thanks -- Eric

    Read the article

  • Virtual Lan on the Cloud -- Help Confirm my understanding?

    - by marfarma
    [Note: Tried to post this over at ServerFault, but I don't have enough 'points' for more than one link. Powers that be, move this question over there.]

    Please give this a quick read and let me know if I'm missing something before I start trying to make it work. I'm not a systems admin professional, and I'd hate to end up banging my head into the wall if I can avoid it.

    Goals:

    - Create a 'road-warrior' capable, star-shaped virtual LAN for consultants who spend the majority of their time on client sites, and whose firm has no physical network or servers.
    - Enable CIFS access to a cloud-server-based installation of Alfresco.
    - Allow eventual implementation of some form of single-sign-on (OpenLDAP server) access to Alfresco and other server applications implemented in the future.

    Given:

    - All servers will live in the public internet cloud (Rackspace Cloud Servers).
    - The OpenVPN server will be a Linux distro, probably Ubuntu 9.x, installed on the same server as Alfresco (at least to start).
    - Staff will access server applications and resources from client sites, hotels, trains, planes, coffee shops or their homes, over various ISPs, using their company laptops or personal home desktops.

    Based on my research thus far, to accomplish this I'll need (see the sketch after this list):

    - OpenVPN with bridging enabled, to create a star-shaped "virtual" LAN: http://openvpn.net/index.php/open-source/documentation/miscellaneous/76-ethernet-bridging.html
    - A road-warrior network configuration, as described in this Shorewall article (lower down the page): http://www.shorewall.net/OPENVPN.html
    - Bridge addressing configured (probably DHCP): http://openvpn.net/index.php/open-source/faq.html#bridge-addressing
    - CIFS / Samba configured to accept the VPN IP addresses: http://serverfault.com/questions/137933/howto-access-samba-share-over-vpn-tunnel
    - Client software set up, with keys configured for access (potentially through an OpenVPN-AS client portal): http://www.openvpn.net/index.php/access-server/download-openvpn-as/221-installation-overview.html
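
    A minimal sketch of the bridged server config, assuming the cloud server's bridge br0 sits on 10.8.0.0/24 and VPN clients get the .50-.100 range (all addresses are placeholders):

        # /etc/openvpn/server.conf -- TAP plus bridging so CIFS name resolution
        # and browsing broadcasts reach the road warriors
        port 1194
        proto udp
        dev tap0                 # layer-2 device; an up script must enslave it to br0
        server-bridge 10.8.0.4 255.255.255.0 10.8.0.50 10.8.0.100
        ca ca.crt
        cert server.crt
        key server.key
        dh dh1024.pem
        keepalive 10 120
        persist-key
        persist-tun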

    Read the article

  • Is basing storage requirements on IOPS sufficient?

    - by Boden
    The current system in question is running SBS 2003, and is going to be migrated onto new hardware running SBS 2008. Currently I'm seeing on average 200-300 disk transfers per second total across all the arrays in the system. The array seeing the bulk of the activity is a 6-disk 7200 RPM RAID 6, and it struggles to keep up during high-traffic times (idle time often only 10-20%; response times peaking at 20-50+ ms). Based on some rough calculations this makes sense (avg ~245 IOPS on this array at a 70/30 read-to-write ratio).

    I'm considering a much simpler disk configuration using a single RAID 10 array of 10K disks. Using the same parameters as in my calculations above, I get 583 average random IOPS/sec. Granted, SBS 2008 is not the same beast as 2003, but I'd like to assume it'll be similar in terms of disk performance, if not better (Exchange 2007 is easier on the disks and there's no ISA server).

    Am I correct in believing that the proposed system will be sufficient in terms of performance, or am I missing something? I've read so much about recommended disk configurations for products like Exchange, and they often mention things like dedicating spindles to logs, etc. I understand the reasoning behind this, but if I've got more than enough random I/O overhead, does it really matter? I've always at the very least had separate spindles for the OS, but I could really reduce cost and complexity with a single, well-performing array.

    So as not to make you guys do my job for me, the generic version of this question is: if I have a projected IOPS figure for a new system, is it sufficient to use this value alone to spec the storage, ignoring "best practice" configurations? (Given similar technology; not going from DAS to SAN or anything.)
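
    The arithmetic behind figures like these, sketched with assumed per-disk numbers (10K spindles are usually credited with roughly 125 random IOPS each; the write penalty is 2 for RAID 10 and 6 for RAID 6):

        # effective random IOPS = (disks * per-disk IOPS) / (read% + penalty * write%)
        awk 'BEGIN {
            disks = 10; per_disk = 125;             # assumption: 10x 10K RPM spindles
            read = 0.70; write = 0.30; penalty = 2; # RAID 10 write penalty
            printf "effective random IOPS: %.0f\n", (disks * per_disk) / (read + penalty * write)
        }'

    The question's own 583 figure presumably reflects fewer spindles or more conservative per-disk numbers; the point of the sketch is only that the penalty term is what separates the RAID 6 array (divide raw IOPS by 0.7 + 6 x 0.3 = 2.5) from the proposed RAID 10 (divide by 1.3).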

    Read the article

  • ASUS EAH5450 Graphics Card (ATI Radeon HD5450 - 1 GB DDR3) on Windows 2003? Anybody got it to work?

    - by JJarava
    Hi all! I've just bought an ASUS EAH5450 graphics card (ATI Radeon HD 5450, 1 GB DDR3) for my main system, but I haven't been able to make it work under Windows 2003 (the OS on that system).

    When I plugged in the card, I got a couple of "installing drivers" prompts for things such as "ATI High Definition Audio Device" that sorted themselves out over the Internet, and then a "Standard VGA Graphics Adapter". The CD that came with the card installs something called "ATI Catalyst Install Manager" and .NET 2.0, but no drivers. I've downloaded the latest (WinXP 32-bit) drivers from ATI, and the experience is the same: no drivers get installed.

    My motherboard is an ASUS A8N-SLI with the nVidia nForce 4 chipset (for an Athlon 64 X2, somewhat old), but my previous card was an ATI Radeon X700, so the board has worked with ATI cards before. During POST I see a "Display Card" device (vendor ID 1002-68F9-0300) and a "Multimedia Device" (1002-AA68-0403), and when viewing the properties of the "Standard VGA" adapter, they match the device ID.

    Any hints? I'd really hate having to get rid of the card, and I'm sure what I'm trying to do is not that strange...

    Read the article

  • Why are FMS logs filled with 'play' event status code 408 for a failed webcast?

    - by Stu Thompson
    Recently we had a live webcast event go horribly wrong. I'm doing the technical post-mortem with limited information. We know that the hardware encoder (a Digital Rapid Touch Stream Web HDI) was unable to send upstream at a sustained, reliable high rate. What we don't know is whether the encoder's connection was problematic (Zürich) or that of the streaming server (in Frankfurt). Unfortunately, I've got three different vendors all blaming each other (the CDN who runs the server, the on-site ISP and the on-site encoding team).

    In the FMS log files I see a couple of interesting things:

    - Zillions of status code 408 entries on "play" events for clients. Adobe's documentation states that this means "Stream stopped because client disconnected". ("Zillions" being a ratio of about 10 events for every individual IP address.)
    - Several unpublish / (re)publish events per hour for the encoder.

    I'd like to know whether all those 408s can tell me with authority that the FMS server was starved for bandwidth, or that the encoding signal was starved (and hence the server was disconnecting clients). Any clues?

    Read the article

  • java max heap size, how much is too much

    - by brad
    I'm having issues with a JRuby (Rails) app running in Tomcat. Occasionally page requests can take up to a minute to return (even though the Rails logs show the request processed in seconds, so it's obviously a Tomcat issue). I'm wondering what settings are optimal for the Java heap size. I know there's no definitive answer, but I thought maybe someone could comment on my setup.

    I'm on a Small EC2 instance, which has 1.7 GB RAM, with the following JAVA_OPTS:

        -Xmx1536m -Xms256m -XX:MaxPermSize=256m -XX:+CMSClassUnloadingEnabled

    My first thought is that Xmx is too high. If I only have 1.7 GB and I allocate 1.5 GB to Java, I feel like I'll get a lot of paging. Typically my Java process shows (in top) 1.1 GB resident and 2 GB virtual memory. I also read somewhere that setting Xms and Xmx to the same size helps, as it eliminates time spent on memory allocation. I'm not a Java person, but I've been tasked with figuring out this problem and I'm trying to find out where to start. Any tips are greatly appreciated!

    Update: I've started analyzing the garbage collection dumps using -XX:+PrintGCDetails. When I notice these occasional long load times, the GC logs go nuts. During the last one (which took 25 s to complete) I got GC log lines such as:

        1720.267: [GC 1720.267: [DefNew: 27712K->16K(31104K), 0.0068020 secs] 281792K->254096K(444112K), 0.0069440 secs]
        1720.294: [GC 1720.294: [DefNew: 27728K->0K(31104K), 0.0343340 secs] 281808K->254080K(444112K), 0.0344910 secs]

    ... about 300 of them during a single request! Now, I don't totally understand why it keeps collecting the young generation from ~28 MB down to 0 over and over.
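
    A sketch of where I would start, assuming the instance really has nothing else competing for its 1.7 GB (the exact figures are starting points to measure against, not a recommendation):

        # fixed-size, smaller heap: leaves the OS headroom and avoids resize pauses;
        # GC logging makes the next slow request diagnosable from a file
        JAVA_OPTS="-Xms768m -Xmx768m -XX:MaxPermSize=256m \
          -XX:+CMSClassUnloadingEnabled \
          -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/var/log/tomcat/gc.log"

    The DefNew lines in the update also show a young generation of only ~30 MB being collected hundreds of times in one request, so an explicit, larger -XX:NewSize may belong in the same experiment.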

    Read the article

  • Ubuntu xrandr rotate issue

    - by user83544
    I've just bought a second monitor for my PC, which happens to be a pivot monitor. I've already read lots of forum threads related to my problem but haven't come across a solution; I have the same symptoms as dozens of posts, but no matter what I try, it just doesn't work.

    I've already changed xorg.conf and added the following in the Device section, just under Driver "nvidia", for the second monitor:

        Option "RandRRotation" "on"

    After saving and rebooting, I try to rotate the screen with the nvidia X server settings by choosing the second monitor and clicking either "left" or "right" for the rotation. It immediately exits the nvidia-settings window and does nothing. I also tried from the terminal:

        xrandr -o right

    and I get the following error:

        X Error of failed request:  BadMatch (invalid parameter attributes)
          Major opcode of failed request:  154 (RANDR)
          Minor opcode of failed request:  2 (RRSetScreenConfig)
          Serial number of failed request:  14
          Current serial number in output stream:  14

    I actually managed to rotate it with Option "Rotate" "CCW" instead of "RandRRotation". The problem with that solution is that you get the second monitor in the right position, but any window you open on that screen is practically unchangeable: you can't resize or move it, making it useless for reading PDFs, which is the main reason I bought this second screen (to help me write my thesis). Any help is really appreciated.

        $ sudo lshw -c video
          *-display
               description: VGA compatible controller
               product: nVidia Corporation
               vendor: nVidia Corporation
               physical id: 0
               bus info: pci@0000:01:00.0
               version: a1
               width: 64 bits
               clock: 33MHz
               capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
               configuration: driver=nvidia latency=0
               resources: irq:16 memory:f8000000-f9ffffff memory:d8000000-dfffffff memory:d4000000-d7ffffff ioport:dc00(size=12 memory:fbd80000-fbdfffff
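
    For per-output rotation (rotating only the pivot monitor rather than the whole screen), one avenue worth trying, sketched on the assumption that the driver exposes RandR 1.2 outputs (the output name below is a placeholder; take the real ones from xrandr's output):

        # list the outputs the driver actually exposes
        xrandr -q

        # rotate just the pivot monitor, leaving the primary panel alone
        xrandr --output DVI-I-1 --rotate left

        # and back
        xrandr --output DVI-I-1 --rotate normal

    The plain `xrandr -o right` form goes through the old RandR 1.1 RRSetScreenConfig call, which is exactly the request failing with BadMatch above, so a driver (or driver version) with proper per-output RandR support may be the real prerequisite.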

    Read the article

  • Color Printer: Laser vs Inkjet

    - by Mike
    I am about to buy a color printer. I had a B&W LaserJet printer in the past, but since then I've used inkjets for decades. I need a printer that can deliver the high quality of these photo inkjet printers, but I'm tired of paying for ink that costs $9,000 per gallon (1 gallon = 3.785 liters = 300 cartridges = $9,000). So I was thinking about buying a color laser printer, but I'm not sure these printers can deliver the same quality, or are worth the investment in terms of toner consumption.

    I remember that my old LaserJet printed about 1,100 pages per toner cartridge. The inkjet printers I have print about 500 pages per cartridge. Price for price, 2 inkjet cartridges cost more or less the same as one toner cartridge and in theory print almost the same. I am not sure whether this holds for color lasers.

    What can you tell me about quality, toner cost and cost per page for laser versus inkjet printers? Is it worth the change? (Keep in mind that an inkjet printer costs $50 and a laser printer costs $200.) Thanks.
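
    The question's own numbers, worked through with assumed cartridge prices (the $30 toner / $15 ink figures are placeholders chosen to match the "2 inkjet cartridges ≈ 1 toner cartridge" claim):

        awk 'BEGIN {
            printf "inkjet: %.1f cents/page\n", 100 * 15 / 500    # $15 cartridge, 500 pages
            printf "laser:  %.1f cents/page\n", 100 * 30 / 1100   # $30 toner, 1100 pages
        }'

    On those assumptions toner comes out around 2.7 cents/page versus 3.0 for ink, so consumables favor the laser slightly, but the $150 difference in purchase price would take on the order of 50,000 pages to pay back.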

    Read the article

  • File download speed issue over a dedicated fibre link

    - by nixnotwin
    My ISP has installed a fibre-based dedicated internet connection at the place where I work. In the beginning the connection terminated at one of the ISP's core routers, which resulted in a strange issue: even though the assigned speed was 5 Mbps, tests downloading large files over HTTP and FTP from multiple locations never went above 2 Mbps, yet BitTorrent downloads reached 5 Mbps, and file downloads from the ISP's own servers were fine.

    So, at the ISP, our link was attached directly to their edge router instead. After this, file downloads from high-bandwidth servers like Google and MS reached the 5 Mbps limit. Sometimes the speed falls below 2 Mbps and then suddenly jumps back up to the 5 Mbps limit (this keeps happening within any single file download). But other downloads, like the Ubuntu APT repositories, still struggle to get above 2 Mbps. The engineers at the ISP have not been able to sort out the issue.

    After they moved us to their edge router, instead of giving us 8 public IPs they gave us only 4. When we enquired about it, they told us that handing out more IPs would cause ARP overload at their edge router, but somehow I convinced them to give us the 8 IPs we wanted. The file download issue has remained, though.

    What might be the reason for files from different locations downloading at different speeds, with such heavy fluctuation? I have downloaded files from the same URLs over a connection belonging to another, smaller ISP, and the speeds were fine, reaching the full 5 Mbps limit.
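
    A quick experiment that separates per-flow throttling from genuine congestion (the URL below is a placeholder): compare one download stream against four parallel streams of the same file. If four streams together fill the 5 Mbps pipe while a single stream sticks near 2 Mbps, something along the path is limiting individual TCP flows rather than the link itself, which would also fit multi-connection BitTorrent having no trouble.

        # single stream
        curl -so /dev/null -w "single: %{speed_download} bytes/s\n" http://example.com/file.iso

        # four parallel streams of the same file
        for i in 1 2 3 4; do
            curl -so /dev/null -w "stream $i: %{speed_download} bytes/s\n" http://example.com/file.iso &
        done
        wait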

    Read the article

  • Why is this APC installation failing so badly?

    - by Matt
    I have multiple instances of APC running on my server with similar configurations (albeit with different cache sizes). However, one of the instances is performing extremely poorly (100% cache fragmentation, high miss rate), and I have no idea why. The runtime settings I'm using are as follows (pretty much out of the box):

        apc.cache_by_default        1
        apc.canonicalize            1
        apc.coredump_unmap          0
        apc.enable_cli              0
        apc.enabled                 1
        apc.file_md5                0
        apc.file_update_protection  2
        apc.filters
        apc.gc_ttl                  3600
        apc.include_once_override   0
        apc.lazy_classes            0
        apc.lazy_functions          0
        apc.max_file_size           1M
        apc.mmap_file_mask
        apc.num_files_hint          1000
        apc.preload_path
        apc.report_autofilter       0
        apc.rfc1867                 0
        apc.rfc1867_freq            0
        apc.rfc1867_name            APC_UPLOAD_PROGRESS
        apc.rfc1867_prefix          upload_
        apc.rfc1867_ttl             3600
        apc.shm_segments            1
        apc.shm_size                10M
        apc.slam_defense            1
        apc.stat                    1
        apc.stat_ctime              0
        apc.ttl                     0
        apc.use_request_time        1
        apc.user_entries_hint       4096
        apc.user_ttl                0
        apc.write_lock              1

    APC is version 3.1.6, PHP is 5.3.3-1ubuntu9.5. I've tried restarting Apache multiple times, so this isn't a freak occurrence. The instance with problems is simply running WordPress with a few plugins installed. All other instances (~4) on the server are running perfectly fine, with almost 100% hit rates and 0% fragmentation; one of them, for example, holds a website built on the Symfony framework.

    Any help would be much appreciated; I haven't had much experience with APC and was hoping for an out-of-the-box speed boost ;).
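
    Before digging into anything subtler, apc.shm_size deserves a look: 10M is tiny for a WordPress install with plugins, and a segment that small fills up, starts evicting, and fragments to 100% in exactly the way described. A sketch of the first change I would try in that instance's ini (sizes are assumptions to be checked against apc.php afterwards, not a recommendation):

        ; give the cache enough headroom that entries stop being evicted
        apc.shm_size = 64M
        ; let stale entries expire instead of letting the cache fill solid
        apc.ttl = 7200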

    Read the article

  • cURL hangs trying to upload file from stdin

    - by SidneySM
    I'm trying to PUT a file with cURL. This hangs:

        curl -vvv --digest -u user -T - https://example.com/file.txt < file

    This does not:

        curl -vvv --digest -u user -T file https://example.com/file.txt

    What's going on?

        * About to connect() to example.com port 443 (#0)
        *   Trying 0.0.0.0... connected
        * Connected to example.com (0.0.0.0) port 443 (#0)
        * SSLv3, TLS handshake, Client hello (1):
        * SSLv3, TLS handshake, Server hello (2):
        * SSLv3, TLS handshake, CERT (11):
        * SSLv3, TLS handshake, Server key exchange (12):
        * SSLv3, TLS handshake, Server finished (14):
        * SSLv3, TLS handshake, Client key exchange (16):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSL connection using DHE-RSA-AES256-SHA
        * Server certificate:
        *        subject: serialNumber=jJakwdOewDicmqzIorLkKSiwuqfnzxF/, C=US, O=*.example.com, OU=GT01234567, OU=See www.example.com/resources/cps (c)10, OU=Domain Control Validated - ExampleSSL(R), CN=*.example.com
        *        start date: 2010-01-26 07:06:33 GMT
        *        expire date: 2011-01-28 11:22:07 GMT
        *        common name: *.example.com (matched)
        *        issuer: C=US, O=Equifax, OU=Equifax Secure Certificate Authority
        * SSL certificate verify ok.
        * Server auth using Digest with user 'user'
        > PUT /file.txt HTTP/1.1
        > User-Agent: curl/7.19.4 (universal-apple-darwin10.0) libcurl/7.19.4 OpenSSL/0.9.8l zlib/1.2.3
        > Host: example.com
        > Accept: */*
        > Transfer-Encoding: chunked
        > Expect: 100-continue
        >
        < HTTP/1.1 100 Continue
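
    A hedged guess at the mechanism, based on the request headers above rather than inside knowledge: with `-T -` curl cannot know the upload size, so it switches to Transfer-Encoding: chunked, and digest auth requires the request to be sent twice (the first attempt collects the 401 challenge), which curl cannot do when the body comes from a non-rewindable stdin. Two things worth trying:

        # 1) drop the Expect header, so curl is not left waiting on a
        #    100-continue that the server may never send
        curl -vvv --digest -u user -H 'Expect:' -T - https://example.com/file.txt < file

        # 2) give curl a seekable file, so it can send a Content-Length and
        #    replay the body after the digest challenge
        curl -vvv --digest -u user -T file https://example.com/file.txt

    The second form is the one already observed to work, which is at least consistent with this theory.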

    Read the article

  • 16TB Volumes and SNMP On Windows

    - by John K
    As volumes larger than 16 TB became more common, it was recognized that the 32-bit value used to report disk size and usage in the standard HOST-RESOURCES MIB in SNMP was not large enough to report the proper disk size. Net-SNMP addressed this by simply inflating the value of AllocationUnits so that the unit count stays within 32 bits (since total disk size/usage equals the 32-bit count times the allocation unit), allowing volumes larger than 8/16 TB to be reported. Presuming you have no reporting interest in the allocation unit itself, this seems like a fine solution: https://bugzilla.redhat.com/show_bug.cgi?id=654384

    Windows' built-in SNMP service, however, still suffers from this error, reporting what is effectively the used/assigned disk space modulo the 32-bit limit, which results in inaccurate disk sizes. Is there a way to make Windows correctly report disk usage for volumes over 16 TB? We tried simply installing Net-SNMP 5.5 x64 and disabling the Windows SNMP service entirely, but unfortunately that did not fix our issue.

    I've seen people in the Cacti community mention simply scripting a workaround. We, however, use Observium for quick and basic systems monitoring. If the issue can't be corrected on the Windows side, can Observium be made to report custom MIBs?
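
    For reference, the arithmetic involved: hrStorageSize and hrStorageUsed are Integer32 counts of allocation units, so the reported size in bytes is the unit count times hrStorageAllocationUnits, and anything pushing the count past 2^31-1 wraps. A quick way to see what a host actually reports (community string and hostname are placeholders):

        # pull the unit size and the unit count, then multiply by hand
        snmpget -v2c -c public host.example.com \
            HOST-RESOURCES-MIB::hrStorageAllocationUnits.1 \
            HOST-RESOURCES-MIB::hrStorageSize.1
        # bytes = hrStorageSize * hrStorageAllocationUnits; if the true count
        # exceeds 2^31-1, the Windows agent wraps and the product comes out short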

    Read the article

  • Dedicated server with a lot of storage and good support - and cost-effective

    - by Martin Burger
    Hello, I am from Germany and looking for a dedicated server located in the US with a lot of storage: 750-1500 GB. CPU speed and the amount of memory are secondary; the server will host large amounts of media files over HTTP and FTP, and the basic task is to help people exchange media files.

    In Germany there are some good offers, like the "Root Server EQ6" at www.hetzner.de. That company provides support of high quality, and their plans are very cost-effective. The plan mentioned above costs about $90 per month and includes two 1500 GB SATA-II HDDs (software RAID 1).

    In the US I found (amongst others) Go Daddy and Rackspace. Go Daddy offers some "Storage Monster" plans that include 2 x 1,000 GB hard drives for about $180 per month, already twice as much as Hetzner above; however, I found some blog and forum posts complaining about the support Go Daddy provides. Rackspace seems to provide decent support, but they are very upscale: their dedicated servers are customizable and start at $419, about 4.5 times as much as Hetzner.

    Can anybody recommend a solution/plan comparable to Hetzner's? Or are prices for dedicated servers in the US in general much higher than in Germany? Regards, Martin

    Read the article

  • java.lang.OutOfMemoryError: unable to create new native thread

    - by Brad
    I consistently get this exception when trying to run my JUnit tests on my Mac:

        java.lang.OutOfMemoryError: unable to create new native thread
            at java.lang.Thread.start0(Native Method)
            at java.lang.Thread.start(Thread.java:658)
            at java.util.concurrent.ThreadPoolExecutor.addIfUnderMaximumPoolSize(ThreadPoolExecutor.java:727)
            at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:657)
            at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:92)
            at com.google.appengine.tools.development.ApiProxyLocalImpl$PrivilegedApiAction.run(ApiProxyLocalImpl.java:197)
            at com.google.appengine.tools.development.ApiProxyLocalImpl$PrivilegedApiAction.run(ApiProxyLocalImpl.java:184)
            at java.security.AccessController.doPrivileged(Native Method)
            at com.google.appengine.tools.development.ApiProxyLocalImpl.doAsyncCall(ApiProxyLocalImpl.java:172)
            at com.google.appengine.tools.development.ApiProxyLocalImpl.makeAsyncCall(ApiProxyLocalImpl.java:138)

    The same set of unit tests passes perfectly fine on Ubuntu and Windows. Some information about my system resources on the Mac:

        $ ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        file size               (blocks, -f) unlimited
        max locked memory       (kbytes, -l) unlimited
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 1
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 266
        virtual memory          (kbytes, -v) unlimited

        $ java -version
        java version "1.6.0_24"
        Java(TM) SE Runtime Environment (build 1.6.0_24-b07-334-10M3326)
        Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02-334, mixed mode)

    The reason I don't think this is an application issue is that the same tests pass in different environments. I have tried setting the heap to 1024m and 512m, and the stack to 64k and 128k (in each combination), with no luck. My open files limit was originally 256 and I have bumped it to 1024. I have been googling around for a bit, and all the posts say to decrease the heap size and increase the stack size, but that doesn't seem to help. Anyone have any more ideas?

    EDIT: Here is some environment information on my Ubuntu box:

        $ ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 20
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 16382
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) unlimited
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

        $ java -version
        java version "1.6.0_24"
        Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
        Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode)
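
    One line jumps out when the two ulimit listings are compared: max user processes is 266 on the Mac versus unlimited on Ubuntu, and on Mac OS X threads count against that per-user limit, so the JVM can exhaust it long before heap becomes a factor. A sketch of what I would try (the 1024 figure is an arbitrary assumption):

        # raise the per-user process/thread cap for this shell, then rerun the tests
        ulimit -u 1024

        # a smaller thread stack also stretches however many threads are permitted
        java -Xss256k ...

    If the shell refuses the new value, the hard cap itself may need raising first (on OS X that is the kern.maxprocperuid sysctl), which would be a hint in the same direction.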

    Read the article

  • DansGuardian/Squid Traffic doesn't get back to user

    - by DKNUCKLES
    I've purchased a Squid appliance that I'm attempting to implement, but the lack of documentation has left me a bit high and dry. Forgive me if this is a silly question; this is my first attempt at implementing Squid. From what I can ascertain from the documentation (or lack thereof), users connect to DansGuardian first on port 8080, where the filtering is done, at which point it forwards the traffic to Squid on port 3128, which then sends it on to the internet. The setup I have is as follows:

        Gateway (MikroTik router) : 192.168.88.1
        Squid/DansGuardian        : 192.168.88.100
        Client                    : 192.168.88.238

        Client --- Gateway --- Proxy --- Internet

    I have set up a simple NAT rule to forward all traffic from the client machine (for testing purposes) to the DansGuardian. The traffic seems to get there, although I see a lot of SYN_RECV in the output of netstat -antp on the virtual appliance:

        Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
        tcp        0      0 0.0.0.0:8080            0.0.0.0:*               LISTEN      -
        tcp        0      0 192.168.88.100:8080     192.168.88.238:55786    SYN_RECV    -
        tcp        0      0 192.168.88.100:8080     192.168.88.238:55787    SYN_RECV    -
        tcp        0      0 192.168.88.100:8080     192.168.88.238:55785    SYN_RECV    -
        tcp        0      0 192.168.88.100:8080     192.168.88.238:55788    SYN_RECV    -
        tcp        0      0 0.0.0.0:10000           0.0.0.0:*               LISTEN      -
        tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      -

    From this I gather that the traffic is NOT being routed back to the client machine. Is this a routing issue or an issue with the Squid appliance?
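
    Those SYN_RECV entries are the classic signature of asymmetric return traffic: the router rewrites the client's packets towards the proxy, but the proxy's SYN-ACKs travel back with an unexpected source address (or straight across the LAN), so the client never completes the handshake. The usual cure is to masquerade the redirected traffic so replies are forced back through the router; a sketch in RouterOS terms, with addresses taken from the question and the rule details left as assumptions to adapt:

        /ip firewall nat add chain=dstnat src-address=192.168.88.238 protocol=tcp dst-port=80 action=dst-nat to-addresses=192.168.88.100 to-ports=8080
        /ip firewall nat add chain=srcnat dst-address=192.168.88.100 protocol=tcp dst-port=8080 action=masquerade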

    Read the article

  • nginx connection pool race condition?

    - by wlf
    I have a shared hosting server with high traffic. I run a lightweight Apache with mod_proxy in front, serving static content and proxying the rest to Apache/mod_php, and from time to time it throws a "504 proxy error". The error log says:

        error reading status line from remote server 127.0.0.1:8080
        Error reading from remote server returned by /

    This is what the Apache documentation says about it:

        proxy-initial-not-pooled
        If this variable is set, no pooled connection will be reused if the client connection is an initial connection. This avoids the "proxy: error reading status line from remote server" error message caused by the race condition that the backend server closed the pooled connection after the connection check by the proxy and before data sent by the proxy reached the backend. It has to be kept in mind that setting this variable downgrades performance, especially with HTTP/1.0 clients.

    I am really concerned about this performance downgrade, so I started looking at nginx immediately. I am new to nginx and time is crucial right now; I can't afford to spend days studying it just to find out it has the same race condition. Is nginx affected by this connection-pool race condition? Thanks
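
    For what it's worth on the nginx side: unless upstream keepalive is explicitly configured, nginx opens a fresh connection to the backend for every proxied request (speaking HTTP/1.0 with Connection: close by default), so there is no pooled connection for the backend to close underneath it. A sketch of such a deliberately pool-free setup (names are placeholders):

        upstream backend {
            server 127.0.0.1:8080;
            # no "keepalive" directive here, so no connection reuse to race against
        }

        server {
            listen 80;
            location / {
                proxy_pass http://backend;
                # belt and braces: retry on a dropped backend connection
                proxy_next_upstream error timeout;
            }
        }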

    Read the article

  • Please advise on VPS config choice (with SQL Server Express)

    - by tjeuten
    Hi all, I might be interested in getting a VPS hosting plan for some small personal sites and .NET projects. I was thinking of the Softsys Bronze Plan, as my current shared hosting plan is with them too. The stuff I want to host has grown beyond the capabilities of a shared hosting plan, and I also want more control over the IIS/ASP.NET configuration; that's why I'm considering a VPS. The main config details would be:

    - Hyper-V
    - 30 GB of disk space
    - 1 GB of RAM

    More info here: http://www.softsyshosting.com/Windows-VPS-HyperV.aspx

    Does anyone have experience with this plan (or something similar from another host), and could you maybe answer these questions:

    - Bronze has a total disk space of 30 GB. Is the OS part of this quota or not? If so, how much disk space does a base configuration with Windows 2008 take up?
    - Would you advise Windows 2008 R2 or the normal edition? Or would you advise using Windows 2003 with this config?
    - I'm planning on running a SQL Server Express install too. Would 1 GB of RAM be enough for both Windows 2008 (R2) and SQL Express? The database load will not be very high (a couple of thousand records returned each day). The DB will most likely stay far away from the 4 GB limit, which is why I'd go for SQL Express instead of paying extra licensing costs for a SQL Web install. But I'm more concerned about performance.
    - Would you recommend Softsys as a VPS host? I've been with them for one year on my shared hosting plan and have no complaints so far.

    Also, as I have no VPS experience, what are the pitfalls I need to be aware of, mainly in terms of performance, but maybe in other areas too? Mathieu

    Read the article

  • Java Swing over Remote Desktop - Strange, weird GUI squashing

    - by ADTC
    I thought this question fits SuperUser more than Stack Overflow because it's not about actual Java programming, though programmers might be more likely to encounter the problem. Anyway, let me start off with some stats before I ask the actual question:

    Laptop:
    - Windows 7 x32
    - Screen resolution 1024 x 768; Nvidia GeForce Go 6200
    - Connected to desktop via ad-hoc wireless network
    - Accesses internet via desktop

    Desktop:
    - Windows 7 x64
    - Screen resolution 1920 x 1080
    - Connected to laptop via ad-hoc wireless network
    - Accesses internet via cable modem

    I'm connecting to my laptop via Remote Desktop from my desktop to take advantage of the large screen. I'm doing the programming on my laptop (for portability reasons). Everything else runs smoothly and quickly over Remote Desktop, as both computers are connected directly over the ad-hoc wireless. The only problem is this: Java Swing apps don't display their GUI properly.

    I acquired a Java Swing application and I'm debugging it in Eclipse. When I ran the app (screenshot omitted), the GUI came out squashed. Apparently there isn't anything wrong with the GUI application I'm debugging itself, because the Java Control Panel exhibits the same problem.

    I've searched high and low in Google about this; the closest I came to a solution is this. But sadly, the use of -Dsun.java2d.noddraw=true had no effect at all. This only happens over Remote Desktop; I have tried locally and the GUI apps display properly.

    This isn't a dealbreaker for me, as I can stop using Remote Desktop when developing Java Swing apps. However, I would like to know if anyone has encountered this and found a solution.

    Read the article

  • linux mint VIA sound issue

    - by user2699451
    So I installed Linux Mint 15 "Olivia" 64-bit on my Mecer W550EU laptop. I have HD Audio with a VIA chipset:

        charles-W55xEU charles # lsmod | grep snd
        snd_hda_codec_hdmi     36913  1
        snd_hda_codec_via      51018  1
        snd_hda_intel          39619  5
        snd_hda_codec         136453  3 snd_hda_codec_hdmi,snd_hda_codec_via,snd_hda_intel
        snd_hwdep              13602  1 snd_hda_codec
        snd_pcm                97451  4 snd_hda_codec_hdmi,snd_hda_codec,snd_hda_intel
        snd_page_alloc         18710  2 snd_pcm,snd_hda_intel
        snd_seq_midi           13324  0
        snd_seq_midi_event     14899  1 snd_seq_midi
        snd_rawmidi            30180  1 snd_seq_midi
        snd_seq                61554  2 snd_seq_midi_event,snd_seq_midi
        snd_seq_device         14497  3 snd_seq,snd_rawmidi,snd_seq_midi
        snd_timer              29425  2 snd_pcm,snd_seq
        snd                    68876 19 snd_hwdep,snd_timer,snd_hda_codec_hdmi,snd_hda_codec_via,snd_pcm,snd_seq,snd_rawmidi,snd_hda_codec,snd_hda_intel,snd_seq_device
        soundcore              12680  1 snd

    My sound card:

        charles-W55xEU charles # aplay -l
        **** List of PLAYBACK Hardware Devices ****
        card 0: PCH [HDA Intel PCH], device 0: VT1802 Analog [VT1802 Analog]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 0: PCH [HDA Intel PCH], device 2: VT1802 HP [VT1802 HP]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 0: PCH [HDA Intel PCH], device 3: HDMI 0 [HDMI 0]
          Subdevices: 1/1
          Subdevice #0: subdevice #0

    And my audio device:

        charles-W55xEU charles # lspci -v | grep -A7 -i "audio"
        00:1b.0 Audio device: Intel Corporation 7 Series/C210 Series Chipset Family High Definition Audio Controller (rev 04)
            Subsystem: CLEVO/KAPOK Computer Device 0550
            Flags: bus master, fast devsel, latency 0, IRQ 47
            Memory at f7c10000 (64-bit, non-prefetchable) [size=16K]
            Capabilities: [50] Power Management version 2
            Capabilities: [60] MSI: Enable+ Count=1/1 Maskable- 64bit+
            Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00
            Capabilities: [100] Virtual Channel

    Sometimes when I boot up, sound works; other times it doesn't. It is completely random. So far, no one on the XChat Linux help channel or the Linux Mint forums has been able to help me; I have always had issues with sound on VIA chipsets. I ran:

        sudo apt-get upgrade && apt-get install mint-meta-cinnamon

    It seemed to help, but after 2-3 reboots the problem came back. By the way, every time I checked, PulseAudio was set to Duplex Audio Input & Output, and the ALSA mixer was always unmuted!
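
    One low-risk thing to experiment with, offered as an assumption rather than a known fix for the VT1802: HDA model quirks are a common cause of intermittent codec behaviour, and the driver's model hint can be pinned in the modprobe configuration:

        # /etc/modprobe.d/alsa-base.conf -- ask the HDA driver to re-probe the
        # codec with automatic, BIOS-independent parsing (reboot afterwards)
        options snd-hda-intel model=auto

    If that changes nothing, running alsa-info.sh once while sound works and once while it is broken would at least show whether the codec comes up differently between boots.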

    Read the article
