Search Results

Search found 6198 results on 248 pages for 'traffic filtering'.

Page 166/248

  • Reputable web based ssh client? [closed]

    - by Doug T.
    I'm connected to a coffee shop's wireless network right now, and I suspected I'd be able to use my laptop to ssh somewhere. Unlucky me: they seem to be blocking everything but web traffic (my testing seems to show that nothing but port 80 gets through; I can't ping, ftp, etc.). I googled "web based ssh clients", but I have reservations about entering my login credentials into any Joe Schmoe's web app. I was wondering if anyone has had experience with a reputable web-based ssh client? If so, could you please point me at one that I could trust?
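
    Before trusting a third-party web client with credentials, it may be worth confirming exactly which outbound ports the hotspot allows by probing a host you control. A rough sketch in plain bash (example.org and the port list are placeholders):

        # Probe a few common outbound ports against a server you control.
        for port in 22 80 443 8080; do
            if timeout 3 bash -c "echo > /dev/tcp/example.org/$port" 2>/dev/null; then
                echo "port $port: open"
            else
                echo "port $port: blocked or filtered"
            fi
        done

    If 443 turns out to be open, running your own sshd on that port (or tunnelling SSH over HTTPS) avoids handing login credentials to anyone else's web app at all.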

    Read the article

  • Can a virtual mikrotik box bridge a hyper-v internal network with a hyper-v external network?

    - by mcfrosty
    I am trying to set up a Mikrotik router as a transparent firewall on my network. I got this working on a hardware MT box, but my boss wants the MT virtualized. I have been trying a setup where my virtual Windows box talks to the Mikrotik via a private or internal network on the Hyper-V host. I can get the two machines to talk, but as soon as I set up a bridge on the MT, all traffic ceases between the two. Is it possible to create a bridge for this purpose (having the MT sit silently in front of my firewalled server)? I could really use some help.

    Read the article

  • Replace an IP address with its whois using bash

    - by user2099762
    I have a traffic log with lines similar to "page visited" for xxx.xxx.xxx.xxx at 2013-10-30, and I would like to replace the IP address with the result of its whois lookup. I can export the IP addresses to a separate file and then do a whois on each line, but I'm struggling to combine them all together. Ideally I'd like to replace the IP address in the same string and print the new string to a new file, so it would look like "page visited" for example.com at 2013-10-30. Can anyone help? Here's what I have so far:

        grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}' clean_cites.txt > iplist.txt
        for i in `cat iplist.txt`
        do
            OUTPUT=$(geoiplookup -f /usr/share/GeoIP/GeoIPOrg.dat $i)
            echo $i,$OUTPUT >> visited.txt
        done

    Like I said, this produces a separate file with a list of IP addresses and their relevant hostnames, so I either need to search for each IP address in file A and replace it with the text in file B (which gives the IP address and hostname), or replace the IP address in place. Thanks
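
    One way to finish the job is to keep the lookup loop above but do the substitution with sed in a working copy of the log, so the final file has the organisation name in place of each IP. A rough sketch building on the commands already shown (the field stripped out of the geoiplookup output, and names containing slashes, are the fragile assumptions here):

        #!/bin/bash
        cp clean_cites.txt resolved_log.txt
        grep -oE '[0-9]{1,3}(\.[0-9]{1,3}){3}' clean_cites.txt | sort -u > iplist.txt
        while read -r ip; do
            # geoiplookup prints something like "GeoIP Organization Edition: Example ISP";
            # strip everything up to the first colon and keep the name.
            org=$(geoiplookup -f /usr/share/GeoIP/GeoIPOrg.dat "$ip" | sed 's/^[^:]*: //')
            # Replace every occurrence of this IP in the working copy.
            # (Organisation names containing "/" or "&" would need escaping.)
            sed -i "s/$ip/$org/g" resolved_log.txt
        done < iplist.txt

    Sorting the IP list with sort -u avoids repeating the same lookup, and working on a copy keeps the original log intact.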

    Read the article

  • Getting started with webserver clustering.

    - by Ernie
    I work for a small ISP, and we host about 250 domains and all the stuff that goes along with that: DNS, mail, spam filtering, and backups. Currently, we have separate DNS servers (two of them) and mail servers (outgoing mail is actually on the secondary DNS server, but was previously on its own server). In the past, this was done as an insurance measure. The last thing we need is for some doofus (usually yours truly) to hose a server, taking out DNS and mail right along with it, or for spammers to jam our incoming SMTP server, preventing outgoing mail from being sent too. In the past, this was a problem, and our servers were set up the way they are now to combat it. However, clustering solutions like Sun's Cobalt RAQ (in days of olde) and Virtualmin appear to cater to an all-in-one approach, then deal with failures through redundant servers. I have avoided this thus far, but we've been using Virtualmin on our web server for a while now, and I'd like to expand into using it for a high-availability cluster. Our networking partner has recently built a datacenter that has eliminated all of our other bugaboos like network, cooling, and power issues, so now the only thing left to go wrong is me hosing a server, which happened earlier this month. One of the bigger reasons we've avoided going this route is that our hardware requirements aren't particularly high. One server easily handles all the sites we host (most of them are flat sites). Also, load-balancing routers tend to be expensive and complicated. All I'm really expecting to do is build a two-node cluster for redundancy so that when I hose a server (however rare that might be), we're not out for 8-12 hours while I rebuild it. What I need to know is how to get started, and whether I'm really in a position to bother with this kind of thing at all.

    Read the article

  • Easy Deployment Split Tunnel VPN Connection

    - by Joey Harris
    I was wondering if anybody could offer some insight into how I can mass-deploy VPN connection settings that support split tunneling. It has to work on both Mac and Windows systems, though if a script is used it can obviously be two separate scripts for the two platforms. I will be setting up a Windows server with a file server and Exchange server, and to access the file server I will have the clients go through the VPN because we will have sensitive data. I don't want the server's network to be bogged down with the clients' normal internet traffic, so I need some way to set up split tunneling on the clients without them having to put in a few commands every time to set up the static routes. I've looked at the Cisco VPN client, but I want to try to stick with Windows RRAS and avoid buying a Cisco VPN endpoint. I'm basically looking for a good VPN client that supports split tunneling and mass deployment.
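
    For the Mac side, the built-in L2TP/PPTP clients are PPP-based, and pppd runs /etc/ppp/ip-up when the tunnel comes up, which gives a single place to add the static routes and can be pushed to every client once. A minimal sketch, assuming the file server lives on 10.0.1.0/24 (replace with the real server subnet); the positional arguments are the ones pppd passes to ip-up:

        #!/bin/sh
        # /etc/ppp/ip-up -- pppd invokes this as:
        #   ip-up <interface> <tty> <speed> <local-ip> <remote-ip> <ipparam>
        VPN_GATEWAY="$5"
        # Send only the office server subnet through the tunnel; everything else
        # keeps using the client's normal default gateway (split tunnel).
        /sbin/route add -net 10.0.1.0/24 "$VPN_GATEWAY"

    The script needs to be root-owned and executable. On the Windows side the rough equivalent is a persistent "route add -p" pushed by script or Group Policy, combined with unchecking "Use default gateway on remote network" in the VPN connection's IPv4 advanced settings so normal internet traffic stays off the tunnel.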

    Read the article

  • Apache2 Segfault - need help interpreting this coredump (suspect cause is memcache / php session related)

    - by WayneDV
    Three Apache2 web servers are running a PHP 5.2.3 web site. We're using Memcache to cache rendered pages, and also as the storage engine for PHP sessions. At peak traffic times we're getting Apache segmentation faults on all three web servers, and all HTTPD child processes segfault. My gut tells me that the increased Memcache traffic is stopping PHP sessions from being created or cleaned up, and thus the processes die. Is it possible for someone to confirm that from the following?

        #0  _zend_mm_free_int (heap=0x7fb67a075820, p=0x7fb67a011538) at /usr/src/debug/php-5.3.3/Zend/zend_alloc.c:2018
        #1  0x00007fb665d02e82 in mmc_buffer_free (request=0x7fb67a011548) at /usr/src/debug/php-pecl-memcache-3.0.4/memcache-3.0.4/memcache_pool.c:50
        #2  mmc_request_free (request=0x7fb67a011548) at /usr/src/debug/php-pecl-memcache-3.0.4/memcache-3.0.4/memcache_pool.c:169
        #3  0x00007fb665d031ea in mmc_pool_free (pool=0x7fb67a00e458) at /usr/src/debug/php-pecl-memcache-3.0.4/memcache-3.0.4/memcache_pool.c:917
        #4  0x00007fb665d0a2f1 in ps_close_memcache (mod_data=0x7fb66d625440) at /usr/src/debug/php-pecl-memcache-3.0.4/memcache-3.0.4/memcache_session.c:185
        #5  0x00007fb66d1b0935 in php_session_save_current_state () at /usr/src/debug/php-5.3.3/ext/session/session.c:625
        #6  php_session_flush () at /usr/src/debug/php-5.3.3/ext/session/session.c:1517
        #7  0x00007fb66d1b0c1b in zm_deactivate_session (type=<value optimized out>, module_number=<value optimized out>) at /usr/src/debug/php-5.3.3/ext/session/session.c:2171
        #8  0x00007fb66d2a719c in module_registry_cleanup (module=<value optimized out>) at /usr/src/debug/php-5.3.3/Zend/zend_API.c:2150
        #9  0x00007fb66d2b1994 in zend_hash_reverse_apply (ht=0x7fb66d629d60, apply_func=0x7fb66d2a7180 <module_registry_cleanup>) at /usr/src/debug/php-5.3.3/Zend/zend_hash.c:755
        #10 0x00007fb66d2a5c0d in zend_deactivate_modules () at /usr/src/debug/php-5.3.3/Zend/zend.c:866
        #11 0x00007fb66d2541b5 in php_request_shutdown (dummy=<value optimized out>) at /usr/src/debug/php-5.3.3/main/main.c:1607
        #12 0x00007fb66d32e037 in php_apache_request_dtor (r=0x7fb67a229658) at /usr/src/debug/php-5.3.3/sapi/apache2handler/sapi_apache2.c:509
        #13 php_handler (r=0x7fb67a229658) at /usr/src/debug/php-5.3.3/sapi/apache2handler/sapi_apache2.c:681
        #14 0x00007fb6784166f0 in ap_run_handler (r=0x7fb67a229658) at /usr/src/debug/httpd-2.2.15/server/config.c:158
        #15 0x00007fb678419f58 in ap_invoke_handler (r=0x7fb67a229658) at /usr/src/debug/httpd-2.2.15/server/config.c:372
        #16 0x00007fb6784254f0 in ap_process_request (r=0x7fb67a229658) at /usr/src/debug/httpd-2.2.15/modules/http/http_request.c:282
        #17 0x00007fb678422418 in ap_process_http_connection (c=0x7fb67a2193a8) at /usr/src/debug/httpd-2.2.15/modules/http/http_core.c:190
        #18 0x00007fb67841e1b8 in ap_run_process_connection (c=0x7fb67a2193a8) at /usr/src/debug/httpd-2.2.15/server/connection.c:43
        #19 0x00007fb678429f4b in child_main (child_num_arg=<value optimized out>) at /usr/src/debug/httpd-2.2.15/server/mpm/prefork/prefork.c:662
        #20 0x00007fb67842a21a in make_child (s=0x7fb679cd7860, slot=153) at /usr/src/debug/httpd-2.2.15/server/mpm/prefork/prefork.c:758
        #21 0x00007fb67842aea4 in perform_idle_server_maintenance (_pconf=<value optimized out>, plog=<value optimized out>, s=<value optimized out>) at /usr/src/debug/httpd-2.2.15/server/mpm/prefork/prefork.c:893
        #22 ap_mpm_run (_pconf=<value optimized out>, plog=<value optimized out>, s=<value optimized out>) at /usr/src/debug/httpd-2.2.15/server/mpm/prefork/prefork.c:1097
        #23 0x00007fb678402890 in main (argc=1, argv=0x7fff6fecacb8) at /usr/src/debug/httpd-2.2.15/server/main.c:740

    PHP.INI follows:

        [PHP]
        engine = On short_open_tag = On asp_tags = Off precision = 14 y2k_compliance = On output_buffering = 4096 zlib.output_compression = Off implicit_flush = Off unserialize_callback_func = serialize_precision = 100 allow_call_time_pass_reference = Off
        safe_mode = Off safe_mode_gid = Off safe_mode_include_dir = safe_mode_exec_dir = safe_mode_allowed_env_vars = PHP_ safe_mode_protected_env_vars = LD_LIBRARY_PATH disable_functions = disable_classes = expose_php = On
        max_execution_time = 30 max_input_time = 60 memory_limit = 128M error_reporting = E_ALL & ~E_DEPRECATED display_errors = Off display_startup_errors = Off log_errors = Off log_errors_max_len = 1024 ignore_repeated_errors = Off ignore_repeated_source = Off report_memleaks = On track_errors = Off html_errors = Off
        variables_order = "GPCS" request_order = "GP" register_globals = Off register_long_arrays = Off register_argc_argv = Off auto_globals_jit = On post_max_size = 8M magic_quotes_gpc = Off magic_quotes_runtime = Off magic_quotes_sybase = Off auto_prepend_file = auto_append_file = default_mimetype = "text/html"
        doc_root = user_dir = enable_dl = Off file_uploads = On upload_max_filesize = 2M allow_url_fopen = On allow_url_include = Off default_socket_timeout = 60
        [Date] [filter] [iconv] [intl] [sqlite] [sqlite3] [Pcre] [Pdo] [Phar]
        [Syslog] define_syslog_variables = Off
        [mail function] SMTP = localhost smtp_port = 25 sendmail_path = /usr/sbin/sendmail -t -i mail.add_x_header = On
        [SQL] sql.safe_mode = Off
        [ODBC] odbc.allow_persistent = On odbc.check_persistent = On odbc.max_persistent = -1 odbc.max_links = -1 odbc.defaultlrl = 4096 odbc.defaultbinmode = 1
        [MySQL] mysql.allow_persistent = On mysql.max_persistent = -1 mysql.max_links = -1 mysql.default_port = mysql.default_socket = mysql.default_host = mysql.default_user = mysql.default_password = mysql.connect_timeout = 60 mysql.trace_mode = Off
        [MySQLi] mysqli.max_links = -1 mysqli.default_port = 3306 mysqli.default_socket = mysqli.default_host = mysqli.default_user = mysqli.default_pw = mysqli.reconnect = Off
        [PostgresSQL] pgsql.allow_persistent = On pgsql.auto_reset_persistent = Off pgsql.max_persistent = -1 pgsql.max_links = -1 pgsql.ignore_notice = 0 pgsql.log_notice = 0
        [Sybase-CT] sybct.allow_persistent = On sybct.max_persistent = -1 sybct.max_links = -1 sybct.min_server_severity = 10 sybct.min_client_severity = 10
        [bcmath] bcmath.scale = 0
        [browscap]
        [Session]
        session.save_handler = files session.save_path = "/var/lib/php/session" session.use_cookies = 1 session.use_only_cookies = 1 session.name = PHPSESSID session.auto_start = 1 session.cookie_lifetime = 0 session.cookie_path = / session.cookie_domain = session.cookie_httponly =
        session.serialize_handler = php session.gc_probability = 1 session.gc_divisor = 1000 session.gc_maxlifetime = 1440 session.bug_compat_42 = Off session.bug_compat_warn = Off session.referer_check = session.entropy_length = 0 session.entropy_file = session.cache_limiter = nocache session.cache_expire = 180
        session.use_trans_sid = 0 session.hash_function = 0 session.hash_bits_per_character = 5 url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=fakeentry"
        [MSSQL] mssql.allow_persistent = On mssql.max_persistent = -1 mssql.max_links = -1 mssql.min_error_severity = 10 mssql.min_message_severity = 10 mssql.compatability_mode = Off mssql.secure_connection = Off
        [Assertion] [COM] [mbstring] [gd] [exif]
        [Tidy] tidy.clean_output = Off
        [soap] soap.wsdl_cache_enabled=1 soap.wsdl_cache_dir="/tmp" soap.wsdl_cache_ttl=86400

    /etc/php.d/memcached.ini:

        session.save_path="tcp://memcache1:11211?persistent=1&weight=1&timeout=3&retry_interval=15"
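
    One quick sanity check on the "Memcache is the session store and is falling over at peak" theory is to watch the memcached server itself while the segfaults happen; the plain-text protocol's stats command shows connections, evictions and misses. A rough sketch (the host name comes from the memcached.ini above):

        # Dump selected memcached counters every 10 seconds; watch whether
        # curr_connections, evictions or listen_disabled_num climb at peak.
        while true; do
            date
            printf 'stats\r\nquit\r\n' | nc memcache1 11211 \
                | egrep 'curr_connections|evictions|listen_disabled_num|get_misses'
            sleep 10
        done

    If connections max out or evictions spike exactly when the children die, that at least supports the theory, since the backtrace runs straight through the session-close path (frames #4-#6, ps_close_memcache and php_session_flush).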

    Read the article

  • Outlook for Mac (2011) - Exchange requirements, MAPI support

    - by benc
    I am looking at Outlook for Mac (in Office 2011). I wanted to find out which versions of Exchange are supported, and whether MAPI is supported. I googled and also searched the Microsoft site. I also checked Wikipedia, but it was probably too soon. I found an announcement about Outlook for Mac, and I found the system requirements on the official site, but they don't say anything on these topics. I saw some forum traffic saying MAPI is not supported, but there was no attribution, so I won't bother to quote it. Can anyone point me to some official documents that address these concerns?

    Read the article

  • Chrome is reporting GMail has Invalid Server Certificate, how do I find out who's fiddling with my certs?

    - by chillitom
    Chrome gives the following warning whenever I try to visit GMail or a bunch of other SSL sites:

        Invalid Server Certificate
        You attempted to reach mail.google.com, but the server presented an invalid certificate.
        You cannot proceed because the website operator has requested heightened security for this domain.

    This is the certificate that Chrome reports as invalid:

        -----BEGIN CERTIFICATE-----
        MIIDIjCCAougAwIBAgIQK59+5colpiUUIEeCdTqbuTANBgkqhkiG9w0BAQUFADBM
        MQswCQYDVQQGEwJaQTElMCMGA1UEChMcVGhhd3RlIENvbnN1bHRpbmcgKFB0eSkg
        THRkLjEWMBQGA1UEAxMNVGhhd3RlIFNHQyBDQTAeFw0xMTEwMjYwMDAwMDBaFw0x
        MzA5MzAyMzU5NTlaMGkxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlh
        MRYwFAYDVQQHFA1Nb3VudGFpbiBWaWV3MRMwEQYDVQQKFApHb29nbGUgSW5jMRgw
        FgYDVQQDFA9tYWlsLmdvb2dsZS5jb20wgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJ
        AoGBAK85FZho5JL+T0/xu/8NLrD+Jaq9aARnJ+psQ0ynbcvIj36B7ocmJRASVDOe
        qj2bj46Ss0sB4/lKKcMP/ay300yXKT9pVc9wgwSvLgRudNYPFwn+niAkJOPHaJys
        Eb2S5LIbCfICMrtVGy0WXzASI+JMSo3C2j/huL/3OrGGvvDFAgMBAAGjgecwgeQw
        DAYDVR0TAQH/BAIwADA2BgNVHR8ELzAtMCugKaAnhiVodHRwOi8vY3JsLnRoYXd0
        ZS5jb20vVGhhd3RlU0dDQ0EuY3JsMCgGA1UdJQQhMB8GCCsGAQUFBwMBBggrBgEF
        BQcDAgYJYIZIAYb4QgQBMHIGCCsGAQUFBwEBBGYwZDAiBggrBgEFBQcwAYYWaHR0
        cDovL29jc3AudGhhd3RlLmNvbTA+BggrBgEFBQcwAoYyaHR0cDovL3d3dy50aGF3
        dGUuY29tL3JlcG9zaXRvcnkvVGhhd3RlX1NHQ19DQS5jcnQwDQYJKoZIhvcNAQEF
        BQADgYEANYARzVI+hCn7wSjhIOUCj19xZVgdYnJXPOZeJWHTy60i+NiBpOf0rnzZ
        wW2qkw1iB5/yZ0eZNDNPPQJ09IHWOAgh6OKh+gVBnJzJ+fPIo+4NpddQVF4vfXm3
        fgp8tuIsqK7+lNfNFjBxBKqeecPStiSnJavwSI4vw6e7UN0Pz7A=
        -----END CERTIFICATE-----

    I think someone or something (a proxy, anti-virus product, or browser extension) is snooping on my SSL traffic. How can I determine who or what is doing this?
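
    To see who is actually terminating the TLS connection, comparing the chain this machine receives with the one Google serves from a known-clean network usually answers it. A rough sketch with openssl (cert.pem is just wherever you paste the PEM block above):

        # Show the full chain presented to this machine, including each issuer.
        openssl s_client -connect mail.google.com:443 -showcerts </dev/null

        # Decode the certificate Chrome complained about.
        openssl x509 -in cert.pem -noout -subject -issuer -dates -fingerprint

    If the issuer turns out to be a corporate proxy, an anti-virus product, or something unrecognisable rather than a public CA, that is the party re-signing the traffic; if the chain looks identical from another network, the problem is more likely a stale or broken local root-certificate store than interception.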

    Read the article

  • Static Routes and the Routing Table

    - by TheD
    This is very much a learning question, if someone would be happy to explain a couple of concepts. My question is: what does each of the routes in the default routing table (in my case, on a default Windows 7 install) do? Here is a screenshot: The 10.128.4.0 entry is just a route I've added while messing around. I understand from a question I posted on Super User that the first route is just a default route that sends all traffic for any IP to my default gateway on the interface in use. But what about the others? And how would the routing table handle a machine with multiple NICs, perhaps connected to two different networks, or maybe even two NICs on the same network so a VM can have a physical network card instead of each VM sharing the host's? Thanks!
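
    For reference, the table can be inspected and changed from an elevated command prompt; a short sketch of the commands involved (the addresses are examples):

        rem Show the full routing table, including the interface list at the top.
        route print

        rem Add a route for a specific subnet via a specific gateway; -p makes it persistent.
        route -p add 10.128.4.0 mask 255.255.255.0 192.168.1.1 metric 10

        rem Remove it again.
        route delete 10.128.4.0

    Beyond the 0.0.0.0/0.0.0.0 default route already described, the remaining rows are added automatically per interface: the 127.0.0.0/8 loopback entries, the on-link route for the local subnet, a host route for the machine's own address, the subnet broadcast address, the 224.0.0.0/4 multicast range, and 255.255.255.255. A second NIC simply adds its own copies of those per-interface rows with its own interface index and metric, and the lowest-metric matching route wins.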

    Read the article

  • Social-Networking Startup, Hosting Plan

    - by pws5068
    I've created a social networking community which is soon ready to release, and I'm trying to decide on a type of hosting plan. I have considered options such as VPS and reseller plans. I anticipate (or at least hope for) a significant amount of traffic/bandwidth in the not-too-distant future. If I go with a reseller plan, will I see the same server lag during busy hours that I do with a shared account? How significant is the profit margin with the reseller option? Aside from generalized "configurability", what advantages merit purchasing a VPS? Is there anything stopping me from reselling space on a VPS account? Features I need include: PHP, MySQL, unlimited domains, Ruby on Rails, remote database connections.

    Read the article

  • Listing group members using ldapsearch

    - by colemanm
    Our corporate LDAP directory is housed on a Snow Leopard Server Open Directory setup. I'm trying to use the ldapsearch tool to export an .ldif file to import into another, external LDAP server to authenticate with externally; basically I'm trying to be able to use the same credentials internally and externally. I've got ldapsearch working, giving me the contents and attributes of everything in the "Users" OU, and even filtering down to only the attributes I need:

        ldapsearch -xLLL -H ldap://server.domain.net \
            -b "cn=users,dc=server,dc=domain,dc=net" objectClass \
            uid uidNumber cn userPassword > directorycontents.ldif

    That gives me a list of users and properties that I can import to my remote OpenLDAP server:

        dn: uid=username1,cn=users,dc=server,dc=domain,dc=net
        objectClass: inetOrgPerson
        objectClass: posixAccount
        objectClass: organizationalPerson
        uidNumber: 1000
        uid: username1
        userPassword:: (hashedpassword)
        cn: username1

    However, when I try the same query on an OD "group" instead of a "container," the results are something like this:

        dn: cn=groupname,cn=groups,dc=server,dc=domain,dc=net
        objectClass: posixGroup
        objectClass: apple-group
        objectClass: extensibleObject
        objectClass: top
        gidNumber: 1032
        cn: groupname
        memberUid: username1
        memberUid: username2
        memberUid: username3

    What I really want is the list of users from the top example, filtered based on their group memberships, but it looks like membership is set from the group side rather than the user account side. There must be a way to filter this down and only export what I need, right?
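
    Since Open Directory records membership as memberUid attributes on the group, one approach is to read those values first and then build an OR filter so the user export only returns the members. A rough sketch using the same server and base DNs as above (the group name is the example one):

        #!/bin/bash
        GROUP="groupname"
        BASE="dc=server,dc=domain,dc=net"
        URI="ldap://server.domain.net"

        # Collect the memberUid values of the group.
        members=$(ldapsearch -xLLL -H "$URI" -b "cn=groups,$BASE" "cn=$GROUP" memberUid \
                  | awk '/^memberUid:/ {print $2}')

        # Build a filter like (|(uid=username1)(uid=username2)...).
        filter="(|"
        for uid in $members; do filter="$filter(uid=$uid)"; done
        filter="$filter)"

        ldapsearch -xLLL -H "$URI" -b "cn=users,$BASE" "$filter" \
            objectClass uid uidNumber cn userPassword > directorycontents.ldif

    This keeps the export down to exactly the accounts in that group; a very large group might need the filter split into batches to stay under the server's filter-length limits.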

    Read the article

  • Create Google Maps screenshots at regular intervals

    - by Dave Jarvis
    Background: People are concerned that building a pipeline to the west coast of Canada will increase the number of oil tankers, thus increasing the probability of a major oil spill and thereby creating an environmental catastrophe. The AIS Live Ships Map website shows real-time Marine Traffic updates using a Google Maps interface. While it is possible to obtain data from an AIS data feed, the feeds are often either pay-for-use or otherwise encumbered with license restrictions. Problem: The AIS Live Ships website presents a map in the browser. The map above has had its location interactively changed to focus on the area in question: the northern strait of Vancouver Island. Question: How would you create a service that captures the map every 30 minutes and that could run, with neither user intervention nor a significant memory footprint, for a few years? Idea #1: Create a virtual machine, install and run a lightweight browser, and use Shutter to take captures at regular intervals. Idea #2: Use Python's Ghost Webkit to automate the captures. Thank you!
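
    One low-footprint variant of Idea #1 is to skip the full browser and let a headless WebKit capture tool do the work from cron, so nothing stays resident between captures. A rough sketch assuming CutyCapt and xvfb-run are available (whether CutyCapt renders this particular JavaScript-heavy map faithfully would need a test run; the URL and paths are placeholders):

        #!/bin/sh
        # capture-map.sh -- grab one timestamped screenshot of the live map.
        URL="http://www.example.com/live-ships-map"      # stand-in for the AIS map URL
        OUT="/var/captures/map-$(date +%Y%m%d-%H%M).png"
        xvfb-run -a --server-args="-screen 0 1280x1024x24" \
            cutycapt --url="$URL" --out="$OUT" --delay=20000   # wait 20 s for the map tiles

    A crontab entry of "*/30 * * * * /usr/local/bin/capture-map.sh" runs it every half hour, and because each capture is a fresh, short-lived process there is nothing to leak memory over a multi-year run; the main ongoing cost is disk space for the images.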

    Read the article

  • Multiple WAN interfaces in same subnet on Sonicwall NSA220?

    - by Ttamsen
    Hi, all. I see a bunch of related questions, so I'm hesitant to ask, but: I have a situation where a Sonicwall NSA220 serves as firewall/router for two internal subnets and two external WAN connections. In some locations this is two separate ISPs; in others, it's the same ISP but with multiple circuits. The problem is that one ISP has been unable to provide unique subnets for each WAN interface. Is there any possibility that I might be able to bond the two WAN interfaces into a single virtual interface, and then use source routing to get the internal subnets communicating out the appropriate physical interface? Or even just use traffic shaping to give each internal network an appropriate share of the bandwidth? I haven't found anything in the docs, but it seemed like it might be worth asking. Thanks for any help! -Steve.

    Read the article

  • mod_jk fails to detect error state because JBoss gives 404, not 500

    - by Ilya Sher
    Configuration: Apache + mod_jk, with several workers on other machines (load balancing). When JBoss fails to deploy an application, for example because of a failed connection to the database, requests to /myapp/somepage return 404. How do I configure JBoss to return 500 for everything under /myapp when the application failed to deploy? Additional info: since 404 is not an "error state" code, mod_jk does not mark the worker as failed and continues to route traffic there. And since there might be valid requests to this application that also generate 404, I cannot simply configure mod_jk to treat 404 as an "error state".
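
    Until JBoss itself can be made to answer 500, an external health check against a URL that must exist whenever /myapp deployed successfully at least tells a watchdog (or you) when a worker is serving the broken state. A minimal sketch with curl; the backend list and healthcheck.jsp are placeholders:

        #!/bin/sh
        # Check each backend directly, bypassing Apache, for a page that only
        # exists when the application deployed successfully.
        for host in backend1:8080 backend2:8080 backend3:8080; do
            code=$(curl -s -o /dev/null -w '%{http_code}' "http://$host/myapp/healthcheck.jsp")
            if [ "$code" != "200" ]; then
                echo "$host returned $code - application probably failed to deploy" >&2
            fi
        done

    From there the script could disable the worker via mod_jk's status worker or simply alert; the point is that a 404 on a URL that must exist is a reliable failure signal, even though 404 in general is not.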

    Read the article

  • Why do password entries over ssh take so long?

    - by Dean
    When I'm ssh'd into my server, any time I enter my password there's a 40-second delay before the server responds. This occurs when logging in, as well as whenever I run a command via sudo. The delay does not happen when I run su and enter my password, however. Using the -v flag for ssh doesn't show anything during this time. Looking at Wireshark, all traffic between the two machines stops while this is happening. Any idea what's happening, or advice on how to investigate this? The server is running Debian squeeze (6.0.4).
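
    A long, fixed stall on password prompts (but not on su, which never touches the network) very often turns out to be a reverse-DNS lookup timing out in sshd or PAM, so that is worth ruling out first. A few checks to run on the server; UseDNS is a standard sshd_config option, and the client address below is a placeholder:

        # Does resolving the connecting client's address hang? (use your client's real IP)
        time getent hosts 203.0.113.45

        # Is the resolver pointing at something unreachable?
        cat /etc/resolv.conf

        # Watch sshd handle a single connection verbosely on a spare port.
        /usr/sbin/sshd -d -p 2222

    If the getent call takes roughly the same 40 seconds, fixing the nameservers (or setting "UseDNS no" in /etc/ssh/sshd_config and restarting sshd) normally removes the delay; sudo can hit the same wall through PAM modules that resolve hostnames and through lookup of the machine's own hostname if it is missing from /etc/hosts.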

    Read the article

  • Can iptables allow Squid to process a request, then redirect the response packets to another port?

    - by Dan H
    I'm trying to test a fancy traffic analyzer app, which I have running on port 8890. My current plan is to let any HTTP request come into Squid on port 3128, let it process the request, and then, just before it sends the response back, use iptables to redirect the response packets (leaving port 3128) to port 8890. I've researched this all night and tried many iptables commands, but I'm missing something and my hair is falling out. I thought something like this would work:

        iptables -t nat -A OUTPUT -p tcp --sport 3128 -j REDIRECT --to-ports 8890

    This rule gets created OK, but it never redirects anything. Is this even possible? If so, what iptables incantation could do it? If not, any idea what might work on a single host, given multiple remote browser clients?
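
    For comparison, REDIRECT is normally used to rewrite the destination of traffic arriving at the box (the classic transparent-proxy setup), not to retarget locally generated replies by source port. Below is a hedged sketch of that conventional form, plus a purely passive alternative if the analyzer can work from a capture stream (analyzer-app and its --stdin flag are stand-ins for however the app actually ingests data):

        # Conventional use: divert incoming HTTP to a local proxy port (destination-based, PREROUTING).
        iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 3128

        # Passive alternative: feed the analyzer a live copy of Squid's response traffic
        # instead of re-routing the packets themselves.
        tcpdump -i any -s 0 -w - 'tcp src port 3128' | analyzer-app --stdin

    Whether a capture stream is good enough depends on the analyzer, but it avoids fighting the NAT table over packets that already belong to an established connection back to the original client.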

    Read the article

  • Windows Server - Dual NIC Bandwidth Pooling

    - by tsilb
    I have a Windows Server 2008 machine with dual NICs. Both are plugged into the same switch in a typical one-switch, one-gateway home network. This server is used almost exclusively for inbound connections. It hosts a web server (IIS 6), SQL server, and file server (via LAN UNC paths and mapped drives). How do I make best use of inbound bandwidth across both NICs? For example, if I connect to it by hostname and one of the interfaces has high traffic, I'd like the new connection to use the other interface.

    Read the article

  • How low-power can a home server get?

    - by Halik
    I've got quite a simple question, actually: how green, low-power, and efficient an x86 home server can I build using consumer parts on a rather constrained budget? After looking through some Google hits, I've found that a system based on a dual-core Atom and a modest mITX board (gigabit LAN, integrated audio and graphics, etc.), with one RAM module and one 'green' WD HDD, powered by a picoITX PSU, uses about 30 W at idle and up to 40 W at load. Can you get lower than that (and how much lower)? Maybe with some VIA Nano chips, or a single-core Atom? My home server would take care of some backups mixed with a little ftp/http traffic.

    Read the article

  • Virtualbox PXE Boot Failing with a Windows Server 2008 R2 Server

    - by Vbitz
    Some fast help on this would be good; I have been on this problem for 14 hours. In a VirtualBox test environment I have two virtual machines networked together using an internal network (no traffic runs through the host; it is all handled at a software level). One is a fresh client with 512 MB of RAM and a dual-core setup; the other is the server, with 1.5 GB of RAM, running Server 2008 R2. The server is configured as a DNS server, DHCP server, and domain controller, and also serves PXE booting through WDS (Windows Deployment Services). Both machines can see each other and I am able to start a network boot. The issue comes at the second-to-last stage before the Windows PE install: the TFTP download of boot.sdi starts but then stops during the boot process.

    Read the article

  • Apache, mod_proxy_ajp and IE

    - by eduard-schnittlauch
    Hi! I have an Apache 2.2 using mod_proxy_ajp as a reverse proxy for a Tomcat 6, running on RHEL5. The Tomcat instance runs an application that does NTLM authentication. Using Firefox, everything works OK, but IE7 says "cannot display the web page". Without Apache in front, IE7 works fine. What is going on here? Unfortunately, I have very limited access rights and can't capture TCP traffic or anything like that. Thanks!

    Read the article

  • Will more memory help my CPU-peaking SQL Server 2008 R2

    - by Tor Haugen
    I'm supporting a system running against SQL Server 2008 R2. The server is a single-CPU box with 8 GB of memory. As traffic has increased, the server has started to saturate, peaking at 100% CPU ever more often. Disk I/O remains moderate (somewhat surprisingly). Obviously, a new server would be the best option, but failing that, can I expect a noticeable improvement from installing more RAM? Or does RAM only help with I/O issues (through caching)?

    Read the article

  • Co-location vs. cloud hosting

    - by RainyRat
    We have a decent-sized server estate: several ProLiants, plus an IBM BladeCenter and SANs running a VMware environment. Due to an imminent premises move, we're not going to have enough space for a server room, so I've been looking at moving everything out to a colo. My boss is more keen on the idea of cloud-hosting everything, which would include a couple of high-traffic websites (about 9m pageviews/month), our Exchange server (about a million clean emails sent/received monthly), as well as file/print/AD/all the usual stuff. This doesn't sound like a good idea to me, but I'm new to the ways of the cloud. Can anyone offer any advice?

    Read the article

  • Some URLs fail to load on Windows web portal

    - by jpolache
    I'm working in a large data center and have been assigned to troubleshoot an issue with a Windows (IIS) web server that acts as a portal for a customer of the data center. This portal server is on a DMZ at the local data center. I don't have access to the portal desktop and am relying on an off-site administrator to work with me to do testing and report the condition of the portal. He tells me there are no software firewalls or other filtering configured. While most of the remote web pages work fine, several of the URLs the portal is supposed to serve up fail to load. I had Wireshark installed on the portal system and had a capture taken of one of the failures. I used IE to access one of the remote web servers at issue. I could see the TCP SYN-ACK coming back from the remote server, but after several HTTP GETs failed to get a response, the portal server sent a reset. The webmaster of the remote web server assures me that no sites are being blocked. I had a capture taken outside the local firewall, so there should be no issue there. Another tech set up a laptop and used the IP address of the portal (we took the portal offline for the test); the laptop loads the URL as expected. I also tried Firefox, to make sure the HTTP GET was not malformed: same failure as with IE. So it seems it is not the remote web server or the network, because there was no problem with the laptop. At this point, I'm not sure what other questions to ask or tests to run.

    Read the article

  • Not Able To Connect to Shared Resource

    - by bobber205
    We are using an older version of BartPE and are not able to connect to shared folders on our subnet. It says the network name could not be found. Connecting to the shared folder on the machine that is hosting it works fine. Any ideas on what might cause this? Thanks! Edit1: Got wireshark running and monitored traffic from the offending machine and tried to map. ZERO packets from the other machine were seen. :(

    Read the article
