Search Results

Search found 21640 results on 866 pages for 'local storage'.

Page 47/866 | < Previous Page | 43 44 45 46 47 48 49 50 51 52 53 54  | Next Page >

  • Understanding Linux SCSI queue depths

    - by Troels Arvin
    I'm experimenting with the effects of different SCSI queue depth values on a Dell server running CentOS Linux 5.4 (x86_64). The server has two QLogic QLE2560 FC HBAs connected via multipathing to a storage system. The storage system has allocated two LUNs to the server, each connected through four paths in an active-active-active-active round-robin configuration. All in all, the two LUNs exist as eight /dev/sdX devices, represented by two devices in /dev/mpath. I currently adjust the queue depth values in /etc/modprobe.conf and check the result (after rebooting) by looking in the seventh column of /proc/scsi/sg/devices. Two questions related to that: Is there a way to adjust queue depths without rebooting or unloading the qla2xxx kernel module? E.g., can I echo a new queue depth value into some /proc or /sys-like file to update the queue depth? If I set the queue depth to 128, is that 128 in total for all devices handled by the qla2xxx module, or 128 for each HBA (256 in total), or 128 for each of the eight /dev/sdX devices (1024 in total), or 128 for each of the two /dev/mpath/... devices (256 in total)? This is important for me to know so that my server doesn't flood the storage system, affecting other servers connected to it.
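
    A minimal sketch of the runtime approach, assuming a 2.6-era kernel where the HBA driver supports runtime queue-depth changes (the paths and the example device address 1:0:0:3 are illustrative, not taken from this system):

        # List the current per-device queue depth for every SCSI device
        for f in /sys/bus/scsi/devices/*/queue_depth; do
            echo "$f = $(cat "$f")"
        done
        # Change one device (host:channel:target:lun) on the fly, no reboot or
        # module reload, provided the driver implements change_queue_depth
        echo 128 > /sys/bus/scsi/devices/1:0:0:3/queue_depth

    For the /etc/modprobe.conf route, the qla2xxx module also takes a maximum-depth module parameter (commonly ql2xmaxqdepth; confirm with modinfo qla2xxx).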

    Read the article

  • Probability of Blade Chassis Failure

    - by ChrisZZ
    In my organisation we are thinking about buying blade servers - instead of rack servers. Of course technology vendors also make them sound very nice. A concern that I read very often in different forums is that there is a theoretical possibility of the server chassis going down, which would in consequence take all the blades down. That is due to shared infrastructure. My reaction to this risk would be to have redundancy and buy two chassis instead of one (very costly, of course). Some people (including HP vendors) try to convince us that the chassis is very unlikely to fail, due to its many redundancies (redundant power supplies, etc.). Another concern on my side is that if something goes down, spare parts might be required, which is difficult in our location (Ethiopia). So I would ask experienced administrators who have managed blade servers: What is your experience? Do they go down as a whole, and which parts of the shared infrastructure are most likely to fail? That question could be extended to shared storage. Again I would say that we need two storage units instead of only one, and again the vendors say that these things are so rock solid that no failure is expected. Well, I can hardly believe that such critical infrastructure can be very reliable without redundancy, but maybe you can tell me whether you have successful blade-based projects that work without redundancy in their core parts (chassis, storage...). At the moment we are looking at HP, as IBM looks much too expensive. Thanks a lot, best regards, Christian

    Read the article

  • Linux - real-world hardware RAID controller tuning (scsi and cciss)

    - by ewwhite
    Most of the Linux systems I manage feature hardware RAID controllers (mostly HP Smart Array). They're all running RHEL or CentOS. I'm looking for real-world tunables to help optimize performance for setups that incorporate hardware RAID controllers with SAS disks (Smart Array, Perc, LSI, etc.) and battery-backed or flash-backed cache. Assume RAID 1+0 and multiple spindles (4+ disks). I spend a considerable amount of time tuning Linux network settings for low-latency and financial trading applications. But many of those options are well-documented (changing send/receive buffers, modifying TCP window settings, etc.). What are engineers doing on the storage side? Historically, I've made changes to the I/O scheduling elevator, recently opting for the deadline and noop schedulers to improve performance within my applications. As RHEL versions have progressed, I've also noticed that the compiled-in defaults for SCSI and CCISS block devices have changed as well. This has had an impact on the recommended storage subsystem settings over time. However, it's been awhile since I've seen any clear recommendations. And I know that the OS defaults aren't optimal. For example, it seems that the default read-ahead buffer of 128kb is extremely small for a deployment on server-class hardware. The following articles explore the performance impact of changing read-ahead cache and nr_requests values on the block queues. http://zackreed.me/articles/54-hp-smart-array-p410-controller-tuning http://www.overclock.net/t/515068/tuning-a-hp-smart-array-p400-with-linux-why-tuning-really-matters http://yoshinorimatsunobu.blogspot.com/2009/04/linux-io-scheduler-queue-size-and.html For example, these are suggested changes for an HP Smart Array RAID controller: echo "noop" > /sys/block/cciss\!c0d0/queue/scheduler blockdev --setra 65536 /dev/cciss/c0d0 echo 512 > /sys/block/cciss\!c0d0/queue/nr_requests echo 2048 > /sys/block/cciss\!c0d0/queue/read_ahead_kb What else can be reliably tuned to improve storage performance? I'm specifically looking for sysctl and sysfs options in production scenarios.
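
    Beyond the scheduler, read-ahead and nr_requests changes quoted above, a few other block-layer and VM writeback knobs are commonly adjusted. The following is only a sketch with placeholder values (the cciss!c0d0 device name is reused from the question) to show which files are involved, not a recommendation for any particular array - measure before and after:

        # VM writeback behaviour (sysctl; values are examples only)
        sysctl -w vm.dirty_background_ratio=5
        sysctl -w vm.dirty_ratio=20
        sysctl -w vm.swappiness=10
        # Per-device I/O size cap (must not exceed max_hw_sectors_kb)
        echo 1024 > /sys/block/cciss\!c0d0/queue/max_sectors_kb
        # Deadline scheduler tunables, if deadline is the active elevator
        echo 100 > /sys/block/cciss\!c0d0/queue/iosched/read_expire
        echo 4   > /sys/block/cciss\!c0d0/queue/iosched/writes_starved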

    Read the article

  • Windows 2003 Storage Server Hanging on Large File Transfers

    - by user25272
    In one of our offices we have a Dell PowerVault 745N NAS device which acts as the main file server. It's running 32-bit Windows 2003 Storage Server SP2 with 3GB RAM. The server holds around 60 users' HOME folders, which are mapped via AD. The office clients are a mix of XP SP3, Vista and Windows 7. Occasionally the server will completely hang when transferring large files. When the hang happens the console becomes unresponsive, with only the mouse active and blank wallpaper. Sometimes stopping the copy frees the server, sometimes not. The hanging can last around 20 minutes. During this time other servers also become unresponsive with blank wallpaper at the console. If you do manage to get onto another server, the taskbar and run commands are unresponsive. This also extends to the client computers, sometimes with Explorer crashing. I'm guessing this is due to the HOME folder mapping. Eventually the NAS server will free up and everything will be back to normal. The server is configured as follows: PERC 4/DC DATA 2 - 12 SCSI HDD - RAID5 SHADOWCOPY 2 SCSI HDD - RAID1 CERC SATA DATA 11 4 SATA HDD - RAID5 OS 4 SATA HDD - RAID5 All the drivers and firmware are up to date. I've been through all the diagnostics with Dell and the hardware has come up clean, including full HDD tests on the arrays. The server has NOD32 installed as the AV, but the hanging still happens when it is uninstalled. There are no errors in the event log when this happens and we don't have any errors logged on any of our ProCurve switches. DNS is fine on the domain and AD, from what I can tell, is running happily. There are no DFS or NFS shares set up either. All the shares are standard Windows. I've unchecked the "Allow the computer to turn off this device to save power" box under Power Management on the NIC, set Link Speed and Duplex to "Auto-negotiate 1000", and increased the Receive Descriptors buffer from 256 to 352 (reserves more CPU resources for handling data). I've run network traces using Network Monitor and have found the following: 417 8.078125 {SMB:192, NbtSS:25, TCP:24, IPv4:23} 192.168.2.244 192.168.5.35 SMB SMB:R; Nt Create Andx - NT Status: System - Error, Code = (52) STATUS_OBJECT_NAME_NOT_FOUND I've tried different cabling, NICs and switch ports, all with the same result. Transferring files from other servers on the domain is fine. All I haven't done is run CHKDSK on the drives to look for any file system errors. On the Vista clients I have also run netsh interface tcp set global autotuning=disabled with no result. Could it be that the server has a faulty drive, or that the I/O is too much for it to handle? Any ideas why the hang would cause issues with the other servers on the LAN? Many thanks.

    Read the article

  • Ubuntu getting wrong hostname from DHCP

    - by sam
    When provisioning new Ubuntu Precise (12.04) servers, the hostname they're getting seems to be generated from the DNS search path, not a reverse lookup on the hostname. Take the following configuration BIND is configured with the hostname, and reverse name Normal zone $TTL 600 $ORIGIN srv.local.net. @ IN SOA ns0.local.net. hostmaster.local.net. ( 2014082101 10800 3600 604800 600 ) @ IN NS ns0.local.net. @ IN MX 5 mail.local.net. my-new-server IN A 10.32.2.30 And reverse @ IN SOA ns0.local.net. hostmaster.local.net. ( 2014082101 10800 3600 604800 600 ) @ IN NS ns0.local.net. $ORIGIN 32.10.in-addr.arpa. 30.2 IN PTR my-new-server.srv.local.net. Then DHCPD is configured to hand out static leases based on mac addresses like so subnet 10.32.2.0 netmask 255.255.254.0 { option subnet-mask 255.255.254.0; option routers 10.32.2.1; option domain-name-servers 10.32.2.1; option domain-name "util.of1.local.net of1.local.net srv.local.net"; site-option-space "pxelinux"; option pxelinux.magic f1:00:74:7e; if exists dhcp-parameter-request-list { option dhcp-parameter-request-list = concat(option dhcp-parameter-request-list,d0,d1,d2,d3); } group { option pxelinux.configfile "pxelinux.cfg/pxeboot"; host my-new-server { fixed-address my-new-server.srv.local.net; hardware ethernet aa:aa:aa:bb:bb:bb; } } } So the hostname should be my-new-server.srv.local.net, however when building a Ubuntu 12.04 node, the hostname ends up as my-new-server.util.of1.local.net When building Lucid (10.04) hosts, the hostname will be correct, it's only on Precise/12.04 nodes we have the problem. Doing a normal and reverse lookup on the host and IP returns the correct result Sams-MacBook-Pro:~ sam$ host my-new-server my-new-server.srv.local.net has address 10.32.2.30 Sams-MacBook-Pro:~ sam$ host my-new-server.srv.local.net my-new-server.srv.local.net has address 10.32.2.30 Sams-MacBook-Pro:~ sam$ host 10.32.2.30 30.2.32.10.in-addr.arpa domain name pointer my-new-server.srv.local.net. The contents of the hosts file is incorrect too 127.0.0.1 localhost 127.0.1.1 my-new-server.util.of1.local.net of1.local.net srv.local.net my-new-server So it looks like when it creates the hosts file, it puts the entire contents of the DNS search path into the local address so the FQDN according to the server is the short hostname as defined, then the first domain in the search path. Is there a way to get around this behaviour, or fix this so it gets the hostname correctly? It's picking up the first part of the hostname, then the rest is wrong.
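
    One thing worth testing (a sketch, not a verified fix): newer dhclient/resolvconf combinations treat option domain-name as the host's own domain rather than as a search list, so the space-separated value above ends up in the 127.0.1.1 line. Keeping domain-name to a single domain and handing out the search list separately via domain-search may give Precise the right FQDN - the option names below are standard ISC dhcpd/dhclient directives, but the exact values are assumptions:

        # dhcpd.conf - single domain-name, search list via domain-search (RFC 3397)
        option domain-name "srv.local.net";
        option domain-search "util.of1.local.net", "of1.local.net", "srv.local.net";

        # or, client side, in /etc/dhcp/dhclient.conf on the Precise hosts:
        supersede domain-name "srv.local.net";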

    Read the article

  • Cannot execute "LOAD DATA LOCAL INFILE" Mysql query in Rails after a connection reconnection

    - by Ngan
    On Rails 2.3.8 (but I think Rails 3 might have this issue as well, not sure): I get an error when trying to execute a LOAD DATA LOCAL INFILE query after reconnecting to a database. I have a process that parses a file that can potentially take a bit of time. During the parsing, Mysql closes the connection due to timeout. This is fine, I do a ActiveRecord::Base.verify_active_connections! and I get the connection back (I do this in several places through my app). However, running a LOAD DATA LOCAL INFILE statement, I get this error: Mysql::Error: The used command is not allowed with this MySQL version It's not a permission issue, I know that for sure. Check out my test in console: ActiveRecord::Base.connection.execute("LOAD DATA LOCAL INFILE '/tmp/test.infile' INTO TABLE users") [Sat Jan 08 00:09:29 2011] (9990) SQL (1.7ms) LOAD DATA LOCAL INFILE '/tmp/test.infile' INTO TABLE users => nil > ActiveRecord::Base.connection.disconnect! => #<Mysql:0x104c6f890> > ActiveRecord::Base.verify_active_connections! [Sat Jan 08 00:09:58 2011] (9990) SQL (0.2ms) SET SQL_AUTO_IS_NULL=0 => {...connection stuff...} > ActiveRecord::Base.connection.execute("LOAD DATA LOCAL INFILE '/tmp/test.infile' INTO TABLE users") [Sat Jan 08 00:10:00 2011] (9990) SQL (0.0ms) Mysql::Error: The used command is not allowed with this MySQL version: LOAD DATA LOCAL INFILE '/tmp/test.infile' INTO TABLE users ActiveRecord::StatementInvalid: Mysql::Error: The used command is not allowed with this MySQL version: LOAD DATA LOCAL INFILE '/tmp/test.infile' INTO TABLE users from ~/gems/activerecord-2.3.8/lib/active_record/connection_adapters/abstract_adapter.rb:221:in `log' from ~/gems/activerecord-2.3.8/lib/active_record/connection_adapters/mysql_adapter.rb:323:in `execute' from (irb):6 I am able to do other queries like SELECT and whatnot, and I will get the correct result. It's just this one that giving me the error. I even tested this with a fresh rails app. You'll notice that I am able to do the exact same query before the disconnect. Thanks for the help!
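
    For what it's worth, LOCAL has to be permitted on both the server and the client connection; a quick shell-level sanity check (a diagnostic sketch only - the symptom above, where the statement works before the disconnect, points at the reconnect dropping the client's LOCAL capability rather than at a server setting; the database and user names are placeholders):

        # Server side: is LOCAL INFILE allowed at all?
        mysql -u root -p -e "SHOW VARIABLES LIKE 'local_infile'"
        # Client side: the capability must be requested at connect time
        mysql --local-infile=1 -u app_user -p app_db \
              -e "LOAD DATA LOCAL INFILE '/tmp/test.infile' INTO TABLE users"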

    Read the article

  • IIS7 web farm - local or shared content?

    - by rbeier
    We're setting up an IIS7 web farm with two servers. Should each server have its own local copy of the content, or should they pull content directly from a UNC share? What are the pros and cons of each approach? We currently have a single live server WEB1, with content stored locally on a separate partition. A job periodically syncs WEB1 to a standby server WEB2, using robocopy for content and msdeploy for config. If WEB1 goes down, Nagios notifies us, and we manually run a script to move the IP addresses to WEB2's network interface. Both servers are actually VMs running on separate VMWare ESX 4 hosts. The servers are domain-joined. We have around 50-60 live sites on WEB1 - mostly ASP.NET, with a few that are just static HTML. Most are low-traffic "microsites". A few have moderate traffic, but none are massive. We'd like to change this so both WEB1 and WEB2 are actively serving content. This is mainly for reliability - if WEB1 goes down, we don't want to have to manually intervene to fail things over. Spreading the load is also nice, but the load is not high enough right now for us to need this. We're planning to configure our firewall to balance traffic across the two servers. It will detect when a server goes down and will send all the traffic to the remaining live server. We're planning to use sticky sessions for now... eventually we may move to SQL Server session state and stateless load balancing. But we need a way for the servers to share content. We were originally planning to move all the content to a UNC share. Our storage provider says they can set up a highly available SMB share for us. So if we go the UNC route, the storage shouldn't be a single point of failure. But we're wondering about the downsides to this approach: We'll need to change the physical paths for each site and virtual directory. There are also some projects that have absolute paths in their web.config files - we'll have to update those as well. We'll need to create a domain user for the web servers to access the share, and grant that user appropriate permissions. I haven't looked into this yet - I'm not sure if the application pool identity needs to be changed to this user, or if there's another way to tell IIS to use this account when connecting to the share. Sites will no longer be able to access their content if there's ever an Active Directory problem. In general, it just seems a lot more complicated, with more moving parts that could break. Our storage provider would create a volume for us on their redundant SAN. If I understand correctly, this SAN volume would be mounted on a VM running in their redundant VMWare environment; this VM would then expose the SMB share to our web servers. On the other hand, a benefit of the shared content approach is that we'd only need to deploy code to one place, and there would never be a temporary inconsistency between multiple copies of the content. This thread is pretty interesting, though some of these people are working at a much larger scale. I've just been discussing content so far, but we also need to think about configuration. I don't know if we can just use DFS replication for the applicationHost.config and other files, or if it's best to use the shared configuration feature with the config on a UNC share. What do you think? Thanks for your help, Richard

    Read the article

  • What's the difference between local and remote addresses in the 2008 firewall address scope?

    - by Ian
    In the firewall advanced security manager/Inbound rules/rule property/scope tab you have two sections to specify local ip addresses and remote ip addresses. What makes an address qualify as a local or remote address and what difference does it make? This question is pretty obvious with a normal setup, but now that I'm setting up a remote virtualized server I'm not quite sure. What I've got is a physical host with two interfaces. The physical host uses interface 1 with a public IP. The virtualized machine is connected interface 2 with a public ip. I have a virtual subnet between the two - 192.168.123.0 When editing the firewall rule, if I place 192.168.123.0/24 in the local ip address area or remote ip address area what does windows do differently? Does it do anything differently? The reason I ask this is that I'm having problems getting the domain communication working between the two with the firewall active. I have plenty of experience with firewalls so I know what I want to do, but the logic of what is going on here escapes me and these rules are tedious to have to edit one by one. Ian

    Read the article

  • Setting up dnsmasq for a local network

    - by WishCow
    Me and a small group of developers have just moved to a new office, and I'd like to set up dnsmasq on our development server, so when we deploy web apps there, we don't have to edit our own hosts files. We have a router at 192.168.3.1 which we don't have access to. I figured I'd install a DNS server on the development box, and we all record its IP as a secondary DNS server. Unfortunately I'm struggling to make this work. The name of the devel server is devbox, its IP is 192.168.3.99, and it's running the latest Ubuntu Server (Karmic). My computer is running Ubuntu Desktop (Karmic). What I'd like to achieve: let's say I have three websites, website1, website2, website3, running on the development box. I'd like to access them by the URLs: http://website1.devbox http://website2.devbox http://website3.devbox So I have configured Apache on the devel box, installed dnsmasq, and put the following lines into its hosts file: 192.168.3.99 website1.devbox 192.168.3.99 website2.devbox 192.168.3.99 website3.devbox and edited my own resolv.conf file to include the devel box as a nameserver: nameserver 192.168.3.99 It's working fine, I can access the sites. The problem is that it doesn't scale well. I'd like all the domains ending with .devbox forwarded to the development box, and this is what I'm struggling with. I have tried putting 192.168.3.99 devbox into the hosts file, and editing the line in dnsmasq.conf: # Add local-only domains here, queries in these domains are answered # from /etc/hosts or DHCP only. local=/devbox/ But I cannot get it working. If I try any URL that is not explicitly present in the development box's hosts file, the DNS lookup fails. Is the local directive for something else? Am I looking in the wrong place?
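
    A sketch of the wildcard approach, assuming every name under .devbox should resolve to the devel box: dnsmasq's address directive synthesises the records, whereas local only marks the domain as answered locally from /etc/hosts.

        # /etc/dnsmasq.conf - answer any *.devbox query with the devel box's IP
        address=/devbox/192.168.3.99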

    Read the article

  • Update all the servers through one virtual server using a storage area network virtual machine

    - by Mr.Calm
    Using Ubuntu and VirtualBox by Oracle, and using this script to start nginx in the VirtualBox guests, placed inside ~/init.d #!/bin/bash ### BEGIN INIT INFO # Provides: Testinit # Required-Start: # Required-Stop: # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: Start daemon at boot time # Description: Enable service provided by daemon. ### END INIT INFO # RETVAL=0; start() { CurrentTime=$(date +%d/%m/%Y"-"%I:%M:%S) ./usr/local/nginx/sbin/nginx echo "Current Time:"$CurrentTime>>/home/server/Desktop/NginxLogs.txt echo "!Starting nginx!" >>/home/server/Desktop/NginxLogs.txt Like this I want to write an auto script (setup.sh file) and place that script in all the virtual machines on my system, for example 8 virtual machines, and nginx is installed in all of them. Now, the thing is I am facing a problem: when I want to change something in setup.sh I have to go to each and every virtual machine, or communicate with each virtual machine through SSH from my main machine. I am thinking of writing another script (e.g. Update.sh), and inside that script we give the path of a file which is saved and recently edited on the main machine (e.g. DummySetup.sh). As soon as I run that script, all the setup.sh files saved in each virtual machine should pick up the change, i.e. have their contents replaced with DummySetup.sh's contents. I hope this is possible. Help would be appreciated. Thank you.
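
    A rough sketch of the kind of Update.sh described above - the host names, paths and the use of key-based SSH logins are assumptions, not part of the original setup:

        #!/bin/bash
        # Push the locally edited DummySetup.sh to every VM, replacing its setup.sh
        SRC=/home/server/DummySetup.sh          # edited copy on the main machine (assumed path)
        DEST=/home/server/init.d/setup.sh       # location inside each VM (assumed path)
        HOSTS="vm1 vm2 vm3 vm4 vm5 vm6 vm7 vm8" # hypothetical VM host names or IPs
        for h in $HOSTS; do
            scp "$SRC" "server@$h:$DEST" && ssh "server@$h" "chmod +x $DEST"
        done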

    Read the article

  • Can't Connect To Local Mysql Using IP Address, but CAN connect from remote server

    - by user1782041
    Here's an interesting one that does not seem to fall into any of the mysql connection issues I've read about or searched for: On an Ubuntu 12.04 box I had some system updates waiting to install, and I took care of that this evening. After the install, I started seeing some errors in my syslog complaining about a particular php script that could no longer connect to the mysql instance on the box. Here is the specific error: PHP Warning: mysql_connect(): Can't connect to MySQL server on '192.168.0.40' (4) Now, the server's IP address is 192.168.0.40, and I've checked to make sure that I have mysql listening on 0.0.0.0 so that I can connect using either "localhost" or "192.168.0.40". Here's where things get odd: From the local machine, if I try the following: mysql -uroot -p -h192.168.0.40 I get this error: ERROR 2003 (HY000): Can't connect to MySQL server on '192.168.0.40' (110) I've checked, and error 110 indicates an OS timeout, and error 2003 is the mysql generic "can't connect" error. This indicates that it is not a permissions issue with the user. However, if I do the same thing from a remote machine (say, from 192.168.0.30), I log right in with no problems. Further, other scripts on the local machine that connect to mysql using "localhost" for the host rather than "192.168.0.40" connect with no problems. Also, I can connect via the mysql socket with no problems, both from the command line and from php scripts. So, this feels like a networking issue of some kind on the local box, but there are no iptables rules on this box (it is firewalled externally) and I can't figure out what else may be causing this. This problematic script worked perfectly prior to the latest system update. For now, I'll simply change the script to connect via localhost, but I'd really like to know why it broke for 2 reasons: There may be other scripts that connect using 192.168.0.40 that don't run very often which are now broken. Auditing them all will take more time than I feel like devoting at the moment. I'm curious, and want to know why it broke so I can fix it correctly. Any help?
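
    A few checks that narrow down whether this is a listener or a packet-path problem (a diagnostic sketch; the address and port are the ones from the question):

        # Is mysqld really listening on 0.0.0.0:3306?
        netstat -lntp | grep 3306
        # Which route/interface do packets to the box's own address take?
        ip route get 192.168.0.40
        # In a second terminal, watch for SYNs while reproducing the failure
        tcpdump -ni lo port 3306
        # Worth ruling out TCP wrappers too, if mysqld was built with support
        grep -i mysqld /etc/hosts.allow /etc/hosts.deny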

    Read the article

  • Difference in performance: local machine VS amazon medium instance

    - by user644745
    I see a drastic difference in the performance metrics when I run Apache Bench (ab) against my local machine vs. production hosted on an Amazon medium instance. The same concurrency level (5) and the same total number of requests (111) have been run against both. The Amazon instance has more memory than my local machine, but there are 2 CPUs in my local machine vs 1 CPU in m1.medium. My internet speed is very low at the moment; I am getting a transfer rate of 25.29 KB/s. How can I improve the performance? I do not know how to interpret Connect, Processing, Waiting and Total in the ab output. Here is localhost:

        Server Hostname:        localhost
        Server Port:            9999
        Document Path:          /
        Document Length:        7631 bytes
        Concurrency Level:      5
        Time taken for tests:   1.424 seconds
        Complete requests:      111
        Failed requests:        102 (Connect: 0, Receive: 0, Length: 102, Exceptions: 0)
        Write errors:           0
        Total transferred:      860808 bytes
        HTML transferred:       847155 bytes
        Requests per second:    77.95 [#/sec] (mean)
        Time per request:       64.148 [ms] (mean)
        Time per request:       12.830 [ms] (mean, across all concurrent requests)
        Transfer rate:          590.30 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        0    0   0.5      0       1
        Processing:    14   63  99.9     43     562
        Waiting:       14   60  96.7     39     560
        Total:         14   63  99.9     43     563

    And this is production:

        Document Path:          /
        Document Length:        7783 bytes
        Concurrency Level:      5
        Time taken for tests:   33.883 seconds
        Complete requests:      111
        Failed requests:        0
        Write errors:           0
        Total transferred:      877566 bytes
        HTML transferred:       863913 bytes
        Requests per second:    3.28 [#/sec] (mean)
        Time per request:       1526.258 [ms] (mean)
        Time per request:       305.252 [ms] (mean, across all concurrent requests)
        Transfer rate:          25.29 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:      290  297  14.0    293     413
        Processing:   897 1178  63.4   1176    1391
        Waiting:      296  606 135.6    588    1171
        Total:       1191 1475  66.0   1471    1684

    Read the article

  • How do I import a large SQL file into a local LAMP (XAMPP) environment?

    - by mraslton
    I have used Linux to import a large MySQL dump file (into a new database), but am new to how the process works in a local LAMP environment using XAMPP, as XAMPP does not support SSH. I've downloaded the large_dump_file.sql from the Linux server to my local system. I'm using Windows XP and have used XAMPP to set up LAMP. I am able to access the local_database via phpMyAdmin, but the dump file is too large to import using that app. I'm trying to import the file via the command prompt, but so far with no success. At the prompt: cd .. cd .. cd xampp cd mysql cd bin I've found that mysqlimport is used to import .csv and .txt files, and mysql is used to import .sql files, but can't find documentation as to whether or not to use the -u -p options, so I've tried many variations of the command with no luck. What would be the proper command? I've modified the hosts, virtual-hosts conf, and Apache config files. Do I need to change any other config files on my local system? Thanks.
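
    A minimal sketch of the usual form (the dump path and account are assumptions; local_database is the name from the question, -u names the MySQL user and -p prompts for its password): the import is simply a redirect of the dump file into the mysql client, not mysqlimport.

        cd \xampp\mysql\bin
        mysql -u root -p local_database < C:\dumps\large_dump_file.sql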

    Read the article

  • Local DNS and Apache Server Configuration Interfering - example.com / www.example.com

    - by nicorellius
    I have a domain for my site: example.com I am also running local DNS with these lines: www IN CNAME server.<host_provider>.com. dev IN CNAME server.<host_provider>.com. So www.example.com and dev.example.com go to production and development sites, respectively, that are hosted by a host company. In my Apache configuration for the main site, I'm running a rewrite rule like this: RewriteEngine ON RewriteCond %{HTTP_HOST} ^example\.com$|!dev\.example\.com$ [NC] RewriteRule ^(.*)$ http://www\.%{HTTP_HOST}/$1 [R=302,L,NE] This rule seems to work, as when you are off the network and go to example.com in the browser, you get redirected to www.example.com. The problem is when I'm on the network, and I go to example.com I get an error page, saying page can't be found. No server errors; just a page can't be found, as if the local DNS is causing it to stop looking at that point. I'm also using Nettica for DNS service and have this A record in place: example.com Host (A) Default xxx.xx.xxx.xx This handles the external DNS, but my problem seems to be related to my internal DNS. For example, inside my network, I can go to servers on the network with addresses like this: server.example.com server1.example.com server2.example.com These are configured in my local DNS. I'm just not sure how to get past the "empty" subdomain and go to example.com. Adding to this since it might not be clear. If I'm out side the example.com network, on another network, like example123.com, then when I go to example.com I'm redirected to www.example.com as expected, eg, the Apache rewrite rule is working. Thanks in advance for any information.
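
    One thing worth checking, sketched from the records quoted above: the internal zone defines www and dev but apparently nothing for the bare name, so inside the network example.com resolves to nothing and the request never reaches Apache's rewrite rule. Adding an apex record (an A record - a CNAME is not allowed at the zone apex) pointing at the same address as the external Nettica A record should make example.com resolve internally as well; the layout below is an assumption about the internal zone file:

        ; internal zone for example.com - apex record added; xxx.xx.xxx.xx is the
        ; provider address already used in the external (Nettica) A record
        @      IN  A      xxx.xx.xxx.xx
        www    IN  CNAME  server.<host_provider>.com.
        dev    IN  CNAME  server.<host_provider>.com.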

    Read the article

  • Redirect local, not internal, requests using SuSEfirewall2 or an iptables rule

    - by James
    I have a server that is running a web application deployed on Tomcat and is sitting in a test network. We're running SuSE 11 SP1 and have some redirection rules for incoming requests. For example we don't bind port 80 in Tomcat's server.xml file; instead we listen on port 9600 and have a configuration line in SuSEfirewall2 to redirect port 80 to 9640. This is because Tomcat doesn't run as root and can't open up port 80. My web application needs to be able to make requests to port 80 since that is the port it will be using when deployed. What rule can I add so that local requests get redirected by iptables? I tried looking at this question: How do I redirect one port to another on a local computer using iptables? but suggestions there didn't seem to help me. I tried running tcpdump on eth0 and then connecting to my local IP address (not 127.0.0.1, but the actual address) but I didn't see any activity. I did see activity if I connected from an external machine. Then I ran tcpdump on lo, again tried to connect, and this time I saw activity. So this leads me to believe that any requests made to my own IP address locally aren't getting handled by iptables. Just for reference, here's what my NAT table looks like now:

        Chain PREROUTING (policy ACCEPT)
        target     prot opt source    destination
        REDIRECT   tcp  --  anywhere  anywhere     tcp dpt:http  redir ports 9640
        REDIRECT   tcp  --  anywhere  anywhere     tcp dpt:xfer  redir ports 9640
        REDIRECT   tcp  --  anywhere  anywhere     tcp dpt:https redir ports 8443

        Chain POSTROUTING (policy ACCEPT)
        target     prot opt source    destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source    destination
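
    That matches how netfilter works: locally generated packets never traverse PREROUTING, only the OUTPUT chain of the nat table, so the existing redirects are skipped for local traffic. A sketch of the missing rule (192.0.2.10 is a placeholder for the machine's own eth0 address; put the equivalent wherever SuSEfirewall2 keeps custom rules, such as its FW_CUSTOMRULES hook, so it survives a firewall restart):

        # Redirect locally generated requests to port 80 into Tomcat on 9640
        iptables -t nat -A OUTPUT -p tcp -d 192.0.2.10 --dport 80 -j REDIRECT --to-ports 9640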

    Read the article

  • Local references to old server name remain after Windows 2003 server rename

    - by imagodei
    I have a standalone Win 2003 server with Windows SharePoint Services (WSS3) running on it. I had to rename the server and I had a bunch of problems resulting from this. Note that the server is not in an AD environment. The most obvious problems were with SharePoint, which didn't work. I was somewhat naive to think it would work in the first place, but OK - I've solved this using steps 1 & 3 from this site (TNX). Other curious behavior/problems remain. Most disturbing is that SharePoint isn't able to send email notifications to participants. I noticed there are several references to the old server name everywhere I look: in the Registry, and in the Windows Internal Database (MICROSOFT##SSEE). I see instances of the old server name in SharePoint Central Administration - Operations - Servers in Farm. There are references to the servers: oldname.domain.local oldname.local On one of those servers there is also a Windows SharePoint Services Outgoing E-Mail Service (Stopped). Also, when I try to telnet locally to the mail server (Simple Mail Transfer Protocol (SMTP) service), I get a response: 220 oldname.domain.local Microsoft ESMTP MAIL Service, Version: 6.0.3790.4675 ready at Tue, 15 Jun 2010 13:56:19 +0200 IMO these strange naming problems are also the reason why email notifications from within SharePoint don't work. Can anyone tell me how to correct/replace those references to the old server name? Why is the email service insisting on the old name? Of course I would like to do it without reinstalling the server. TNX!

    Read the article

  • Manual NAT on Checkpoint (Redirect all http requests to a local web server)

    - by B. Kulakli
    We have a proxy server on our internal network and I want to redirect all internet HTTP requests to a web server on the local network. It'll be like a network billboard that says "No direct connection is available. Set up your proxy etc." For example: a user starts the computer, opens the browser and tries to open www.google.com - they should see the web server output on the local network. They try another web site on the internet - again they should see the web server output on the local network. They set up the proxy and try to connect to a web site - now the web site should be loaded. I have added a simple manual NAT rule to address translation in the Checkpoint firewall but it simply does not work. Here is my address translation rule:

        Source   Destination   Service   T.Source   T.Destination   T.Service
        MY_PC    A_GOOGLE_IP   ALL       ORIGINAL   INT_WEB_SRV     ORIGINAL

    When I ping A_GOOGLE_IP, replies come from INT_WEB_SRV, as I expected. However, when I try to connect to A_GOOGLE_IP from a browser (http://A_GOOGLE_IP), no replies come back; the connection stays in SYN_SENT and falls into a timeout. When I look at the firewall log of INT_WEB_SRV, I can see that the incoming connection requests from MY_PC are accepted, with no denies. By the way, there is no problem seeing INT_WEB_SRV (http://INT_WEB_SRV) from a browser. My understanding is that my NAT rule on Checkpoint NGX R60 does not handle the return packets. I definitely need some help.

    Read the article

  • Ubuntu 10.04 bind9 local zone include files and apparmor

    - by Gilgongo
    Rather than putting all my zones in one named.conf.local file, I'd like to have them in groups that I can manage as separate files. So, I've tried putting the following into named.conf.local: include "/home/zones/group1.conf"; include "/home/zones/group2.conf"; include "/home/zones/group3.conf"; However, when I restart named, I see "permission denied" errors in the logs. Ubuntu uses apparmor for bind, so I also added the following in /etc/apparmor.d/usr.sbin.named: /home/zones/group1.conf r, /home/zones/group1.conf r, /home/zones/group1.conf r, Now, when I re-start named, all appears to be well. Zones are loaded (I think). However, a day or two later, I see my secondary name server complaining that the primary is telling it that it's not authoritative for those domains. I then have to put all the domains back into the named.conf.local file again. How can I get bind9 to use include files in this way? I don't know much about apparmor, so that may or may not be the issue here, but I've used include files in this way on Debian OK.
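
    A sketch of the AppArmor side (note that the three profile lines quoted above all name group1.conf - if that is literally what the profile contains, group2.conf and group3.conf are still blocked). On Ubuntu the usual place for site additions is the local override file, provided the main profile includes it; a directory glob also covers any group files added later:

        # /etc/apparmor.d/local/usr.sbin.named (or the main profile, if it has no
        # "#include <local/usr.sbin.named>" line)
        /home/zones/ r,
        /home/zones/** r,

        # reload the profile and re-read the config without restarting bind
        apparmor_parser -r /etc/apparmor.d/usr.sbin.named
        rndc reconfig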

    Read the article

  • Something like Dropbox for local use

    - by Casper
    I am looking for a solution to sync folder pairs between a NAS and multiple local Macs. Each of the Macs could edit files, and the other Macs should then get synced automatically. Basically my own local version of Dropbox without using "cloud storage". I have looked into solutions using rsync. As I understand it, rsync is not really capable of doing a bi-directional sync. I also do not want to have to invoke the sync process manually. I would prefer a daemon running in the background - waiting and checking for changes and then syncing them "live". The program should also be flexible enough to recognize that it sometimes (in the case of laptops) cannot reach the NAS. It should then just wait for the connection to be back again, without bugging me every few minutes. I have looked into synk, folderwatch, rsync and a few others, but I haven't really found a solution. Isn't there something like "offline folders" from Microsoft for the Mac? Thanks PS: just for clarification - I don't want to sync for backup purposes; instead I want to sync so that all Macs have a local copy of the most recent changes to files.

    Read the article

  • Where and how does Kindle Cloud Reader store downloaded books, on a Windows 7 system?

    - by einpoklum
    I use Firefox and sometimes Chrome, on Windows 7. Amazon's in-browser Kindle Cloud Reader lets you "download" books for local/offline viewing. Where are these stored, given my OS+browser combination? I've searched the Users subdirectory for my user, and could not find a relevant (separate) file in there, specifically not in the Firefox and Chrome profile directories. To clarify, the files are obviously not downloaded as-is and are stored in some potentially-obfuscated format, possibly in the browser's local store and possibly elsewhere. The question is, where and how exactly? (This was the first of this question, but wasn't answered there since it was not the main focus of the question.)

    Read the article

  • Choosing local versus public domain name for Active Directory

    - by DSO
    What are the pros and cons of choosing a local domain name such as mycompany.local versus a publicly registered domain name such as mycompany.com (assuming that your org has registered the public name)? When would you choose one over the other? UPDATE Thanks to Zoredache and Jay for pointing me to this question, which had the most useful responses. That also led me to find this Microsoft Technet article, which states: It is best to use DNS names that are registered with an Internet authority in the Active Directory namespace. Only registered names are guaranteed to be globally unique. If another organization later registers the same DNS domain name, or if your organization merges with, acquires, or is acquired by other company that uses the same DNS names, then the two infrastructures cannot interact with one another. Note Using single label names or unregistered suffixes, such as .local, is not recommended. Combining this with mrdenny's advice, I think the right approach is to use either: Registered domain name that will never be used publicly (e.g. mycompany.org, mycompany.info, etc). Subdomain of an existing public domain name which will never be used publicly (e.g. corp.mycompany.com). The "never used publicly" part is a business decision so its probably best to get sign off from those in the company authorized to reserve domain names and subdomains. E.g. you don't want to use a registered name or subdomain that the marketing dept later wants to use for some public marketing campaign.

    Read the article

  • Allied Telesis router: IP filtering for the LOCAL interface

    - by syneticon-dj
    Given an Allied Telesis router with an AlliedWare OS (2.9.1) I would like to disable access to all management services of the router except for a number of subnets (or alternatively have what is a "management VLAN" with other manufacturers' switch and router models). What I have tried so far: creating a new VLAN and an appropriate IP interface, setting the LOCAL IP into this subnet, creating an IP filter for the IP interface and specifying my exclusion subnets: it simply does not work as intended as I can access the LOCAL IP set from any of the other VLAN interfaces - the traffic is apparently not going through my defined filter set at all creating a new IP filter set and binding it to the LOCAL IP interface: this seems not to affect any kind of traffic at all, the counters for the filter set remain at zero packets setting the Remote Security Officer Level IP address range: this only restricts the ability for a user with the Security Officer privilege level to log in from any but the specified address ranges / subnets. Unfortunately, it does not prevent service availability (and thus DoS capacity) or the ability to log in as a less privileged user (e.g. a "manager") calling technical support: unfortunately no solution so far What I have not tried: creating a filter set for each and every IP interface defined on the router and excluding access to the router's management IP: I would like to reduce the overhead induced by IP filters as the router already is CPU-constrained at times. Setting up filters for every IP interface would mean that each and every traffic packet would have to pass the filters, thus consuming CPU cycles. If by any means possible, I would like to find a different solution.

    Read the article

  • Can't access Apache from outside my local network

    - by valter
    UPDATED: Now, when I type my external IP like xxx.xxx.xxx.xxx:8079, I can access XAMPP's default page. But the strange thing is that when someone else from outside my network tries to access it using the same IP, it doesn't work. I think it should, because it's the external IP. It's driving me crazy. I have tried for hours to access XAMPP's default page from outside my local network. My ISP blocks ports 80 and 8080, so I changed Apache to listen on port 8079: Listen 8079 My local computer's IP is 10.1.1.2, and I can access the web server from any computer on my local network when I type http://10.1.1.2:8079 I also opened port 8079 on my modem, as the image below shows. (I think I did it right.) When Apache is running on my computer, if I test port 8079 at http://canyouseeme.org/ I get the message "Success: I can see your service on xxx.xxx.xxx.xxx on port (8079) Your ISP is not blocking port 8079". If Apache is not running I get "Error: I could not see your service on xxx.xxx.xxx.xxx on port (8079) Reason: Connection refused". So it's clear that port 8079 is open. But when I type xxx.xxx.xxx.xxx:8079 in Google Chrome, for example, I get "Oops! Google Chrome could not connect to xxx.xxx.xxx.xxx:8079". What can I do to solve this and allow Apache to serve the pages? I don't know what else I should configure. Please help me. Thanks.

    Read the article
