Search Results

Search found 6177 results on 248 pages for 'reputation points'.


  • CNAME rule being ignored

    - by Ben
    On a server with Plesk installed I have added a CNAME rule pointing from one of the site's subdomains to an external website. I have checked the named configuration for that domain and it shows the CNAME, however the subdomain just points to the default server page and ignores the CNAME rule. named has been restarted and I've also run the rvmng reconfigure-vhost command. I edited another server (running cPanel) to test this, and there it works fine. The conf file for the domain:

      ; *** This file is automatically generated by Plesk ***
      $TTL 86400
      @ IN SOA ns.example.com. cf.example1.com. (
            1292946742 ; Serial
            10800      ; Refresh
            3600       ; Retry
            604800     ; Expire
            10800 )    ; Minimum
      example.com.          IN NS     ns.example.com.
      ns.example.com.       IN A      xx.xxx.xxx.xx
      example.com.          IN A      xx.xxx.xxx.xx
      webmail.example.com.  IN A      xx.xxx.xxx.xx
      mail.example.com.     IN A      xx.xxx.xxx.xx
      beta.example.com.     IN A      xx.xxx.xxx.xx
      ftp.example.com.      IN CNAME  example.com.
      www.example.com.      IN CNAME  example.com.
      login.example.com.    IN CNAME  socialize.gigya.com.
      example.com.          IN MX 10  webmail.example.com.

    You can see the CNAME rule in the file, but it just gets ignored. Thanks in advance for any help.
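    A quick way to narrow down whether the zone or the web server is at fault (a sketch; this assumes the subdomain in question is login.example.com) is to query the authoritative name server directly and compare it with what a public resolver returns:

      # Ask the server's own name server for the record
      dig @ns.example.com login.example.com CNAME +short

      # Ask a public resolver to see what the outside world gets
      dig @8.8.8.8 login.example.com CNAME +short

    If the first query returns socialize.gigya.com. but a browser still lands on the default Plesk page, the zone is fine and the culprit is more likely resolver caching or a stray virtual host entry; if it returns nothing, the zone named is actually serving differs from the file shown above.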

    Read the article

  • Looking for recommendations on OCR problem - tabular numeric data

    - by ldigas
    I have 20 pages of experiment measurement data which I need to digitize. The results are in tabular form, scanned at 600 dpi resolution, and as far as scans go they came out pretty clean and readable. For an example of how it looks see here (but beware: it is a rather big scan, about 5 MB; no problem for any broadband connection, but dial-ups should approach with caution!) ... and I need it finished by Sunday afternoon (:-o) <-- smiley in a state of panic (then why didn't you start sooner?) ... yea, yeah ... I know ... but it came up late, and I wasn't expecting to need this data as well. So, I'm looking for recommendations. I don't have much experience with OCR programs, apart from scanning a page or two of pure text, and I have no wish to test every OCR program out there either. So this isn't a "name your OCR favourite" question. What I'm looking for is advice from someone who has done something like this before, and his or her experience on the best way to go about it. I need the data in txt form, but since it will have to be checked (by plotting it and simply watching whether some points "jump out") I'll probably be entering it into Excel at first.
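    If it comes down to scripting it, Tesseract plus a thin wrapper is one low-effort route; a minimal sketch, assuming the scans are saved as page01.png, page02.png, ... and that Tesseract and the pytesseract wrapper are installed (the file names and output layout here are made up):

      # OCR each scanned page into a plain-text file for later checking in Excel
      import glob
      import pytesseract
      from PIL import Image

      for path in sorted(glob.glob("page*.png")):
          # --psm 6 treats the page as one uniform block of text, which tends
          # to behave better on tables than the default layout analysis
          text = pytesseract.image_to_string(Image.open(path), config="--psm 6")
          with open(path.replace(".png", ".txt"), "w") as out:
              out.write(text)

    Recognition quality on dense numeric columns varies a lot, so the manual sanity check (plotting and looking for points that jump out) is still worth keeping in the plan.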

    Read the article

  • How to run a website domain without redirecting if IP is already used for another website? [duplicate]

    - by SSpoke
    This question already has an answer here: Hosting multiple distinct folders for distinct domains (1 answer)

    I bought a VPS host that gave me only 1 IP address, which I used on my first domain name and it works without any problems. Now for my second domain name I can't use the same IP address, as it points to the first domain name. So I figured my only option was to use a GoDaddy-hosted iframe redirection, which redirects to a subfolder on my first domain; that worked so far. Now I'm trying to load PayPal from a PHP header() redirect and I get a permission error because of that iframe:

      Refused to display 'https://www.paypal.com/cgi-bin/webscr?notify_url=&cmd=_cart&upload=1&business=removed&address_override=1' in a frame because it set 'X-Frame-Options' to 'SAMEORIGIN'.

    How do I avoid the iframe solution for my second domain while not messing up my first domain? Somebody once told me it doesn't matter if you have 1 IP address, you can host multiple websites on it. How is that possible? DNS doesn't seem to work off ports, as far as I know. Yes, I could host multiple websites in different folders, but that's not what I call hosting a real website; it has to be pointed at by a domain name, so this iframe issue doesn't happen. My server configuration is httpd (Apache) on the CentOS 6 (Linux) operating system.
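    Name-based virtual hosting is what makes this possible: Apache picks the site from the Host header the browser sends, not from the IP address, so any number of domains can share one address. A minimal sketch for the CentOS 6 httpd configuration (the domain names and document roots below are placeholders):

      NameVirtualHost *:80

      <VirtualHost *:80>
          ServerName firstdomain.com
          ServerAlias www.firstdomain.com
          DocumentRoot /var/www/firstdomain
      </VirtualHost>

      <VirtualHost *:80>
          ServerName seconddomain.com
          ServerAlias www.seconddomain.com
          DocumentRoot /var/www/seconddomain
      </VirtualHost>

    Both domains' A records point at the same IP; no iframe or redirect is involved, so the PayPal X-Frame-Options problem never comes up.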

    Read the article

  • LYNC / OCS... problems getting edge server working.

    - by TomTom
    New setup of Lync 2010 (i.e. OCS 2010). I have serious problems getting my edge system going. Internally things work fine; externally I am stuck. I have used the tester at https://www.testocsconnectivity.com/ and it also fails. NOTE: I use the domain xample.com / xample.local here just as an example. Here is the setup. I have 2 internal hosts (lync.xample.local, edge.xample.local). edge.xample.com is also correctly in DNS and points to the external IP address assigned to edge.xample.local (external interface). Externally, I have the following DNS entries:

      edge.xample.com
      _sip._tcp              -> edge.xample.com 443
      _sipfederationtls._tcp -> edge.xample.com 5061
      _sipinternaltls._tcp   -> lync.xample.local 5061
      _sip._tls              -> edge.xample.com 443

    My problem is that the OCS connectivity test always ends up trying to contact lync.xample.local (i.e. the internal address) when connecting to [email protected]. The error is:

      Attempting to Resolve the host name lync.xample.local in DNS.

    This shows me it clearly manages to connect to SOMETHING, but it either falls through to the _sipinternaltls._tcp entry, OR it gets that internal entry wrongly from the edge system. Am I missing some entries, or are some of them wrong?
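    One thing that can be checked from outside the network is what external clients actually receive for those SRV records (a sketch; the names follow the example domain above, and 8.8.8.8 is just a convenient public resolver):

      # The client sign-in record external clients should be using
      nslookup -type=SRV _sip._tls.xample.com 8.8.8.8

      # The federation record
      nslookup -type=SRV _sipfederationtls._tcp.xample.com 8.8.8.8

    The _sipinternaltls._tcp record pointing at lync.xample.local normally belongs only in the internal DNS zone; if it is published in the external zone, an external client that picks it up will try to resolve the .local name, which is exactly the symptom the connectivity tester shows here.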

    Read the article

  • Integration of SharePoint 2010 with TFS2010

    - by Kabir Rao
    We have performed the following steps so far:

      - Installed TFS 2010 10.0.30319.1 (RTM) on Windows Server 2008 R2 Enterprise (app tier)
      - SQL 2008 SP1 with Cumulative Update 2 on Windows Server 2008 R2 Enterprise (data tier)
      - Reporting Services is installed on the app tier

    After this installation worked fine, we installed SharePoint 2010 on the app tier. After installation we followed http://blogs.msdn.com/b/team_foundation/archive/2010/03/06/configuring-sharepoint-server-2010-beta-for-dashboard-compatibility-with-tfs-2010-beta2-rc.aspx for configuration. We are not able to perform the last step described in the link, as the following error occurred:

      TF249063: The following Web service is not available: http://apptier:31254/_vti_bin/TeamFoundationIntegrationService.asmx. This Web service is used for the Team Foundation Server Extensions for SharePoint Products. The underlying error is: The remote server returned an error: (404) Not Found. Verify that the following URL points to a valid SharePoint Web application and that the application is available: http://apptier:31254. If the URL is correct and the Web application is operating normally, verify that a firewall is not blocking access to the Web application.

    We have also noticed that the Documents folder in the team project has a red X. Please help. Thanks upfront.
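    A quick sanity check (not a fix) is to request the extensions web service directly from the app tier and see what comes back; a browser on the server works just as well as curl:

      # A 200 or an authentication challenge means the service exists;
      # a 404 matches the TF249063 error above.
      curl -I http://apptier:31254/_vti_bin/TeamFoundationIntegrationService.asmx

    A 404 here usually points at the Team Foundation Server Extensions for SharePoint Products not being installed and granted access on the SharePoint side, or at port 31254 belonging to a different web application than the one the extensions were configured against.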

    Read the article

  • Moving users folder on Windows-7 to another partition - bad idea?

    - by Donat
    Hi, I'd like to re-submit here a question posted by Benjol on Aug 17 at 5:57, "Moving users folder on Windows Vista to another partition - bad idea?" (I can't post more than one link until I earn 10 reputation, and I removed my "answer" there to post my follow-up questions here). I am anxiously getting ready, at long last, to carry out a clean install (using the custom install option) from Vista to Windows 7 Home Premium 64-bit with the free upgrade I received late October. For my Vista system I successfully set up last summer a multi-partition scheme with Users and ProgramData on a different partition than the operating system (see the link below, and its subsequent links in my comment, for details).

      http://tuts4tech.net/2009/08/05/windows-7-move-the-users-and-program-files-directories-to-a-different-partition/comment-page-1/#comment-562

    I was planning a similar set-up for Windows 7, a little more streamlined, with the OS and Program Files on C:, Users and ProgramData on D:, and TV media recording on a separate partition. Reading the question submitted by Benjol, I am second-guessing too. Is moving Users and ProgramData to a different partition than the default primary partition with the OS and Program Files such a good idea? The couple of people I talked to at the official Microsoft Windows 7 booth at CES 2010 gave the same answer about moving the Users profile folder to another partition. In a nutshell, they all told me that they used to do this in XP, less so in Vista, and not anymore with Windows 7... "It is stable, after two months still no problem." I had the feeling it was a scripted answer to emphasize how stable and efficient Windows 7 is... (Will a Windows 7 system not become bogged down over the course of several months to a year or two? Only time will tell.) Long story short, I share the view Benjol expressed with respect to being "able to backup and restore system and user data independently." I just received a 2 TB USB 2.0 / eSATA external hard drive as a backup drive, which includes NTI Shadow 4 (4.1.0.150) as a backup solution. I took note of the issue with NTUSER.DAT and I will read more about the Volume Shadow Copy Service (VSS) for Windows 7. I am willing to put in the effort if placing Users and ProgramData on a different partition would allow me to restore a fresher OS + Program Files image when the system gets bogged down. Questions:

      1. Is it such a bad idea?
      2. What is the "easy route" referred to by Benjol in his post? Is it to just relocate folders to another partition using the folder Properties tool? (That is not practical for several users and might not provide a straightforward way to restore just the OS and Program Files when needed.)
      3. I am starting to learn about Windows 7 libraries. Would libraries be another way to achieve this?

    All this reading to decide how to organize the partition scheme for my custom install is starting to get confusing. I apologize for this lengthy question. It is my first day here on SuperUser and I am just learning how different it is from a discussion thread. Thank you in advance for all your suggestions and comments. Donat
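    For reference, the usual do-it-yourself relocation (outside of an unattended-install answer file) is to copy the profiles to the new partition and leave a junction behind so that anything expecting C:\Users still resolves. This is a rough sketch only, not something Microsoft officially supports, and in practice it generally has to be run from a Windows PE / repair command prompt because the profile folders are locked while Windows is running:

      rem Copy the profiles to the new partition, preserving ACLs, skipping junctions
      robocopy C:\Users D:\Users /MIR /COPYALL /XJ

      rem Move the original out of the way
      ren C:\Users Users.old

      rem Leave a junction so hard-coded C:\Users paths keep working
      mklink /J C:\Users D:\Users

    The other route people use is setting ProfilesDirectory in an unattend.xml at install time, with the documented caveat that relocating the profile root can interfere with future in-place upgrades and servicing.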

    Read the article

  • PhpMyAdmin Missing parameter:

    - by Ali
    Everything was working fine until this moment. I want to create a new database and I'm receiving this error:

      db_create.php: Missing parameter: new_db (FAQ 2.8)

    Also, when I try to export my database I receive the following errors:

      export.php: Missing parameter: what (FAQ 2.8)
      export.php: Missing parameter: export_type (FAQ 2.8)

    When I looked at FAQ 2.8 ("I get 'Missing parameters' errors, what can I do?") from the link suggested in phpMyAdmin, it gives a few points to check:

      - In config.inc.php, try to leave the $cfg['PmaAbsoluteUri'] directive empty. See also FAQ 4.7.
      - Maybe you have a broken PHP installation or you need to upgrade your Zend Optimizer. See http://bugs.php.net/bug.php?id=31134.
      - If you are using Hardened PHP with the ini directive varfilter.max_request_variables set to the default (200) or another low value, you could get this error if your table has a high number of columns. Adjust this setting accordingly. (Thanks to Klaus Dorninger for the hint.)
      - In the php.ini directive arg_separator.input, a value of ";" will cause this error. Replace it with "&;".
      - If you are using Hardened-PHP, you might want to increase request limits.
      - The directory specified in the php.ini directive session.save_path does not exist or is read-only.

    I did check php.ini to make sure that I have session.save_path = "/tmp". I'm on a Mac Xserve running MAMP. I have tried restarting the server and nothing helps. Please, if anyone can help, give me some suggestions. Apologies if I posted this in the wrong place.
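    As a concrete starting point for the first checkpoint, the relevant line in config.inc.php would simply be (a sketch; the rest of the file stays as phpMyAdmin/MAMP generated it):

      <?php
      // Leaving PmaAbsoluteUri empty lets phpMyAdmin auto-detect its own URL,
      // which avoids the mismatched-URL cause of "Missing parameter" errors.
      $cfg['PmaAbsoluteUri'] = '';
      ?>

    For the php.ini items, the file to edit is whichever one MAMP actually loads (a phpinfo() page shows its path); check that arg_separator.input is "&;" rather than ";" and that session.save_path points at a directory the web server user can write to, then restart Apache from the MAMP control panel before re-testing.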

    Read the article

  • OpenWrt vs DD-WRT

    - by Ioan Paul Pirau
    I have a TP-Link WR1043ND router and I want to install one of these two firmwares:

      - OpenWRT
      - DD-WRT

    I read that I can install custom packages and do much more than I can with the original firmware. I would like to ask someone with experience in using both OpenWRT and DD-WRT which one they would recommend and why. To give a few reference points, I'm interested in:

      - reliability: network stability both on cable and wireless, and on the USB drive
      - performance: network speed (very important), also USB drive speed
      - configurability: the possibility to add extensions such as a torrent client, FTP, SSH, WWW and SVN server directly
      - ease of use: the ease of installation and configuration of the router
      - support/docs: how much info there is if you stumble upon a problem and have to find some documentation, or if there's any free support (but that's a long shot)

    Of course I don't imagine that I will find the perfect firmware or that one is vastly superior to the other. Also, if there's anyone out there who uses one of these firmwares on a TP-Link WR1043ND, it would be great to get some feedback about the impact of the changes from the original firmware. P.S. I'm also open to Tomato if it's the better one.

    Read the article

  • Apache virtualhost - only apply script if file does not exist in document root

    - by Brett Thomas
    Sorry for the newbie Apache question. I'm wondering if it's possible to set up the following non-conventional Apache virtual host (for a Django app):

      - If a file exists in the DocumentRoot (/var/www) it is served. So if /var/www/foo.html exists, it can be seen at www.example.com/foo.html.
      - If the file does not exist, the request is served via the virtual host. I'm using mod_wsgi with a WSGIScriptAlias directive that points to a Django app. So if there is no /var/www/bar.html, www.example.com/bar.html is passed to the Django app, which may or may not return a 404 error.

    One option is to create an Alias for each individual file/directory, but people want to be able to post a file without adding an alias, and we want to keep the above URL structure for legacy reasons. The simplified virtual host is:

      <VirtualHost *:80>
          ServerName www.example.com
          DocumentRoot /var/www
          WSGIScriptAlias / /path/to/django.wsgi
          <Directory /path/to/app>
              Order allow,deny
              Allow from all
          </Directory>
          Alias /hi.html /var/www/hi.html
      </VirtualHost>

    The goal is to have www.example.com/hi.html work as above, without the Alias line.
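    One common pattern for "serve the file if it exists, otherwise hand the request to the WSGI app" is to let mod_rewrite do the existence check and only pass misses through to a WSGIScriptAlias mounted on an internal prefix. A sketch under that assumption (the /django prefix is just an internal mount point, not something users see in URLs):

      <VirtualHost *:80>
          ServerName www.example.com
          DocumentRoot /var/www

          # Mount the Django app on an internal prefix instead of /
          WSGIScriptAlias /django /path/to/django.wsgi
          <Directory /path/to/app>
              Order allow,deny
              Allow from all
          </Directory>

          RewriteEngine On
          # If the requested path is not an existing file or directory under
          # /var/www, pass it through (PT) to the WSGI handler
          RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
          RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-d
          RewriteRule ^/(.*)$ /django/$1 [PT,L]
      </VirtualHost>

    One caveat: with this layout Django sees /django as its SCRIPT_NAME, so the WSGI script may need to strip that prefix again; the mod_wsgi documentation covers this pattern in its notes on serving static files alongside an application.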

    Read the article

  • Blinking power button

    - by Mike Ramsey
    A friend asked me to look at his Gateway DX4640 desktop. When he presses the power button, power goes to the mobo (NVIDIA nForce 630i MCP73PV, GeForce 7100 chipset) and the CPU fan starts spinning. The power button slowly blinks on and off (blue) and the screen briefly says "no signal" and then goes black. And nothing else; no POST code beeps. My initial two conjectures were:

      1. Vista was stuck in sleep/hibernation mode, or
      2. a power-off had left the mobo in a bad state.

    The fix for both is to:

      a) unplug the AC power cord
      b) hold the power button for 30 seconds to fully discharge the mobo

    It didn't help. I left the system unplugged from AC power for an hour. No change. I am out of ideas. Has anybody seen anything like this before? What does a blinking blue power button mean? How can I get more data points to guide troubleshooting? --Thank you, --Mike

    Read the article

  • Verify client certificate CN in Tomcat(APR)

    - by Petter
    I'm running a Tomcat installation with the APR libraries installed (with the OpenSSL HTTPS stack that comes with them). What I'm trying to do is lock a specific HTTPS connector down to users of a specific certificate. Adding client certificate verification is no issue, but I can't get it to validate against a specific common name only. I was perhaps a bit naïve and thought the mod_ssl attribute SSLRequire typically used in Apache httpd would work, but that property is not recognized by the Tomcat implementation. (http://tomcat.apache.org/tomcat-7.0-doc/config/http.html#SSL%20Support points to some mod_ssl docs, but the Tomcat implementation does not seem to cover all aspects of mod_ssl.) I can get this to work by using the Java version of the connector instead of APR (losing some performance) and just adding a trust store with that one certificate in it. However, using OpenSSL without the SSLRequire expressions, I'm not sure how to do this with Tomcat 7 (on Windows, if that matters).

      <Connector protocol="HTTP/1.1" port="443" maxThreads="150"
                 scheme="https" secure="true" SSLEnabled="true"
                 SSLCertificateFile="mycert.pem"
                 SSLCertificateKeyFile="privkey.pem"
                 SSLCACertificateFile="CABundle.pem"
                 SSLVerifyClient="require"
                 SSLProtocol="TLSv1"
                 SSLRequire="(%{SSL_CLIENT_S_DN_CN} eq &quot;host.example.com&quot;)"/>

    Can you suggest a way to make this work using Tomcat/APR/OpenSSL?
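    For completeness, the JSSE-connector workaround mentioned above could look roughly like this (a sketch only; store names, paths and passwords are placeholders). Because the truststore contains nothing but the one allowed client certificate, only that certificate passes verification:

      # Build a truststore holding just the allowed client certificate
      keytool -importcert -alias allowed-client -file client.crt \
              -keystore allowed-clients.jks -storepass changeit

      <!-- server.xml: force the plain JSSE implementation instead of APR -->
      <Connector protocol="org.apache.coyote.http11.Http11Protocol"
                 port="443" maxThreads="150" scheme="https" secure="true"
                 SSLEnabled="true" clientAuth="true" sslProtocol="TLS"
                 keystoreFile="conf/server.jks" keystorePass="changeit"
                 truststoreFile="conf/allowed-clients.jks"
                 truststorePass="changeit"/>

    It does trade away the APR performance, which is exactly the compromise the question is trying to avoid, so it is only a fallback if no APR-side equivalent of SSLRequire turns up.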

    Read the article

  • Restore single user's Exchange 2003 mailbox from backup

    - by Campo
    I take weekly full backups of Exchange. I also take complete weekly backups of the entire server. It is a Server 2003 R2 box with AD and Exchange 2003 all on one machine. One user's inbox has disappeared. She now has 19,000+ junk items. It is possible the inbox got mixed into the junk; regardless, it is such a huge mess she is not going to go through all of that... I want to restore her mailbox from the backup. I followed this MS KB: http://support.microsoft.com/kb/823176 - I had to use Method 3. I have a VM of Server 2003 R2 with Exchange, but I am having failures on the restore from NTBackup. The backup log just states to check the application log... the application log points back to the backup log... The only info is "failed to restore". The only thing different is the computer name. The only error I can find is in the application log: "Information Store Database not found". All the others just say that the backup failed. Any assistance is greatly appreciated.

    Read the article

  • Troubleshoot dropped wireless connections

    - by Jack
    I was recently hired in the IT department of a small company (~180 users) and one of the issues that people have been complaining about is having their wi-fi connections drop during meetings. The company is using an HP ProCurve Wireless LAN with 10 APs and a controller unit located in the server room. I don't have any experience troubleshooting WLAN in a multi-AP environment, so I'm trying to at least gather information using free or cheap tools. I did a basic site survey using the free version of Ekahau HeatMapper and discovered the following in one of the conference rooms that has been a problem. The program picked up three access points (plus a bunch of others with much lower signals that were out of range):

      AP 1: SSID: "Unknown SSID" - Signal strength: -48 dBm to -40 dBm. Channel: 2
      AP 2: SSID: "CompanyMain" - Signal strength: -35 dBm or greater. Channel: 2. Security: WEP (This is the main SSID for the company's WLAN.)
      AP 3: SSID: "CompanyGuest" - Signal strength: -40 dBm to -35 dBm. Channel: 2. Security: WPA2 (This SSID is the company's "guest" WLAN, which was set up to allow Internet access but prevent network access.)

    Is there anything that you see that is clearly a problem from the above? I'm assuming that the unknown SSID might be a big problem, and that it is an AP from a neighboring office that is causing interference. Does that seem likely? Also, regarding channels, should we try changing the channels of our APs to avoid interference with that unknown SSID (since everything seems to be on channel 2)? Should our APs be on different channels? In other words, should the CompanyMain and CompanyGuest APs be on different channels? Finally, any recommendations for free/cheap tools to help me figure this out, and/or a good methodology to follow? Thanks in advance for any help. Jack

    Read the article

  • Triple-monitor set-up (2 unique, 1 cloned): Can a VGA splitter be used on one output of a dual-head

    - by stakx
    Background: I'm currently researching hardware components for some kind of information terminal we're building. This application of ours makes use of three output screens: (1) a touch screen where all user input is made; (2) a regular LCD monitor where the requested information is displayed; and (3) a projector which displays exactly the same signal as screen (2) does. (All screens will run at the same resolution of 1024x768, btw.) Now I figured that using a dual-head video card would be sufficient, let's say a Matrox P690 low-profile PCI card. This would involve having a Y cable connected to the graphics card itself, then two DVI-to-VGA adapters at each end of the Y cable, and then having a VGA splitter on one of the VGA outputs. The following shows the setup in question:

      0--1---------2-> VGA (DSUB-15)
          \
           \----2-3---------> VGA (DSUB-15)
                   \
                    \-----------------> VGA (DSUB-15)

      0: graphics card (LFH60 jack)
      1: LFH60 to DVI-I dual monitor Y cable
      2: DVI-to-VGA adapters
      3: VGA splitter cable

    Question(s): Will this work? I'm particularly concerned about the following points:

      - Can a low-profile PCI video card output a signal which is strong enough for three monitors (even if it's a dual-head card)?
      - Does the combination of so many adapters and splitter cables work? (The LFH-to-DVI cable comes with the video card.)
      - Will the VGA splitter cable degrade the signal on the output screen & projector significantly? (If so, would a USB-powered splitter cable remedy this problem?)

    I can't possibly expect anyone to answer all those questions, but any input is appreciated.

    Read the article

  • Is there a way to do something like LVM over NFS?

    - by warren
    I realize that since NFS is not block-level, LVM can't be used directly. However: is there a way to combine multiple NFS exports (from, say, 3 servers) into one mount point on a different server? Specifically, I'd like to be able to do this on RHEL 4 (or 5, and re-export the combined mount to my RHEL 4 server).

    Expansion: the reason I pegged LVM is that I want a bunch of exported mounts (servera:/mnt/export, serverb:/mnt/export, serverc:/mnt/export, etc.) to all mount at /mnt/space, so that /mnt/space on this server (serverx) appears as one large filesystem. Yes, I know that re-exporting is generally a Bad Thing™, but I thought it might work if there was a way to accomplish this on a newer release as opposed to an older one. From reading the unionfs docs, it appears that I can't use it over a remote connection - have I misread that? More accurately, since UnionFS merges the contents of multiple branches but makes them appear as one, it doesn't seem to go in reverse: I'm trying to mount a bunch of NFS points in a merged fashion, then write to them - not caring where the data goes, à la LVM.
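    One FUSE-based option in this spirit is mhddfs, which overlays several directories into a single writable mount and places each new file on a branch that still has free space. Whether a package for it exists on RHEL 4/5 is another matter, so treat this as a sketch of the idea rather than a supported recipe; the paths follow the example above:

      # Mount the three exports individually
      mount -t nfs servera:/mnt/export /mnt/a
      mount -t nfs serverb:/mnt/export /mnt/b
      mount -t nfs serverc:/mnt/export /mnt/c

      # Overlay them into one logical mount point
      mhddfs /mnt/a,/mnt/b,/mnt/c /mnt/space -o allow_other

    Files written into /mnt/space land on whichever branch mhddfs chooses, which matches the "not caring where the data goes" requirement, but there is no striping or redundancy - it is aggregation only.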

    Read the article

  • Ruckus wireless AP and Dell PowerConnect configuration problems

    - by DanielJay
    We are working on trying to get some Ruckus access points to work correctly on our network. Currently our network is as follows:

      VLAN 10 - Servers
      VLAN 11 - Computers/DHCP
      VLAN 12 - Voice
      VLAN 13 - Guest

    We use Dell PowerConnect 6248P switches. Port settings are as follows. The ZoneDirector 1100 is plugged into this port; it should be accessing the server VLAN and then allowing all other traffic:

      interface ethernet 1/g2
      classofservice trust ip-dscp
      description 'Ruckus ZoneDirector 1100'
      switchport mode general
      switchport general pvid 10
      switchport general allowed vlan add 10
      switchport general allowed vlan add 11-13 tagged
      exit

    The access point is plugged into this port; the port has to be on VLAN 11 in order to get DHCP:

      interface ethernet 1/g16
      classofservice trust ip-dscp
      description 'Ruckus - IT'
      switchport mode general
      switchport general pvid 11
      switchport general allowed vlan add 10-12
      switchport general allowed vlan add 13 tagged
      exit

    If we tag the traffic from the SSID as VLAN 11, data fails. If we leave the SSID tagged as 1, data flows correctly. Are there problems with passing tagged traffic to untagged ports? We are looking to see what we can do to get the SSID tagged as 11 instead of 1. Any suggestions?

    Read the article

  • Starting my own server - basic recommendations and questions [closed]

    - by Ilia Rostovtsev
    Possible Duplicate: Can you help me with my capacity planning?

    I'm planning to start my own high-performance server and then use colocation services to keep it up and running. I'm planning to use it for processing videos and keeping a big video site up (using FFmpeg, MEncoder, etc.). I just need recommendations on whether the listed hardware is good enough and will work together well and fast enough. Do I need anything else (have I missed something)? I remember about CPU coolers, though! ;) I'm planning to use SSD drives, so please tell me if they're going to work just like regular HDDs (but much faster). Are they going to be used in a RAID (is this possible with SSDs)? Here is what I would like to get:

      - Intel® Server System SR1600URHSR (Urbanna) or Intel® Server System SR1695WBAC
      - 2 x Intel Xeon X5650
      - 4 x 16 GB DDR3 1333 MHz Kingston ECC Reg (KVR13R9D4/16)
      - 3 x (or maybe 4 x) 480 GB SSD Intel 520 Series (SSDSC2CW480A3K5)

    Which server system would be better? Is the listed hardware new/good enough and worth buying at the moment? Should I perhaps look at something slightly more expensive but more up to date and powerful? As software I would like to use CentOS 6 64-bit with WHM/cPanel. Any other suggestions for a cheaper and equally or more powerful server management system than WHM? What are the most important points to keep in mind when starting/maintaining your own server?

    Read the article

  • Free/opensource application for charting stock prices?

    - by Homunculus Reticulli
    I am looking for a free or FOSS software application for SIMPLY charting stock prices. I am not interested in any of the other nonsense typically bundled with such packages (technical analysis, backtesting, tracking, etc.). All I want to do is the following:

      - import a file from CSV and plot it on the chart
      - scroll the chart left/right (a zoom feature would be nice too)
      - draw a straight line (between 2 points) on the plot
      - plot the graph at different resolutions (e.g. weekly, monthly - or some other custom resolution that I want)
      - print the displayed graph (I can always use screen capture if printing is too much to ask)

    That's all I want to do. I am not interested in anything else. I would have thought I could have found something by now. I would have written my own tool (I still will do that at a later stage), but I am a bit short of time at the moment, so I just want something that will do all of the above. Can anyone recommend a package? Last but not least, I am running on Linux (and would prefer to do so - BUT if I have to, I can run on, ahem - you know, Windows).
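    For what it's worth, the list above is close to a one-screen matplotlib script, which may serve as a stopgap until a proper package turns up. A sketch, assuming a CSV with "date" and "close" columns (the file name and column names are made up here):

      # Plot weekly closing prices from a CSV; the default matplotlib window
      # already provides pan/zoom, and savefig() or the toolbar handles printing.
      import pandas as pd
      import matplotlib.pyplot as plt

      prices = pd.read_csv("prices.csv", parse_dates=["date"], index_col="date")
      weekly = prices["close"].resample("W").last()   # use "M" for monthly

      fig, ax = plt.subplots()
      ax.plot(weekly.index, weekly.values)
      # A straight line between two points, e.g. the first and last close
      ax.plot([weekly.index[0], weekly.index[-1]],
              [weekly.iloc[0], weekly.iloc[-1]])
      plt.show()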

    Read the article

  • (Zywall USG 300) NAT bypassed when accessing in-house-server From LAN Via domain name

    - by mschr
    My situation is like this: I host a number of websites from within our joint network solution. The network consists of basically 3 categories:

      - the known public, registered via MAC, given a static DHCP lease
      - the anonymous LAN connections, given a lease from a specific DHCP range
      - switches, Unix hosts, firewall

    Now, consider the following hosts, which are the ones of interest:

      111.111.111.111 (ZyWall USG 300 WAN)
      192.168.1.1     (ZyWall USG 300 LAN) - load balances and monitors bandwidth, plus handles NAT
      192.168.1.2     (Linux www) - serves mydomain1.tld and mydomain2.tld
      192.168.123.123 (random LAN client) - accesses mydomain1.tld from the LAN
      23.234.12.253   (random external client) - accesses mydomain1.tld via the WAN

    DNS A records are set up so that both mydomain1.tld and mydomain2.tld point to 111.111.111.111, and the Linux www host serves the HTTP parts with VirtualHost configurations, setting up the document roots per ServerName - this is not so interesting though. A NAT rule translates 111.111.111.111:80 to 192.168.1.2:80 (1:1 NAT). Our problem follows: when accessing http://mydomain1.tld from outside the joint network (the example host 23.234.12.253), everything is fine - the ZyWall receives the request on port 80 and maps it to the Linux host's httpd. However, when trying to go through the NAT from the LAN side (in-house, the example host 192.168.123.123), one gets filtered by the ZyWall's port 80 firewall. I know this only because port 443 is open for the administration interface, and https://mydomain1.tld prompts for the ZyWall login. So my conclusion is that LAN clients accessing 111.111.111.111 are in fact routed to 192.168.1.1 while bypassing the NAT table. I need to know how to set up NAT / policy routes so that LAN -> WAN -> LAN will work with the proper address translations, instead of doing the 'quick nameserver lookup' or whatever this might be.
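    Besides a NAT loopback / policy route on the ZyWall itself, the other common way out is split-horizon DNS: give internal clients records that point straight at the web server's LAN address, so their traffic never has to hairpin through the firewall. A sketch of the internal-only records, assuming there is an internal DNS server the LAN clients already use:

      ; internal view only - the public zone keeps pointing at 111.111.111.111
      mydomain1.tld.   IN A   192.168.1.2
      mydomain2.tld.   IN A   192.168.1.2

    Whether that is acceptable depends on whether the anonymous LAN range is supposed to reach those sites at all; if it is, this avoids touching the ZyWall's NAT rules entirely.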

    Read the article

  • Specific DNS sometimes resolves to wildcard, incorrectly

    - by Mojo
    I have an intermittent problem, and I'm not sure where to start trying to troubleshoot it. In our dev environment, we have two visible IP addresses on load balancers, one for the front-end and one for a number of back-end service machines. The front-end is configured to take a wildcard DNS name to support generic "portals":

      dev.example.com    A      10.1.1.1
      *.dev.example.com  CNAME  dev.example.com

    The back-end servers are all specific names within the same space:

      core.dev.example.com    A      10.1.1.2
      cms.dev.example.com     CNAME  core.dev.example.com
      search.dev.example.com  CNAME  core.dev.example.com

    Here's the problem. Periodically a developer or a program trying to reach, say, cms.dev.example.com will get a result that points to the front-end instead of the back-end load balancer:

      cms.dev.example.com is an alias to core.dev.example.com
      core.dev.example.com is an alias to dev.example.com (WRONG!)
      dev.example.com 10.1.1.1

    The developers are all on Mac OS X machines, though I've seen the problem occur on an Ubuntu machine as well, using a local cloud host DNS resolver. Sometimes the developer is using a VPN, which directs the DNS to its own resolver, and sometimes he's on the local net using a DNS resolver assigned by the NAT router. Sometimes clearing the Mac OS X DNS cache, logging into the VPN, then logging out of the VPN, will make the problem go away. The origin authoritative server is on Zerigo, and a dig directly against their name servers always seems to give the correct answer. The published DNS cache time for these records is 15 minutes, but the problem has been intermittent for about a week. Any troubleshooting suggestions?
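    When the bad answer shows up again, it may help to pin down which resolver is handing it out by querying a few paths side by side (the authoritative server name below is a placeholder; `dig dev.example.com NS` shows the real ones):

      # Straight from an authoritative server, bypassing every cache
      dig @a.ns.zerigo.net cms.dev.example.com

      # Full resolution from the root, also bypassing the local resolver
      dig cms.dev.example.com +trace

      # Whatever resolver the machine is configured with right now
      dig cms.dev.example.com

      # Flush the Mac OS X resolver cache before re-testing
      dscacheutil -flushcache

    If the first two stay correct while the third flips to the wildcard, the VPN or NAT-router resolver is mis-caching or rewriting the CNAME chain; if even the authoritative answer flips, the zone itself is being served inconsistently.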

    Read the article

  • Debian - starting UFW (Uncomplicated Firewall) before network interfaces are operational

    - by Tomasz Zielinski
    I want to install UFW on Debian Lenny. Everything looks straightforward, except that I don't know where to plug in the UFW startup script so that it configures iptables before hax0rs can break in. I've reviewed the runlevel directories, and in /etc/rc0.d, /etc/rc6.d and /etc/rcS.d there are items like these:

      S35networking -> ../init.d/networking
      S36ifupdown -> ../init.d/ifupdown

    Runlevels 0 and 6 are for shutdown and reboot, so I guess nothing should be changed there, but runlevel S advertises itself (in its README) like something for me:

      The scripts in this directory whose names begin with an 'S' are executed once when booting the system, even when booting directly into single user mode.

      The following sequence points are defined at this time:
      * After the S40 scripts have executed, all local file systems are mounted and networking is available. All device drivers have been initialized.

    (What bothers me is that both rc0/6.d and rcS.d point to the same networking and ifupdown scripts, but after looking at the sources I believe those scripts are smart enough to figure out whether to start or stop networking.) Now, I think that I should plug my /lib/ufw/ufw-init into /etc/rcS.d, with a priority higher than that of ifupdown and networking, i.e. <= 38 for my /etc/rcS.d. Am I right in this "analysis"?
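    If that analysis holds, creating the link is the easy part; a sketch, assuming the script to run really is /lib/ufw/ufw-init and that it should sort before the S35networking / S36ifupdown links shown above:

      # By hand:
      ln -s /lib/ufw/ufw-init /etc/rcS.d/S34ufw-init

      # Or, with a small wrapper installed as /etc/init.d/ufw-init, using the
      # legacy update-rc.d syntax available on Lenny:
      update-rc.d ufw-init start 34 S .

    Either way, it is worth confirming after a reboot (for example with `iptables -L` from a console) that the rules really are in place before the interfaces come up.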

    Read the article

  • Accessing Virtual Host from outside LAN

    - by Ray
    I'm setting up a web development platform that makes it as easy as possible to write and test all code on my local machine, and sync this with my web server. I set up several virtual hosts so that I can access my projects by typing in "project" instead of "localhost/project" as the URL. I also want to set this up so that I can access my projects from any network. I signed up for a DynDNS URL that points to my computer's IP address. This worked great from anywhere before I set up the virtual hosts. Now when I try to access my projects by typing in my DynDNS URL, I get the 403 Forbidden error message, "You don't have permission to access / on this server." To set up my virtual hosts, I edited two files: hosts in the system32/drivers/etc folder, and httpd-vhosts.conf in the Apache folder of my WAMP installation. In the hosts file, I simply added the server name to associate with 127.0.0.1. I added the following to the httpd-vhosts.conf file:

      <VirtualHost *:80>
          ServerAdmin webmaster@localhost
          DocumentRoot "c:/wamp/www/ladybug"
          ServerName ladybug
          ErrorLog "logs/your_own-error.log"
          CustomLog "logs/your_own-access.log" common
      </VirtualHost>

      <VirtualHost *:80>
          ServerAdmin webmaster@localhost
          DocumentRoot "c:/wamp/www"
          ServerName localhost
          ErrorLog "logs/localhost-error.log"
          CustomLog "logs/localhost-access.log" common
      </VirtualHost>

    Any idea why I can't access my projects by typing in my DynDNS URL? Also, is it possible to set up virtual hosts so that when I type in http://projects from a random computer outside of my network, I access url.dyndns.info/projects (a.k.a. my WAMP projects on my home computer)? Help is much appreciated, thanks!
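    One likely angle (an assumption based on how name-based virtual hosts are matched, not a verified diagnosis): requests arriving via the DynDNS hostname carry that hostname in the Host header, so they match neither ServerName ladybug nor localhost and fall into whichever vhost Apache treats as the default, where WAMP's default "localhost only" access rules produce the 403. Adding the DynDNS name as an alias and explicitly allowing the directory might look like this (url.dyndns.info stands in for the real hostname):

      <VirtualHost *:80>
          ServerAdmin webmaster@localhost
          DocumentRoot "c:/wamp/www"
          ServerName localhost
          ServerAlias url.dyndns.info
          <Directory "c:/wamp/www">
              Order allow,deny
              Allow from all
          </Directory>
          ErrorLog "logs/localhost-error.log"
          CustomLog "logs/localhost-access.log" common
      </VirtualHost>

    The second wish (http://projects working from an outside computer) can't really be done with virtual hosts alone, since "projects" only resolves on machines whose hosts file maps it somewhere; outside machines would have to use the full DynDNS name.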

    Read the article

  • how do you add an A record for a root domain

    - by nbv4
    This seems really simple, but I can't figure it out. I'm using xname.org since it's free and I own a bunch of domains spread over a few different registrars. The setup I desire is very simple: one A record that points the bare domain name to my server, plus a wildcard CNAME record pointing all subdomains to the same server. So if the user goes to domain.com it will point them to 285.24.435.75, and if they go to www.domain.com, blah.domain.com, or any other subdomain, they'll get sent to 285.24.435.75. All the examples I read on the internet about setting up A records have the A record set to a subdomain such as www. www is deprecated, so I want nothing to do with it. Currently my xname.org zone looks like this:

      $TTL 86400 ; Default TTL
      domain.com. IN SOA ns0.xname.org. nbvfour.gmail.com. (
            2010052503 ; serial
            10800      ; Refresh period
            3600       ; Retry interval
            604800     ; Expire time
            10800      ; Negative caching TTL
            )
      $ORIGIN domain.com.
        IN NS ns2.xname.org.
        IN NS ns0.xname.org.
        IN NS ns1.xname.org.
      @ IN A 65.49.73.148
      * IN CNAME domain.com

    The '@' symbol is something that the GoDaddy domain interface uses to mean "this root domain", but that may have been specific to that interface and have no meaning here. Before, I had a 'www' entry in the A records and it worked, in the sense that I could ping 'www.domain.com' and it returned a response, but pinging the root domain 'domain.com' returned "no host found".
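    One detail worth checking (an observation from the zone above, not a verified fix): inside a zone file an unqualified name is expanded relative to $ORIGIN, so the wildcard's target "domain.com" without a trailing dot actually means domain.com.domain.com. The '@' A record itself is the standard way to give the bare domain an address, so that part looks right. With fully qualified, dot-terminated names the two records would be:

      @ IN A     65.49.73.148
      * IN CNAME domain.com.

    If pinging domain.com still returns "host not found" after that, the remaining suspects are the registrar's delegation to the xname.org name servers and plain negative caching left over from the earlier failed lookups.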

    Read the article

  • Error when starting .Net-Application from ThinApp-Application

    - by user50209
    One of our customers uses SAP through VMware ThinApp. In SAP there is a button that launches a .NET application from a server. When starting the .NET application directly, there is no error. If the user tries to start the application by clicking the button in the ThinApp application, it displays the following errors:

      Microsoft Visual C++ Runtime Library
      R6034
      An application has made an attempt to load the C runtime library incorrectly.
      Please contact the application's support team for more information.

    After clicking "OK" it displays:

      Microsoft Visual C++ Runtime Library
      Runtime Error!
      R6030 - CRT not initialized

    So, does the customer have to install some components into his ThinApp package (and if yes, which ones) to get things working? Regards, inno

    ----- [EDIT] -----

    @Sean: It's installed the following way: the .exe of the .NET application is on a mapped drive on a server. All clients have the requirements installed (the .NET Framework, for example) and start the .exe from the mapped drive. The ThinApp application tries to start this application and throws the exceptions mentioned above. AFAIK there are no entry points configured for this application. What I should also mention: the .NET application crashes during execution. We have a debug mode implemented that shows what the application is doing; it shows its progress and after some steps it crashes. The interesting point is: it's a .NET application, not a C++ application.

    Read the article

  • How to manage process-to-CPU-core affinities?

    - by Philippe
    I use a distributed user-space filesystem (GlusterFS) and I would like to be sure the GlusterFS processes will always have the computing power they need. Each execution node of my grid has 2 CPUs, with 4 cores per CPU and 2 threads per core (16 "processors" are seen by Linux). My goal is to guarantee that the GlusterFS processes have enough processing power to be reliable, responsive and fast. (There is no marketing here, just the dreams of a sysadmin ;-) I consider two main points:

      1. the GlusterFS processes
      2. I/O for data access (on local disks, or remote disks)

    I thought about binding the Linux kernel and the GlusterFS instances to a specific "processor". I would like to be sure that:

      - no grid job will impact the kernel and the GlusterFS instances
      - researchers' jobs won't be affected by system processes (I'd like to reserve a pool of cores for job execution and be sure that no system process will use these CPUs)

    But what about I/O? As we handle a huge amount of data (several terabytes), we'll have a lot of interrupts. How can I distribute these operations across my processors? What are the "best practices"? Thanks for your comments!
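    A minimal sketch of the pinning side of this, assuming glusterfsd is the daemon to protect and that CPUs 0 and 8 are the two hardware threads of one physical core (check /proc/cpuinfo or lscpu for the real topology before copying the numbers):

      # Pin the running GlusterFS daemon(s) onto the reserved CPUs
      for pid in $(pgrep glusterfsd); do
          taskset -pc 0,8 "$pid"
      done

      # Launch grid jobs restricted to the remaining CPUs so they cannot
      # land on the reserved ones
      taskset -c 1-7,9-15 ./run_grid_job.sh

    The interrupt side is a separate knob: per-IRQ affinity lives in /proc/irq/<n>/smp_affinity (or is managed by irqbalance), so network and disk controller interrupts can likewise be steered away from, or deliberately onto, the cores reserved for GlusterFS.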

    Read the article
