Search Results

Search found 4834 results on 194 pages for 'dns srv'.


  • Add a Cache Clearing Button to Firefox

    - by Asian Angel
    While emptying your browser's cache may not be something you need to worry about often, or at all, there are times when clearing it can be helpful. The Empty Cache Button extension gives you instant, on-demand cache clearing in Firefox.

    Some reasons why you might want or need to clear your browser's cache:
    - Clear out older (or out-of-date) versions of images and other files from your favorite websites
    - Free up disk space
    - Fix some browser behavior issues
    - Help protect privacy (e.g. images displayed within a personal account)

    Before: for our example we loaded three webpages in order to add content to the browser's cache. Using CacheViewer we were able to easily see the contents of the cache after the pages finished loading. But what if you need to clear your cache immediately, without restarting your browser (if the options are set to empty the cache on browser exit)? Note: CacheViewer is available as a separate extension.

    Empty Cache Button in action: once you install the extension, all you need to do is right-click on any of your browser's toolbars, select "Customise", and drag the toolbar button to an appropriate location in your browser's UI. To clear the cache, simply click the button; that is all there is to it. When the cache is empty, a small message window appears in the lower right corner of the desktop. Opening CacheViewer again shows that everything has been cleared out. Terrific!

    Conclusion: if you ever find yourself needing to clear your browser's cache immediately, the Empty Cache Button extension provides an easy way to do so without restarting the browser.

    Links: Download the Empty Cache Button extension (Mozilla Add-ons)

  • How to Assign a Static IP to an Ubuntu 10.04 Desktop Computer

    - by Mysticgeek
    If you have a home network with several computers, assigning them static IP addresses can make troubleshooting easier. Today we take a look at switching from DHCP to a static IP in Ubuntu.

    Using static IPs prevents address conflicts between machines and can allow easier access to them. If you have a small home network and are satisfied with the machines getting their IP addresses automatically via DHCP, there is nothing to be gained by using static addresses. Static IPs aren't necessarily for the average user, but if you're a geek who wants to know the address assigned to each machine, they can allow for faster troubleshooting.

    To change your Ubuntu machine to a static IP, go to System \ Preferences \ Network Connections. In our example we're on a wired system, so click the Wired tab, select Auto eth0, and click Edit. Select the IPv4 Settings tab, change Method to Manual, and click the Add button. Then type in the static IP address, subnet mask, DNS servers, and default gateway, and click Apply when you're finished. Make sure to hit Enter after typing in the default gateway, otherwise it will revert back to 0.0.0.0. You'll need to enter your admin password before the changes take effect.

    To verify the changes were made successfully, launch a Terminal session and type ifconfig at the command prompt. You also might want to ping the address from another machine to make sure everything is communicating. If you want to assign a static IP to your Windows machines, check out our article on assigning a static IP on Windows systems (make sure to browse the comments, as our readers have some good suggestions).

    Whether you have a small office or a home network set up with a server and several machines, using a static IP on each device can help you manage them easily. Again, it isn't for everyone; it really depends on how your network is set up and the way you use it.
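    For reference, the same change can be made from the command line. This is a hedged sketch of the equivalent /etc/network/interfaces stanza on 10.04; all addresses are placeholders for your own network:

      auto eth0
      iface eth0 inet static
          address 192.168.1.100
          netmask 255.255.255.0
          gateway 192.168.1.1

      # Nameservers on 10.04 go in /etc/resolv.conf; then apply with:
      #   sudo /etc/init.d/networking restart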

  • Asynchronous connectToServer

    - by Pavel Bucek
    Users of JSR-356 – Java API for WebSocket – are probably familiar with the WebSocketContainer#connectToServer method. This article is about its usage and an improvement introduced in a recent Tyrus release.

    WebSocketContainer#connectToServer does what it says: it connects to a WebSocket server endpoint deployed on some compliant container. It takes two or three parameters (depending on which representation of the client endpoint you are providing) and returns a Session. The returned Session represents the WebSocket connection, and you are instantly able to send messages, register MessageHandlers, etc.

    An issue can appear when you are trying to create a responsive user interface and use this method: its execution blocks until the Session is created, which usually means some container needs to be started, DNS queried, a connection created (it's even more complicated when there is a proxy on the way), etc. Nothing that could really be considered responsive. The trivial and correct solution is to do this in another thread and monitor the result, but... why should users have to do that? :-)

    Tyrus now provides async* versions of all connectToServer methods, which perform only a simple (= fast) check in the same thread, then fire a new one and perform all the other tasks there. The return type of these methods is Future<Session>. List of added methods:

      public Future<Session> asyncConnectToServer(Class<?> annotatedEndpointClass, URI path)
      public Future<Session> asyncConnectToServer(Class<? extends Endpoint> endpointClass, ClientEndpointConfig cec, URI path)
      public Future<Session> asyncConnectToServer(Endpoint endpointInstance, ClientEndpointConfig cec, URI path)
      public Future<Session> asyncConnectToServer(Object obj, URI path)

    As you can see, every connectToServer variant has its async* alternative. All these methods throw DeploymentException, same as the synchronous variants, but some of these errors cannot be thrown as a result of the first method call, so you might get one as the cause of the ExecutionException thrown when Future<Session>.get() is called.

    Please let us know if you find these newly added methods useful or if you would like to change something (signature, functionality, ...) – you can send us a comment to [email protected] or ping me personally.

    Related links: https://tyrus.java.net | https://java.net/jira/browse/TYRUS/ | https://github.com/tyrus-project/tyrus

  • How to Name Linked Servers

    - by Bill Graziano
    I did another SQL Server migration over the weekend that dealt with linked servers. I've seen all kinds of odd naming schemes; there are a few I like and a few I suggest you avoid.

    Don't name your linked server for its IP address. At some point whatever is on the other end of that IP address will move. You'll probably need to point your linked server to a new IP address without changing the name of the linked server, and then you've completely lost any context around it. Bonus points if a new SQL Server eventually ends up at the old IP address, further adding confusion when you're trying to troubleshoot.

    Don't name your linked server based on its instance name. This one is less obvious. It sounds nice to have a linked server named [VSRV1\SQLTRAN01]. You know what it is and it's easy to use. It's less nice when you've got 200 stored procedures that all reference this linked server but the database they reference has moved to a new instance. Now when you query it, you're actually querying a different instance. (Please note: I'm not saying it's a good idea to have 200 stored procedures that all reference a linked server. I'm just saying it's not all that uncommon.)

    Consider naming your linked server something you can easily search on. See my note above. You can also get around this by always enclosing the name in brackets; that is harder to enforce unless you use some odd characters in it.

    Consider naming your linked server based on its function. For example, I've had some luck having a linked server named [DW] that points to our data warehouse server. That server can change names or physically move, and all I need to do is update the linked server to point to the new destination. The descriptive name of the linked server is still accurate, no code needs to change, and people still know what it is just by looking at it.

    Consider naming your linked server for the database. I'm still thinking this one through. It may mean you have multiple linked servers that point to the same instance. I've found that database names rarely change, and it also makes it easier to move individual databases to new servers.

    Consider pointing your linked servers to DNS entries and not IP addresses. I've done this for reporting databases and had some success, especially for read-only snapshots that can get created on the main database or on the mirror.

    What issues have you had with linked server names? What has worked for you? Where are the holes in my approach?
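    For what it's worth, a function-named linked server like [DW] can be created with the standard sp_addlinkedserver procedure. A hedged sketch run through sqlcmd, where the instance name and the data-warehouse host are placeholders:

      sqlcmd -S MYSERVER -E -Q "EXEC master.dbo.sp_addlinkedserver @server = N'DW', @srvproduct = N'', @provider = N'SQLNCLI', @datasrc = N'dw.corp.example.com';"

    Repointing [DW] later is then a drop-and-recreate of the linked server, with no stored procedure changes.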

  • Advice for moving from custom domain hosted on blogspot to new custom domain on hosting provider [closed]

    - by Chan
    Need some advice :)

    a) Facts:
    1. Currently we have a blog, www.thebigbigsky.net, hosted on blogspot.com. The site has been up for around a year and Google has cached some of the posts.
    2. The idea for setting up the blog was to promote/share articles and write-ups about personal development and self-growth, and also to offer counseling services related to them.
    3. There is another subdomain, chinese.thebigbigsky.net, hosting similar blog posts but in Chinese. This is hosted on Blogspot as well.

    b) What we want to do:
    1. We did some research on the internet and understood that, to make a website more popular, it is better to have a domain name related to the content of the site. Hence, in our case, a domain name containing "personal development" would be more ideal and easier for people to remember.
    2. Hence, we are thinking to move away from Blogspot to a meaningful new domain (example: personaldevelopmentwithxyz.com) and move all content there. The new domain would be hosted with a hosting provider, e.g. hostgator.com.

    c) Here are the steps we have in mind:
    1. Register the new domain (example: personaldevelopmentwithxyz.com).
    2. Get a hosting provider, e.g. hostgator.com.
    3. Set up a new site based on WordPress and import all content from the existing blog to this new site.
    4. Switch the current blog back to thebigbigsky.blogspot.com.
    5. Switch the DNS of www.thebigbigsky.net to point to the new site on hostgator.com.
    6. On hostgator.com, set up a permanent 301 redirect from www.thebigbigsky.net to personaldevelopmentwithxyz.com by following the tutorial mentioned here (a hedged example follows below): http://www.blogbloke.com/migrating-redirecting-blogger-wordpress-htaccess-apache-best-method/

    Questions:
    1. Will it have a major impact on the current Google PageRank or SEO of thebigbigsky.net? As the site is not that popular, we assume the impact would not be a major one.
    2. Is the domain personaldevelopmentwithxyz.com considered too long (for SEO etc.)?
    3. Steps c.4 and c.5 above – will this work for our case?
    4. How about the subdomain chinese.thebigbigsky.net? We would like to keep it as a separate blog with a domain like chinese.personaldevelopmentwithxyz.com.
    5. We added Facebook comments to the existing posts on www.thebigbigsky.net and would not want to lose them. Will the comments still be there after we move to the new domain?
    6. Any better idea how to do it? :)

    Thank you very much! regards, -chan
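    For step c.6, a hedged sketch of the .htaccess rule in the old domain's document root, assuming Apache with mod_rewrite enabled (the linked tutorial covers the Blogger-specific details):

      RewriteEngine On
      RewriteCond %{HTTP_HOST} ^(www\.)?thebigbigsky\.net$ [NC]
      RewriteRule ^(.*)$ http://personaldevelopmentwithxyz.com/$1 [R=301,L]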

  • How do I start an LXDE session automatically after tightvncserver starts, so I can see the desktop when connecting to the host via a VNC client?

    - by Oleksandr Dudchenko
    I have a system equipped with an Intel Celeron 1.1 GHz (Socket 370) processor and 384 MB of RAM on an Intel D815EGEW motherboard, which supports the wake-on-LAN function. I want to use this PC for internet sharing on the local network; it is also a DHCP+DNS server as well as a router/gateway. Based on the above, I decided to install Lubuntu, as it is a lightweight system. I installed Lubuntu 10.04.4 LTS from the alternate ISO. The system has no auto-login. It boots and has acceptable performance.

    The host PC has four onboard network adapters:
    - eth0 – ethernet controller used for local network connections; has the static address 10.0.0.1
    - eth1 – ethernet controller, not used and not configured so far; I plan to connect a printer here later on
    - eth2 – ethernet controller used to connect to the internet, which we plan to share with the local network
    - wlan0 – wireless controller, used in the role of access point for the local network; has the address 10.0.0.2

    We want to control our gateway remotely, so we need to be able to power it on remotely. To allow this I did the following:

      $ cd /etc/init.d/
      $ sudo vim wakeonlanconfig

    Wrote the following lines to the newly created file, then saved and closed it:

      #!/bin/bash
      ethtool -s eth0 wol g
      ethtool -s eth2 wol g
      exit

    Made the file executable and included it in the boot sequence:

      $ sudo chmod a+x wakeonlanconfig
      $ sudo update-rc.d -f wakeonlanconfig defaults

    After a reboot we are able to power the system on remotely. Next we need to be able to connect remotely to the host via SSH and VNC, so I installed the following packages:

      $ sudo apt-get update
      $ sudo apt-get install openssh-server tightvncserver

    Added the ssh daemon to the boot sequence and powered off the host PC:

      $ sudo update-rc.d -f ssh defaults
      $ sudo halt

    Then I went to a remote place, sent the magic packet and powered the host up. The system started, and I connected to the host via PuTTY from a remote system under Windows, logged in, and ran the command to start the VNC server:

      $ tightvncserver -geometry 800x600 -depth 16 :2

    The VNC server started successfully and I got a message like the following:

      New 'X' desktop is gateway:2
      Starting applications specified in /home/dolv/.vnc/xstartup
      Log file is /home/dolv/.vnc/gateway:2.log

    Using the UltraVNC Viewer program under Windows I connected to the host's VNC server, entered the password and... saw only a mouse cursor in the form of a cross on a grey 800x600 background, no desktop. Here is my .vnc/xstartup file:

      #!/bin/sh
      xrdb $HOME/.Xresources
      xsetroot -solid grey
      #x-terminal-emulator -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
      #x-window-manager &
      # Fix to make GNOME work
      export XKL_XMODMAP_DISABLE=1
      /etc/X11/Xsession

    The question: what do I have to change, and where, to make an LXDE session start automatically after tightvncserver starts?
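    A hedged fix, assuming the stock startlxde session binary from Lubuntu 10.04: replace the Xsession line in ~/.vnc/xstartup so the VNC display runs LXDE instead of the (absent) GNOME session, then kill and restart the server:

      #!/bin/sh
      xrdb $HOME/.Xresources
      xsetroot -solid grey
      # Start an LXDE session on this VNC display:
      exec startlxde

      $ tightvncserver -kill :2
      $ tightvncserver -geometry 800x600 -depth 16 :2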

  • Virtual Lab part 2 – Templates, Patterns, Baselines

    - by Geoff N. Hiten
    Once you have chosen a good virtualization platform, whether a desktop, server, or laptop environment, the temptation is to build "X". "X" may be a SharePoint lab, a virtual cluster, an AD test environment, or some other cool project that you really need RIGHT NOW. That would be doing it wrong.

    My grandfather taught woodworking and cabinetmaking for twenty-seven years at a trade school in Alabama. He was the first instructor hired at that school and the only teacher for the first two years. His students built tables, chairs, and workbenches so the school could start its HVAC courses. Visiting as a child, I also noticed many extra "helper" stands, benches, holders, and gadgets, all built from wood. What does that have to do with a virtual lab, you ask? Well, that is the same approach you should take: build stuff that you will use. Not for solving a particular problem, but to let the virtual lab be part of your normal troubleshooting toolkit.

    Start with basic copies of various operating systems. Load and patch server and desktop OS environments. This also helps build your collection of ISO files, another essential element of a virtual lab. Once you have these "baseline" images, you can use your virtualization software's snapshot capability to freeze the image. Clone the snapshot and you have a brand new, fully patched machine in mere moments. You may have to sysprep some of the Microsoft OS environments if you are going to create a domain environment or experiment with clustering; that is still much faster than loading and patching from scratch.

    So once you have a stock of raw materials (baseline images in this case), where should you start? Again, my grandfather's workshop gives us the answer. In the shop it was workbenches and tables to hold large workpieces that made the equipment more useful. In a Windows environment the same role falls to the fundamental network services: DHCP, DNS, Active Directory, routing, file services, and storage services. Plan your internal network setup. Build out an AD controller with all the features listed. Make the actual domain an isolated domain so it will not care about where you take it. Add the Microsoft iSCSI target. Once you have this single system, you can leverage it for almost any network environment beyond a simple stand-alone system.

    Having these templates and fundamental infrastructure elements ready to run means I can build a quick lab in minutes instead of hours. My solutions are well-tested, my processes fully documented with screenshots, and my plans validated well before I have to make any changes to client systems. The work I put in is easily returned in increased value and client satisfaction.

  • RIM's current BB7 developer toolset is a joke

    - by mbrit
    tl;dr – RIM's current developer toolset is not fit for purpose.

    Background to this is that I'm currently working on a PhoneGap/Cordova project for a client that has to run on BlackBerry. The tooling is so ridiculous to use that even though I had a gentle dig at them in a Guardian piece, it's worth having a more full-on attack.

    At the moment, RIM's pitch is that apps are built for the current BBOS7 devices using WebWorks. This is an HTML-based toolset: essentially a browser is spun up in a native app container and your app is powered by JavaScript. Specific JavaScript libraries exist that thunk down to native capabilities on the device. I happen to use PhoneGap/Cordova in combination with this.

    The tooling is non-existent. I'm using TextMate, Ant, and Terminal to develop the app. There's no "console.log" output and no debugging; the only way to instrument the app is to put "alert" calls in your code. Apart from the fact that that's *not* fine in 2012, how about this: every time you deploy a new app to the device, the device has to reboot. This process takes six minutes on a relatively modern BlackBerry device.

    How about this as well: in order to get a file into the package it has to be signed. My small app over here has 100 different files (75 or so generated). Signing doesn't happen locally; it happens on RIM's servers in Waterloo. Thus whenever you deploy the app, the utility has to call RIM's servers 100 times. More to the point, sometimes during the day these servers have "micro-downtime" moments where they're unreachable for five or ten minutes, normally two or three times a day. Oh yes, you'll also get an email sent to you per signing, on success or failure: 100 inbound emails per deployment. (I started this post at the beginning of one of these cycles, by the way. That's how long it takes to build and deploy *once*. By the way, the change I made didn't work.)

    To clarify:
    * Change the script.
    * Build it using Ant.
    * Ant will spin up a Java app that talks to RIM's servers to sign it.
    * Receive 100 emails, assuming the server is up.
    * App deployed – takes about 30 seconds.
    * BlackBerry device restarts – takes about six minutes.
    * Find and open the app. Go through security prompts.
    * Test the app, with no "console.log" output and no debugger.

    "Why not use the simulator?" I hear you ask. Well, apart from the fact that the simulator refused to reach any network service over HTTPS that I happen to own? (Some people suggest changing DNS settings for this known issue.) Admittedly, the simulator does show you console.log, but you still have the "six minute" restart issue on the simulator.

    Developers will understand this problem. Breaking concentration for six-plus minutes every time you want to deploy an app turns developing into a nightmare. Combine that with no worthy debugging tools and the toolset becomes a joke.

  • Setting up Ubuntu Server as a Router with DHCPD and 3 Ethernet devices

    - by cengbrecht
    My configuration: Ubuntu 12.04, dhcp3-server, eth0, eth1, eth2. (Edit: removed br0 and br1.)

    eth0 is the external connection; eth1 and eth2 are the internal network, supposed to be separate networks for students and teachers respectively. What I would like is the internet from the external device shared to devices 1 and 2, with the DHCP server controlling the two internal devices. DHCP is already working; the part I am stuck on is forwarding for internet access. I have set up a script that I found here: Router, with the original script linked here: Ubuntu Router Guide.

      echo -e "\n\nLoading simple rc.firewall-iptables version $FWVER..\n"
      IPTABLES=/sbin/iptables
      #IPTABLES=/usr/local/sbin/iptables
      DEPMOD=/sbin/depmod
      MODPROBE=/sbin/modprobe
      EXTIF="eth0"
      INTIF="eth1"
      INTIF2="eth2"
      echo "  External Interface: $EXTIF"
      echo "  Internal Interface: $INTIF"
      echo "  Internal Interface: $INTIF2"
      EXTIP=`ifconfig $EXTIF | grep 'inet addr:' | sed 's#.*inet addr\:\([0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\).*#\1#g'`
      echo "  External IP: $EXTIP"
      #======================================================================
      #== No editing beyond this line is required for initial MASQ testing ==

    The rest of the script below this is as-is. I can get an IP on the eth1 and eth2 devices, and my computer can see them and they it; however, internet is not being passed through. If you need more information, please just let me know.

    EDIT: I had a 255.255.254.0 network; I believe that was causing the issue (not sure if it will matter on the second card, I will test later). After changing the subnet to 255.255.255.0, pings pass through; however, I cannot get DNS requests to pass. My new firewall rules:

      # /etc/iptables.up.rules
      # Generated by iptables-save v1.4.12 on Wed Nov 28 19:43:28 2012
      *mangle
      :PREROUTING ACCEPT [39:4283]
      :INPUT ACCEPT [39:4283]
      :FORWARD ACCEPT [0:0]
      :OUTPUT ACCEPT [12:4884]
      :POSTROUTING ACCEPT [13:5145]
      COMMIT
      *filter
      :FORWARD ACCEPT [0:0]
      :INPUT ACCEPT [0:0]
      :OUTPUT ACCEPT [0:0]
      -A FORWARD -j LOG
      -A FORWARD -m state -i eth1 -o eth0 --state NEW,ESTABLISHED,RELATED -j ACCEPT
      -A FORWARD -m state -i eth2 -o eth0 --state NEW,ESTABLISHED,RELATED -j ACCEPT
      -A FORWARD -m state -i eth0 -o eth1 --state NEW,ESTABLISHED,RELATED -j ACCEPT
      -A FORWARD -m state -i eth0 -o eth2 --state NEW,ESTABLISHED,RELATED -j ACCEPT
      COMMIT
      *nat
      :INPUT ACCEPT [0:0]
      :PREROUTING ACCEPT [0:0]
      :OUTPUT ACCEPT [0:0]
      :POSTROUTING ACCEPT [0:0]
      -A POSTROUTING -o eth0 -j MASQUERADE
      -A POSTROUTING -o eth0 -j SNAT --to-source 192.168.1.25
      COMMIT

    Not sure what else you may need, but I am using Webmin to control the server (needed so the operators on site know how to use it). If you could explain it as standard CLI commands, or edits to this file directly, then we should be OK. :) And thanks again Erik, I do believe your edits did help.
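    Since pings now pass but DNS doesn't, one hedged simplification to test with. Assumptions: eth0 is the WAN side and gets its address from upstream, so plain MASQUERADE replaces the hard-coded SNAT (the --to-source 192.168.1.25 rule above only works if that really is eth0's current address):

      #!/bin/sh
      # Enable forwarding and NAT out of eth0; accept LAN-to-WAN traffic.
      echo 1 > /proc/sys/net/ipv4/ip_forward
      iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
      iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
      iptables -A FORWARD -i eth2 -o eth0 -j ACCEPT
      iptables -A FORWARD -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT

      # From a client, test DNS directly (udp/53):
      #   dig @8.8.8.8 example.com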

  • How do I run a local and an external website on the same computer with 2 NICs, 2 routers and 3 separate networks?

    - by CandN
    Hello, and hopefully I can get some answers to my question, though I think I'm making it more complicated for myself than it has to be.

    My business is a used auto dealership, and I'm in the process of connecting it to the world via ethernet from the business server (running Xubuntu) to the ISP's ethernet router/modem, so that I can host our own website (no more than 5-10 people probably visiting at any time, mainly paying their bill), as well as set up a web-based internal intranet site, via a DD-WRT router on the 2nd NIC of the business server, that will be accessed over wifi from employees' personal devices. On the other end of this is trying to offer free wifi to customers that is completely separate from the two networks mentioned above.

    Quick rundown:
    1. Website for customers to access. I'm going to use no-ip.org for DNS for the time being, so I'll have a site that customers can access from anywhere in the world at "mybiz.no-ip.org". This will be forwarded to NIC #1 on the server, possibly at an address like "108.69.*.*", as it's being provided an IP from the ISP's modem/router; that is from Time Warner, and they allow NO configuration options.
    2. Website for employees to access. I'm trying not to use the server too much as a desktop, only for critical situations, so having a backend that's separate from the front-facing website is critical. This will be the DD-WRT router hardwired into NIC #2 on the server. This wifi will be password accessible.
    3. Public wifi for customers. The DD-WRT can separate networks if I'm correct; I just can't seem to understand how to separate the two and still have internet access on both. I've done it before, but the "public" wifi (with no password set to connect) kept dropping the connection, like a problem was happening that I couldn't figure out.

    So if I could do a little drawing, this is how it would/should possibly look:

      ISP -- [sends public-facing IP of 108.69.*.1/8] -- ISP modem/router
      ISP modem/router (ethernet only) -- [gives private IP 108.69.*.2] -- server NIC #1
      server NIC #1 -- [gives private IP 108.69.*.3] -- DD-WRT router
      DD-WRT router -- [DHCP enabled, giving IPs 172.16.0.0/16] -- employees' network
        |
        --------- [DHCP enabled, giving IPs 192.168.1.0/24] -- public wifi

    Hope it's not too confusing, but if anyone could give me some good direct tutorials on how to accomplish this, or if YOU know, then it'll be a lot of help. Thanks to all in advance. Need anything else explained? Don't hesitate to ask!

    *Using the LAMP stack with Webmin/Virtualmin. The customer site is located in /var/www2/, the private employee site in /var/www/. Using no-ip.org's dynamic client updater.

  • How to switch off wifi on startup or from the console

    - by mit
    I have installed Ubuntu 10.04 on a laptop. Wifi is switched on by default on startup; I can disable it by right-clicking the network manager icon in the GNOME bar. How can I make wifi switched off the default? Alternatively, how can I switch off wifi from the console? I have already tried the rfkill command with different parameters, but it does not list any devices and it does not switch off wifi. This is a standard install of the Ubuntu 10.04 i386 Desktop Live CD on an IBM T40 laptop.

    EDIT A: This is the output of some rfkill commands on my system; none of them affect the laptop's wifi:

      $ rfkill --help
      Usage: rfkill [options] command
      Options:
          --version    show version (0.4)
      Commands:
          help
          event
          list [IDENTIFIER]
          block IDENTIFIER
          unblock IDENTIFIER
      where IDENTIFIER is the index no. of an rfkill switch or one of:
          <idx> all wifi wlan bluetooth uwb ultrawideband wimax wwan gps fm
      $ rfkill list
      $ rfkill list wifi
      $ rfkill list all
      $ rfkill list wlan
      $ sudo rfkill list all
      $ sudo rfkill block all
      $ sudo rfkill block wlan
      $ sudo rfkill block wifi
      $

    EDIT B: I have now found that sudo ifconfig eth1 down turns wifi off, and I can turn it back on through the GNOME network applet. But the applet does not reflect the change from the command line; it still believes wifi is switched on, so I have to switch it off and on again in the applet before I can re-enable it after switching it off from the console. Is there a better way?

    This is what the syslog looks like when I switch wireless off and on again from the network manager:

      NetworkManager: <info> (eth1): device state change: 3 -> 2 (reason 0)
      NetworkManager: <info> (eth1): deactivating device (reason: 0).
      NetworkManager: <info> Policy set '24' (eth0) as default for routing and DNS.
      NetworkManager: <info> (eth1): taking down device.
      avahi-daemon[660]: Withdrawing address record for fe80::202:8aff:feba:d798 on eth1.
      kernel: [  971.472116] airo(eth1): cmd:3 status:7f03 rsp0:0 rsp1:0 rsp2:0
      NetworkManager: <info> (eth1): bringing up device.
      NetworkManager: <info> (eth1): supplicant interface state: starting -> ready
      NetworkManager: <info> (eth1): device state change: 2 -> 3 (reason 42)
      avahi-daemon[660]: Registering new address record for fe80::202:8aff:feba:d798 on eth1.*.
      kernel: [  965.512048] eth1: no IPv6 routers present
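    One hedged console-level workaround, assuming the wireless interface really is eth1 as in the syslog above: take the interface down at boot from /etc/rc.local, and use the same command by hand when needed:

      # in /etc/rc.local, before the "exit 0" line:
      ifconfig eth1 down

    This won't update the network manager applet's state, for the reason described in EDIT B.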

  • Server-infrastructure recommendations

    - by Tim van Elsloo
    Here's the thing: I need a cheap, fast, reliable infrastructure that can dynamically scale (like Amazon S3: cloud storage). I'm thinking of three different types of 'servers':

    Application server
    - Should be able to run CentOS (or another light Linux distribution)
    - Should be able to run Apache and PHP
    - Should be able to run GD (so it does rely on its CPU)
    - Should be extremely reliable and fast

    Database server
    - Should be able to run MySQL
    - Should be able to... well, do nothing else :P
    - Should be extremely reliable and fast

    Storage server
    - Should be able to run some kind of file-transfer daemon (like FTP, CouchDB, etc.)
    - Should be able to do nothing else
    - Should be extremely reliable and fast

    So technically, by transferring all static data to two different servers/services, the application server can totally focus on the webpages.

    My questions:
    - What services do you recommend?
    - Which is cheaper, faster and more reliable: using my own server, or using some cloud-storage/cloud-computing service (like Amazon S3, CloudFiles, etc.)?
    - How can I prevent bandwidth abuse (such as DoS attacks causing the bill to be extremely high)?
    - What's the difference between "including CDN" and "excluding CDN"? It seems the price doesn't differ at CloudFiles. Do you have to pay for both "including CDN" and "excluding CDN" when you decide to enable the delivery network, or only "including CDN"?
    - Should I use my own nameserver too, or can I use my domain hoster's nameservers? What are the minimum software specifications of a nameserver? Can I write some software myself? Does anyone have a good protocol description?

    I hope you can answer my questions.

    Answers: I shouldn't write my own nameserver software. Instead, I should use something like BIND (http://osspro.com/2010/05/04/linux-create-your-own-domain-name-server-dns/).

  • Using Network load balancing to distribute load for SharePoint2010 – Part3 of building my own development SharePoint2010 Farm

    - by ybbest
    Part1 of building my own development SharePoint2010 Farm
    Part2 of building my own development SharePoint2010 Farm
    Part3 of building my own development SharePoint2010 Farm

    In my last post I installed SharePoint 2010 on one of the servers (WFE One) and configured it using the OOB SharePoint configuration wizard. In this post I will show you how to use OOB Windows Network Load Balancing to distribute load for a SharePoint 2010 site.

    1. Install SharePoint on the other server, WFE Two (you can follow the steps in my last post), but instead of choosing to create a new farm, select "connect to existing farm" this time.
    2. Click Next, then click the Retrieve Database Names button and select the farm configuration database.
    3. Click Next and enter the passphrase you specified when you first installed the SharePoint farm.
    4. Click Advanced Settings and select "Use this machine to host the web site".
    5. Click OK to finish the configuration.
    6. Next, install NLB on the two WFE (web front end) SharePoint servers.
    7. Configure NLB to create the cluster: go to Start – Administrative Tools – Network Load Balancing Manager.
    8. Right-click the Network Load Balancing Clusters node and select New Cluster.
    9. Type in the host name that is to be part of the new cluster.
    10. Type in the IP address for the cluster.
    11. Select Multicast for this cluster (the default is Unicast).
    12. You can configure the port rules for the clustering, but I will leave the defaults here.
    13. Add the other WFE to the cluster.
    14. Type in the host name that is to be part of the new cluster.
    15. Set the priority to 2.
    16. Click Next to complete the cluster setup.
    17. Create an entry in the DNS for the new cluster.
    18. Add the binding to the IIS site in the IIS Manager.
    19. Change the alternate access mapping for your default site collection from http://sp2010wefone to http://team.
    20. Browse to http://team and you will be redirected to the SharePoint site.

  • Using the @ in SQL Azure Connections

    - by BuckWoody
    The other day I was working with a client on an application they were changing to a hybrid architecture: some data on-premise and other data in SQL Azure and Windows Azure blob storage. I had them make a couple of corrections. The first was that all communications to SQL Azure need to be encrypted; it's a simple addition to the connection string, depending on the library you use. Which brought up another interesting point. They had been using something that looked like this, using the .NET provider:

      Server=tcp:[serverName].database.windows.net;Database=myDataBase;
      User ID=LoginName;Password=myPassword;
      Trusted_Connection=False;Encrypt=True;

    This includes most of the formatting needed for SQL Azure: it specifies TCP as the transport mechanism, the database name is included, Trusted_Connection is off, and encryption is on. But it needed one more change:

      Server=tcp:[serverName].database.windows.net;Database=myDataBase;
      User ID=[LoginName]@[serverName];Password=myPassword;
      Trusted_Connection=False;Encrypt=True;

    Notice the difference? It's the User ID parameter. It includes the @ symbol and the name of the server: not the whole DNS name, just the server name itself. The developers were a bit surprised, since it had been working with the first format that just used the user name. Why did both work, and why is one better than the other?

    It has to do with the connection library you use. For most libraries, the user name is enough. But for some libraries (subject to change, so I don't list them here) the server name parameter isn't sent in a way the load balancer understands, so you need to include the server name right in the login so the system can parse it correctly. Keep in mind the string limit for that is 128 characters, so take the @ symbol and the server name into consideration for user names. The user connection info is detailed here: http://msdn.microsoft.com/en-us/library/ee336268.aspx

    Upshot? Include the @servername in your connection string just to be safe. And plan for that extra space...

  • Can see samba shares but not access them

    - by nitefrog
    For the life of me I cannot figure this one out. I have Samba installed and set up on the Ubuntu box, and from the Windows 7 box I CAN SEE all the shares I created. I created two users on Ubuntu that map to the users in Windows. On Ubuntu they are both admins; on Windows, user A is an admin and user B is a power user. User A can see both shares and access them, but user B can see everything yet can only access the homes directory; the other directory throws an error.

    I have two drives in Ubuntu, and this is the smb.conf file (I am new to Samba):

      [global]
      workgroup = WORKGROUP
      server string = %h server (Samba, Ubuntu)
      wins support = no
      dns proxy = yes
      name resolve order = lmhosts host wins bcast
      log file = /var/log/samba/log.%m
      max log size = 1000
      syslog = 0
      panic action = /usr/share/samba/panic-action %d
      security = user
      encrypt passwords = true
      passdb backend = tdbsam
      obey pam restrictions = yes
      unix password sync = yes
      passwd program = /usr/bin/passwd %u
      passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
      pam password change = yes
      map to guest = bad user
      ; usershare max shares = 100
      usershare allow guests = yes

    And here is the share section. Both user A and user B can access this from Windows, no problems:

      [homes]
      comment = Home Directories
      browseable = no
      writable = yes

    Both user A and user B can see this share, but only user A can access it; user B gets an error thrown:

      [stuff]
      comment = Unixmen File Server
      path = /media/data/appinstall/
      browseable = yes
      ;writable = no
      read only = yes
      hosts allow =

    The permissions for /media/data/appinstall/ are as follows:

      share name: stuff
      "Allow others to create and delete files in this folder" is checked
      "Guest access (for people without a user account)" is checked
      permissions:
        Owner: user A  -  Folder access: create and delete files  -  File access: ---
        Group: user A  -  Folder access: create and delete files  -  File access: ---
        Others:        -  Folder access: create and delete files  -  File access: ---

    I am at a loss and need to get this working. Any ideas? The goal is a setup like this: three users on Windows machines; each user will have their own personal folder on the data drive which only they can access, plus another folder where two of the users have read-only access and one user full access. I had this setup before on Windows, but after what happened I am NEVER going back to Windows, so Unix here I am to stay! I am really stuck. I am running Ubuntu 11; I could reformat and put on version 10 if that would make life easier. I have been dealing with this since Wednesday 3 pm. Thanks.
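    A hedged smb.conf tweak worth trying: the pasted [stuff] section ends with an empty "hosts allow =" line, which can end up denying every host, and the share never names user B. A minimal variant, where userA/userB stand in for the two mapped accounts:

      [stuff]
      comment = Unixmen File Server
      path = /media/data/appinstall/
      browseable = yes
      read only = yes
      valid users = userA userB

    After editing, restart Samba (sudo service smbd restart) and retry from the Windows box as user B.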

  • Why does the login screen fail to appear?

    - by a different ben
    My system: Dell Precision T3500, nVidia Quadro NVS 295, Ubuntu 12.04 x86_64 (3.2.0-32).

    Essential problem: on boot my system won't get past the splash screen. I can switch to another virtual terminal and log in, and I can also ssh in from another system, so it appears the problem might be with the display manager. How can I diagnose and fix this problem?

    More info: from a VT I can issue sudo lightdm restart, and this will bring up the login screen and I can continue from there, so I do have access to my system. update-manager recently updated a number of packages, including a bunch of X11 and Xorg packages, some nVidia drivers, rpcbind, etc.

    My boot log (if that is any guidance) says the following:

      fsck from util-linux 2.20.1 (repeated four times)
      rpcbind: Cannot open '/run/rpcbind/rpcbind.xdr' file for reading, errno 2 (No such file or directory)
      rpcbind: Cannot open '/run/rpcbind/portmap.xdr' file for reading, errno 2 (No such file or directory)
      /dev/sda1: clean, 597650/1525920 files, 3963433/6103296 blocks
      /dev/sda7: clean, 11/6406144 files, 450097/25608703 blocks
      /dev/sda5: clean, 158323/1525920 files, 1886918/6103296 blocks
      /dev/sda8: clean, 250089/107929600 files, 111088810/431689728 blocks
      Skipping profile in /etc/apparmor.d/disable: usr.bin.firefox
      Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd
      * Starting AppArmor profiles                                    [ OK ]
      Loading the saved-state of the serial devices...
      /dev/ttyS0 at 0x03f8 (irq = 4) is a 16550A
      * Starting ClamAV virus database updater freshclam             [ OK ]
      * Starting Name Service Cache Daemon nscd                      [ OK ]
      * Starting modem connection manager                            [ OK ]
      * Starting K Display Manager                                   [ OK ]
      * Starting mDNS/DNS-SD daemon                                  [ OK ]
      * Stopping GNOME Display Manager                               [ OK ]
      * Stopping K Display Manager                                   [ OK ]
      * Starting bluetooth daemon                                    [ OK ]
      * Starting network connection manager                          [ OK ]
      * Starting Postfix Mail Transport Agent postfix                [ OK ]
      speech-dispatcher disabled; edit /etc/default/speech-dispatcher
      * Starting VirtualBox kernel modules                           [ OK ]
      * Starting the Winbind daemon winbind                          [ OK ]
      saned disabled; edit /etc/default/saned
      * Starting anac(h)ronistic cron                                [ OK ]
      * Stopping anac(h)ronistic cron                                [ OK ]
      * Checking battery state...                                    [ OK ]
      nxsensor is disabled in '/usr/NX/etc/node.cfg'
      Trying to start NX server: NX 122 Service started. NX 999 Bye.
      Trying to start NX statistics: NX 723 Cannot start NX statistics:
      NX 709 NX statistics are disabled for this server. NX 999 Bye.
      * Stopping System V runlevel compatibility                     [ OK ]
      * Starting Mount network filesystems                           [ OK ]
      * Stopping Mount network filesystems                           [ OK ]
      * Stopping regular background program processing daemon        [ OK ]
      * Starting regular background program processing daemon        [ OK ]
      * Starting anac(h)ronistic cron                                [ OK ]
      * Stopping anac(h)ronistic cron                                [ OK ]
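    A hedged first diagnostic pass, given that the boot log shows both KDM and GDM being started and stopped while 12.04 normally uses LightDM (the file and package names below are the stock ones; adjust if your setup differs):

      # Which display manager is supposed to start?
      cat /etc/X11/default-display-manager
      # LightDM's own log usually names the failure:
      cat /var/log/lightdm/lightdm.log
      # Re-select the default display manager:
      sudo dpkg-reconfigure lightdm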

  • Multi subnet in ubuntu with dnsmasq

    - by Fox Mulder
    I have a multi-LAN-port box running Ubuntu Server 11.10. I set up the network in the /etc/network/interfaces file as follows:

      auto lo
      iface lo inet loopback

      auto eth0
      iface eth0 inet static
          address 192.168.128.254
          netmask 255.255.255.0
          network 192.168.128.0
          broadcast 192.168.128.255
          gateway 192.168.128.1
          dns-nameservers xxxxxx

      auto eth1
      iface eth1 inet static
          address 192.168.11.1
          netmask 255.255.255.0
          network 192.168.11.0
          broadcast 192.168.11.255

      auto eth2
      iface eth2 inet static
          address 192.168.21.1
          netmask 255.255.255.0
          network 192.168.21.0
          broadcast 192.168.21.255

      auto eth3
      iface eth3 inet static
          address 192.168.31.1
          netmask 255.255.255.0
          network 192.168.31.0
          broadcast 192.168.31.255

    I also enable IP forwarding with echo 1 > /proc/sys/net/ipv4/ip_forward in rc.local. My dnsmasq config is as follows:

      except-interface=eth0
      dhcp-range=interface:eth1,set:wifi,192.168.11.101,192.168.11.200,255.255.255.0
      dhcp-range=interface:eth2,set:kids,192.168.21.101,192.168.21.200,255.255.255.0
      dhcp-range=interface:eth3,set:game,192.168.31.101,192.168.31.200,255.255.255.0

    DHCP works fine on eth1, eth2 and eth3; any machine plugged into a subnet gets a correct IP for that subnet. My problem is that machines on different subnets can't ping each other. For example:

      192.168.11.101 can't ping 192.168.21.101, but can ping 192.168.128.1
      192.168.31.101 can't ping 192.168.21.101, but can ping 192.168.128.1

    I also tried route add -net 192.168.11.0 netmask 255.255.255.0 gw 192.168.11.1 (and likewise for 192.168.21.0 and 192.168.31.0) on this multi-LAN-port machine, but it still won't work. Can anyone help? Thanks.
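    For reference, a hedged checklist sketch. The route add lines shouldn't be needed, since all four networks are directly connected; this assumes the only things standing between the subnets are runtime forwarding and firewall rules:

      # Confirm forwarding is actually on at runtime:
      sysctl net.ipv4.ip_forward
      # If a firewall is active, explicitly allow inter-subnet traffic:
      iptables -A FORWARD -i eth1 -o eth2 -j ACCEPT
      iptables -A FORWARD -i eth2 -o eth1 -j ACCEPT
      iptables -A FORWARD -i eth1 -o eth3 -j ACCEPT
      iptables -A FORWARD -i eth3 -o eth1 -j ACCEPT
      iptables -A FORWARD -i eth2 -o eth3 -j ACCEPT
      iptables -A FORWARD -i eth3 -o eth2 -j ACCEPT
      # Also check the client side: a software firewall on the target
      # machine will often drop pings arriving from a foreign subnet.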

  • Can't access some websites using Ubuntu 13.10

    - by Adame Doe
    Something's wrong with Ubuntu. Since I upgraded to 13.10, I can't access some websites for no apparent reason. I've tried everything imaginable to solve this problem:

    - Made sure the MTUs are the same
    - Disabled IPv6 in both the network manager and the browsers I use
    - Deactivated my network keys
    - DMZed my computer
    - Used other DNS servers, like Google and OpenDNS
    - Checked that no firewall was running on my computer

    ...and it's the same result. I even tried to reinstall Ubuntu a couple of times, but no luck. The most annoying thing about it is I can't access wordpress.org! So there's no way it could be an ISP restriction of some kind. When I use a VPN, I can access pretty much anything. I'm really frustrated because I have to use wordpress.org very often. Any clue?

    ifconfig:

      eth0   Link encap:Ethernet  HWaddr 00:26:18:3d:b0:7c
             inet addr:10.42.0.1  Bcast:10.42.0.255  Mask:255.255.255.0
             inet6 addr: fe80::226:18ff:fe3d:b07c/64 Scope:Link
             UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
             RX packets:8024 errors:0 dropped:0 overruns:0 frame:0
             TX packets:7966 errors:0 dropped:0 overruns:0 carrier:0
             collisions:0 txqueuelen:1000
             RX bytes:684480 (684.4 KB)  TX bytes:616608 (616.6 KB)

      lo     Link encap:Local Loopback
             inet addr:127.0.0.1  Mask:255.0.0.0
             inet6 addr: ::1/128 Scope:Host
             UP LOOPBACK RUNNING  MTU:65536  Metric:1
             RX packets:8222 errors:0 dropped:0 overruns:0 frame:0
             TX packets:8222 errors:0 dropped:0 overruns:0 carrier:0
             collisions:0 txqueuelen:0
             RX bytes:568269 (568.2 KB)  TX bytes:568269 (568.2 KB)

      wlan0  Link encap:Ethernet  HWaddr 00:19:70:40:85:eb
             inet addr:192.168.2.3  Bcast:192.168.2.255  Mask:255.255.255.0
             inet6 addr: fe80::219:70ff:fe40:85eb/64 Scope:Link
             UP BROADCAST RUNNING MULTICAST  MTU:1464  Metric:1
             RX packets:123705 errors:0 dropped:0 overruns:0 frame:0
             TX packets:98141 errors:0 dropped:0 overruns:0 carrier:0
             collisions:0 txqueuelen:1000
             RX bytes:94963545 (94.9 MB)  TX bytes:10387470 (10.3 MB)

    /etc/hosts:

      127.0.0.1 localhost
      127.0.1.1 adame-ws
      ::1 ip6-localhost ip6-loopback
      fe00::0 ip6-localnet
      ff00::0 ip6-mcastprefix
      ff02::1 ip6-allnodes
      ff02::2 ip6-allrouters

    tracepath wordpress.org:

      1: adame-ws.local    0.092ms pmtu 1500
      1: 192.168.2.1       1.300ms asymm 2
      1: 192.168.2.1       1.060ms asymm 2
      2: no reply
      3: no reply
      ... (keeps going like that)

    ping wordpress.org:

      PING wordpress.org (66.155.40.250) 56(84) bytes of data.
      --- wordpress.org ping statistics ---
      10 packets transmitted, 0 received, 100% packet loss, time 9071ms
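    One hedged avenue suggested by the output above: wlan0 has MTU 1464 while eth0 has 1500, and a path-MTU problem with broken ICMP is a classic cause of "some sites hang, VPN works". A quick probe and a temporary adjustment (sizes assume the usual 28 bytes of ICMP/IP header overhead; 8.8.8.8 is just a reachable test host):

      ping -c 3 -M do -s 1436 8.8.8.8    # 1436 + 28 = 1464, the current wlan0 MTU
      ping -c 3 -M do -s 1472 8.8.8.8    # 1472 + 28 = 1500, a standard MTU
      sudo ip link set dev wlan0 mtu 1400   # then retry the failing sites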

  • How did Google Analytics kill my site?

    - by user1813359
    Yesterday I created a Google Analytics profile for one of my sites and included the JS block in the layout template. What happened next was very strange: within about two minutes, the site had become unreachable.

    I had been checking the AWStats page for the site when I thought to set up GA. After that had been done, I clicked on the link for 404 stats, which opens in a new tab. It churned for a long while and then showed a nearly blank page, similar to what Firefox shows when it chokes on a badly formatted XML page, except there was no error message. But I was logged into the server and could see that the page has an HTML 4.01 Transitional DTD. Strange! I tried viewing source, but it just churned endlessly. I then tried "inspect element" and was able to see an error message having to do with some internal Firefox library; unfortunately, I neglected to copy it. :-(

    All further attempts to load anything on the site would time out. Firebug's Net panel showed no request being made. Chrome would time out. So I deleted the GA profile, removed the JS block, and cleared the server cache. No joy. I then removed all Google cookies and disabled JS. Still nothing. No luck in any other browser. And now my client couldn't access the site. Terrific.

    I was able to use wget while logged into another server. The retrieved page was fine and did not contain the GA JS block. However, the two servers are on the same network. (Perhaps a clue.) The server itself was fine: ping and traceroute looked great, and I could SSH in. I tailed the access log and tried a browser request. Nothing. But I forgot to quit, and a minute or so later I saw a request from someone else being logged. Later I could see that requests had been served all day to some people.

    Now, 24 hours later, the site works once again, but it is still unreachable by the client (who is in another city). So, does anyone have some insight into what's going on? Does this have something to do with Google's CDN? I don't know very much about how GA works, but what I'm seeing reminds me of DNS propagation issues. And why the initial XML error? And why the heck was the site just plain unreachable? What did Google do to my site?!

    Sorry for the length, but I wanted to cover everything.

  • How to set up an rsync backup to Ubuntu securely?

    - by ws_e_c421
    I have been following various tutorials and blog posts on setting up an Ubuntu machine as a backup "server" (I'll call it a server, but it's just running Ubuntu desktop) that I push new files to with rsync. Right now I am able to connect to the server from my laptop using rsync and ssh, with an RSA key that I created and no password prompt, when my laptop is connected to the home router that the server is also connected to.

    I would like to be able to send files from my laptop when I am away from home. Some of the tutorials I have looked at had brief suggestions about security, but they didn't focus on it. What do I need to do to let my laptop send files to the server without making it too easy for someone else to hack into the server?

    Here is what I have done so far:
    1. Ran ssh-keygen and ssh-copy-id to create a key pair for my laptop and server.
    2. Created a script on the server to write its public IP address to a file, encrypt the file, and upload it to an FTP server I have access to (I know I could sign up for a free dynamic DNS account for this part, but since I have the FTP account and don't really need to make the IP publicly accessible, I thought this might be better).

    Here are the things I have seen suggested:
    1. Port forwarding: I know I need to assign the server a fixed IP address on the router and then tell the router to forward a port or ports to it. Should I just use port 22, or choose a random port and use that?
    2. Turn on the firewall (ufw). Will this do anything, or will my router already block everything except the port I want?
    3. Run fail2ban.

    Are all of those things worth doing? Should I do anything else? Could I set up the server to allow connections with the RSA key only (and not with a password), or will fail2ban provide enough protection against malicious connection attempts? Is it possible to limit the kinds of connections the server allows (e.g. only ssh)?

    I hope this isn't too many questions. I am pretty new to Ubuntu (but use the shell and bash scripts on OS X). I don't need to have the absolute most secure setup; I'd like something that is reasonably secure without being so complicated that it could easily break in a way that would be hard for me to fix.
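    A hedged hardening pass matching the list above; the port number and user name are placeholders. Edit /etc/ssh/sshd_config, open the matching port, then restart:

      # /etc/ssh/sshd_config
      Port 22022                   # match the router's forwarded port
      PasswordAuthentication no    # RSA key only
      PermitRootLogin no
      AllowUsers laptopuser        # only this account may log in

      sudo ufw allow 22022/tcp
      sudo ufw enable
      sudo service ssh restart

    With PasswordAuthentication off, fail2ban becomes a second line of defense rather than the main one.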

  • AWStats is processing log files but does not display them

    - by Wouter
    I've set up AWStats on my VPS to get more insight into the traffic coming to my site. As instructed, I ran a manual build/update, which ran fine:

      sudo -u www-data ./awstats.pl -config=xxxx.com
      Create/Update database for config "/etc/awstats/awstats.xxxx.com.conf" by AWStats version 6.9 (build 1.925)
      From data in log file "/usr/share/doc/awstats/examples/logresolvemerge.pl /var/www/xxxx.com/logs/*-access.log |"...
      Phase 1 : First bypass old records, searching new record...
      Searching new records from beginning of log file...
      Phase 2 : Now process new records (Flush history on disk after 20000 hosts)...
      Warning: awstats has detected that some hosts names were already resolved in your logfile
      /usr/share/doc/awstats/examples/logresolvemerge.pl /var/www/xxxx.com/logs/*-access.log |.
      If DNS lookup was already made by the logger (web server), you should change your setup
      DNSLookup=1 into DNSLookup=0 to increase awstats speed.
      Jumped lines in file: 0
      Parsed lines in file: 814
      Found 0 dropped records, Found 0 corrupted records, Found 0 old records, Found 814 new qualified records.

    It also produced the file in the DirData directory, /var/lib/awstats/awstats052010.xxxx.com.txt, which contains what I would expect. BUT when I visit xxxx.com/awstats/awstats.pl, it tells me "Last Update: Never updated (See 'Build/Update' on awstats_setup.html page)" and the rest of the page is blank. I'm pretty sure I set it up correctly, but now I cannot figure out why this is happening. Hopefully someone smarter than me can help. Thank you in advance.
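    A hedged pair of checks: the CGI page reads the same data files with the web server's permissions, so confirm the update flag and the DirData path it is actually using:

      sudo -u www-data ./awstats.pl -config=xxxx.com -update
      grep -E '^(DirData|SiteDomain)' /etc/awstats/awstats.xxxx.com.conf
      ls -l /var/lib/awstats/    # the .txt files must be readable by www-data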

  • Squid on an Azure VM

    - by LantisGaius
    I can't get it to work. Here's exactly what I did:

    1. Created a new Azure VM running Windows Server 2012.
    2. RDPed to the new VM.
    3. Downloaded and extracted Squid for Windows (2.7.STABLE8).
    4. Renamed the conf files (squid, mime and cachemgr).
    5. Added the following lines at the end of squid.conf:

      auth_param basic program c:/squid/libexec/ncsa_auth.exe c:/squid/etc/passwd.txt
      auth_param basic children 5
      auth_param basic realm Welcome to http://abcde.fg Squid Proxy!
      auth_param basic credentialsttl 12 hours
      auth_param basic casesensitive off
      acl ncsa_users proxy_auth REQUIRED
      http_access allow ncsa_users

    6. Used http://www.htaccesstools.com/htpasswd-generator-windows/ to create passwd.txt.
    7. Tested passwd.txt via c:/squid/libexec/ncsa_auth.exe c:/squid/etc/passwd.txt (success).
    8. Ran squid -z, then squid -i, then net start squid (no errors so far).
    9. Went to https://manage.windowsazure.com, Virtual Machines – myVM – Endpoints, and added an endpoint: Name: Squid, Protocol: TCP, Public Port: 80, Private Port: 3128.

    That's it. Unfortunately, it doesn't work. I think I screwed something up at the endpoint? I'm not sure... help?

    EDIT: I'm testing it via Firefox – Options – Advanced – Network, and the exact error is "The proxy server is refusing connections." I'm using my DNS name "abcdef.cloudapp.net" as the proxy server and port 80 (since that's my public endpoint).
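    A hedged check worth running on the VM itself (the rule name is a placeholder): confirm Squid is actually listening on 3128, then open that port in Windows Firewall, which the Azure endpoint does not do for you:

      rem Is anything listening on the private port?
      netstat -an | find "3128"
      rem Allow inbound 3128 through Windows Firewall:
      netsh advfirewall firewall add rule name="Squid 3128" dir=in action=allow protocol=TCP localport=3128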

  • "Access to message queuing system is denied" from MSMQ?

    - by user1401694
    My problem is a little confusing. I have two servers (Windows Server 2008 R2) with MSMQ installed, and I want to use Server B to consume a message queue on Server A. When I try to receive, it always throws the error "Access to message queuing system is denied."

    IPs: Server A is 172.31.23.130, Server B is 172.31.23.195. The path FormatName:Direct=TCP:172.31.23.195\private$\queuesource works for sends.

    - I can ping each server from the other
    - The firewall is disabled
    - The "queuesource" queue grants Full Control to "Everyone", "Anonymous Logon", "Network" and "Network Service"
    - Journaling is disabled
    - Authentication is OK
    - The queue is transactional

    My code in C# (.NET) is basically like this (note the verbatim string, since the path contains backslashes):

      MessageQueue _sourceQueue = new MessageQueue();
      _sourceQueue.Path = @"FormatName:Direct=TCP:172.31.23.195\private$\queuesource";
      _sourceQueue.Receive(); // << here it throws the exception

    Actually, I'm only using a private queue to avoid Active Directory problems; for example, if the server's DNS fails, the whole network fails. I don't know what to do anymore.

  • Cannot bind OSX to AD

    - by erotsppa
    I'm trying to get a Mac mini running Snow Leopard Server to join a Windows domain here. The domain server is running Windows Server 2008. When I go to "Accounts" in System Preferences and click "Join", I get this error:

      "Unable to add server. Node name wasn't found. (2000)"

    In my console messages I find this:

      10-04-06 11:42:25 AM System Preferences 1452 -[ODCAddServerSheetController handleOtherActionError: gotError:
      Error Domain=com.apple.OpenDirectory Code=2000 UserInfo=0x2004f2f80 "Custom call 82 to Active Directory failed.",
      Node name wasn't found.

    I specified an FQDN for the domain server, so I am totally confused as to why it would list "Domain=com.apple..." in that error. I've also tried firing up Directory Utility and joining a domain via the Active Directory option there. Again, I fill in the FQDN and the proper administrator account and password. Now I get a different error:

      "Invalid Domain. An invalid Domain and Forest combination was specified. You should enter a fully qualified DNS
      name for the domain and forest (e.g., ads.company.com)."

    If anyone has any pointers or suggestions, they would be appreciated.

  • My SMTP's outgoing mail gets bounced

    - by BloodPhilia
    I've got an ISPConfig 3 production server set up, running Ubuntu Server 9.04. My e-mail gets delivered OK to almost every server I send mail to, except for one (smtp.chello.nl, which bounces my email). In /var/log/mail.err I found the error below:

      Sep 23 08:59:33 <MYHOSTNAME> postfix/smtp[26944]: 3DB2B1456149: to=<<RECIPIENT>@chello.nl>,
      relay=smtp.chello.nl[213.46.255.2]:25, delay=2, delays=0.02/0.01/1.9/0.04, dsn=5.1.0,
      status=bounced (host smtp.chello.nl[213.46.255.2] said: 550 5.1.0 Dynamic/Generic hostnames
      are blocked. Please contact your Email Provider. Your IP was <MY IP>. Your hostname was ??.
      (in reply to MAIL FROM command))

    What could be the cause of this? I did an SMTP check on mxtoolbox.com and got the following:

      OK - Not an open relay
      OK - 0 seconds - Good on Connection time
      OK - 1.482 seconds - Good on Transaction time
      OK - 83.161.xx.xx resolves to a83-161-xx-xx.xxx.xxx.nl
      WARNING - Reverse DNS does not match SMTP Banner

    Update: My IP is static.
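    The bounce and the warning point the same way: the sending IP reverse-resolves to a generic consumer-style hostname (a83-161-xx-xx...), which chello.nl blocks outright. A hedged sketch of the usual fix, where mail.example.com stands in for a hostname you actually control:

      # Make Postfix HELO with a real hostname instead of the generic one:
      sudo postconf -e 'myhostname = mail.example.com'
      sudo postfix reload
      # Then ask the ISP to set a matching PTR record, and verify:
      dig -x 83.161.xx.xx +short    # should return mail.example.com.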
