Search Results

Search found 18740 results on 750 pages for 'network balancer'.

Page 560 of 750

  • What would be the best way to correlate logs and events on several hosts?

    - by user220746
    I'm trying to build a log correlation system spanning multiple hosts. SEC (the Simple Event Correlator) seems interesting, but I don't know if it will cover my needs. How could I correlate system events, logs, network events, etc. on multiple hosts at the same time, in real time? Examples: if 5 failed logins happened on host A in the last minute and firewall B has denied many access attempts to different ports on A, then we assume there is a potential attack in progress on A. If the Apache service on host A didn't receive any requests for the last N minutes and the Apache service on host B did, then the load balancing could be faulty.
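
    The first example maps fairly directly onto a SEC rule. A sketch in sec.conf syntax (the log pattern and alert script are made up for illustration):

        # Fire once when 5 failed SSH logins from one source land within 60 seconds
        type=SingleWithThreshold
        ptype=RegExp
        pattern=sshd\[\d+\]: Failed password for \S+ from ([\d.]+)
        desc=Repeated login failures on host A from $1
        action=shellcmd /usr/local/bin/raise-alert.sh "possible attack on A from $1"
        window=60
        thresh=5

    Correlating this with the firewall B denials would take a second rule feeding a Pair or EventGroup rule, which is where centralising the logs first (e.g. via syslog forwarding) becomes necessary.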

    Read the article

  • The Most Ridiculous Computer Cameos of All Time

    - by Jason Fitzpatrick
    For the last half century computers have played all sorts of major and minor roles in movies; check out this collection to see some of the more quirky and out-of-place appearances. Wired magazine rounds up some of the more oddball appearances of computers in film. Like, for example, the scene shown above from Soylent Green: Spoiler alert: Soylent Green is people! But that's not the only thing we're gonna spoil. Soylent Green is set in 2022, and at one point, you'll notice that a government facility is still using a remote calculator that plugs into the CDC 6600, a machine that was state-of-the-art in 1971. Come to think of it, we should scratch this from the list. This is pretty close to completely accurate. Hit up the link below to check out the full gallery, including a really interesting bit about how the U.S. Government's largest computer project - once decommissioned and sold as surplus - ended up on the sets of dozens of movies and television shows. The Most Wonderfully Ridiculous Movie Computers of All Time [Wired]

    Read the article

  • Monitoring host and app parameters in real-time

    - by devopsdude42
    I have a bunch of VMs that I need to monitor in real time. For all nodes I need to watch host parameters like load, network usage and free memory; and for some I need app-specific metrics too, like redis (some vars from the output of the INFO command) and nginx (requests/sec, average request time). Ideally I'd also like to track some parameters from the custom apps that run on these nodes. These parameters should be tracked as a bunch of line charts on a dashboard. I checked out graphite and it looks suitable (although the UX and aesthetics look like they need some love). But setting up and maintaining graphite looks to be a pain, especially since we don't have a full-time person just for this. Are there any alternatives? Or at least something that is simpler to set up and will scale? Reasonable paid services are also OK.
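
    For the custom-app metrics specifically, Graphite's ingestion side is trivial whatever the dashboard ends up being: its plaintext protocol takes one "path value timestamp" line per datapoint on TCP port 2003. A sketch (host and metric names invented):

        echo "vms.web01.load.short $(cut -d' ' -f1 /proc/loadavg) $(date +%s)" \
            | nc graphite.example.com 2003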

    Read the article

  • Debian hangs on startup at "Starting the Winbind daemon: winbind"

    - by Bajingan Keparat
    I took a copy of a VM running Debian, just so that I can play around with it. I spun up the copy, but didn't give it any network connection, to avoid conflict with the original one. However, when I turn the VM on, it seems to freeze after these startup messages:

        Starting Samba daemons: nmbd smbd
        Starting PostgreSQL 8.4 database server: main
        Starting the Winbind daemon: winbind

    How do I fix this? I never get to the login prompt. This VM does have a mount point that connects to a Windows shared folder.
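
    If the freeze is the Windows share mount waiting for a network that no longer exists (a guess, given the last sentence above), one workaround is to boot into recovery mode and stop the share from mounting at boot. A hypothetical /etc/fstab line; server, share and credential paths are placeholders:

        # 'noauto' skips the mount at boot; '_netdev' at least defers it until
        # networking is up
        //winbox/share  /mnt/share  cifs  noauto,_netdev,credentials=/etc/cifs-creds  0  0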

    Read the article

  • Recommended Tape Library Backup software

    - by D4
    Hi, I recently "inherited" a tape library (PowerVault 136T / Scalar 100), and I was asking for some advice on the backup software to manage the library. My goal is to be able to manage backups of all my servers (Linux and Windows) and also back up VIPs' laptops over the network. I am hoping for a GUI application, since I will not be the one managing the process after a couple of months... Any idea is more than welcome... thanks in advance....

    Read the article

  • Connect to a machine inside the intranet from outside, using the same address as inside

    - by pietrosld
    Hi all! I have a server inside my intranet on which I have Apache running with some web applications. When I'm at the office, the URL I use to connect is zeus.mydomain.it; it works because I have a record in my /etc/hosts: 192.168.0.11 zeus.mydomain.it. But obviously it does not work when I'm outside on a different network. I have an internet connection with a static IP, so I can connect to my intranet from outside. The question is: how can I connect to the intranet server using zeus.mydomain.it both from inside and from outside my intranet? Thanks!! Pietro.
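
    This is the classic split-horizon DNS setup: inside clients should resolve zeus.mydomain.it to the internal address, while the public DNS record carries the static external IP. A sketch using dnsmasq as the LAN's resolver (dnsmasq is a suggestion, not something from the question):

        # /etc/dnsmasq.conf on the internal DNS server
        address=/zeus.mydomain.it/192.168.0.11

    With that in place the per-machine /etc/hosts entries become unnecessary, and outside clients simply use the public record pointing at the static IP (plus a port forward on the gateway).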

    Read the article

  • NFS share access - Permission denied

    - by rgngl
    I'm trying to share a directory on my NAS device (WD MyBook WE) with NFS to another machine on my local network. The directory on the NAS device looks like this:

        drwxr-x--- 15 git git 4096 Nov 17 01:05 git/

    And the IDs of the user git on the NAS device look like this:

        [root@myhost DataVolume]# id git
        uid=505(git) gid=505(git)

    I played with many different parameters in the /etc/exports file, and this is what I have there currently:

        /DataVolume/git 192.168.0.20(async,rw,no_root_squash,no_subtree_check)

    On the client side I have the user git and group git with the same IDs, to match the ones on the server:

        user@myclient:~$ id git
        uid=505(git) gid=505(git) groups=505(git)

    I mount the directory with:

        sudo mount myhost:/DataVolume/git -t nfs git/

    and the mounted directory looks like:

        drwxr-x--- 15 git git 4096 Nov 17 01:05 git

    After these steps I can't seem to cd to that directory with any user, including git and root: I am getting a Permission denied error. Thanks in advance for any help.
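
    A few checks worth running before touching the exports again (hostnames and paths taken from the question):

        showmount -e myhost          # is the export actually visible to this client?
        sudo exportfs -ra            # on the server: re-read /etc/exports
        namei -l /DataVolume/git     # on the server: every parent directory
                                     # (e.g. /DataVolume) needs the execute bit
                                     # for the accessing uid to traverse into git/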

    Read the article

  • Is there a commercial tool like Xen + Remus?

    - by SoMoS
    Hello, I have a project where we need a completely redundant system like the one offered by Remus on top of a Xen virtualization system. I wondered if there is a system like this built by VMware or some other company, because the project is a very critical one: we do not want to have to wait for a bug to be fixed by "the community", and money is not a problem. What Remus does is build a completely redundant system where you can shut down one machine and the other continues the work where the first left off, keeping network connections open, etc. Any hint? Thanks in advance.

    Read the article

  • Bind external Cisco CIGESM ports to a specific blade server

    - by Vinícius Ferrão
    We have an IBM BladeCenter with 14 blade servers and one external Cisco CIGESM for Ethernet connectivity. Since this hardware is a little old, we will use it for other services, and we want to run a pfSense instance on one of the blades. It's just a firewall appliance, but it needs two network interfaces: one for WAN and one for LAN access. Our architecture works on top of static routes; we don't use NAT, so we have the WAN IP on one interface routing to the other one. The main problem is how to plug the WAN cable into one of the four external ports and make it exclusive to the blade server containing the firewall. We also need an exit port that goes through a 3COM 4200G switch, which does the internal routing and VLAN separation. Thanks in advance
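
    The usual way to dedicate an external port to one blade on these switches is a VLAN that contains exactly two ports: the chosen uplink and the blade-facing port. A sketch in IOS syntax (the VLAN ID and port numbers are assumptions; on a CIGESM the blade-facing ports are the low-numbered internal ones and the four external uplinks sit at the top of the range):

        ! Dedicated WAN VLAN joining one external uplink to the pfSense blade's port
        vlan 100
         name WAN-PFSENSE
        interface GigabitEthernet0/17
         description external WAN uplink
         switchport mode access
         switchport access vlan 100
        interface GigabitEthernet0/1
         description pfSense blade NIC
         switchport mode access
         switchport access vlan 100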

    Read the article

  • Can remote LogMeIn Hamachi users access our local LAN?

    - by Kev
    Unknown to me, one of the kids has installed LogMeIn Hamachi on his PC so that he can access and play on his pal's Minecraft server, and vice versa. One of the things I did was disable the Client for Microsoft Networks and File and Printer Sharing for Microsoft Networks on the Hamachi NIC in Windows 7's Network Connections. However, my lack of fu when it comes to these types of services is leaving me feeling a little uncomfortable about him using this. Is there anything I should be worried about here? For example, can his friends access our local LAN (which has a number of NAS boxes with unsecured shares) and get up to no good?

    Read the article

  • No write access on Windows 2008 workgroup share

    - by Serge - appTranslator
    Hi All, I'm trying to access network shares on my new Windows 2008 server (workgroup) using my server's local admin account. I can read files but not modify them. Permissions (sharing + file system) say: Users = Read only. Admins = Full Control. Even though I connect using my admin account (net use x: \\server\share /user:server\me), I can't write. OTOH, I have another share on the same computer where users have Read AND Write. I can write into that one. What's the problem? Does it have to do with UAC? TIA for your help.
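
    The UAC guess is a plausible one: by default, UAC's "remote restrictions" strip the administrator token from network logons made with local accounts, so over the share you act as a plain user (Read only), while the share where Users have Write works fine. A hedged way to test it (a registry change with real security implications, so preferably on a test box):

        reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System ^
            /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f

    If writes start working after re-establishing the connection, UAC remote restrictions were the cause.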

    Read the article

  • Sound card not detected in 13.04

    - by Ganessh Kumar R P
    I have a problem with my sound card. I don't have a volume up or down option anywhere, and in Settings -> Sound no card is detected. But when I run the command sudo aplay -l, I get the following output:

        **** List of PLAYBACK Hardware Devices ****
        Failed to create secure directory (/home/ganessh/.config/pulse): Permission denied
        card 0: MID [HDA Intel MID], device 0: STAC92xx Analog [STAC92xx Analog]
          Subdevices: 0/1
          Subdevice #0: subdevice #0
        card 1: NVidia [HDA NVidia], device 3: HDMI 0 [HDMI 0]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 1: NVidia [HDA NVidia], device 7: HDMI 0 [HDMI 0]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 1: NVidia [HDA NVidia], device 8: HDMI 0 [HDMI 0]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 1: NVidia [HDA NVidia], device 9: HDMI 0 [HDMI 0]
          Subdevices: 1/1
          Subdevice #0: subdevice #0

    And the command lspci -v | grep -A7 -i "audio" outputs:

        00:1b.0 Audio device: Intel Corporation 5 Series/3400 Series Chipset High Definition Audio (rev 06)
            Subsystem: Dell Device 02a2
            Flags: bus master, fast devsel, latency 0, IRQ 48
            Memory at f0f20000 (64-bit, non-prefetchable) [size=16K]
            Capabilities: <access denied>
            Kernel driver in use: snd_hda_intel
        00:1c.0 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 1 (rev 06) (prog-if 00 [Normal decode])
        --
        02:00.1 Audio device: NVIDIA Corporation GF106 High Definition Audio Controller (rev a1)
            Subsystem: Dell Device 02a2
            Flags: bus master, fast devsel, latency 0, IRQ 17
            Memory at d3efc000 (32-bit, non-prefetchable) [size=16K]
            Capabilities: <access denied>
            Kernel driver in use: snd_hda_intel
        07:00.0 Network controller: Intel Corporation Ultimate N WiFi Link 5300

    So I assume that the drivers are properly installed, but I still don't get any option in the settings or volume control. The same card used to work well back in 2010 (Ubuntu 10.04 and 10.10). Any help is appreciated. Thanks
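
    The "Failed to create secure directory ... Permission denied" line above is suspicious: it usually means ~/.config (or its pulse subdirectory) is owned by root, often after running a GUI application with sudo, which stops PulseAudio from starting for the user. A hedged first step:

        # Check who owns the config directories, then hand them back to the user
        ls -ld ~/.config ~/.config/pulse
        sudo chown -R $USER:$USER ~/.config
        pulseaudio -k   # kill the daemon; it respawns with the fixed permissions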

    Read the article

  • Thunderbird segmentation fault problem

    - by Dariusz Górecki
    Hey all, my Thunderbird crashes a few seconds after it starts, in both safe mode and normal mode. Here is an strace log: http://pastebin.com/tccfYwcD I've searched Google, the Ubuntu forums, and the Mozilla bug tracker about this problem, but none of the answers I found helped me :/ I've tried:

      - installing the nscd daemon, plus other solutions posted earlier
      - a fresh clean install of Ubuntu 10.10 (32-bit) and Thunderbird 3.1.7 from the repos on a VM - the problem still exists
      - removing all Thunderbird-related dot files and dot dirs, and setting up the profile from scratch
      - removing Thunderbird and related packages with apt purge, and installing TB from the official .deb package

    None of these steps helped me; TB still crashes with a segmentation fault :/ I use a Gmail IMAP account. I've searched and found a few other tips on Google, but with no success. I've even tried removing, via the web GUI, the mails that arrived when I first noticed the problem - still no luck. I'm not using any network filesystem, I'm not sharing this account with anyone, and I don't use filesystem or home encryption either - just a clean install :/ If you guys need more info for this, let me know. TB: 3.1.7 from repos and official package. Ubuntu 10.10 32-bit with Medibuntu repos, fully updated.

    Read the article

  • rdesktop + seamlessrdp + virtualbox slow on Windows

    - by Claudiu
    I'm trying to get VirtualBox seamless mode to work on all of my monitors at once. I was directed to this link, so I followed the instructions, using the windows port of rdesktop 1.6 found here, and using Xming as an X server. I eventually finally got it to work! However, it's really slow. While VirtualBox's regular seamless mode is as performant as if I was running apps on my host machine, with rdesktop it takes 1-2 seconds to register any user input. Dragging windows around is laggy and buggy (pieces of the desktop behind the window show through). I'm simply connecting to localhost, so network bandwidth/latency shouldn't be an issue... anyone have any idea why it's so slow and what I can do to make it more performant?
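
    For reference, the flags sometimes suggested to speed rdesktop 1.6 up (paths and host are examples; -A and -s are the SeamlessRDP options from the instructions linked above):

        rdesktop -A -s "c:\seamlessrdp\seamlessrdpshell.exe" -x l -P localhost:3389

    -x l selects the "LAN" experience preset and -P enables persistent bitmap caching. Neither is guaranteed to help with lag that severe; with localhost in play, the bottleneck is more likely the Xming/rdesktop drawing path than network bandwidth.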

    Read the article

  • Faster, Simpler access to Azure Tables with Enzo Azure API

    - by Herve Roggero
    After developing the latest version of Enzo Cloud Backup I took the time to create an API that would simplify access to Azure Tables (the Enzo Azure API). At first, my goal was to make the code simpler compared to the Microsoft Azure SDK. But as it turns out it is also a little faster; and when using the specialized methods (the fetch strategies) it is much faster out of the box than the Microsoft SDK, unless you start creating complex parallel and resilient routines yourself. Last but not least, I decided to add a few extension methods that I think you will find attractive, such as the ability to transform a list of entities into a DataTable. So let's review each area in more detail.

    Simpler Code

    My first objective was to make the API much easier to use than the Azure SDK. I wanted to reduce the amount of code necessary to fetch entities, remove the code needed to add automatic retries and handle transient conditions, and give additional control, such as a way to cancel operations, obtain basic statistics on the calls, and control the maximum number of REST calls the API generates in an attempt to avoid throttling conditions in the first place (something you cannot do with the Azure SDK at this time).

    Strongly Typed

    Before diving into the code, note that the following examples rely on a strongly typed class called MyData. The way MyData is defined for the Azure SDK is similar to the Enzo Azure API, with the exception that they inherit from different classes. With the Azure SDK, classes that represent entities must inherit from TableServiceEntity, while classes with the Enzo Azure API must inherit from BaseAzureTable or implement a specific interface.

        // With the SDK
        public class MyData1 : TableServiceEntity
        {
            public string Message { get; set; }
            public string Level { get; set; }
            public string Severity { get; set; }
        }

        // With the Enzo Azure API
        public class MyData2 : BaseAzureTable
        {
            public string Message { get; set; }
            public string Level { get; set; }
            public string Severity { get; set; }
        }

    Now that the classes representing an Azure Table entity are defined, let's review what fetching all the entities from an Azure Table looks like with the Azure SDK (note the use of a few variables: the _tableName variable stores the name of the Azure Table, and the ConnectionString property returns the connection string for the Storage Account containing the table):

        // With the Azure SDK
        public List<MyData1> FetchAllEntities()
        {
            CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
            CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
            TableServiceContext serviceContext = tableClient.GetDataServiceContext();
            CloudTableQuery<MyData1> partitionQuery =
                (from e in serviceContext.CreateQuery<MyData1>(_tableName)
                 select new MyData1()
                 {
                     PartitionKey = e.PartitionKey,
                     RowKey = e.RowKey,
                     Timestamp = e.Timestamp,
                     Message = e.Message,
                     Level = e.Level,
                     Severity = e.Severity
                 }).AsTableServiceQuery<MyData1>();
            return partitionQuery.ToList();
        }

    This code gives you automatic retries because AsTableServiceQuery does that for you. Also, note that this method is strongly typed because it is using LINQ. Although this doesn't look like too much code at first glance, you are actually mapping the strongly typed object manually, so for larger entities, with dozens of properties, your code will grow. And from a maintenance standpoint, when a new property is added, you may need to change the mapping code. You will also note that the mapping being performed is optional; it is desired when you want to retrieve specific properties of the entities (not all) to reduce the network traffic. If you do not specify the properties you want, all the properties will be returned; in this example we are returning the Message, Level and Severity properties (in addition to the required PartitionKey, RowKey and Timestamp).

    The Enzo Azure API does the mapping automatically and also handles automatic retries when fetching entities. The equivalent code to fetch all the entities (with the same three properties) from the same Azure Table looks like this:

        // With the Enzo Azure API
        public List<MyData2> FetchAllEntities()
        {
            AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
            List<MyData2> res = at.Fetch<MyData2>("", "Message,Level,Severity");
            return res;
        }

    As you can see, the Enzo Azure API returns the entities already strongly typed, so there is no need to map the output. Also, the Enzo Azure API makes it easy to specify the list of properties to return, and to specify a filter as well (no filter was provided in this example; the filter is passed as the first parameter).

    Fetch Strategies

    Both approaches discussed above fetch the data sequentially. In addition to the linear/sequential fetch methods, the Enzo Azure API provides specific fetch strategies. Fetch strategies are designed to prepare a set of REST calls, executed in parallel, in a way that performs faster than if you were to fetch the data sequentially. For example, if the PartitionKey is a GUID string, you could prepare multiple calls, providing appropriate filters (['a', 'b'[, ['b', 'c'[, ['c', 'd'[, ...), and send those calls in parallel. As you can imagine, the code necessary to create these requests would be fairly large. With the Enzo Azure API, two strategies are provided out of the box: the GUID and List strategies. If you are interested in how these strategies work, see the Enzo Azure API Online Help. Here is example code that performs parallel requests using the GUID strategy (which executes 2 to 3 times faster than the sequential methods discussed previously):

        public List<MyData2> FetchAllEntitiesGUID()
        {
            AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
            List<MyData2> res = at.FetchWithGuid<MyData2>("", "Message,Level,Severity");
            return res;
        }

    Faster Results with Sequential Fetch Methods

    Developing a faster API wasn't a primary objective, but it appears that the performance tests performed with the Enzo Azure API deliver the data a little faster out of the box (5%-10% on average, and sometimes up to 50% faster) with the sequential fetch methods. Although the amount of data is the same regardless of the approach (and the REST calls are almost exactly identical), the object mapping approach is different, so it is likely that the slight performance increase is due to a lighter API. Using LINQ offers many advantages and tremendous flexibility; nevertheless when fetching data it seems that the Enzo Azure API delivers faster. For example, the same code previously discussed delivered the following results when fetching 3,000 entities (about 1KB each): the Azure SDK returned the 3,000 entities in about 5.9 seconds on average, while the Enzo Azure API took 4.2 seconds on average (a 39% improvement).

    Faster Results with Fetch Strategies

    When using the fetch strategies we are no longer comparing apples to apples; the Azure SDK is not designed to implement fetch strategies out of the box, so you would need to code the strategies yourself. Nevertheless I wanted to provide out-of-the-box capabilities, and as a result you see a test that returned about 10,000 entities (1KB each), averaged over 5 runs. The Azure SDK implemented a sequential fetch while the Enzo Azure API implemented the List fetch strategy. The fetch strategy was 2.3 times faster. Note that this test quickly hit a limit on my network bandwidth (3.56Mbps), so the results of the fetch strategy are significantly below what they could be with higher bandwidth.

    Additional Methods

    The API wouldn't be complete without support for a few important methods other than the fetch methods discussed previously. The Enzo Azure API offers these additional capabilities:

      - Support for batch updates, deletes and inserts
      - Conversion of entities to DataRow, and List<> to a DataTable
      - Extension methods for Delete, Merge, Update, Insert
      - Support for asynchronous calls and cancellation
      - Support for fetch statistics (total bytes, total REST calls, retries...)

    For more information, visit http://www.bluesyntax.net or go directly to the Enzo Azure API page (http://www.bluesyntax.net/EnzoAzureAPI.aspx).

    About Herve Roggero

    Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specialized in cloud computing products and services. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net.

    Read the article

  • Cross-Forest Trust

    - by cdalley
    I am looking at testing a cross-forest trust: we have two domain controllers (with different forests and domain names) set up so we can move everyone onto the new domain. We do NOT run Exchange on site and we do not currently have any links from O365 to AD. Onto the problem: I have set up two DCs in virtual machines, both on the same 192.168.0.* network.

    The Windows 2003 server:
        Name: OLDSRVR ("clone" of our current domain controller)
        IP: 192.168.0.1
        Domain: internal.test.com

    The Windows 2012 server:
        Name: ADCTEST01 (brand new domain set up from scratch, separate to internal.test.com)
        IP: 192.168.0.2
        Domain: internal.test2.com

    OLDSRVR can only see ADCTEST01 if it has a dynamic IP set; if I set a static IP it cannot see it. If I try using the dynamic IP and try to join, it gets to the end then complains: "The trust relationship between this workstation and the primary domain failed". Any ideas?
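
    When a static IP breaks visibility between two DCs, the usual suspect is DNS: each forest's DNS server knows nothing about the other's zone unless you tell it. A sketch using conditional forwarders (server names and IPs taken from the question; run on each DC, pointing at the other side):

        REM On OLDSRVR: forward queries for the new forest's zone to ADCTEST01
        dnscmd OLDSRVR /ZoneAdd internal.test2.com /Forwarder 192.168.0.2

        REM On ADCTEST01: the mirror image
        dnscmd ADCTEST01 /ZoneAdd internal.test.com /Forwarder 192.168.0.1

        REM Once names resolve both ways, the trust itself can be created:
        netdom trust internal.test.com /d:internal.test2.com /add /twoway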

    Read the article

  • OpenVPN route missing

    - by dajuric
    I can connect to an OpenVPN server from Windows without any problems, but when I try to connect from Ubuntu 12.04 (start OpenVPN) I receive the following:

        OpenVPN needs a gateway parameter for a --route option and no default
        was specified by either --route-gateway or --ifconfig options

    Server IP: 161.53.X.X
    Internal network: 10.0.0.0/8

    What do I need to do? Client configuration:

        client
        dev tap
        proto udp
        remote 161.53.X.X 1194
        resolv-retry infinite
        nobind
        ca ca.crt
        cert client.crt
        key client.key
        ns-cert-type server
        comp-lzo
        verb 3

    Server configuration:

        local 161.53.X.X
        port 1194
        proto udp
        dev tap
        dev-node OpenVPN
        ca ca.crt
        cert server.crt
        key server.key  # This file should be kept secret
        dh dh1024.pem
        # DHCP leases addresses to clients
        server-bridge
        # Push routes to the client to allow it to reach other private subnets
        # behind the server. Remember that these private subnets will also need
        # to know to route the OpenVPN client address pool (10.8.0.0/255.255.255.0)
        # back to the OpenVPN server.
        push "route 10.0.0.1 255.255.0.0"
        client-to-client
        duplicate-cn
        keepalive 10 120
        comp-lzo
        verb 6
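
    A sketch of two possible fixes (addresses are guesses based on the question): the bare server-bridge directive hands out no gateway, so the pushed route has nothing to bind to on the client side.

        # Option A, server side: give server-bridge its parameters so clients
        # learn a gateway (the 10.0.0.200-10.0.0.250 pool is an assumption;
        # pick free addresses), and push a route consistent with 10.0.0.0/8:
        server-bridge 10.0.0.1 255.0.0.0 10.0.0.200 10.0.0.250
        push "route 10.0.0.0 255.0.0.0"

        # Option B, client side: take the gateway from the DHCP server that the
        # bridge already exposes:
        route-gateway dhcp

    Note that if the tap interface itself ends up with a 10.x address and a /8 netmask, the pushed route becomes redundant and can simply be dropped.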

    Read the article

  • Is SSL to the proxy good enough?

    - by Josh Smeaton
    We are currently trying to decide on how best to do SSL traffic in our environment. We have an externally facing Apache proxy server that is responsible for directing all traffic into our environment. It is also doing the SSL work for the majority of our servers. There are one or two IIS servers in particular that are doing their own SSL, but they are also behind the proxy. I'm wondering, is SSL to the proxy good enough? It would mean that traffic within our network is identifiable, but is that such a big deal?
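
    For context, terminating SSL at the proxy usually looks like the sketch below (mod_ssl plus mod_proxy; names and addresses are placeholders). Everything after the ProxyPass line travels as plain HTTP on the internal network, which is exactly the trade-off being weighed here:

        <VirtualHost *:443>
            ServerName app.example.com
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/app.crt
            SSLCertificateKeyFile /etc/ssl/private/app.key
            ProxyPass        / http://10.0.0.15/
            ProxyPassReverse / http://10.0.0.15/
            # Let the backend know the original scheme (needs mod_headers)
            RequestHeader set X-Forwarded-Proto "https"
        </VirtualHost>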

    Read the article

  • iTunes randomly plays songs while importing, and can't be stopped

    - by Steve Bennett
    I'm importing a gazillion songs over the network into iTunes. Every now and then, it starts playing the song it's currently importing. And because iTunes is basically frozen up during the import process, I can't actually stop it. Then it will suddenly jump to another song a bit later on. Pretty irritating. Is it a known issue? Anything I can do about it? Versions (oops): iTunes 10.5 (141), OS X 10.6.8

    Read the article

  • rsync to cifs mount but preserve permissions

    - by weberwithoneb
    I'm backing up a linux server to a windows share. I'm currently mounting the windows share with cifs and using rsync for incremental backups. File permissions and ownership are not being preserved, as should be expected after reading this samba document: The core CIFS protocol does not provide unix ownership information or mode for files and directories. Because of this, files and directories will generally appear to be owned by whatever values the uid= or gid= options are set, and will have permissions set to the default file_mode and dir_mode for the mount. How can I achieve my goal of preserving unix file permissions while writing to a windows share? Is there another network file system that would allow me to do this? Thanks.
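
    Given that CIFS simply cannot store unix ownership, one hedged workaround is to keep the metadata inside a file that lives on the share, e.g. a tar archive instead of an rsync'ed tree (paths are placeholders; GNU tar's --listed-incremental provides cheap incrementals):

        tar --create --gzip --preserve-permissions \
            --listed-incremental=/var/backups/data.snar \
            --file=/mnt/winshare/backup-$(date +%F).tar.gz /srv/data

    The trade-off is losing the browsable mirror that rsync gives you. The other common route is to skip the mount altogether and run rsync over SSH to a machine with a real unix filesystem, if one is available.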

    Read the article

  • Storage setup for large files

    - by Mecca
    I need to store over 200TB of data (all types, the biggest being video files) and be able to access it over a local network. The files will be accessed for editing or searches. I don't need versioning, but a setup that would keep me safe from hard drive failures would be nice. Right now the content is spread across different hard drives, some external, some regular. I don't exclude the possibility of buying new/extra drives if necessary. If the files are ever exposed to the web, it won't be to the public, just a couple of people. I have no idea what to buy to make this happen. I see some NAS solutions on the internet like this http://www.bestbuy.com/site/a/2266043.p?id=1218317764591&skuId=2266043 but the storage is not enough, plus it doesn't seem to be scalable. What do you recommend? Thanks

    Read the article

  • How can I create a bootable DOS USB stick?

    - by Grzenio
    I need to use this utility to change one of the parameters of my new WD hard drive: http://support.wdc.com/product/download.asp?groupid=609&sid=113&lang=en It has truly unreadable instructions: "Extract wdidle3.exe onto a bootable medium (floppy, CD-RW, network drive, etc.). Boot the system with the hard drive to be updated to the medium where the update file was extracted to. Run the file by typing wdidle3.exe at the command prompt and press enter." I understand that this bootable medium should be some version of DOS. How can I make my USB stick a bootable medium compatible with this utility (I don't have a diskette drive)? I have Windows 7 and Debian Linux installed.
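
    One hedged recipe from the Debian side is UNetbootin's FreeDOS image (the device name is a placeholder - double-check with lsblk before writing anything):

        sudo apt-get install unetbootin
        # In the GUI: Distribution -> FreeDOS, Type -> USB Drive -> /dev/sdX1
        # Afterwards, drop the WD utility onto the stick:
        sudo mount /dev/sdX1 /mnt
        sudo cp wdidle3.exe /mnt/
        sudo umount /mnt

    Boot from the stick, take the FreeDOS entry that gives you a plain prompt, and run wdidle3.exe from there.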

    Read the article

  • How do I restore the default applets to Gnome's notification area?

    - by gbacon
    I have a fresh install of Karmic Koala. In a botched attempt at trying to change my default window manager, I somehow removed at least three applets from the notification area: network manager (nm-applet), volume control (gnome-volume-control-applet), and the battery meter (???). Now if I logout and back in, these applets don't run, but I can start them from the command line. Because it's a fresh install, I completely removed my luser account and home directory. After recreating my account, I was frustrated to find that the applets are still missing and no obvious way to add them back. How can I restore the default configuration?
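
    If hunting down each applet individually doesn't appeal (they normally come back via right-click on the panel -> "Add to Panel..." -> "Notification Area"), a blunter hedged option is resetting the GNOME 2 panel to its defaults, which wipes any panel customizations:

        gconftool-2 --recursive-unset /apps/panel
        pkill gnome-panel   # the panel respawns and re-reads the default layout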

    Read the article

  • MacBook repeatedly disconnects from Wi-Fi

    - by redwall_hp
    I have an early 2008-model MacBook (2.4 GHz). The Wi-Fi router I have at home is a Linksys WRT54GX2 that I have had for a few years. My MacBook has recently started disconnecting from the router every few minutes, which is rather annoying. I can reconnect again without having to restart the router or anything, as it seems that the MacBook is just dropping the connection. I have tried changing the channel on the router, and upgrading the laptop from Leopard to Snow Leopard made no difference either. I'm only about six feet from the Linksys device, so distance isn't an issue. This only happens with the Linksys router, while I can use the local library's open network without any issues. The problem also seemingly becomes more pronounced after midnight. What could the problem be? Edit: Here are the logs that Spiff requested: http://pastie.org/951761

    Read the article

  • Unable to connect to remote MS SQL Server 2008 Express SP3 instance by name

    - by Max
    I am trying to connect to a remote MS SQL Server 2008 SP3 x86 instance using its name. At first glance all seems to work well (e.g. it is possible to connect to the server locally, and to successfully telnet its port remotely), but there is a thing I can't understand... This line should connect me to the default instance of the remote SQL Server:

        osql -S ServerIP -d MyDatabase /U sa -P MyPassword

    and it does the trick; however, the next one:

        osql -S ServerIP\MyInstance -d MyDatabase /U sa -P MyPassword

    ends up with the following error:

        [SQL Native Client]SQL Network Interfaces: Error Locating Server/Instance Specified [xFFFFFFFF].
        [SQL Native Client]Login timeout expired
        [SQL Native Client]An error has occurred while establishing a connection to the server.
        When connecting to SQL Server 2005, this failure may be caused by the fact that
        under the default settings SQL Server does not allow remote connections.

    The only instance running on the server is MyInstance, which is (I guess) the default one. Could you please put some time into explaining the issue?
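
    A likely explanation: connecting by name makes the client ask the SQL Server Browser service (UDP 1434) where the named instance lives, and that lookup can fail even when the instance's own TCP port is reachable. Two hedged things to try (port 1433 is an assumption - check SQL Server Configuration Manager for the real one):

        REM Make sure the Browser service is running on the server:
        sc query SQLBrowser
        sc config SQLBrowser start= auto
        net start SQLBrowser

        REM Or bypass the Browser entirely by naming the TCP port:
        osql -S ServerIP,1433 -d MyDatabase /U sa -P MyPassword

    If a firewall sits between the machines, UDP 1434 must be open in addition to the instance's TCP port.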

    Read the article
