Search Results

Search found 19446 results on 778 pages for 'network printer'.

Page 354 of 778

  • Data Management Business Continuity Planning

    Business Continuity Governance

    In order to ensure data continuity, an organization needs to know how to handle a data or network emergency, because all systems have the potential to fail.

    Data Continuity Checklist:
    - Disaster Recovery Plan/Policy
    - Backups
    - Redundancy
    - Trained Staff

    Business Continuity Policies

    To protect data in case of any emergency, a company needs to put in place a disaster recovery plan and policies that IT staff can execute to ensure the continuity of existing data and/or limit the amount of data that is lost. A disaster recovery plan is a comprehensive statement of consistent actions to be taken before, during, and after a disaster, according to Geoffrey H. Wold. He also states that the primary objective of disaster recovery planning is to protect the organization in the event that all or part of its operations and/or computer services are rendered unusable. Furthermore, companies can mandate through policy that IT maintain redundant hardware in case of hardware failure, and redundant network connectivity in case the primary internet service provider goes down. Additionally, they can require that all staff be trained on the disaster recovery policy so that everyone involved is able to execute the recovery plan.

    Business Continuity Procedures

    Business continuity procedures vary from organization to organization; however, there are standard procedures that most organizations should follow.

    Standard Business Continuity Procedures:
    - Back up data and test backups to ensure that they work
    - Hire knowledgeable and trainable staff
    - Offer training on new and existing systems
    - Regularly monitor, test, maintain, and upgrade existing system hardware and applications
    - Maintain redundancy for all data and critical business functionality
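    One way to act on the "test backups to ensure that they work" item above is a script that verifies a backup copy against its source. The following is only a minimal Python sketch with hypothetical paths, not something taken from the article:

        import hashlib
        import os

        def file_hash(path, chunk_size=1 << 20):
            """Return the SHA-256 hex digest of a file, read in chunks."""
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(chunk_size), b""):
                    h.update(chunk)
            return h.hexdigest()

        def verify_backup(source_dir, backup_dir):
            """Compare every file under source_dir against its copy in backup_dir."""
            problems = []
            for root, _, files in os.walk(source_dir):
                for name in files:
                    src = os.path.join(root, name)
                    rel = os.path.relpath(src, source_dir)
                    dst = os.path.join(backup_dir, rel)
                    if not os.path.exists(dst):
                        problems.append(("missing", rel))
                    elif file_hash(src) != file_hash(dst):
                        problems.append(("mismatch", rel))
            return problems

        if __name__ == "__main__":
            # Hypothetical paths -- substitute the real data and backup locations.
            for kind, rel in verify_backup("/data/production", "/mnt/backup/production"):
                print(kind, rel)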

    Read the article

  • DHCP and Router load testing

    - by John H
    I manage a campground wifi network with an average of 10-60 active users. I have encountered issues where the router starts acting flaky (failing to assign DHCP leases or failing to pass traffic) without any clear warning (low CPU utilization, etc.). I upgraded the router a couple of times and ended up with a Netgear ProSafe VPN router that seems to be handling the traffic. The interesting thing is that the Netgear has lower specs than the Buffalo router it replaced, suggesting the issue is with the DD-WRT firmware. While I'll be pursuing this issue on the DD-WRT forums, I need a way to test routers. My vision is having 1-2 computers connected on the LAN side and 1-2 computers connected on the WAN side. I want the LAN computers to generate various types of traffic and connections, as well as request DHCP addresses.

    A few notes:
    - The wireless aspect should be a non-issue. Most clients connect to a wireless bridge and come into the router through a network cable.
    - I had a monitoring server with Nagios running check_dhcp against the router. This server was connected directly by a network cable, eliminating wifi bridges and other devices from the equation.
    - This question is somewhat related, but not exactly: Load testing wireless LANs
    - I am going to look at IxChariot. While I'd ideally like to use one computer on each side running Linux, and preferably free software, I can entertain running Windows, multiple computers, or non-free software.
    - Total bandwidth doesn't seem to be the issue. I can transfer large files all day. Even on the busiest days, the users seemed to only pull ~5 Mbps.
    - There is very little LAN-to-LAN traffic, and most of it might never have reached the main router.
    - The issue I need to test for seems to be tied to active users or, more appropriately, active sessions. I know "active users" or "active clients" is a meaningless term from a router's standpoint and wouldn't mind having more appropriate terms to use.

    Summary: I need a way to test a router's ability to handle traffic from a large number of clients. My current strategy is to purchase a router, deploy it, and see how it fails in the live environment.
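    Not part of the question, but one low-tech way to approximate "many active sessions" without a commercial tool is to hold a large number of concurrent TCP connections open through the router, which exercises its connection/NAT table rather than raw bandwidth. A minimal Python sketch, assuming a reachable host on the WAN side at a hypothetical address:

        import socket
        import threading
        import time

        # Hypothetical target: a machine on the WAN side of the router under test,
        # running something that accepts TCP connections (e.g. a simple web server).
        TARGET = ("192.0.2.10", 80)
        SESSIONS = 500          # number of concurrent connections to hold open
        HOLD_SECONDS = 120      # how long each connection stays alive

        def hold_session(i):
            try:
                s = socket.create_connection(TARGET, timeout=10)
                # Send a trivial request so some traffic actually crosses the router.
                s.sendall(b"GET / HTTP/1.0\r\nHost: test\r\n\r\n")
                time.sleep(HOLD_SECONDS)
                s.close()
            except OSError as e:
                print(f"session {i}: {e}")

        threads = [threading.Thread(target=hold_session, args=(i,)) for i in range(SESSIONS)]
        for t in threads:
            t.start()
            time.sleep(0.02)    # ramp up gradually
        for t in threads:
            t.join()
        print("done")

    Raising SESSIONS until the router misbehaves gives a rough, repeatable stress point to compare firmware versions against; it is a sketch, not a substitute for a tool like IxChariot.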

    Read the article

  • Windows 2003 - logging off the Administrator account makes web applications inaccessible

    - by Saravana
    Consider a web application installed under the admin account on a Windows Server 2003 SP2 machine. The application is accessible on the server as well as over the network when at least one session of the admin account is logged in. If there are no active sessions of the admin account, the web application is not accessible via the network, nor is it accessible locally when logged in with another user account. What would cause the web application to be inaccessible when there is no Administrator session? Please suggest anything that might help find the solution.

    Read the article

  • Configure TCP/IP to use DHCP and a Static IP Address at the Same Time

    - by Tiago
    My computer is configured to obtain an IP address automatically using DHCP. It only has one network adapter. How do I configure an additional static IP address? I found a tutorial for Windows XP, but the procedure didn't work for Windows 7. Is it possible to configure two IP addresses on Windows 7, one static and one dynamic? How? The dynamic address I have now is 10.17.11.162. The static IP is 10.17.30.19. The network mask is the same for both: 255.255.224.0. Each works on its own, but I don't know how to use both at the same time.

    Read the article

  • Can't complete dropbox installation from behind proxy in Ubuntu 11.10

    - by Mark Jones
    Problem: My PC on campus sits behind a proxy (requiring authentication) and I can't set up Dropbox. I am convinced that this is a proxy issue, as I can't set up Ubuntu One either (but I don't use Ubuntu One, so that in itself is not a problem). I have looked at the Ubuntu One fix, but it seems to modify settings explicitly related to Ubuntu One. I can install the nautilus-dropbox package (compiled from source, from the .deb package on the website, and from the Software Centre), but once I click OK in the "Dropbox Installation" dialog box (prompting me to download the proprietary daemon) the installation just freezes with the OK button pressed. When I look at its process in System Monitor, its waiting channel is inet_wait_for_connect.

    I have set the following proxy directives so far (where ** is my password):
    - Added mj22:**@proxy.waikato.ac.nz:80 to the network proxy settings under Network in Settings.
    - Added http_host and http_port variables under gconf-editor > system > proxy.
    - Added 'host', 'authentication_password' and 'authentication_user', and ticked 'use_authentication' and 'use_http_proxy', under gconf-editor > system > http_proxy.
    - Added export http_proxy="http://mj22:**@proxy.waikato.ac.nz:80/" to /etc/bash.bashrc
    - Added Acquire::http::proxy "http://mj22:**@proxy.waikato.ac.nz:80/"; to /etc/apt/apt.conf (which I imagine is what lets Software Centre retrieve packages).
    I have also added the equivalent ftp and https lines for the above entries. I get the internet fine and Software Centre can download packages, but that's it.

    Related issues:
    - The Software Centre can't fetch reviews (but can download packages).
    - When trying to add an online account in GNOME 3, a dialog pops up with "Error getting a Request Token: Cannot connect to proxy (proxy.waikato.ac.nz)".

    Updates: After some time (about 10 minutes) Dropbox shows an error dialog that reads: "Trouble connecting to Dropbox servers. Maybe your internet connection is down, or you need to set your http_proxy environment variable." Is there a way I can see what environment variables are currently set?
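    Regarding the closing question: running env in a terminal lists the variables for that shell, but a GUI-launched program such as Dropbox may see a different environment (variables exported only in /etc/bash.bashrc are generally not picked up by the desktop session). A small Python sketch that prints the proxy-related variables a process inherits:

        import os

        # Print every environment variable whose name suggests a proxy setting.
        for name, value in sorted(os.environ.items()):
            if "proxy" in name.lower():
                print(f"{name}={value}")

        # Explicitly check the variables Dropbox's error message refers to.
        for name in ("http_proxy", "https_proxy", "ftp_proxy", "no_proxy"):
            print(name, "=>", os.environ.get(name, "<not set>"))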

    Read the article

  • CloudMail

    - by kaleidoscope
    In web applications we often come across the requirement of sending and receiving email, and the same applies to applications hosted on Azure. So, do you want to send email from an application hosted on Azure? CloudMail is one possible answer. CloudMail is designed to provide a small, effective, and reliable solution for sending email from the Azure platform, directly addressing several problems that application developers face. Microsoft does not (yet) provide an SMTP gateway, so the application is forced to connect directly to one hosted somewhere else, on another network. One possible option for implementing such functionality is to use a free email provider. This might be fine for testing, but do you really want to rely on a free service in production? There can be other issues with this approach, for example if your chosen SMTP gateway is down or there are connection problems. There can also be more specific requirements, such as wanting to send email via a company's mail server, from inside its firewall. CloudMail solves these problems by providing a small client library that you use in your solution to send emails from your application, and a Windows service that you run inside your company's network to act as a relay. Because the send and the relay are disconnected, there are no lost emails, and you can send from your own SMTP gateway. CloudMail is in its beta version and is available for download here.
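    To illustrate the disconnected send-and-relay pattern described above, here is a conceptual Python sketch. It is NOT the CloudMail API; the in-memory queue stands in for whatever durable store (for example, Azure queue storage) would sit between the cloud application and the on-premises relay, and the SMTP host name is hypothetical.

        import queue
        import smtplib
        from email.message import EmailMessage

        outbox = queue.Queue()

        def enqueue_mail(sender, recipient, subject, body):
            """Cloud side: the application only drops a message onto the queue."""
            msg = EmailMessage()
            msg["From"], msg["To"], msg["Subject"] = sender, recipient, subject
            msg.set_content(body)
            outbox.put(msg)

        def relay_once(smtp_host="smtp.example.internal", smtp_port=25):
            """Relay side: runs inside the company network, next to the SMTP gateway."""
            while not outbox.empty():
                msg = outbox.get()
                with smtplib.SMTP(smtp_host, smtp_port) as smtp:
                    smtp.send_message(msg)

        enqueue_mail("app@example.com", "user@example.com", "Hello", "Sent via the relay.")
        # relay_once()   # would actually deliver, given a reachable SMTP gateway

    The point of the split is that a transient gateway outage only delays the relay step; nothing the application enqueued is lost.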

    Read the article

  • .NET and SMTP Configuration

    - by koevoeter
    Sometimes I feel stupid about discovering .NET features that have been there since an old release (2.0 in this case)... Apparently you can just use the "mailSettings" config section and never have to configure your SmtpClient instance in code again (no, not hard-coded):

        <system.net>
          <mailSettings>
            <smtp deliveryMethod="Network" from="My Display Name &lt;[email protected]&gt;">
              <network host="mail.server.com" />
            </smtp>
          </mailSettings>
        </system.net>

    Now you can go all like:

        new SmtpClient().Send(mailMessage);

    ...and everything is configured for you, even the from address (which you can obviously override).

    Read the article

  • Fortigate 200A firewall CPU high resource usage

    - by user119720
    This morning I received complaints from several end users saying their whole department's network is slow and intermittent. I therefore checked our firewall to see whether something had gone wrong with the device. From the FortiGate dashboard status, CPU usage is very high (99 percent). My first thought was to clear the log, since the FortiGate alert log mentions that it is already 90% full. Based on my understanding, the log can be cleared by restarting the firewall. After restarting the firewall the network seemed okay again, but after several minutes the CPU usage went up again, and the condition still persists now. Can someone show me where else I can check to fix this issue? I'd really appreciate any help I can get here. Thanks. Edit: diag sys top command

    Read the article

  • Is there a convenient way to manually copy the Log Shipping *.trn files from one SQL 2008 server to another?

    - by Rick
    We have a remote SQL 2008 server (ServerB) that needs to keep a warm copy (a 15-minute interval is OK) of production data from ServerA. ServerA is also SQL 2008. Log shipping looks like it will do the job. We can only get to the destination ServerB with remote desktop. Is there a way to set this up when we can't get to both servers from one Management Studio? We want to be able to manually export a small .trn file, copy it via remote desktop to ServerB, and then manually import those transactions from the .trn file, temporarily, until a VPN is set up between our network and the ServerB network. My supervisor says he saw a post saying this is possible. We were just trying to avoid doing a full database backup and copying that every time. Thanks in advance for any suggestions on this.
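    Not from the question, just a hedged sketch: if the .trn files can be reached from the remote desktop session (for example via a drive mapped through RDP), a small script can at least automate the copy step; the files would then still need to be restored on ServerB with RESTORE LOG ... WITH NORECOVERY (or STANDBY) so that subsequent log backups can continue to be applied. The paths below are hypothetical.

        import os
        import shutil

        # Hypothetical locations: the log-shipping backup folder from ServerA and a
        # drive mapped into the remote desktop session on ServerB.
        SOURCE = r"\\ServerA\LogShipping"
        DEST = r"Z:\Incoming"

        def copy_new_trn_files(source, dest):
            """Copy any .trn file not already present in the destination folder."""
            copied = []
            existing = set(os.listdir(dest))
            for name in sorted(os.listdir(source)):
                if name.lower().endswith(".trn") and name not in existing:
                    shutil.copy2(os.path.join(source, name), os.path.join(dest, name))
                    copied.append(name)
            return copied

        if __name__ == "__main__":
            for name in copy_new_trn_files(SOURCE, DEST):
                print("copied", name)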

    Read the article

  • b43 module loaded, but no interface showed up

    - by Eduardo Bezerra
    I'm using CentOS 6.3 x86_64 on hardware with a BCM43224 chip for wi-fi. I installed the b43-fwcutter package and then ran modprobe b43 with no error messages. However, no new network interface showed up, and iwconfig returns:

        lo        no wireless extensions.
        eth0      no wireless extensions.

    lspci -nn | grep 43224 returns:

        03:00.0 Network controller [0280]: Broadcom Corporation BCM43224 802.11a/b/g/n [14e4:4353] (rev 01)

    and uname -a:

        Linux localhost.localdomain 2.6.32-279.14.1.el6.x86_64 #1 SMP Tue Nov 6 23:43:09 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    Any ideas on how to make the wireless device work?

    Read the article

  • pix 501, static route to d-link router (different subnet)

    - by ra170
    I have a PIX 501 Cisco firewall with internal IP 192.168.10.1. I have connected a D-Link router (DIR-655) to the PIX 501. The D-Link router has internal IP 192.168.0.1. The picture looks something like this:

        |pix 501| has 192.168.10.1 ip
        |DIR-655| has 192.168.0.1 ip

        1. |cable modem|----|pix 501|-------|DIR-655|-----PC

        2. PC--------|pix 501|---------|DIR-655|
                         |
                         |
                   |cable modem|

    When I'm on the wireless network (DIR-655) with an assigned IP of 192.168.0.x, I can cross the subnet and connect to my firewall at 192.168.10.1 (pic. 1). The problem is that if I'm on the 192.168.10.x network, I can't connect to anything on the 192.168.0.x network (pic. 2). I've tried entering a static route like this:

        route inside 192.168.0.0 255.255.255.0 192.168.10.1 1

    I also tried assigning a static IP (192.168.10.30) to the WAN interface on the DIR-655 and then tried this:

        route inside 192.168.0.0 255.255.255.0 192.168.10.30 1

    But still, I can't connect to 192.168.0.1 or anything on that subnet. Is there a way to set up a static route? Would adding a separate router between the PIX 501 and the DIR-655 help? I would think that a static route like this should take care of it, but it doesn't. This is my route config and NAT:

        (config)# sh route
        outside 0.0.0.0 0.0.0.0 (outside_IP) 1 DHCP static
        outside (outside_IP) 255.255.248.0 (outside_IP) 1 CONNECT static
        inside 192.168.0.0 255.255.255.0 192.168.10.1 1 OTHER static
        inside 192.168.10.0 255.255.255.0 192.168.10.1 1 CONNECT static
        or (route inside 192.168.0.0 255.255.255.0 192.168.10.30 1)

        (config)# sh nat
        nat (inside) 1 192.168.1.0 255.255.255.0 0 0
        nat (inside) 1 192.168.10.0 255.255.255.0 0 0
        nat (inside) 1 0.0.0.0 0.0.0.0 0 0

    I ended up turning the DIR-655 into an access point (turning off DHCP and plugging the cable from the PIX LAN interface into one of the LAN ports on the DIR-655, leaving the WAN port empty). That works, in that the DIR-655 is now on the same subnet and I can access every machine. However, the question remains: why can't I simply route between those two? Would a router between them help? One reason I ask is that the PIX 501 has only 10 licenses, so now I'm using almost all of them (I have a few computers, iPhones, a PS3, a print server, etc.). I would really appreciate some help! Thanks.

    Read the article

  • EPM Architecture: Reporting and Analysis

    - by Marc Schumacher
    Reporting and Analysis is the basis for all Oracle EPM reporting components. Through the Java-based Reporting and Analysis web application deployed on WebLogic, it enables users to browse through reports for all kinds of Oracle EPM reporting components. Typical users access the web application with a browser through Oracle HTTP Server (OHS). The Reporting and Analysis web application talks to the Reporting and Analysis Agent using the CORBA protocol on various ports. All communication to the repository databases (EPM System Registry and the Reporting and Analysis database) from the web and application layers is done using JDBC. As an additional data store, the Reporting and Analysis Agent uses the file system to lay down individual reports. While the reporting artifacts are stored on the file system, the folder structure and report-based security information are stored in the relational database. The file system can be either local or remote (e.g. a network share or network file system). If an external user directory is used, the Reporting and Analysis services also communicate with this directory. The next post will cover WebAnalysis.

    Read the article

  • Lost contact with my NAS after changing its IP

    - by Beles
    I did some brain-dead reconfiguring of my D-Link DNS-323 NAS some days ago. I have a home network where each computer gets a dynamically allocated IP address starting at 192.168.1.100. The irritating point (for me at least) was that the NAS changed IP if the power went down or I turned off the router. I then had to remap a drive letter to point to the new IP address of the NAS. To remedy that, I configured the NAS to have a static IP, 192.168.0.10. I had no good reason to choose that IP other than that I found it in a user manual for the NAS. After I changed the IP and rebooted the NAS, it disappeared from the network and was never to be found again. Now I have a black brick standing in my home, looking good, but "dead". Could anyone point me in a direction which helps me solve this problem? I have about 100 GB worth of pictures of my children on this brick, so I really want it back :-) Sincerely,
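    A plausible reading, not stated in the post: the home network uses 192.168.1.x while the NAS was given 192.168.0.10, so it now sits on a different subnet and will not answer requests from a 192.168.1.x client. A minimal Python sketch to confirm the NAS is still alive, assuming the PC running it has temporarily been given an address on the 192.168.0.x subnet (e.g. 192.168.0.50):

        import socket

        NAS_IP = "192.168.0.10"
        # Ports the DNS-323 would typically answer on: web admin and SMB file sharing.
        PORTS = [80, 139, 445]

        for port in PORTS:
            try:
                with socket.create_connection((NAS_IP, port), timeout=3):
                    print(f"{NAS_IP}:{port} is reachable")
            except OSError:
                print(f"{NAS_IP}:{port} did not answer")

    If port 80 answers, the web admin page at http://192.168.0.10 should let the static address be changed back to something on the 192.168.1.x subnet.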

    Read the article

  • Some notebooks can connect to internet via certain wireless router, some can't

    - by Nathaniel
    I take a class in a business's building, and we sometimes use their internet via Wi-Fi. In the last few weeks, though, some notebooks haven't been able to connect to the internet even though they are connected to the router. At first it affected a few of us, and then all of us. Last week I tweaked the network settings on the router (yes, the admin password is the same as the network key) and even moved the internet cable on the router from port 1 to the internet port. None of this really worked, so I put things more or less back as I found them and alerted someone that somebody who knew more than me had better have a look at the router. I don't know if anybody has had a look at it, but now two of us can connect to the internet again. I had a look at the router settings last week and I really couldn't see what might be creating this issue. What might be the problem?

    Read the article

  • How to "ignore" username and password prompt in net use

    - by Mattisdada
    I have at the moment a logon.cmd script that I'm using to map network drives to the user's profile. It looks like this:

        ::Onboarding
        net use m: /delete
        net use m: \\BOB\onboarding

        ::Bookings
        net use n: /delete
        net use n: \\BOB\bookings

        ::Accounts
        net use j: /delete
        net use j: \\BOB\accounts

    It works fine up until it gets to a folder that the current user cannot access; it then asks for a username and password instead of erroring and continuing. Notes: this very script used to work on another Samba PDC network, but I've moved it over to another server (still a Samba PDC) and now it's breaking. Is there any way for it to ignore the username/password prompt and just continue?

    Read the article

  • What is Finder doing when it has a spinner in the bottom-right?

    - by rspeicher
    Whenever I open a folder on a remote network share that hasn't been opened since I last booted, Finder displays this animated spinner in the bottom-right (see picture below) for about 30 seconds and won't display any of the folder's contents until it's done doing whatever it's doing. This makes browsing the share painfully slow for seemingly no reason. Why's it doing this? I should note that I've disabled .DS_Store files littering network shares using defaults write com.apple.desktopservices DSDontWriteNetworkStores true, so maybe that's why. I'm kind of hoping it's something else, though.

    Read the article

  • Make puppet agent restart itself

    - by SamKrieg
    I've got a file that notifies the puppet agent. In the network module, the proxy settings are included in the .gemrc file like this:

        file { "/root/.gemrc":
          content => "http_proxy: $http_proxy\n",
          notify  => Service['puppet'],
        }

    The problem is that puppet stops and does not restart:

        Aug 31 12:05:13 snch7log01 puppet-agent[1117]: (/Stage[main]/Network/File[/root/.gemrc]/content) content changed '{md5}2b00042f7481c7b056c4b410d28f33cf' to '{md5}60b725f10c9c85c70d97880dfe8191b3'
        Aug 31 12:05:13 snch7log01 puppet-agent[1117]: Caught TERM; calling stop

    I assume the code does something like /etc/init.d/puppet stop && /etc/init.d/puppet start. Since puppet is not running, it cannot start itself... it kind of makes sense. How can I make puppet restart itself when this file changes? Note that this file may not exist as well.

    Read the article

  • Routing and Remote access rule not being applied internally (Windows SBS)

    - by Tim Saunders
    Hi, I have a Microsoft Small Business Server. I have pointed an external domain name to the external fixed IP address of the server. In Routing and Remote Access I have defined a service for our Subversion server as follows:

        Incoming port:   8443
        Private address: 192.168.10.5
        Outgoing port:   8443

    192.168.10.5 is our development server, not the SBS (which is at 192.168.10.1). This rule works correctly if I am not on our internal network. However, if I am on the internal network, this rule does not get applied. What can I do or set so that this rule is applied both internally and externally (so users with laptops etc. don't keep having to change the URL by which they access the Subversion server)? Not sure what other info you may need, so please let me know if more details are required. T

    Read the article

  • Forwarding wifi traffic to wired pc

    - by brydgesk
    I'm trying to play around with Wireshark on my home network, and was wondering if there is a way to create a new connection on my PC that receives all wifi packets on the network. The PC is a wired Windows 7 machine, and I'm using DD-WRT on an Asus RT-N16 router. I'm not trying to hack anything, I have full admin access to the router itself. My searching has led me to articles about client bridges and repeater bridges, but none of them seemed to apply entirely to my situation. I'd like to continue using my standard wifi connection, but make my PC act as a repeater that receives all wifi traffic. Again, the PC has no wireless connection. I've used tcpdump which is installed on the router itself, but I'd be more comfortable analyzing the packets in Windows, as I'm trying to learn Wireshark. Thanks

    Read the article

  • Cannot schedule task to run under domain user account other than current user

    - by Filburt
    On our Windows 2008 machines I can't schedule tasks for domain users, because the domain qualifier resolves to the AD DC name rather than the network name. The "network name" looks like ABCDEFGE-HIJKLM and the "DC" / "name" would look like ABCDEFGE-HIJKLMN. When selecting the domain user account, the account qualifier will look like ABCDEFGE-HIJKLMN\task.user. This results in an "invalid account" error. When keeping the currently logged-in user, however, it will display ABCDEFGE-HIJKLM\current.user. Does this behaviour result from the presumably "illegal" domain name? Is there a workaround for this? Update: I could of course log in as the desired domain account and create the task, but since this account is used for running services, I want to avoid creating a user profile on the machine.

    Read the article

  • Coldfusion server VERY slow page loads

    - by Kevin
    I inherited a Windows Server 2003 / ColdFusion 7 server a few weeks ago. Today a network cable was unplugged from the server by accident. On plugging it back in, pages were not loading at all; rather, we were receiving a generic ColdFusion error page. After restarting IIS several times, and ColdFusion even more than that, we finally got pages to start loading. However, the loading is extremely slow (30+ seconds) on pages that used to load instantly. Loading over the local network (i.e. localhost/cfide/administrator) does nothing to help the load speed. I am not familiar with IIS or ColdFusion (we're in the process of migrating this to Linux/PHP), so this is all new territory to me. I'm hoping someone may have experienced this issue in the past and can help me solve it. I'm happy to provide any additional information that might be necessary... I'm just not sure what information you might need in order to help. Thanks for your time.

    Read the article

  • Windows Server 2008: Terminal Services / VDI

    - by JohnyD
    I have a Dell R710 with 72GB of memory running Hyper-V. Within Hyper-V I have a Windows 2008 (32-bit) VM running Terminal Services. How do I allocate memory so that any user who connects to this Terminal Server (from their thin client) is allocated 2GB (or whatever amount I choose) of memory? Currently I have provisioned the TS VM with 2GB of memory, but it seems that this is shared among all users that connect. Please let me know if there is further information I can provide. Thank you. Update 1: What I'm looking to accomplish with this server is setting up a VDI to allow users to connect from thin clients within our network. They will also have to connect from outside our network via VPN, which is already in place. Am I able to set this up using Windows Server 2008 (not R2)? I ask because I have a 16-bit application which needs to be supported; unfortunately it's not a candidate to be a RemoteApp.

    Read the article

  • Quickbooks and Cisco ASA 5505

    - by user41516
    I have a Cisco ASA 5505, and everything has seemed to function fine. However, I have had problems with QuickBooks 2008 running extremely slowly over the network (Samba), and I have narrowed it down to the Cisco box. Other Samba transfers seem to be pretty fast, and the QuickBooks file is not very large at all (<25 MB). I'm not sure if QuickBooks uses another protocol or if there are other ports that need to be opened. Can anybody give me any clues on how to resolve or troubleshoot the problem, or does anyone have prior experience with running QuickBooks over a network?

    Read the article

  • FTP Access Denied when uploading to server

    - by Albert
    OK, here's the story. I have a server running FTP "out there". I can connect to it using the admin account, browse files, and download files. When I try to upload files, I get 550 Access Denied. I have tried through FileZilla and from the command line. I have Windows Firewall turned off (on my machine). I can upload files from another machine (using the same admin account) on our local network (which means the same public IP). What is the problem? I am running Windows 7, Build 7100, and the other machine on the network is running XP SP3. The thing that gets me, though, is that this worked for the last four months or so without a problem. I got back to the office today after a weekend and it won't work...

    Read the article
