Search Results

Search found 2795 results on 112 pages for 'nic strong'.


  • Handling permissions in an MVP application

    - by Chathuranga
    In a Windows Forms payroll application employing the MVP pattern (for a small-scale client) I'm planning permission-based user permission handling as follows, since its implementation should be less complicated and straightforward. NOTE: the system could be used simultaneously by a few users (maximum 3) and the database is on the server side. This is my UserModel; each user has a list of permissions granted to them.

        class User
        {
            public string UserID { get; set; }
            public string Name { get; set; }
            public string NIC { get; set; }
            public string Designation { get; set; }
            public string Password { get; set; }
            public List<string> PermissionList = new List<string>();
            public bool Status { get; set; }
            public DateTime EnteredDate { get; set; }
        }

    When a user logs in, the system keeps the current user in memory. For example, in the BankAccountDetailEntering view I control the availability of the controls according to the permission, as follows.

        public partial class BankAccountDetailEntering : Form
        {
            public event EventHandler OnLoadForm;   // declared here so the presenter can subscribe
            public bool AccountEditable { get; set; }

            private void BankAccountDetailEntering_Load(object sender, EventArgs e)
            {
                cmdEditAccount.Enabled = false;
                OnLoadForm(sender, e); // Event fires...
                if (AccountEditable)
                {
                    cmdEditAccount.Enabled = true;
                }
            }
        }

    For this purpose all my relevant presenters (like BankAccountDetailPresenter) need to be aware of the UserModel in addition to the business model they present to the view.

        class BankAccountDetailPresenter
        {
            BankAccountDetailEntering _View;
            BankAccount _Model;
            User _UserModel;
            DataService _DataService;

            public BankAccountDetailPresenter(BankAccountDetailEntering view, BankAccount model, User userModel, DataService dataService)
            {
                _View = view;
                _Model = model;
                _UserModel = userModel;
                _DataService = dataService;
                WireUpEvents();
            }

            private void WireUpEvents()
            {
                _View.OnLoadForm += new EventHandler(_View_OnLoadForm);
            }

            private void _View_OnLoadForm(object sender, EventArgs e)
            {
                foreach (string s in _UserModel.PermissionList)
                {
                    if (s == "CanEditAccount")
                    {
                        _View.AccountEditable = true;
                        return;
                    }
                }
            }

            public void Show()
            {
                _View.ShowDialog();
            }
        }

    So I'm handling the user permissions in the presenter by iterating through the list. Should this be performed in the presenter or the view? Are there any other, more promising ways to do this? Thanks.
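
    One way to keep the decision in the presenter while removing the loop from every handler is to put the lookup behind a small helper. The sketch below is only an illustration — the UserPermissions class and its Has method are invented here, not part of the post — but it shows how the presenter's event handler collapses to a single line.

        using System.Collections.Generic;

        // Hypothetical helper: one place that answers "does this user hold this permission?"
        static class UserPermissions
        {
            public static bool Has(User user, string permission)
            {
                // A HashSet lookup avoids rescanning the list when a form checks several permissions.
                return new HashSet<string>(user.PermissionList).Contains(permission);
            }
        }

        // In BankAccountDetailPresenter:
        private void _View_OnLoadForm(object sender, EventArgs e)
        {
            _View.AccountEditable = UserPermissions.Has(_UserModel, "CanEditAccount");
        }

    Wherever the check ends up running, keeping the string comparison in one helper means a later switch to roles or an enum touches a single class rather than every presenter.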

    Read the article

  • Virtualized data centre – Part four: The design

    - by marc dekeyser
    Welcome back to the fourth post in this series! Today we will have a look at what Microsoft recommends as a “private cloud design” and what I will make of it. Whilst my own solution is based on the reference architecture, it is quite different indeed! An important thing to know is that, whilst I am using the private cloud as a reference, I am skipping most of the steps in designing a private cloud. If that is why you are here, please read the links at the end of the article and skim through my own content. A private cloud is much more process driven than just building a virtual infrastructure…

    The architecture of it all… So imagine for a minute that you have unlimited funds to build this lab of yours… You’d want redundancy on all levels and separation of each network where possible! Unfortunately we don’t have that luxury and, as you saw me hinting at in the previous article, our own design will be more limited but still quite capable!

    Networking: From the networking perspective I will not have a fully redundant network; after all, this is but a lab environment! Thanks to Server 2012 I will be able to use bonding on my NICs and use LACP to improve the performance on that part.

    Storage: As I mentioned in the previous article, a Synology DS1218+ will be used for iSCSI provisioning. This device has 2 NICs on board which can be bonded into one 2 Gbps interface, giving me decent throughput and making the disks the most limiting factor in the storage design.

    Domain controllers and extra infrastructure: Server 2012 completely supports running domain controllers virtualized and has no need for a reachable DC when booting… That being said, I need a remote access machine to power on the hosts (I have no need for them running 24/7) and possibly a System Center VMM 2012 box (although Server 2012 is not supported until SP1 :( ). I am undecided on whether to install those boxes separately or as virtual machines… Which amounts to… something like this pretty picture! (diagram omitted from this excerpt)

    Sources:
    Microsoft Private Cloud Solutions Repository (en-US): http://social.technet.microsoft.com/wiki/contents/articles/12131.microsoft-private-cloud-solutions-repository-en-us.aspx
    Reference Architecture: http://social.technet.microsoft.com/wiki/contents/articles/3819.reference-architecture-for-private-cloud.aspx
    Private Cloud Reference Model: http://social.technet.microsoft.com/wiki/contents/articles/4399.private-cloud-reference-model.aspx
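
    For the NIC bonding mentioned above, Server 2012's built-in teaming can be driven from PowerShell. A rough sketch only — the adapter names are placeholders, and the switch ports have to be configured for LACP as well:

        # Create an LACP team from two physical adapters (names are examples)
        New-NetLbfoTeam -Name "LabTeam" -TeamMembers "Ethernet 1","Ethernet 2" -TeamingMode LACP -LoadBalancingAlgorithm TransportPorts
        Get-NetLbfoTeam   # verify the team and its members came up

    The same Get-NetLbfoTeam check is useful after the switch side is reconfigured, since an LACP team stays degraded until both ends agree.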

    Read the article

  • Ubuntu won't connect to wired network

    - by djeikyb
    I'm running 10.04, upgraded from 9.10, maybe, but probably not upgraded from 9.04. I have two wifi routers. Zeus is connected to the dsl modem. Hermes uses a wds bridge with Zeus to extend the network. My desktop (Daedalus) is ethernetted to Hermes. My laptop (Clyde) is wifi, switching to Hermes or Zeus as needed. Occasionally, as in whenever I transfer a large file from desktop to laptop, the wds bridge will die. Fixing it means restarting both routers, though it seems Hermes should boot first. This is ridiculous, and eventually I'll get around to asking you guys to help me stop it from happening. More important is that my desktop requires a reboot to get back on the network. WTF. ifconfig shows my NIC has no IP. /etc/init.d/networking restart doesn't do anything, not even give me a lousy IP. dhcpcd eth1 grants me an IP address, but doesn't help with internet access. route -n shows what looks like my normal routing table, but pinging google.com informs me it's an unknown host.

        jake@daedalus:~$ route -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        10.1.1.0        0.0.0.0         255.255.255.0   U     0      0        0 eth1
        169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth1
        0.0.0.0         10.1.1.1        0.0.0.0         UG    0      0        0 eth1

    It may be worth noting that I can ping both Zeus (10.1.1.1) and Hermes (10.1.1.4) and my laptop (10.1.1.55). Much obliged for any help. Rebooting is, well, trivial in this instance. But it's stupid. I switched to linux because I like the idea that if one part breaks, you fix it instead of reboot reboot reboot. I've left my poor desktop in disarray, confining myself to my little netbook. My desktop is broken, awaiting magical commands from you brilliant folk. (and yes, i know clyde the netbook should be named icarus. it was its original name. ironically the ssd burned out, and i felt it wasn't right when it came to reinstalling)
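
    Since the 10.1.1.x pings succeed but google.com is an "unknown host", the symptom looks like DNS (plus possibly a stale lease) rather than routing. A rough recovery sequence to try before rebooting, assuming the router at 10.1.1.1 is also the DNS server, might be:

        sudo dhclient -r eth1          # release whatever lease is currently held
        sudo ip addr flush dev eth1    # drop any stale address left on the NIC
        sudo dhclient eth1             # request a fresh lease (address, gateway and DNS in one go)
        ping -c 3 10.1.1.1             # gateway still reachable?
        ping -c 3 8.8.8.8              # raw IP connectivity past the gateway?
        nslookup google.com 10.1.1.1   # does the router answer DNS queries at all?

    If the last command answers while a plain nslookup google.com does not, the resolver configuration (/etc/resolv.conf) is what the reboot has been quietly fixing.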

    Read the article

  • Latency Matters

    - by Frederic P
    A lot of interest in low latencies has been expressed within the financial services segment, most especially in the stock trading applications where every millisecond directly influences the profitability of the trader. These days, much of the trading is executed by software applications which are trained to respond to each other almost instantaneously. In fact, you could say that we are in an arms race where traders are using any and all options to cut down on the delay in executing transactions, even by moving physically closer to the trading venue. The Solaris OS network stack has traditionally been engineered for high throughput, at the expense of higher latencies. Knowledge of tuning parameters to redress the imbalance is critical for applications that are latency sensitive. We are presenting in this blog how to further configure a default Oracle Solaris 10 installation to reduce network latency. There are in fact many parameters that can be altered, but the most effective ones are intr_blank_time and intr_blank_packets. These parameters affect on-board network throughput and latency on Solaris systems. If interrupt blanking is disabled, packets are processed by the driver as soon as they arrive, resulting in higher network throughput and lower latency, but with higher CPU utilization. With interrupt blanking disabled, processor utilization can be as high as 80–90% in some high-load web server environments. If interrupt blanking is enabled, packets are processed when the interrupt is issued. Enabling interrupt blanking can result in reduced processor utilization and network throughput, but higher network latency. Both parameters should be set at the same time. You can set these parameters by using the ndd command as follows:

        # ndd -set /dev/eri intr_blank_time 0
        # ndd -set /dev/eri intr_blank_packets 0

    You can make the settings persistent by adding them to the /etc/system file as follows:

        set eri:intr_blank_time 0
        set eri:intr_blank_packets 0

    The value of the interrupt blanking parameter is a trade-off between network throughput and processor utilization. If higher processor utilization is acceptable for achieving higher network throughput, then disable interrupt blanking. If lower processor utilization is preferred and higher network latency is the penalty, then enable interrupt blanking. Our experience at ISV Engineering is that under controlled experiments the above settings result in a reduction of network latency by at least 50%; on a two-socket 3 GHz Sun Fire X4170 M2 running Solaris 10 Update 9, the above settings improved ping-pong latency from 60 µs to 25–30 µs with the on-board NIC.
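
    The current values can be read back before and after the change, and a simple round-trip measurement shows the effect. A sketch only — the eri driver name comes from the text above, and other NICs (e1000g, igb, …) expose their own set of tunables:

        # ndd /dev/eri intr_blank_time        # read the current value
        # ndd /dev/eri intr_blank_packets
        # ping -s remote-host 56 100          # Solaris ping: 100 round-trip samples to compare before/after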

    Read the article

  • Unable to ping inside or outside network with default gateway 0.0.0.0

    - by agentroadkill
    I've been around here before and I could usually piece together everything to more or less get myself up and running, but this time I'm truly stumped. I'm trying to connect my new 14.04 install to a network, and I'm forced to be behind my college's router. Now I've tested the very cable that is right now plugged into my Ubuntu box on a Windows, Mac OS X, and even my friend's Ubuntu 14.04 box, and they all connect no problem. I've been trying to track this down for about two days, but every time I get close to it, the bug jumps to some other piece of my connection. Anyway, as it sits, ifconfig -a gives:

        eth2    Link encap:Ethernet  HWaddr 00:1f:bc:08:31:1d
                inet addr:10.32.51.51  Bcast:10.32.51.155  Mask:255.255.255.0
                UP BROADCAST MULTICAST  MTU:1500  Metric:1
                RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                RX bytes:0  TX bytes:0

    as well as the local loopback, but I'm assuming that is not an issue here. sudo dhclient -v eth2 returns:

        Listening on LPF/<hardware address of my integrated NIC, above>
        Sending on   <same>
        Sending on   Socket/fallback
        DHCPREQUEST of 10.32.51.51 on eth2 to 255.255.255.255 port 67 (xid=0x6f4a66ba)
        <two more lines of same>
        DHCPDISCOVER on eth2 to 255.255.255.255 port 67 interval 3 (xid=0x156f9fb4)
        <many more of above with varying intervals>
        No DHCPOFFERS received.
        Trying recorded lease 10.32.51.51
        RTNETLINK answers: File exists
        bound: renewal in <large number> seconds

    If I then try ping 8.8.8.8, I get: connect: Network is unreachable. /etc/resolv.conf only contains the two lines telling you not to edit it, while /etc/network/interfaces only has the loopback interface block in it. I've tried commenting out the "option rfc3442" line in /etc/dhcp/dhclient.conf, which seemed to fix this issue for many people, as well as adding the line send vendor-class-identifier "MSFT5.0" to dhclient.conf to tell the router I'm a Windows box, in case they don't like Linux. Finally, route -n reveals:

        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        10.32.51.0      0.0.0.0         255.255.255.0   U     0      0        0 eth2

    I would like to apologize in advance for the doubtless butchered text alignment, but I'm obviously typing this all by hand, reading from the terminal as I type commands. I'm hoping this is an interesting problem, and not something I blithely stumbled past in my (apparent) over-confidence. TIA! Quick addendum before posting: the activity lights on the ethernet port are lit and one blinks during boot, but they rarely (and seemingly randomly) do so afterwards (both are dark) even while running dhclient in the foreground. When I had the Ubuntu box tethered to my MacBook earlier, I got what looked like a normal power/uplink blinking pattern, but was unable to ping one from the other.
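
    Because no DHCPOFFER ever arrives while other machines work on the same cable, it is worth asking the college IT whether ports or MAC addresses have to be registered before DHCP will answer this box. In the meantime, a static fallback in /etc/network/interfaces rules the local stack in or out. A sketch only — the gateway and DNS values below are guesses that need confirming:

        # /etc/network/interfaces -- temporary static configuration while DHCP is being debugged
        auto eth2
        iface eth2 inet static
            address 10.32.51.51
            netmask 255.255.255.0
            gateway 10.32.51.1          # guessed from the subnet; confirm the real gateway
            dns-nameservers 8.8.8.8     # placeholder resolver

    Then sudo ifdown eth2 && sudo ifup eth2 applies it. If pings to the gateway still fail with a static address, the problem is below IP (port security, VLAN, or the NIC itself) rather than DHCP.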

    Read the article

  • iScsiPrt error event ID 5

    - by AZee
    Event Log: "Failed to setup initiator portal. Error status is given in the dump data." This is being recorded every 3/100's of a second. We are using MS iSCSI Initiator on Windows Server 2003, Dell 2970 w/4GB (PAE). I am sure that this was configured by Dell initially. I have no idea what changes or mods were made since the company installed this machine until now. (I'm a new User so the lovely and vibrant screen images had to be removed. They were quite pretty and I am sure you would have been very moved and appreciative of them.) It appears that everything is installed correctly and the 5TB bound volume is accessible but I have never worked with iScsi before so I plead total ignorance. In searching I have found this to be a fairly sparce and bland documented subject. I'd like two things... First, to get rid of the error msg being logged. MS says it can be ignored if everything is working but it chews up resources logging it and I don't feel comfortable about any errors on my servers. I want to correct whatever is causing this problem. Secondly, being totally green to this, I would like to confirm that the setup is optimized and we are taking advantage of all features available. Although there are 3 NIC's in this machine it appears that the initiator is only configured for the Broadcom BMC5708C NetXtreme II on our 10.90.1.#, the other 2 NICS are 1GB on the 192.168.0.#. Would additional targets improve performance? If someone who is experienced in configuring the Microsoft iScsi Initiator can help I would really appreciate it since, as I mentioned, everything I have come across has not been of any value at all. Thanks! ~AZ

    Read the article

  • Intermittent Windows Server 2008 BSOD and restart

    - by Timka
    Our EC2 Instance (Windows Server 2008) crashed multiple times for the past 3 months (last time was today at 1:05 EST). Upon reviewing MEMORY.DMP file we noticed that possible cause of the crashes is rhelnet.sys (RedHat PV NIC Driver). Server's Event Viewer has the following records right after the crash: Critical - Kernel Power: The system has rebooted without cleanly shutting down first. This error could be caused if the system stopped responding, crashed, or lost power unexpectedly. BugCheck: The computer has rebooted from a bugcheck. The bugcheck was: 0x000000d1 (0x000000000000002d, 0x0000000000000002, 0x0000000000000000, 0xfffff88001402d14). A dump was saved in: C:\Windows\MEMORY.DMP. Report Id: 100113-35849-01. Could this be a hardware issue? Would it help if we stop and start the instance? Or is this more likely that this is caused by the software running on the system? [Update 10.01.2013] Amazon Rep suggested to update RH drivers to Citrix PV drivers on our instance: Upgrading PV Drivers [Update 10.08.2013] We performed a drivers upgrade on the cloned instance. Right after the upgrade we noticed the following errors in our Event viewer: Xennet6 errors in Event Viewer (Event ID# 5001) After digging a bit more I found this article suggesting to install the latest Citrix drivers. Unfortunately, this didn't help us at all and our cloned instance became unresponsive. [Update 10.08.2013 2] I recreated an instance and updated PV drivers again. After searching on Internet I found this article where Amazon Rep explains that: "Event ID 5001 from source Xennet6 cannot be found" message does not indicate anything wrong, just that the PV driver is looking for a feature that we have not implemented in our version of Xen. I will keep my test system running for a while to see if there any issues with it.
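
    To confirm that rhelnet.sys really is the faulting module rather than an innocent bystander, the crash dump can be opened with the Debugging Tools for Windows; a minimal sketch:

        C:\> kd -z C:\Windows\MEMORY.DMP
        kd> !analyze -v
        kd> lmvm rhelnet

    !analyze -v reports the bugcheck (0xD1 is DRIVER_IRQL_NOT_LESS_OR_EQUAL) and the module it blames, and lmvm rhelnet shows the installed driver's version and build date — useful evidence when deciding between the RedHat PV and Citrix PV driver paths described above.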

    Read the article

  • Configuring default gateway returned by dhcp server

    - by comp1mp
    Hello, I have a machine which connects via ethernet to a private LAN, and via wireless to a network which provides internet connectivity. The private LAN uses a wireless router to perform DHCP. The problem is that the wireless and wired adapters have different default gateways. The default gateway for the private LAN has a lower adapter metric, and is thus chosen by the routing algorithm. I am thus unable to browse the internet when connected to both networks. The following link has a solution for manually setting the adapter metric to a high number. http://superuser.com/questions/77822/how-to-tell-windows-7-to-ignore-a-default-gateway I was hoping to find a different solution. Does anyone know of a router that allows you to configure its DHCP server to return an empty default gateway? I cannot find such an option for my Linksys WRT300N. Configuring a static IP address with no default gateway does work, however I would like to use DHCP if possible. Does anyone know of a different way to specify a default gateway for a Windows 7 machine with multiple network adapters without mucking with the adapter metric? Thanks, Matthew
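
    The stock WRT300N firmware does not expose this, but third-party firmware that uses dnsmasq for DHCP (DD-WRT, OpenWrt and similar) can hand out leases with no router option at all. A sketch of the relevant dnsmasq lines — interface name and range are examples:

        # dnsmasq DHCP for the private LAN: addresses and lease time, but no default gateway
        interface=br0
        dhcp-range=192.168.1.100,192.168.1.200,12h
        dhcp-option=3          # option 3 (router) given with no value = do not send a gateway

    Windows then keeps the wireless adapter's gateway as the only default route, which is what the question is after.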

    Read the article

  • Sending emails from PHP - email providers vs GAE

    - by nrph
    I need to send emails from my social service (this is a continuation of Experiences in mailing to registered users). I have a strong feeling that it's better to avoid problems with email server configuration and maintenance and to choose an email provider which will take care of all the painful problems. So several offers were compared: http://imgur.com/JkK2X.jpg Three of them look very attractive: Postageapp / Sendgrid / CritSend. As an alternative I'm considering setting up a GAE app. An email provider is quite easy to start working with, but I have no idea how much effort GAE requires to integrate with PHP. So my question is: which option is better to choose, an email provider or GAE? Two factors are important here: business background (therefore prices are mentioned) and the work required to set up and maintain the desired solution. Preferably I would love to avoid all email-related problems (like blacklists and so on).

    Read the article

  • Active RDP session over VPN getting disconnected

    - by Wandering Penguin
    I am having seemingly random disconnects of active RDP sessions (I am actively typing or otherwise interacting with the desktop) when connected over the VPN connection. The "attempting to reconnect (1/20)" message pops up, counts all the way through 20, and then the session drops. Once the session drops I can open a new session and connect again. This started happening about a week ago. The VPN connection is an IPSec VPN connection from a SonicWall NSA 2400. The NIC drivers are up to date. The VPN client is up to date. The firmware on the SonicWall is up to date (both the regular and the early-release versions behave the same). I have attempted to connect over three ISPs, all with the same behavior. Two different workstations were used to test the VPN connection. The same behavior occurs when connecting to a domain workstation or server. If I am within the firewall I can connect to the same workstations and servers without the disconnects. The VPN connection has "enable fragmented packet handling" and "ignore DF (don't fragment) bit" set. Is there something I am missing in where I am looking for the problem?
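
    Given that the tunnel already has the fragmentation workarounds enabled, a quick check of the effective MTU across the VPN narrows things down. A sketch from a Windows client — the host name is a placeholder, and 1418 is only an approximation of typical ESP overhead:

        C:\> ping -f -l 1472 <host-behind-vpn>
        C:\> ping -f -l 1418 <host-behind-vpn>

    The first payload size fills a normal 1500-byte Ethernet MTU; the second leaves room for IPSec encapsulation. If the larger ping fails with "Packet needs to be fragmented but DF set" while the smaller one succeeds, lowering the client adapter's MTU (or enabling MSS clamping on the tunnel, if the SonicWall offers it) is worth trying before digging further into RDP itself.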

    Read the article

  • Validating SSL clients using a list of authorised certificates instead of a Certificate Authority

    - by Gavin Brown
    Is it possible to configure Apache (or any other SSL-aware server) to only accept connections from clients presenting a certificate from a pre-defined list? These certificates may be signed by any CA (and may be self-signed). A while back I tried to get client certificate validation working in the EPP system of the domain registry I work for. The EPP protocol spec mandates use of "mutual strong client-server authentication". In practice, this means that both the client and the server must validate the certificate of the other peer in the session. We created a private certificate authority and asked registrars to submit CSRs, which we then signed. This seemed to us to be the simplest solution, but many of our registrars objected: they were used to obtaining a client certificate from a CA, and submitting that certificate to the registry. So we had to scrap the system. I have been trying to find a way of implementing this system in our server, which is based on the mod_epp module for Apache.
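
    One pattern often described for exactly this situation — accepting client certificates that were not issued by a single trusted CA — is to let mod_ssl complete the handshake without CA validation and do the allow-list check at the application layer. A sketch only; the directives are standard mod_ssl, but whether this combines cleanly with mod_epp would need testing:

        SSLVerifyClient optional_no_ca
        SSLVerifyDepth  1
        SSLOptions      +StdEnvVars +ExportCertData
        # mod_ssl now completes the handshake even for unknown or self-signed client certs and
        # exports SSL_CLIENT_CERT (the PEM) plus SSL_CLIENT_M_SERIAL etc. to the application,
        # which compares the certificate (or its fingerprint) against the registry's list of
        # authorised client certificates and rejects the EPP session when there is no match.

    An alternative is to keep SSLVerifyClient require and place each authorised certificate into SSLCACertificateFile, but that only works when every client certificate is genuinely self-signed (and therefore its own trust anchor), which the registrars' CA-issued certificates would not be.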

    Read the article

  • Windows Web Server 2008 R2 Server Core local password complexity

    - by Dennis Allen
    How can I disable the local user account password complexity settings on Windows 2008 R2 "Server Core"? I am trying to migrate our Windows 2003 web server to Windows 2008 R2. I am trying to see if I can use the "Server Core" install, and it has been a very internet-search-intensive experience. What I can't find is how to disable password complexity for local user accounts. While our user account generator currently creates nice strong passwords, there was a time when this was not the case, and unfortunately forcing the users to change their passwords is not an option at this time. Any help greatly appreciated. Dennis
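
    Without the local policy GUI, the usual Server Core recipe is to export the local security policy, flip the complexity flag, and re-import it. A sketch, assuming the box is not a domain member (where domain policy would override the local setting):

        C:\> secedit /export /cfg C:\secpol.inf
        (edit C:\secpol.inf in notepad and change "PasswordComplexity = 1" to "PasswordComplexity = 0")
        C:\> secedit /configure /db %windir%\security\local.sdb /cfg C:\secpol.inf /areas SECURITYPOLICY
        C:\> del C:\secpol.inf

    The same file also carries MinimumPasswordLength if that needs relaxing for the legacy accounts.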

    Read the article

  • Can't access Dell BMC IPMI over IP

    - by Bobb
    I have a Dell R210 with iDRAC BMC (the new name for the old BMC), which is an on-board feature with a shared NIC (I believe). The server is in colocation and I didn't set it up before it was sent there... So I asked the remote hands to set up IPMI over IP. They enabled it, set the IP and everything. The IP is different from the main box IP. Also, the box is cabled to NIC1 and the BMC is supposed to share it (am I right?). I can see the new IP in OpenManage Server Administrator (installed on the box). I tried the Supermicro IPMI tool and I tried the Dell ipmish.exe command like this:

        ipmish -ip xxx -u root -p calvin sysinfo

    which gives "BMC is not detected". What could be wrong? Is there a diagnostics tool I can try? It must be something obvious, I just never used things like that before.... P.S. I read something about an encryption key in the Dell docs, but I understand that is for encrypted IPMI 2.0, and ipmish can use IPMI 1.5 without encryption.
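
    As an independent cross-check, ipmitool (run from any Linux box or laptop that can reach the BMC's address) exercises both protocol generations; a sketch, with the address as a placeholder and the credentials as given above:

        ipmitool -I lan     -H <bmc-ip> -U root -P calvin chassis status   # IPMI 1.5, no encryption
        ipmitool -I lanplus -H <bmc-ip> -U root -P calvin chassis status   # IPMI 2.0 (add -y <hex key> if a KG key is set)

    If both fail while the OS-level tools on the box itself can see the BMC, the usual suspects are the switch port or VLAN in the colo (the shared NIC1 port has to carry the BMC's subnet too) or IPMI-over-LAN having been enabled on the wrong LAN channel, rather than the client tool.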

    Read the article

  • OS choice between: Debian, gNewSense, and OpenSolaris

    - by penyuan
    I am planning to migrate from Mac OS X and Windows to either a Unix or Linux distribution, i.e. I am a Linux/Unix beginner. Right now the following caught my interest: Debian: well established, with a huge repository of 20,000+ apps. gNewSense: "totally free" version of Ubuntu, so it should be more beginner friendly? OpenSolaris: also open source, and built on a "strong" Unix base. I do mainly basic tasks such as web browsing, office work, maintaining a big photo collection, and a little bit of programming. Questions: How "free" is each of these distributions compared to the others, and is this whole freedom thing a big deal? Will a binary labeled as for Ubuntu work on gNewSense? What are simple IDEs for Debian and gNewSense?

    Read the article

  • How do I locate the app generating this network traffic?

    - by Christopher Bartels
    I don't know what this process is doing on my computer. I run Windows 7 Professional with all its updates and current non-free antivirus. I only see it in Resource Monitor, where you can see the Network Service process connected to bitum.nnov.ru. When my PC's network-traffic-generating apps are idle, this process is using the most network of all the idle processes. Screenshot hosted here: http://sss.proinbox.com/bitum-nnov-ru.jpg Does anyone recognize this? The page source mentions a control port & a stream port. Page source for http://bitum.nnov.ru :

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
        <html>
        <head>
        <title>DVR WebViewer</title>
        <meta http-equiv="Content-Type" content="text/html; charset=euc-kr">
        </head>
        <body topmargin="0" leftmargin="0">
        <OBJECT classid="clsid:EE479A40-C128-40DD-93DA-000556AF9607" codebase="CtrWeb.cab#version=1,0,2,2" width=875 height=585 align=center hspace=0 vspace=0 >
        <param name="CmdPort" value="5920">
        <param name="StreamPort" value="5921">
        </body>
        </html>

    When I google this page's title, I see a number of other domains that host the same page. Whois:

        domain:         NNOV.RU
        nserver:        ns.kis.ru.
        nserver:        ns.nnov.ru. 78.25.80.210
        nserver:        ns1.kis.ru.
        nserver:        ns2.kis.ru.
        state:          REGISTERED, DELEGATED, VERIFIED
        org:            "Agentstvo Delovoj Svjazi", Ltd
        registrar:      RU-CENTER-REG-RIPN
        admin-contact:  https://www.nic.ru/whois
        created:        1996.10.23
        paid-till:      2012.11.01
        free-date:      2012.12.02
        source:         TCI
        Last updated on 2012.06.16 04:20:46 MSK
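
    To pin the traffic to an actual process rather than the generic "Network Service" label, the built-in tools are enough; a sketch (the address and PID are whatever the earlier commands report):

        C:\> nslookup bitum.nnov.ru
        C:\> netstat -ano | findstr <address-from-nslookup>
        C:\> tasklist /svc /fi "PID eq <pid-from-netstat>"

    The last column of netstat -ano is the owning PID, and tasklist /svc lists the services hosted inside that process when it is a svchost.exe, which narrows down what to disable or investigate. The page title in the source above ("DVR WebViewer") suggests the remote end is some kind of DVR/IP-camera web viewer, which may ring a bell about software installed on the PC.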

    Read the article

  • Virtual Machine loses network connectivity on Hyper-V Cluster

    - by Chris W
    We're running a number of VMs on a 6 node failover cluster of blades using Hyper V. We have an intermittent issue (every few days at different times - not a fixed frequency) of VMs losing network connectivity. Console access to the VM suggests all is fine and the underlying blade has normal connectivity. To resolve the problem we either have to re-start the VM or, more usually, we do a live migration to another blade which fires up connectivity and we then migrate it back to the original blade. I've had 3 instances of this happen with a specific VM running on a particular blade however it has happened once with a different VM running on a different blade. All VMs and blades have the same basic setup and are running Windows 2008 R2. Any ideas where I should be looking to diagnose the possible causes of this problem as the event logs provide no help? Edit: I've checked that each blade is running the latest NIC drivers and all seem to be fine. Something that is confusing me - a failover or restart of the VM resolves the issue. Whilst I need to work out the underlying issue that is causing the NICs to hang I'm also concerned that the VM didn't failover to another node which would have solved the outage for me. Is there a way to configure the cluster so that it can tell that the VM guest has lost connectivity and fail it over? As things stand the cluster is assuming that the VM is running happily as I presume Hyper V says everything is great even though there is a problem.

    Read the article

  • Help building Maya render node spec

    - by Ak
    Hi there, I'm looking to build 4x Maya render slaves/nodes for a friend of mine when his project gets green-lit. The project involves Mental Ray and lots of glass. I'm unsure if the new i7 9xx or 8xx with hyper-threading will do any better than a Core 2 Quad of the same (or close enough) speed. Does hyper-threading make a difference to Maya, or is it more about per-core performance? I'm sure he'd prefer I build another render node rather than pay for a bleeding-edge CPU that only adds fractionally more GHz.
    -- The rest of the spec so far:
    4GB - 8GB RAM
    64-bit OS: probably Windows 7 (I know Linux is free, but I want to build something my friend can support himself as easily as he supports his own workstation)
    1TB HDD to hold textures, Maya files and renders, which will be copied to central storage later
    Mobo with on-board video, gigabit NIC
    500 - 650 watt PSU
    Desktop case something like a Cooler Master ATCS 840
    The machines will be sold afterwards if necessary.
    -- If anyone has had experience in Maya and has done any tests with the new CPUs vs. the older ones I'd really appreciate your input.

    Read the article

  • Latency between IIS and SQL on same physical, two VMs

    - by Jerad Rose
    I have a single server (2x4 core CPUs, 32GB ram), that is a Windows Server 2012 Hyper V host, and it hosts two guest VMs (also Windows Server 2012 instances). One of them is a web server, the other is a SQL server. When hitting a page that loops over 50 records, there is noticeable latency. I capture/report the timings of each iteration on the loop, and each iteration is about 20-30 milliseconds. Of course, this amounts to over a second of latency for the whole loop. I thought maybe SQL needed to be tuned, but running profiler on it, the queries are showing almost 0 duration, so it seems the bottleneck is in transit between the two VMs. I have both VMs configured to use the actual NIC (vs. using a VNIC), so maybe that's part of my problem. Also, this is a classic ASP site, so it's using the SQL OLE DB provider, and I'm wondering if that is part of the problem. This is a new server setup, from an existing Windows 2003/IIS6 server setup where both web and DB run on the same server instance (no virtualization). On that setup, there is no such latency when looping over the cursor like this. But there are so many variables, I'm not sure where to start ruling things out.
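
    One way to test the "transit between the two VMs" theory directly is to measure raw TCP round-trip time to the SQL port, independent of ASP and the OLE DB provider. A sketch using Sysinternals psping (the server name is a placeholder):

        C:\> psping -n 200 sqlvm:1433

    Sub-millisecond averages would point back at the data access layer — 50 iterations at 20-30 ms each is the classic signature of one query round-trip per loop iteration — while averages in the tens of milliseconds would implicate the virtual networking path, in which case putting both VMs on the same Hyper-V virtual switch (rather than separate paths out to the physical NIC) is the first configuration change worth trying.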

    Read the article

  • Performance mitigations serving content from a UNC share via IIS 6

    - by codepoke
    I have a quad processor vmware instance running Windows 2003 and 1gb ethernet. I'm comparing serving the exact same heavy .NET 2.0 content from the local hard drive versus serving it from a UNC drive. If I use WCAT to load it down, I see about a 40% reduction in transactions/sec while serving from the UNC. Processor time barely moves from 45% and the NIC sits around 40% either way. I don't see any significant memory loading either way. Context Switches/Transaction, though, more than doubles when serving from the UNC. Pathlengths more than double as well, but I believe that's just an expression of the effect of context switching. All told, it looks like the bottleneck is processor switching while waiting on content from the UNC share. Is my experience about the norm? Is there some mitigation I might try? I twiddled HKLM\System\CurrentControlSet\Services\LanmanWorkstation\Parameters\MaxCmds a little bit per http://technet.microsoft.com/en-us/library/dd296694(WS.10).aspx, but to no obvious effect. I kind of doubt my problem is lack of connections, but rather just the act of switching from thread to thread while waiting on data.
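
    If the MaxCmds change is revisited, it only helps when the file server side allows at least as many outstanding requests; a sketch of the matching pair of settings (2048 is just an example value, and both machines need the Workstation/Server service restarted, or a reboot, to pick it up):

        REM On the IIS box (SMB client): allow more simultaneous outstanding SMB commands
        reg add HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters /v MaxCmds /t REG_DWORD /d 2048 /f
        REM On the file server hosting the UNC share: the server-side ceiling for the same thing
        reg add HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters /v MaxMpxCt /t REG_DWORD /d 2048 /f

    Beyond SMB tuning, the usual mitigation for UNC-hosted content is to let the runtime cache more aggressively (or serve from a local replica of the share) so each request does not pay the remote round-trips that show up here as context-switch overhead.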

    Read the article

  • Word documents very slow to open over network, but fine when opened locally - on one machine

    - by Craig H
    Windows XP, Word 2003, patched. The issue is happening with several Word documents stored on a network drive. The Word documents are clearly a bit wonky (i.e. one is 675k, but if you copy everything but the last paragraph marker into a new document, the new document is only 30k). But that's only part of the problem. On one weird machine, and one machine only, it takes ~20 seconds to open these Word documents from the network drive. Copy the file to C: on that weird machine? Opens immediately. Go to other machines (that are very similar - same patch level, etc.) and open the same document from the network? Opens immediately. Delete normal.dot? 20 seconds. Login with a different user on the weird machine? 20 seconds. Plug the weird machine into a different network port? 20 seconds. So the problem appears to be hardware related (i.e. a wonky internal NIC) or related to a setting that is not profile specific. Any ideas? "Scrubbing" all the documents isn't ideal for several reasons. This is driving me nuts because I swear I ran into this before many years ago and eventually figured it out. But I appear to have lost my notes.

    Read the article

  • Issues resolving DNS entries for multi-homed servers

    - by I.T. Support
    This is difficult to explain, so bear with me. We have 2 domain controllers, each multi-homed to straddle 2 internal subnets (subnet A and subnet B) and provide DNS, DHCP, and LDAP authentication. Each domain controller has 2 DNS entries. Both entries have identical host names, but correspond to subnet A & subnet B respectively (example entries shown):

        dc1 host 192.168.8.1
        dc1 host 192.168.9.1
        dc2 host 192.168.8.2
        dc2 host 192.168.9.2

    We also have a 3rd subnet for our DMZ (subnet C) which neither domain controller has an IP address on; our firewall/routing tables provide access to subnet A from subnet C and vice versa, but don't allow access to subnet B from subnet C. Here's my issue: how can I force/determine which DNS entry is used when a server on subnet C queries either domain controller by host name? Right now it seems to randomly pick one of the two entries, swap the name for the IP address, and that's that. The problem is that if it randomly selects the entry that corresponds to the 9.x subnet B (no access from subnet C), then the server fails to resolve. If it picks the entry for the 8.x subnet A, then it resolves (firewall/routing tables are defined for communication between these 2 subnets). Here's what I'd like to know: What are best practices (if any) for dealing with DNS resolution on subnets that the DNS servers don't have a presence on? Can I control something akin to a metric value to force an order of DNS resolution when there are multiple entries for the same host name that correspond to different IP subnets? Should I even have 2 DNS host entries for the same name? Here's what I'd like to avoid: making edits to the HOSTS files of servers on subnet C to force DNS resolution of the hostname to the appropriate subnet, and adding NICs to the DCs to have them straddle the DMZ as well, thus obtaining a third DNS entry that corresponds to subnet C. Again, my apologies if this was too verbose / unclear. Thanks!
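
    If registering only the reachable address is acceptable — i.e. hosts on subnet B can also reach the DCs' 8.x addresses — the usual cleanup is to delete the 9.x records and stop them from coming back. A sketch with placeholder zone and server names:

        C:\> dnscmd dc1 /RecordDelete corp.example.com dc1 A 192.168.9.1 /f
        C:\> dnscmd dc1 /RecordDelete corp.example.com dc2 A 192.168.9.2 /f

    Then, on each DC's subnet-B adapter, clear "Register this connection's addresses in DNS" so dynamic update does not recreate the records. If subnet B genuinely needs the 9.x entries, the remaining options are a DMZ-specific DNS zone that only carries the 8.x records, or the HOSTS-file and extra-NIC routes the post is trying to avoid.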

    Read the article

  • Routing based on source address in Windows Server 2008 R2

    - by rocku
    Hi, I'm implementing a direct routing load-balanced solution using Windows Server 2008 R2 as the back-end server. I've configured a loopback interface with the external IP address. This works: I am receiving packets with the external IP address and respond to them appropriately. However, our infrastructure requires that traffic which is being load balanced should go through a different gateway than any other traffic originating from the server, i.e. updates etc. So basically I need to route packets based on source address (the external IP) to another gateway. The built-in Windows 'route' command allows routing based on destination address only. I've tried setting a default gateway on the loopback interface and fiddled with the weak/strong host send/receive parameters on the interfaces, however this didn't work. Is there any way around this, possibly using third-party tools?

    Read the article

  • Linux Kernel Packet Forwarding Performance

    - by Bob Somers
    I've been using a Linux box as a router for some time now. Nothing too fancy, just enabling forwarding in the kernel, turning on masquerading, and setting up iptables to poke a few holes in the firewall. Recently a friend of mine pointed out a performance problem. Single TCP connections seem to experience very poor performance. You have to open multiple parallel TCP connections to get decent speed. For example, I have a 10 Mbit internet connection. When I download a file from a known-fast source using something like the DownThemAll! extension for Firefox (which opens multiple parallel TCP connections) I can get it to max out my downstream bandwidth at around 1 MB/s. However, when I download the same file using the built-in download manager in Firefox (uses only a single TCP connection) it starts fast and the speed tanks until it tops out around 100 KB/s to 350 KB/s. I've checked the internal network and it doesn't seem to have any problems. Everything goes through a 100 Mbit switch. I've also run iperf both internally (from the router to my desktop) and externally (from my desktop to a Linux box I own out on the net) and haven't seen any problems. It tops out around 1 MB/s like it should. Speedtest.net also reports 10 Mbits speeds. The load on the Linux machine is around 0.00, 0.00, 0.00 all the time, and it's got plenty of free RAM. It's an older laptop with a Pentium M 1.6 GHz processor and 1 GB of RAM. The internal network is connected to the built in Intel NIC and the cable modem is connected to a Netgear FA511 32-bit PCMCIA network card. I think the problem is with the packet forwarding in the router, but I honestly am not sure where the problem could be. Is there anything that would substantially slow down a single TCP stream?
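
    Two cheap checks on the router narrow this down: whether connection tracking is under pressure, and whether TCP window scaling survives the NAT (a single stream needs a large window to fill even 10 Mbit at typical internet latencies). A sketch — paths and interface names vary slightly between kernel versions:

        # On the router: connection-tracking pressure?
        cat /proc/sys/net/netfilter/nf_conntrack_count
        cat /proc/sys/net/netfilter/nf_conntrack_max
        # On the downloading desktop: window scaling and SACK enabled?
        sysctl net.ipv4.tcp_window_scaling net.ipv4.tcp_sack
        # Capture one slow single-stream download on the router's WAN interface and look for
        # retransmissions or a receive window that never grows:
        tcpdump -ni eth0 -w single-stream.pcap host <download-server>

    A capture full of retransmissions points at the WAN side or the PCMCIA card; a clean capture whose window stays tiny points at scaling or buffer settings on the endpoints rather than the router's forwarding path.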

    Read the article

  • SNAT through Racoon IPSec VPN

    - by Mite fine d'ailes
    I am trying to route traffic from a device (that I will call "target") connected to my Ubuntu box (that I will call "host") to servers at a remote office. The host uses a Racoon IPSec VPN, connected through a NIC called efix. This creates an aliased IF called efix:0 which has IP address 192.168.190.132. It is able to reach the servers. The link between host and target is an Ethernet link, using IP addresses 10.0.0.1 on IF eusb for the host and 10.0.0.2 on IF eth0 for the target. I have set up the following routes and iptables entries.

    On target:

        10.0.0.0   *          255.255.255.0   U    0   0   0   eth0
        default    10.0.0.1   0.0.0.0         UG   0   0   0   eth0

    On host:

        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -j SNAT --to 192.168.190.132
        iptables -A FORWARD -s 10.0.0.0/24 -j ACCEPT
        iptables -A FORWARD -d 10.0.0.0/24 -j ACCEPT

    Using Wireshark to monitor an HTTP GET, I can see SYN packets from the target go all the way to the server, but the server's SYNACK packets stop at the host and are not forwarded to the target. Am I missing something here? Isn't SNAT supposed to keep track of the connections?
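
    SNAT is indeed stateful, so the next question is where on the host the reply is dropped; watching both sides of the box and the conntrack table usually answers it. A sketch — interface names come from the post, port 80 because the test is an HTTP GET:

        # Does the SYN/ACK arrive on the VPN side, and does anything leave towards the target?
        tcpdump -ni efix 'tcp port 80'
        tcpdump -ni eusb 'tcp port 80'
        # Is there a conntrack entry mapping 10.0.0.2 to 192.168.190.132 for this connection?
        grep 10.0.0.2 /proc/net/ip_conntrack    # or /proc/net/nf_conntrack, or `conntrack -L`, depending on kernel/tools
        # Reverse-path filtering silently discarding the de-tunnelled replies is worth ruling out:
        sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.eusb.rp_filter

    If the conntrack entry exists and the reply shows up on efix but never on eusb, the kernel is accepting the packet and then refusing to forward it, which points at rp_filter or at the IPsec policy only covering 192.168.190.132 rather than the forwarded flow.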

    Read the article

  • Ubuntu: How to login without entering username and password

    - by torbengb
    I'm a newbie running Ubuntu 9.10. I have two users (wife and me), and each user's screensaver is set to lock so that on wakeup, we get to choose which user's desktop to go to. However, Ubuntu requires a password, so this is pretty tedious. I'd like to switch users without entering any password. I know about this trick that works for the boot login, but it doesn't deal with multiple users. Is it possible to set empty passwords for users in Ubuntu, or skip the password in other ways? (I'm expecting real Linux users to suggest that passwordless users must not get any rights and there be an admin user with a strong password. Yes, you're right. But that's not what this question is about. Thanks.)
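
    Ubuntu's GDM in the 9.10/10.04 era has a group-based mechanism for exactly this, and the lock-on-wake behaviour is a separate screensaver setting. A sketch, with "wife" as a placeholder username:

        sudo passwd -d wife                  # option 1: remove the account's password entirely
        sudo adduser wife nopasswdlogin      # option 2: keep the password but let GDM skip asking for it
        # Stop the screensaver from demanding a password on wake (run as each user, GNOME 2):
        gconftool-2 --set /apps/gnome-screensaver/lock_enabled --type bool false

    If the nopasswdlogin group does not exist yet, sudo addgroup nopasswdlogin creates it first; GDM's PAM configuration on these releases already honours membership of that group.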

    Read the article
