Search Results

Search found 11954 results on 479 pages for 'gets'.


  • Terminal server performance over high latency links

    - by holz
    Our datacenter and head office are currently in Brisbane, Australia, and we have a branch office in the UK. We have a private WAN with a 768k link to our UK office, and the latency is about 350ms. The terminal server performance is really bad. Applications that don't have much animation or any images seem to be okay, but as soon as they do, the session is almost unusable. PowerPoint and Internet Explorer are good examples of apps that make it run slowly. And if there is an image in your email signature, Outlook will hang for about 10 seconds each time a new line is inserted, while the image gets moved down a few pixels. We are currently running Server 2003. I have tried Server 2008 R2 RDS, and also a third-party solution called Blaze by a company called Ericom, but it is still not much better. We currently have a five-level dynamic class of service with priority in the following order:

        1. VoIP
        2. Video
        3. Terminal Services
        4. Printing
        5. Everything else

    When testing the terminal server performance, we monitored the link using NetFlow and had plenty of bandwidth available, so I believe it is a latency issue rather than a bandwidth issue. Is there anything that can be done to improve performance? Would Citrix help at all?

  • Port forwarding for samba

    - by EternallyGreen
    Alright, here's the setup: Internet - Modem - WRT54G - hubs - WinXP workstations and a Linux SMB server. It's basically a home-style distributed internet connection setup, except it's at a school. What I want is remote, offsite SMB access. I figured I'd need to find out which ports need forwarding and then forward them to the server on the router. I'm told in another question on SF that multiple ports will need forwarding and that it gets somewhat complicated. What I need to know is which ports require forwarding for this, and what complications or vulnerabilities could arise from it. Any additional information you think I should have before doing this would be great.

    I'm told SMB doesn't support encryption, which is fine. Given that I set up authentication/access control, all this means is that once one of my users authenticates and starts downloading data, the unencrypted traffic could be intercepted and read by a MITM, correct? Given that that's the only problem arising from the lack of encryption, it is of no concern to me. I suppose it could also mean a MITM injecting false data into the data stream, e.g. a user requests file A, and the MITM intercepts and replaces the contents of file A with some false data. This isn't really an issue either, because my users would know that something was wrong, and it's not likely anyone would have an incentive to do this anyway.

    Another thing I've been informed of is Microsoft's poor implementation of SMB and its poor track record for security. Does this apply if only the client end is MS? My server is Linux.
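
    For reference, a sketch of the forwards involved, assuming an iptables-capable firmware such as DD-WRT and a server at the hypothetical address 192.168.1.10 (the stock Linksys firmware exposes the same thing through its port-forwarding page):

        # NetBIOS name and datagram services (UDP 137-138) and session service (TCP 139)
        iptables -t nat -A PREROUTING -p udp --dport 137:138 -j DNAT --to 192.168.1.10
        iptables -t nat -A PREROUTING -p tcp --dport 139 -j DNAT --to 192.168.1.10
        # Direct-hosted SMB over TCP, which modern clients use by default
        iptables -t nat -A PREROUTING -p tcp --dport 445 -j DNAT --to 192.168.1.10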

  • What can I do to lower bandwidth cost on a bandwidth heavy site?

    - by acidzombie24
    The easiest answer is a CDN, but I'd like to ask anyway. A friend of mine has a server that is used for mirror downloads. He says he is doing about 10 TB of bandwidth a month, which shocked me (I wonder if he is lying). I've seen his site and he has no ads. I suspect he might close his website once he gets the bill. Anyway, since his CPU/RAM is not being used and his HD usage is around 15 GB, I was wondering what he can do to lower cost if he continues running this site. I said put up ads, but I don't know if ads would cover it. I found one CDN which offers $0.07/GB; 10,240 GB (10 TB) * $0.07 = about $717 a month. That seems a little steep, but he is moving lots of traffic because it is a mirror site. Also, using a CDN doesn't quite make sense, as he doesn't need multiple servers hosting the files in different areas (which is one reason he isn't using one now). He just needs a big upload pipe. Is there something he can do? At the moment he is paying $200 a month for a dedicated server, and he is using WAY more bandwidth than he should be.

    Side question: can gzipping large, already-compressed files (zip, rar, etc.) help?

  • How to run multiple shell scripts in parallel

    - by tom smith
    I've got a few test scripts, each of which runs a test PHP app. Each script runs forever. So cat.sh, dog.sh, and foo.sh each run a PHP script, and each shell script runs its PHP app in a loop, so it runs forever, sleeping after each run. I'm trying to figure out how to run the scripts in parallel, and at the same time see the output of the PHP apps in the stdout/term window. I thought simply doing something like this in a shell script would be sufficient:

        foo.sh > &2
        dog.sh > &2
        cat.sh > &2

    but it's not working:

        * foo.sh runs foo.php once, and it runs correctly
        * dog.sh runs dog.php in a never-ending loop; it runs as expected
        * cat.sh runs cat.php in a never-ending loop *** this never runs!!!

    It appears that the shell script never gets to run cat.sh. If I run cat.sh by itself in a separate window/term, it runs as expected... Thoughts/comments?
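
    A minimal sketch of the direction I've been thinking about (assuming the scripts are in the current directory): launch each one as a background job so they run concurrently, with their output still attached to the terminal.

        #!/bin/bash
        # Each script becomes a background job; stdout/stderr stay on this terminal
        ./foo.sh &
        ./dog.sh &
        ./cat.sh &
        # Keep the wrapper alive until the jobs exit (Ctrl+C kills the lot)
        wait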

  • What can lead to a zone memory exhaustion and how Nginx reacts to it?

    - by Miles Hughes
    What is a possible scenario for exhausting the memory designated to a connection zone with the limit_conn_zone directive, and what are the implications in that case? Suppose I have this in my configuration:

        http {
            limit_conn_zone $binary_remote_addr zone=connzone:1m;
            ...
            server {
                limit_conn connzone 5;

    which, according to the documentation, allocates 16000 states for connzone on a 64-bit server. The documentation also says that if the storage for a zone is exhausted, the server will return error 503 (Service Temporarily Unavailable) to all further requests. Well, OK. But what does that mean in practice? When does this happen? Who receives those 503s? Does it mean that if the number of IPs somehow associated with connzone hits 16000, everyone gets a 503 and it's all over? How does nginx decide? The documentation is weirdly vague on this. So, considering the example config, who would actually get a 503, under which circumstances, and how would things go from there? The same question applies to request zones.
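
    For concreteness, the fuller shape of the config I'm asking about, with a request zone alongside the connection zone (a sketch; the zone names and numbers are arbitrary):

        http {
            limit_conn_zone $binary_remote_addr zone=connzone:1m;
            limit_req_zone  $binary_remote_addr zone=reqzone:1m rate=10r/s;

            server {
                limit_conn connzone 5;
                limit_req  zone=reqzone burst=20;
            }
        }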

  • Problem configuring Apache/Wordpress on subdomain

    - by friism
    I have two servers (one LAMP, one Windows) and one website with an associated blog. I'm running the main site on the Windows server and the blog on the LAMP server, using Wordpress. The main site is accessed at http://folketsting.dk (it's in Danish -- sorry); the blog is accessed at http://blog.folketsting.dk (this link is bad, read on). The main site works fine. The blog works, except for the front page. Example of a working post: http://blog.folketsting.dk/2009/10/09/ftlive/. The front page of the blog (http://blog.folketsting.dk), however, shows HTML from http://folketsting.dk (except for the CSS and JavaScript). In fact, any URL other than the front page "works" and gets served by Wordpress, e.g. http://blog.folketsting.dk/foo. I cannot -- for the life of me -- understand how the LAMP server running http://blog.folketsting.dk manages to serve up content generated by the Windows server running http://folketsting.dk. Looking at the response headers at http://blog.folketsting.dk, it's evident that the content originates from Apache, not IIS. I'm pretty sure it's not a DNS issue, since the problem is evident even when accessing the raw IP, e.g. http://130.226.142.141/ vs. http://130.226.142.141/foo. I'm thinking it's a bad config in Apache... any clues?
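
    In case it helps, the vhost shape I'd expect the blog to need; this is a sketch, not my actual config (the DocumentRoot is hypothetical). My rough theory is that if a request matches no ServerName, Apache falls back to the first/default vhost, which might be the one fetching the other site:

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName blog.folketsting.dk
            DocumentRoot /var/www/wordpress
        </VirtualHost>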

  • Routing WIFI and LAN for specific traffic

    - by jakebird451
    I have two network devices aboard my MacBook Pro:

        * WIFI (en1): used for general traffic; connects to an IP of 192.168.19.* via DHCP
        * LAN (en0): used for specific traffic; connects with a static IP of 192.168.2.10. It does not connect to a router, only to a switch for a direct connection.

    I have 4 IP addresses I need to reach on the LAN:

        192.168.2.1
        192.168.2.21
        192.168.2.20
        192.168.2.30

    The rest of the traffic needs to go over WIFI. I have tried setting up a routing table for the specific IP addresses, but I only managed to mess up my network. I do not venture out into the world of networking too often, but this was the latest command I tried:

        sudo route add -host 192.168.2.30 -interface en0

    This command killed my ability to use ping. It told me that ping could not allocate memory (is that even possible?). It also killed my WIFI access. Logging out and back in fixed the issue. I really do not mind making this solution permanent, so I am fine with temporary routing.

    EDIT: I have since tried:

        sudo route flush
        sudo route add default 192.168.19.1

    This gets everything to work for about a minute. But after that minute it "forgets" the routing to WIFI while retaining LAN's (en0) routing. If I unplug and replug my LAN (en0) cable, it works for another minute.
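
    For the record, the full sequence I've been experimenting with (a sketch, assuming 192.168.19.1 really is the WIFI gateway as in the EDIT above): explicit host routes out en0 for the four LAN addresses, with everything else following the WIFI default.

        sudo route flush
        sudo route add default 192.168.19.1
        # Route only the four LAN hosts over the wired interface
        for host in 192.168.2.1 192.168.2.20 192.168.2.21 192.168.2.30; do
            sudo route add -host "$host" -interface en0
        done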

  • Splitting Servers into Two Groups

    - by Matt Hanson
    At our organization, we're looking at implementing an informal internal policy for server maintenance. What we're looking at doing is completing maintenance on our entire server pool every two months; each month we'll do half of the servers. What I'm trying to figure out is some way to split the servers into the two groups. Our naming convention leaves much to be desired (though it's getting better), so splitting by name or number doesn't really work. I can easily take a list of all the servers and split it in two, but with new servers being added constantly and old ones retired, that list would be a headache to maintain. I'd like to be able to look at any given server and know whether it should have its maintenance done this month or next. For example, it would be nice to look at the serial number: if it started with an even number, the server would get maintenance on even months, and vice versa. That example won't work, though, as a little over half of the servers are virtual. Any ideas?
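
    One illustrative possibility, not from the question itself: derive the group from a stable hash of the hostname, so any server (physical or virtual) sorts itself into a group deterministically with no list to maintain. A sketch in shell, assuming a box with coreutils; the same idea ports to PowerShell for a Windows fleet:

        #!/bin/bash
        # Group = first 8 hex digits of the hostname's MD5, mod 2.
        # Group 0 -> maintenance on even months, group 1 -> odd months.
        group=$(( 0x$(hostname | md5sum | cut -c1-8) % 2 ))
        echo "$(hostname) is in maintenance group $group"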

  • PPTP server connection closes - Too much data?

    - by Sebastian Hoitz
    I set up a PPTP server for my company. However, every time another computer is connected to this server (i.e. our backup server) and a lot of data gets transferred, the connection to this computer closes. In the syslog on the PPTP server I find this:

        Apr 22 12:44:34 komola-chase pptpd[2581]: CTRL: Reaping child PPP[2583]
        Apr 22 12:44:34 komola-chase pppd[2583]: MPPE disabled
        Apr 22 12:44:34 komola-chase pppd[2583]: Connection terminated.
        Apr 22 12:44:34 komola-chase pppd[2583]: Exit.
        Apr 22 12:44:34 komola-chase pptpd[2581]: CTRL: Client 192.168.0.11 control connection finished
        Apr 22 12:55:11 komola-chase pptpd[2674]: GRE: xmit failed from decaps_hdlc: No buffer space available
        Apr 22 12:55:11 komola-chase pptpd[2674]: CTRL: PTY read or GRE write failed (pty,gre)=(6,7)
        Apr 22 12:55:11 komola-chase pppd[2675]: Modem hangup
        Apr 22 12:55:11 komola-chase pppd[2675]: Connect time 23.0 minutes.

    Hopefully you can help me figure out what is wrong. As far as I can tell, no compression is enabled on the PPTP server (the nobsdcomp option). Thank you!
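
    For context, a sketch of the options file I mean, assuming a stock Debian-style layout (the exact path and the surrounding options may differ on your distribution):

        # /etc/ppp/pptpd-options (excerpt)
        refuse-pap
        refuse-chap
        require-mppe-128
        nobsdcomp    # disable BSD-Compress compression
        novj         # disable Van Jacobson TCP/IP header compression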

  • Network Traffic Log

    - by Chris Becke
    Background: on my "home" network I have a Linksys WRT54GL router providing my internet access as well as a wireless AP. Connected, I have:

        * 2 Windows PCs (wired)
        * at least one laptop (wired)
        * some 802.11-enabled handheld consoles (PSPs)
        * a Nintendo Wii
        * some Windows XP PCs used by the people in the granny flat

    Where I live (South Africa), 1 GB worth of monthly cap is, while not expensive, costly enough that I'd like to be sure all the bandwidth used by devices on my network is... well... legitimate, and not the result of neighbors parasiting my wireless, malware, or just "liberal" download policies in my software. I got the WRT54GL on the understanding that there were custom firmwares (DD-WRT and Tomato) that allowed bandwidth tracking, but there doesn't seem to be any facility to get a log of traffic that can be examined to see (a) which local devices were the biggest consumers of bandwidth and (b) what they were connected to. What tools are there for logging traffic such that, when it gets to that OMG moment in the month when all my bandwidth is gone, I have a chance to find out what the hell used it all up (and hopefully attempt some corrective action)?

  • Is there any way to detect when nginx has completed a graceful shutdown?

    - by Daniel Vandersluis
    I have a Ruby on Rails application which runs on Passenger and nginx, with one main webserver and multiple application servers. I am trying to update my deployment process to minimize (or ideally remove) any downtime caused by the deployment. The main roadblock right now is that Passenger takes some time to restart (i.e. reload the application), so to get around this I want to stagger my restarts so that only one app server gets restarted at a time. To do this without losing any long-running Passenger processes, I am thinking I need to gracefully shut down the app server's nginx instance, which will cause it to stop accepting new connections but continue to process the existing ones; meanwhile, HAProxy will detect that the app server is down and route new requests to the other server. However, assuming there is a long-running process, I am not sure how to detect when the graceful shutdown has completed so that I can start nginx back up. Since the shutdown is triggered by sending a signal (i.e. kill -QUIT $( cat /var/run/nginx.pid )), and the kill command returns immediately, I cannot simply chain commands (i.e. kill ... && touch restarted), as the touch command would execute immediately, even if nginx hasn't completed its shutdown. Is there any good way to do this?
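
    A minimal sketch of the sort of wait loop I have in mind, assuming the pid file path above: kill -0 delivers no signal and only tests whether the process still exists, so polling it detects when the master process has actually exited.

        #!/bin/bash
        pid=$(cat /var/run/nginx.pid)
        kill -QUIT "$pid"
        # Poll until the nginx master process is gone
        while kill -0 "$pid" 2>/dev/null; do
            sleep 1
        done
        touch restarted   # only reached once the graceful shutdown has finished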

  • Windows 8 auto-hibernate from sleep not working on Retina MacBook Pro

    - by frenchglen
    I have a similar question to this one, only my context is the 15" Retina MacBook Pro and Windows 8. I have just the original Mac OS X Mountain Lion on there, then Windows 8 via Boot Camp. No rEFIt is installed; I just press ALT every time I start Windows, actually as a security measure: if the laptop is stolen, tech-unsavvy thugs will think it's only a Mac and won't discover my Windows install as quickly as they otherwise would, and by that time I will have remotely activated various anti-theft Mac apps and nabbed them that way.

    SO: as the related question asks, why isn't it behaving like it should? The Windows 7 FAQ states:

        Will sleep eventually drain my laptop battery? If your laptop battery charge gets critically low while the computer is asleep, Windows automatically puts the laptop into hibernation mode.

    But this is just not happening on my rMBP under Windows 8. It seems that EVERY time the laptop goes to sleep (when it reaches 10%) and I arrive home, plug it in, and hope to simply resume my work, it has NOT saved the session to disk, and I lose ALL my work. Whose fault is it? Windows 8's (a bug, grr)? Or Apple's EFI (maybe fixable by editing EFI options, or do I perhaps have to install rEFIt to make it work)? Or can changing Windows power options somehow fix the problem? Thanks for your help.
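
    For what it's worth, a sketch of the settings I'd check from an elevated command prompt (the aliases below are the documented powercfg ones; I'm assuming hibernation itself hasn't been disabled):

        powercfg /hibernate on
        :: Set the critical-battery action while on battery to Hibernate (2)
        powercfg -setdcvalueindex SCHEME_CURRENT SUB_BATTERY BATACTIONCRIT 2
        powercfg -setactive SCHEME_CURRENT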

  • XP CD doesn't offer repair option

    - by SLaks
    I'm fixing an IBM ThinkPad laptop running XP Pro which doesn't boot all the way (it gets past the XP logo boot screen, a movable mouse cursor appears, and it goes no further, even in Safe Mode) after being bumped a bit. I'd like to do a repair install. I booted it from an XP Pro CD, but the repair install option (not the Recovery Console) doesn't appear. After pressing F8 to accept the EULA, it says "Loading setupp.ini", then immediately goes to a partition list (it never says "Searching for previous installations of Microsoft Windows"). If I select the partition, it warns me that there is already a Windows installation in that partition and that it will be completely obliterated if I continue (so I know that it does see the contents of the hard disk). I booted the same CD in an XP virtual machine, and it offered to repair the XP installation in the virtual machine, so the problem isn't with the CD. Does anyone know how to make it do a repair install (or have any other ideas to solve the problem)? It might not show up because it's an OEM installation (but not an OEM CD), but that's just a guess.

  • OS X AFP shares and access

    - by gbrandt
    I am running 10.5.6 Client as a mini server and am having problems with AFP shares. All clients are OS X 10.5.7. I have created three users for 'File Sharing' only on the 'server'. I have created groups and placed these users into specific groups, and I have created ACLs to give each group access to certain shares. Two of those users can read and write to any share; one user cannot write to the shares, with different results:

        * When copying a directory, only the directory is created; no files inside are copied, and the OS does not give any errors.
        * When copying a single file I get three dialogs: "You may need to enter the name and password for an administrator on this computer to change the item named 'xxxx'"; "The item 'xxxxx' contains one or more items you do not have permission to read. Do you want to copy the items you are allowed to read?"; and "The operation cannot be completed because you do not have sufficient privileges for some of the items." A file gets created on the server, but it is empty.

    My ACL for the group this user belongs to is:

        0: group:projectmembers allow list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,file_inherit,directory_inherit
        1: group:informationtechnology inherited allow list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,file_inherit,directory_inherit
        2: group:executive inherited allow list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,file_inherit,directory_inherit
        3: group:everyone inherited deny list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,file_inherit,directory_inherit

    Users 1 and 2 belong to informationtechnology, executive, and projectmembers; they can read and write freely on the share. User 3 belongs only to projectmembers and cannot. I have read that this is a UID issue; however, users 1 and 2 do not have matching UIDs across clients and server and they work, so I don't think that's the case. Any ideas?
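
    In case it's useful to anyone answering, a couple of quick checks I can run on the server (the share path and username below are hypothetical):

        # Show the ACL actually applied to the share directory
        ls -led /Volumes/Data/Share
        # Compare the numeric UID/GID the server has for the troublesome user
        id user3
        dscl . -read /Users/user3 UniqueID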

  • CopSSH SFTP -- limit users access to their home directory only

    - by bradvido
    Let me preface this by saying I've read and followed the instructions in this FAQ many times: http://www.itefix.no/i2/node/37. It does not do what the title claims: it allows every user access to every other user's home directory, as well as access to all subfolders below the CopSSH installation path. I'm only using this for SFTP access, and I need my users sandboxed into their home directories only. If you know a fool-proof way to lock users down so they can see only their home directory and its subfolders, stop reading now and reply with the solution.

    The details. Here is exactly what I tried as I followed the FAQ. My CopSSH installation directory is C:\Program Files\CopSSH.

        net localgroup sftp_users /ADD
            (create a user group to hold all my SFTP users)
        cacls c:\ /c /e /t /d sftp_users
            (for that group, deny access at the top level and all levels below)
        cacls "C:\Program Files\CopSSH" /c /e /t /r sftp_users
            (allow my user group access to the CopSSH installation directory and its subdirectories)

    For each SFTP user, I create a new Windows user account, then:

        net localgroup sftp_users sftp_user_1 /add
            (add my user to the group I've created)

    Next I open the activate user wizard for CopSSH, choosing the user and "/bin/sftponly", and leave these options checked:

        * Remove copssh home directory if it exists
        * Create keys for public key authentication
        * Create link to user's real home directory

    This works; however, every user has access to every other user's home directory as well as the CopSSH root directory. So I tried denying all users access to the user home directory:

        cacls "C:\Program Files\CopSSH\home" /c /e /t /d sftp_users

    Then I tried adding permissions on a user-by-user basis for each user's home\username folder. However, these permissions were not allowed by Windows, because the deny rule I created at the home directory was being inherited and overriding my allow rule. The next step for me would be to remove the deny rule at the home directory and, for each user folder, add a deny rule for every user it doesn't belong to plus an allow rule for the one user it does belong to. However, as my user list gets long, this will become very cumbersome. Thanks for the help!
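
    Purely as a sketch of that last idea (batch syntax from memory; it assumes each folder under home is named after its account), something like this could at least automate the per-user grants:

        :: For each user folder, grant the matching account change access
        :: (%%~nxU expands to the folder name, i.e. the account name)
        for /D %%U in ("C:\Program Files\CopSSH\home\*") do (
            cacls "%%U" /c /e /t /g %%~nxU:C
        )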

  • AD server within another network - DNS issues

    - by Harry Muscle
    Here's a quick summary of the environment I support: we have a domain (domain A) with about 20 client computers. The domain server for this domain and all its clients sit within the network infrastructure of a larger domain (domain B). All the computers get their network settings via DHCP from domain B's servers. I have no control over domain B and am unable to change anything about it.

    The problem I have is that currently, in order for my domain's (domain A) clients to resolve the domain server and the shares on it, they have their DNS server IP address set to domain A's domain server (via the default GPO). Unfortunately, when a laptop (Windows or Mac) gets taken home, it is still looking for domain A's server as its DNS server and obviously can't access the internet correctly outside our environment. Ideally I need a solution where the machines use domain A's domain server as their DNS when inside the office and use whatever DNS server DHCP gives them when they are outside the office. However, since I have no control over the office DHCP server, I'm not sure how this can be accomplished. Any help and advice that anyone can offer is highly appreciated. Thanks, Harry.

    P.S. The solution I'm trying to find must require no involvement from the user.
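
    To illustrate the kind of hands-off workaround I'm imagining for the Windows laptops (the DC address and interface name here are hypothetical), a script run at logon or on network change could probe for the domain A server and switch DNS accordingly:

        @echo off
        rem If domain A's DC answers, use it for DNS; otherwise fall back to DHCP
        ping -n 1 10.1.2.3 >nul 2>&1
        if %errorlevel%==0 (
            netsh interface ip set dns "Local Area Connection" static 10.1.2.3
        ) else (
            netsh interface ip set dns "Local Area Connection" dhcp
        )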

  • Error with procmail script to use Maildir format

    - by bradlis7
    I have this code in /etc/procmailrc:

        DROPPRIVS=yes
        DEFAULT=$HOME/Maildir/

        :0
        * ? /usr/bin/test -d $DEFAULT || /bin/mkdir $DEFAULT
        { }

        :0 E
        {
          # Bail out if directory could not be created
          EXITCODE=127
          HOST=bail.out
        }

        MAILDIR=$HOME/Maildir/

    But even when the directory already exists, it sometimes sends a return email with this error: 554 5.3.0 unknown mailer error 127. The email still gets delivered, mind you, but an error code goes back to the sending user as well. I fixed this temporarily by commenting out the EXITCODE and HOST lines, but I'd like to know if there is a better solution. I found this block of code in multiple places across the net but couldn't really find out why this error was coming back to me. It seems to happen when I send an email to a local user. Sometimes the user has a .forward file to send it on to other users, sometimes not, but the result has been the same. I also tried removing DROPPRIVS, just in case it was messing up the forwarding, but it did not seem to affect it.

    Is the line starting with * ? /usr/bin/test the problem? The * signifies a regex condition, but the ? makes it test an integer return value instead, correct? What is that integer being matched against, or is it just the return value being checked? Do I need a space between the two blocks? Thanks for the help.
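
    To make the ? condition concrete: procmail runs the command and the condition matches when the exit status is 0, which is easy to see in a shell (a quick illustration, not part of the original recipe):

        /usr/bin/test -d "$HOME/Maildir/"; echo $?   # prints 0 if the directory exists
        /usr/bin/test -d /nonexistent; echo $?       # prints 1: condition would not match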

  • Outlook express 6 crashes after "synchonizing account"

    - by Ira Baxter
    [EDIT: Revised OE version to match reality] I use Outlook Express (OE) 6.00.3790.3959 to read newsgroups; I regularly scan through about 100 of them. When I select a newsgroup and then select "Synchronize Account", OE goes through the newsgroups, visibly updating the unread message count. It apparently gets to the end and then comes back to check for Watched Conversations, of which I typically have a few scattered across the newsgroups. It invariably crashes with some kind of access fault. A dialog box pops up saying "Want to debug with "; I invariably say No, and the process goes away. I restart OE, select the newsgroup, and skip "Synchronize Account"; I am able to read news and everything seems fine. This behavior has occurred since I set it up several months ago. Is the newsgroups database screwed up? Can I run something like CHKDSK or Outlook's PST repair to fix things up? Any suggestions as to what to do? The system is Windows XP 64, installed in January 2008. I accept Windows Updates and install them on a regular basis. I use Outlook (not OE) for regular email and it behaves perfectly normally.

  • How to keep Ubuntu 11.10 and Kate editor w/terminal from changing command line when changing tabs?

    - by Kairan
    I am programming C using the Kate editor in Ubuntu 11.10. It works great, but when I change tabs in Kate, the terminal line changes to the file path of the tab I click on. Normally this is not a big deal (other than annoyingly adding extra text to my terminal); however, if I am currently RUNNING a C program, it obviously types at the command line, which is not so cool. Example terminal window for my C program (it's at a menu):

        1) select opt 1
        2) select opt 2
        Enter choice:

    (here it waits for input from the user). Now when I click a tab in Kate, it wants to put the cd /path of the file in that tab into the terminal, such as:

        cd /home/user/os/files

    And of course, since my terminal was waiting for input from the user, it gets that command... not good. Perhaps there is no fix, but maybe someone knows? Obviously I could choose NOT to switch tabs, or end the program before switching tabs...

    Note: I probably made the mistake of putting this under Stack Overflow, which is more of a programming area, so a repost here seemed best. I am not sure how to link the questions, but I will paste a hyperlink to that post (I don't want to break any Stack Overflow/Super User rules). Suggestions on merging them are welcome, or should I delete one? StackOverFlow Question

  • Windows 7 Media Center PC not displaying MKV files properly

    - by David
    Does anyone know why Windows Media Center wouldn't correctly display a 16x9 video? I just upgraded my home-built HTPC from Vista Ultimate (hey, it was a giveaway) to Windows 7 Ultimate via a clean install. After that, I installed the DivX 7 beta (to get .MKV support) and AC3Filter (so I could HEAR the MKV files). Previously, under Vista Ultimate (32-bit), I had ArcSoft's Total Theater Pro for playing Blu-rays and decoding MKVs under Media Center. Now when I play a 720p 16x9 MKV file, I get some 'letterboxing', like it's halfway between 4x3 and 16x9, with the aspect ratio looking slightly squished as a result. Here's the weird part: if I use the Media Center connection software on my Xbox 360, it plays perfectly, filling the 16x9 screen edge to edge, just like Vista's WMC software used to. Of course, this beats up the network, because the Xbox goes to the HTPC, the HTPC goes to my WHS machine to fetch the data, the data comes back to the HTPC, and it gets transcoded and streamed back to the Xbox. I'm running the latest drivers for an NVIDIA card (as fetched by Windows 7). I have no idea WHY this is, because if I play "ordinary" (i.e. SD) DivX files that are 16x9, they play just fine, scaled right up to my screen's edges. It puzzles me why the same machine that properly converts the bits for the Xbox can't display/scale them properly for the attached display. Mind you, Windows Media Player exhibits the same symptoms. Ideas?

  • Domino nchronos.exe multiple instances causing server to die, and Sametime problems

    - by Kevin
    I've had this problem for a few months now. I thought it started when I installed the Traveler software on the server to add ActiveSync support, but I removed that and the problem persists. Basically, new instances of nchronos.exe keep spawning (and not ending), so over a period of a few days the server eventually gets drowned in nchronos.exe processes, stops responding, and I need to kill Domino. Last time, my process count was up at about 330; after I killed and restarted Domino, the count went to 160. I'm running Domino 8.5.1 with Fix Pack 2. I don't know if it's relevant, but my Domino server was also acting as a Sametime server, and at around the same time that nchronos started playing up, Sametime also stopped working. None of my users can connect to Sametime, and the Domino log keeps telling me "stpolicy.exe" has terminated. I've googled that and tried a few things, but nothing seems to make Sametime work again. Any thoughts?? Cheers, Kevin

  • Why does ATI 5570 HD video card driver installation cause Windows 7 To Blue Screen?

    - by Mort
    This one is for the hive mind. I have a brand new Dell OptiPlex 760 workstation with 4 gigabytes of RAM running Windows 7 Professional (32-bit). This is a new box with nothing installed other than what was provided directly by Dell. I installed a Sapphire ATI PCI Express 5570 HD. Upon trying to install the 10.4 Catalyst drivers, the system blue screens during the hardware detection phase of the installation process. I have already performed the following troubleshooting steps:

        * Changed system RAM
        * Installed only 2 gigabytes of RAM
        * Installed different versions of Catalyst drivers (10.4 - 9.12)
        * Tried to install only the video component of the driver (vs. the entire Catalyst suite)
        * Made sure Windows 7 was fully updated
        * Flashed the motherboard BIOS to the current version
        * Removed and re-seated the video card
        * Contacted ATI support (we all know how this went...)
        * Verified the power supply is outputting properly

    The blue screen error (via the Windows BugCheck entry in the event log) is 0x000000CA and refers to a plug-and-play error most likely caused by a bad driver. The problem is that the driver installation process never gets far enough to actually install a driver. The resolution center in Windows suggests installing the 10.4 Catalyst driver to resolve the issue (which fails). Looking for some alternate views to resolve this.

  • Is 30 calls / second a lot for one IIS server?

    - by Lieven Cardoen
    We have a RIA application that 300 clients use concurrently in an intranet environment. Together they make 30 calls/second to IIS (ASP.NET) (actually it's 60, but the calls are load-balanced over two IIS servers). Half of the calls fetch an asset (a caching profile is used, so the cache is hit most of the time); the other half save data to SQL Server. Retrieving an asset is done with an .aspx page. Saving the data happens via WebORB, ASP.NET, and SQL Server, so some processing is needed by WebORB (AMF decoding, GZIP, ...). We also use Spring.NET, and some of the container objects have request scope (not a lot). The IIS servers are virtual machines with 4 CPUs and 2 GB RAM, based on Windows 2008 x64 SP2 Enterprise Edition. SQL Server 2008 is used. Apparently the CPU of both IIS servers sits constantly around 60-70%. Now, my question: is a load of 60-70% acceptable, and how could we possibly bring that down (maybe to the point of using only one IIS server)? Also, is 2 GB RAM enough? Assets can be up to 20 MB, but on average they are about 30 KB (the 60-70% load is achieved with assets around 30 KB). The data that gets saved with WebORB is very small (2 KB) and is just one object.

  • Configuring apache and php to handle many connections

    - by Marc
    My preliminary setup is like this:

        * two quad-core 8 GB servers running Debian 6, with PHP and Apache
        * one quad-core 16 GB server running Debian 6, with MySQL

    My plan is to have one 8 GB server act as a proxy server, using Vert.x (Java) to handle connections. I will let Vert.x use HttpClient to send web requests to the second 8 GB server. That one has Apache installed and uses PHP to deliver any information that it gets from the MySQL server on the third, 16 GB server. The main reason I want this setup is to have things separated, so the proxy is the only way into the system; the other two servers will only be reachable from the local network. I can have the Vert.x proxy handle 5000+ concurrent connections, but I don't know how to configure Apache to handle all the requests coming from the proxy. PHP will connect over mysqli with a persistent connection pool of 500-800 connections; the MySQL server seems to have no issues on this front. In previous projects, the Apache part always caused issues, no matter how I set it up. I might not fully understand how to set up Apache: normally Apache should handle many concurrent connections, but for me it just doesn't seem to.
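
    Not from the question, just a sketch of the prefork knobs I usually end up adjusting in Debian's Apache 2.2 config (the numbers are placeholders to show the shape, not recommendations):

        <IfModule mpm_prefork_module>
            StartServers           20
            MinSpareServers        20
            MaxSpareServers        40
            ServerLimit           500
            MaxClients            500
            MaxRequestsPerChild  1000
        </IfModule>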

  • Strange corruption saving from Textpad 5 within Windows 7-64 VirtualBox VM to shared folder with Mac host

    - by joelarson
    I have a fairly new Windows 7 64-bit install running in VirtualBox on a MacBook Pro. I'm using TextPad 5 within that environment to edit source files that live on a shared folder on the Mac host. When I save some of these source files, the saved file ends up with some amount of the end of the file repeated one or more times. For example, a file that has this at the end:

        ...
        return ttp;
        };

    would, once saved, open up with:

        ...
        return ttp;
        };
        };

    It is definitely a problem with how the file gets written as opposed to how it's read, because I can see this no matter what app I open the file with (Notepad and Word in Windows 7, TextWrangler back on the Mac). I've tried saving as ANSI and UTF-8, with and without 'Write Unicode and UTF-8 BOM' checked in TextPad preferences. It doesn't happen with all files, though I can't see any pattern in which files do or don't have the problem. It doesn't happen with files written to the Windows 7 C:\ drive. And so far it doesn't happen with other applications saving files, only TextPad. Any ideas? My versions:

        * TextPad 5.4.2
        * Windows 7 Professional 64-bit, fully up to date
        * VirtualBox 4.0.8 r71778
        * OS X 10.6.7
