Search Results

Search found 18761 results on 751 pages for 'lot'.


  • Analyze a BSOD (irql_less_than_or_equal)

    - by Bruno Reis
    Hello. About 2 months ago I bought a new system and built it at home:

      Motherboard: XFX X58i
      Processor: Core i7 920, using the stock cooler
      Memory: 3x2GB Corsair DDR3 1600
      Video card: NVIDIA GTS 250 (1GB)
      Hard disks: 2x WD 500GB, 7200rpm

    I have 2 screens plugged into the video card, and the system is connected to a 550W PSU. Nothing is overclocked. After building the system, I stressed it a lot with Prime95 and rthdribl to check its stability, and all my tests were perfect. So I reinstalled Win 7 x64 Professional and started using it normally. In the first week (2010-03-15) I got the infamous irql_less_than_or_equal BSOD. Ten days later (2010-03-24) I got another one, then more on 2010-04-09 and 2010-05-04. Two days ago it became worse: I now get one blue screen per day (2010-05-12, 2010-05-13, 2010-05-14). I installed BlueScreenView to try to obtain some information, but I can't extract anything useful apart from the bug check string (irql_less_than_or_equal) and the fact that it was caused by ntoskrnl.exe (the first three at ntoskrnl.exe+71f00, the last four at ntoskrnl.exe+70600 -- which I suspect is the same thing, as Microsoft could have patched this file in the meantime, so the address of the offending function changed). I then stressed my memory sticks with memtest and they worked perfectly. After booting, I stressed my GPU with FurMark and RTHDRIBL; everything was fine. Then I stressed the CPU with 4 instances of Prime95 while monitoring the temperature -- which never exceeded 85°C with the case closed -- and everything was fine. Finally I stressed the whole system with HeavyLoad for a very long time, and everything worked just fine. So I have stressed most of the components of the system, but couldn't get any useful information out of it. Do you have any hint on what else I can do to find the culprit? Thanks, Bruno
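
    Not something from the original post, but when BlueScreenView runs out of detail, loading one of the minidumps into WinDbg (part of the Debugging Tools for Windows) will usually name a suspect driver rather than just ntoskrnl.exe. A minimal sketch of such a session, assuming the dumps are in the default C:\Windows\Minidump location:

      File > Open Crash Dump, pick the newest .dmp, then at the kd> prompt:
        .symfix        $$ use the public Microsoft symbol server
        .reload        $$ reload symbols for the loaded modules
        !analyze -v    $$ verbose bugcheck analysis: arguments, stack, probable cause
        lm t n         $$ list loaded modules with timestamps, to spot stale drivers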

    Read the article

  • It takes a long time for Windows XP to recognize a newly connected USB drive

    - by Pavol G
    I have a problem with my new USB disk. When I connect it to my laptop with Windows XP SP2, it takes about 4-5 minutes until Windows recognizes it and shows it as a new disk. I can also see (the disk's LED is blinking) that something is scanning the disk when I connect it; once that finishes, Windows recognizes it immediately. Also, when I'm copying data to this disk the speed is about 3.5 MB/sec, even though it's connected over USB 2.0. I tried checking for spyware (using Spybot) and also running Windows in safe mode, but I still have the same problems. Do you have any idea what could help solve this? On Windows Vista (another laptop) everything is fine: the disk loads in about 15 seconds and the speed is about 20-30 MB/sec.
    Edit: I tried updating to SP3 - no change.
    Edit 2: When this "strange" scanning occurs I can see that the DPCs process is taking about 50% of the CPU. When the scan ends (after 5 minutes) this process drops back to 0%.
    Edit 3: About the scan time: currently it takes about 5 minutes, but the time grows as I add more data to the disk. The disk currently holds about 40GB, and I don't want to see how long it will take with 1000GB. Thanks a lot for any advice!

    Read the article

  • Massive SQL issue shutting down our site.

    - by Pselus
    Our website started timing out like crazy today. All of our clients are finding it unusable. The only error we can trace down as a potential problem is this:

      Error:           SQLAllocHandle on SQL_HANDLE_DBC failed
      ASP Description:
      Error Category:  Microsoft OLE DB Provider for ODBC Drivers

    I have no idea what it means or how to go about fixing it. Has anyone encountered this error before? Currently you can log in to our site, but once you go to do anything else you find yourself logged out, or nothing happens. We have a lot of Ajax going on, so the "nothing happens" probably means the Ajax pages aren't loading properly because of the logouts, so nothing displays to the user. Like I said, I'm at a loss. Does anyone have any advice on this error?
    EDIT: I realize this isn't necessarily a programming question, but we are a small startup that only yesterday started talking about how we need to get a backup server running. Apparently we talked about it too late. We don't have a DBA, just 2 mid-level programmers trying their hardest to keep our clients happy. So please, if you have any assistance, give it, but please don't close my question right now.
    EDIT 2: It turns out we had something running on our server called "ServerMask" that makes our IIS server look like Apache to the outside world. Shutting it down fixed our issue. Still no idea why it was causing trouble, but it was apparently the problem. Thanks to everyone who tried to help.

    Read the article

  • ARP requests are sent continuously and my Linux machine is disconnected from the world

    - by sees
    I have the following problem and really need your help. I'm implementing a small server that receives requests from a client on port 18999 (just a sample) over a TCP socket. When I tested the server with a lot of requests from a tablet through a router, I ran into what looks like an ARP problem. My network looks like this:

      TABLET <------- WIRELESS ROUTER <------- MY SERVER (LINUX)

    Problems:
    1. I can no longer connect to my Linux box at all (telnet, ping, etc. - unreachable).
    2. I use a serial cable to connect to the Linux machine and Wireshark (from Windows) to capture what Linux is sending. It shows that Linux keeps sending out ARP messages like the following every 3 seconds:

      xx:xx:99:77:ff:69  ff:ff:ff:ff:ff:ff  ARP  60  Who has 192.168.10.2? Tell 192.168.10.3

    In the above message, xx:xx:99:77:ff:69 is my Linux MAC address, 192.168.10.2 is my tablet's address and 192.168.10.3 is my Linux IP address. Can you help me figure out the problem? Or tell me a way to detect the problem and reset the network back to normal (maybe by restarting Linux, but I want to detect the problem and restart automatically)?
    UPDATE:
    1. The above network works normally if the tablet sends messages to my Linux box at a normal rate (but it also goes down after 48 hours).
    2. The router works again after I unplug my Linux Ethernet cable (RJ45) from the router.
    3. The wireless network still works (I can browse the router's pages from the tablet).
    4. When I bring the interface down and then up again with ifconfig, the Linux machine restarts (?).
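
    Not part of the original question, but a few standard tools can show whether this really is ARP resolution failing, and can recover the interface without a full reboot. A minimal sketch, assuming the server's interface is eth0 and the addresses are the ones quoted above:

      # Watch ARP on the wire: are the "Who has 192.168.10.2?" requests ever answered?
      tcpdump -n -i eth0 arp
      # Inspect the kernel's neighbour (ARP) cache; FAILED/INCOMPLETE entries confirm the problem
      ip neigh show dev eth0
      # Probe the tablet directly; no reply points at the router/wireless side rather than the server
      arping -I eth0 -c 4 192.168.10.2
      # Recover without rebooting: flush the cache and bounce the interface
      ip neigh flush dev eth0
      ip link set eth0 down && ip link set eth0 up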

    Read the article

  • PowerPoint save group as picture creates asymmetric edge, how to fix?

    - by Se Norm
    I created tons of figures for my thesis in PowerPoint, and now I've realized that when I try to save a grouped set of items (= one figure) as a picture (EMF), it somehow adds a border asymmetrically on the left and the bottom. (The question included two images: the original group, and the same group pasted as a picture.) Does anyone have an idea how to fix that for a huge number of figures? I think it only started happening when I used a page size of 1m x 1m in PowerPoint so I could zoom in more for some figures. However, I cannot simply change the page size now, as that messes up font and object sizes. Also, copying a figure into a smaller page and then saving it as EMF doesn't do the trick, so maybe it is not related to the page size after all. Cropping every figure individually would be a lot of work, so I hope there is a different solution. I found the origin of the problem: the text label in the bottom-left corner of each image (0s, 8s, 16s). I still do not understand why it is happening, though, since the text label does not extend past the edge of the image (it was aligned using the align-left function). It would still be great if there were an easy way to fix this, especially as I want to keep the text where it is.

    Read the article

  • Geographically distributed file system with preferred locality

    - by dpb
    Hi All -- I'm building an application that needs to distribute a standard file server across a few sites over a WAN. Basically, each site needs to write a lot of miscellaneous files of varying size (some in the 100s of MB range, but most small), and the application is written such that collisions aren't a problem. I'd like to have a system set up that meets the following qualifications:

      - Each site can store files in a shared "namespace"; that is, all the files show up in the same filesystem.
      - Each site does not send data over the WAN unless necessary, i.e. there is local storage on each side of the WAN that is "merged" into the same logical filesystem.
      - Linux & free ($$$) is a must.

    Basically, something like a central NFS share would meet most of the requirements, but it would not allow the locally written data to stay local: all data from the remote sides of the WAN would be copied locally all the time. I have looked into Lustre and have run some successful tests with it; however, it appears to distribute files fairly uniformly across the distributed storage. I have dug through the documentation and have not found anything that will automatically "prefer" local storage over remote storage. Even something that went with the lowest-latency storage would be fine; it would work most of the time, which would meet this application's requirements. Any ideas?

    Read the article

  • Automated software installation for MS Windows?

    - by Duncan Bayne
    I am currently setting up a Windows development environment (the whole Visual Studio 2010 stack plus plugins on top of Windows 7). This has got me wondering whether there's a Windows equivalent to what I do for dev environment setup in Ubuntu. It takes literally hours to get a dev environment set up in Windows, involving a lot of manual intervention. On Ubuntu, I have two shell scripts: one I run as root which configures the system using apt-get (amongst other things), and one I run as myself which configures my user account. Those scripts live in my private Subversion repository. To set up a dev environment from scratch requires five commands:

      sudo apt-get install -y subversion
      svn co http://svn.XXXX.XXX/personal/
      cd personal
      sudo ./ubuntu_setup_root.sh
      ./ubuntu_setup_user.sh

    The only human intervention required is to pick a root password for MySQL. So it takes only a few minutes of human attention to go from a vanilla Ubuntu installation to a full development environment with the latest builds of everything, perfectly tailored down to shortcut keys and wallpaper. Is there an equivalent process for Windows? In an ideal world it'd be something trivially scriptable using C# Script or PowerShell, which could live in source control and make use of a repository of ISOs downloaded from MSDN ...
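
    This is not from the question itself, but one commonly suggested equivalent is a Windows package manager such as Chocolatey driven from a PowerShell script kept in the same Subversion repository. A rough sketch, assuming Chocolatey is already installed, with the package names and the user-setup script name purely as examples:

      # setup_windows.ps1 -- hypothetical sketch, run from an elevated PowerShell prompt
      # Install the toolchain unattended (package names are illustrative only)
      choco install -y git svn notepadplusplus 7zip
      # Pull the personal scripts/settings from the same repository as the Ubuntu setup
      svn co http://svn.XXXX.XXX/personal/ "$env:USERPROFILE\personal"
      # Apply per-user configuration kept under version control (hypothetical script name)
      & "$env:USERPROFILE\personal\windows_setup_user.ps1"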

    Read the article

  • What are "Excess Fragments" in defragmenting a hard drive?

    - by Andrew Swift
    I'm defragmenting my hard drive (XP SP3) with PerfectDisk 7.0, and it finds 816,659 excess fragments when I ask for an analysis. [update] Specifically, it shows that the 1TB disk is 14% fragmented with 19,693 fragments and 816,659 excess fragments. About 20% of the disk is still free space. What does "excess fragments" refer to? What is the difference between fragments and excess fragments? I have had problems in the past where I defragmented a fragmented disk and many files were corrupted. It seemed as though "excess fragments" referred to orphaned pieces that the program couldn't work out where to put. If that were true, then defragmenting a disk would result in many incomplete files; in fact I once defragmented a disk full of MP3s and got a lot of corrupted files as a result. After that, I started simply formatting a separate disk and copying everything from one to the other; that way there were no orphaned bits and no file corruption. Does anybody know what "excess fragments" really are?

    Read the article

  • Hubs/switches taking out switches?

    - by Bart Silverstrim
    Here's the issue: we have a network with a lot of Cisco switches. Someone plugged a hub into the network, and then we started seeing "weird" behavior: errors in communication between clients and servers, network timeouts, dropped connections, etc. Somehow that hub (or SOHO switch) was particularly upsetting our Cisco 3700-series switches; disconnect the hub or Netgear-type SOHO switch and things settled down again. We're in the process of setting up a centralized logging server for SNMP and management to see if we can trap errors or narrow down when someone does this sort of thing without our knowledge, because for the most part things work without issue - we just get freaky oddball incidents on particular switches that don't seem to have any explanation until we find out someone decided to take matters into their own hands to expand the available ports in their room. Without getting into procedure changes, locking down ports, or "in our organization they'd be fired" answers: can someone explain why adding a small switch or hub - not necessarily a SOHO router (even a dumb hub apparently caused the 3700s to freak out) - sending DHCP requests out will cause issues? The boss says it's because the Ciscos get confused when that rogue hub/switch bridges multiple MACs/IPs onto a single port, and they choke on that, but I thought their MAC address tables should be able to handle multiple machines coming in on one port. Has anyone seen this behavior before and have a clearer explanation of what's happening? I'd like to know for future troubleshooting and better understanding, rather than just waving my hand and saying "you just can't".
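
    For the troubleshooting side (this is an addition, not from the original question), a rogue hub is usually easy to locate once you look for access ports that have suddenly learned many MAC addresses or become sources of spanning-tree topology changes. A short sketch of the usual IOS show commands, with the interface name only as a placeholder:

      ! How many MACs are learned in total, and how many on a given interface?
      show mac address-table count
      show mac address-table interface GigabitEthernet1/0/12
      ! Where are topology changes coming from? A flapping port behind a hub shows up here
      show spanning-tree detail | include ieee|occurr|from
      ! Collisions and late collisions on a port are a giveaway that a half-duplex hub is attached
      show interfaces GigabitEthernet1/0/12 counters errors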

    Read the article

  • Is it worthwhile to block malicious crawlers via iptables?

    - by EarthMind
    I periodically check my server logs and I notice a lot of crawlers searching for the location of phpMyAdmin, Zen Cart, RoundCube, administrator sections and other sensitive data. Then there are also crawlers under the name "Morfeus Fucking Scanner" or "Morfeus Strikes Again" searching for vulnerabilities in my PHP scripts, and crawlers that perform strange (XSS?) GET requests such as:

      GET /static/)self.html(selector?jQuery(
      GET /static/]||!jQuery.support.htmlSerialize&&[1,
      GET /static/);display=elem.css(
      GET /static/.*.
      GET /static/);jQuery.removeData(elem,

    Until now I've always stored these IPs manually and blocked them using iptables. But since these requests are only performed a few times from any one IP, I'm having doubts about whether blocking them provides any security advantage. I'd like to know whether it does anyone any good to block these crawlers in the firewall, and if so, whether there's a (not too complex) way of doing this automatically. And if it's wasted effort, perhaps because these requests come from new IPs after a while, I'd appreciate it if anyone could elaborate on this and maybe provide suggestions for more efficient ways of denying/restricting malicious crawler access. FYI: I'm also already blocking w00tw00t.at.ISC.SANS.DFind:) crawls using these instructions: http://spamcleaner.org/en/misc/w00tw00t.html
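
    One low-maintenance approach (an addition, not something from the question) is to keep the offending addresses in an ipset with a timeout instead of a growing list of individual iptables rules, and have fail2ban or a small log-scanning job fill the set. A minimal sketch, with the log path and match patterns only as examples:

      # One-time setup: a set of bad IPs that expire after a day, dropped early in INPUT
      ipset create badbots hash:ip timeout 86400
      iptables -I INPUT -m set --match-set badbots src -j DROP
      # Crude log scan: add anyone probing for known admin panels or scanner signatures
      awk '/phpmyadmin|zencart|roundcube|Morfeus/ {print $1}' /var/log/apache2/access.log \
        | sort -u \
        | while read ip; do ipset add badbots "$ip" -exist; done

    Whether it buys much is debatable, since the scanners do rotate addresses, but the timeout at least keeps the set from growing without bound.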

    Read the article

  • Skip Corrupt Revisions During SvnAdmin Load

    - by cisellis
    I have a dump file that I am generating from VSS with the use of the VSS2SVN script. I've tested the generated dump file before and some of the revisions are corrupt for one reason or another (binary data or long path strings seem to be the main culprit). This is fine. In the past I have used svndumpfilter to split the dump file, remove the corrupt revisions and continue to load the repository. It worked but took a lot of manual effort to start the load, hit the bad revision, split the dump file, continue loading the repo, etc. This dump file is pretty large (~5GB) and takes several hours to load. I think I know the answer to this but is there any way to simply tell svnadmin load to keep going and skip corrupt revisions? I know how to verify, backup, etc. the dump file and don't need any of that. I don't care about recovering corrupt revisions. I just want to start the load, walk away, and not worry about checking it every few hours to manually remove the corrupt revisions. Is that possible? Thanks.

    Read the article

  • Webmin ADSL module

    - by expatcm
    I was wondering if the Webmin ADSL module is going to help me solve a problem, but I cannot find any documentation telling me what the module actually does. Any ideas? I am just in the process of setting up a Debian server. I will use the DHCP server that's part of the Debian setup to manage the LAN IP addresses, and I want to turn off the external DHCP server that's built into the Linksys ADSL modem/router and use it just as a modem. The challenge is knowing what I need to do in order to get the public DNS on eth1. When I turn off DHCP on the modem/router, not a lot happens apart from me no longer being able to access its settings. So I am looking at this Webmin module and wondering if it is meant to manage the ADSL connection and find the public DNS address. The local DHCP server is working well for the LAN; I am just stuck on the external DNS.

    Read the article

  • APACHE - PHP - Bounce emails

    - by user1179459
    I want to improve our mailing lists by handling all the bounces we get from our websites. I have one website with over 8,000 users and another with over 1,500 users; they are emailed various notifications all the time, e.g. job alerts and email alerts. I am using POP connections with Exim on an Apache server, and most of the scripts are written in PHP and generate email on the fly.

    Problems I have:
    1. Some users registered a long time ago, and by now quite a few have bouncing email addresses.
    2. Some users register with dummy addresses like [email protected] which never existed but look valid. Is there any chance of stopping this without asking them to log in to the email account and click a link, which often doesn't work anyway and is too annoying for the end user?
    3. The server is sending unnecessary emails that could be avoided if I knew the addresses don't exist.

    Solutions I need:
    1. Is there a way I can download the bounce list somewhere (WHM/cPanel)? I know Exim has it, but it's not readable; I need a file like a CSV or something similar so I can scan it and write a PHP script to delete the users from the database.
    2. Is there any function in PHP that can check the existence of an email address on the fly, so that the mailer class can check before it sends out? (See the sketch below.)
    3. Will bouncing emails eat up a lot of server resources (memory/CPU) while they are processed, or is it minimal enough that we don't have to worry about it at all?
    4. Is there an open-source or Linux tool to capture bounces, view them as reports and clean them up?

    I am not a Linux expert or server admin, but I do a lot of PHP coding, so please be descriptive in the solutions, especially if they involve Linux commands. Thank you!
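
    On question 2 above: PHP cannot prove that a mailbox exists without actually talking to the remote mail server, but it can cheaply reject addresses that are syntactically invalid or whose domain publishes no mail (MX) record before anything is sent. A small sketch using standard PHP functions; the function name and the overall policy are just suggestions:

      <?php
      // Returns true only if the address is well-formed and its domain has an MX (or A) record.
      function email_looks_deliverable($email)
      {
          if (filter_var($email, FILTER_VALIDATE_EMAIL) === false) {
              return false;                      // syntactically invalid
          }
          $domain = substr(strrchr($email, '@'), 1);
          return checkdnsrr($domain, 'MX') || checkdnsrr($domain, 'A');
      }

    This still will not catch a deleted mailbox on a valid domain; for those, processing the actual bounces (for example with VERP-style return addresses or a script hooked to the bounce mailbox) remains the reliable route.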

    Read the article

  • Apache LDAP auth: denied all the time

    - by Dmytro
    Here is my config (httpd 2.4):

      <AuthnProviderAlias ldap zzzldap>
        LDAPReferrals Off
        AuthLDAPURL "ldaps://ldap.zzz.com:636/o=zzz.com?uid?sub?(objectClass=*)"
        AuthLDAPBindDN "uid=zzz,ou=Applications,o=zzz.com"
        AuthLDAPBindPassword "zzz"
      </AuthnProviderAlias>

      <Location /svn>
        DAV svn
        SVNParentPath /DATA/svn
        AuthType Basic
        AuthName "Subversion repositories"
        SSLRequireSSL
        AuthBasicProvider zzzldap
        <RequireAll>
          Require valid-user
          Require ldap-attribute employeeNumber=12345
          Require ldap-group cn=yyy,ou=Groups,o=zzz.com
        </RequireAll>
      </Location>

    The Require valid-user part works, but ldap-attribute, ldap-filter and ldap-group do not: they are denied every time in the logs. I've spent a lot of time on this but can't understand what's going on. This is an example from my logs:

      [Tue Sep 25 16:42:26.772006 2012] [authz_core:debug] [pid 23087:tid 139684003014400] mod_authz_core.c(802): [client 1.1.1.1:52624] AH01626: authorization result of Require valid-user : granted
      [Tue Sep 25 16:42:26.772014 2012] [authz_core:debug] [pid 23087:tid 139684003014400] mod_authz_core.c(802): [client 1.1.1.1:52624] AH01626: authorization result of Require ldap-attribute employeeNumber=12345: denied

    I checked everything with ldapsearch: the username, employee ID and the rest are all valid...
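
    One check worth doing outside Apache (not mentioned in the original post) is whether the bind DN can actually read the employeeNumber attribute and whether the group entry really lists the user, since a denial like this can simply mean the LDAP lookup came back empty. A quick ldapsearch sketch using the same credentials as the config; "someuser" is a placeholder:

      # Can the bind DN see the user's employeeNumber at all?
      ldapsearch -x -H ldaps://ldap.zzz.com:636 \
        -D "uid=zzz,ou=Applications,o=zzz.com" -W \
        -b "o=zzz.com" "(uid=someuser)" employeeNumber
      # Does the group entry contain the user (as member/uniqueMember/memberUid)?
      ldapsearch -x -H ldaps://ldap.zzz.com:636 \
        -D "uid=zzz,ou=Applications,o=zzz.com" -W \
        -b "cn=yyy,ou=Groups,o=zzz.com" member uniqueMember memberUid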

    Read the article

  • Robocopy failure with Windows Server 2008 Scheduled Task

    - by CC
    I have a batch script for robocopy. Running this from the command line does exactly what I want:

      robocopy "D:\SQL Backup" \\server1\Backup$\daily /mir /s /copyall /log:\\lmcrfs4g\NavBackup$\robocopyLog.txt /np

    Then I create a Scheduled Task in Windows Server 2008. If I set the task up to use my Domain Admin account, it works fine. But I'm trying to get it to run as a separate domain account used for Scheduled Tasks. If I use that account, folders get created but files aren't copied, and I get the following error:

      2011/02/17 15:41:48 ERROR 1307 (0x0000051B) Copying NTFS Security to Destination Directory D:\SQL Backup\folder\
      This security ID may not be assigned as the owner of this object.

    I've verified that my domain\Scheduled Tasks account has Full Control NTFS permissions on both the source and destination, and Full Control share permissions on my hidden \\server1\Backup$ share. Just for giggles, I've tried adding the domain account to the local Administrators group on both servers. That works fine, but it seems like a lot of privileges just to copy files. Any ideas on what I'm missing?
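
    ERROR 1307 is about assigning ownership, which the /copyall flag (equivalent to /copy:DATSOU) tries to do and which needs the "Back up files and directories" / "Restore files and directories" user rights that the task account probably does not hold. Two possible routes, offered only as suggestions: grant those user rights to the task account, or drop the owner/auditing parts of the copy, e.g.:

      rem Same job, but copy Data, Attributes, Timestamps and ACLs (S) without Owner/aUditing info
      robocopy "D:\SQL Backup" \\server1\Backup$\daily /mir /s /copy:DATS /log:\\lmcrfs4g\NavBackup$\robocopyLog.txt /np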

    Read the article

  • Falsification of an email sent from Lotus Notes

    - by thejumper
    I needed an important email with an attachment from a person (I'll give the person a fictitious name here: Name Familyname). The person sent me an email in the format below (I changed the content but kept the format). The email he forwarded to me has two parts: below, the email he originally sent, and above it, the answer he received. I told the person that he hadn't given me the true information, because the first part of the email is falsified. Please tell me what you think. Thanks a lot.

      From: abc efg
      To: Name familyname
      Date: 2012-03-09 12:14 AM
      Subject: Lorem ipsum dolor

      Nam dictum feugiat neque, euismod convallis mi euismod ut. Mauris at vulputate enim. Nunc posuere tortor vitae justo volutpat luctus. Sed ut ligula id magna dictum blandit id vitae erat. Nunc dignissim eleifend vulputate.

      2012/3/4 Name familyname
      Lorem ipsum dolor sit amet, consectetur adipiscing elit. Fusce aliquam ligula in elit blandit porta. Vestibulum facilisis, elit ut aliquam euismod, erat elit tempor mi, et pulvinar velit neque ac nibh. Nullam fringilla viverra erat sed laoreet. Aenean elementum enim ac elit ultricies luctus.
      Name
      Address. Tel.

    Thanks for your help.

    Read the article

  • Backup software for incremental swapped-out drives?

    - by user13743
    We're using Acronis Home 11 to back up our main Windows machine at the office. We have a set of portable hard drives that we swap out each week for redundancy, with incremental sets (a new diff of the entire series each night) building on each drive. However, from time to time Acronis gets confused and makes a new full backup, which eats up a lot of space on the disks. Also, I have to trick the Acronis script each time I swap out a drive and point it to the new incremental backup set. Finally, if a drive gets full, there's no way to partition the backup set on a drive. I found this out the hard way, and now one drive is full with a single backup set. So on the other drive I now have three folders of backup sets; when one starts to get full, I delete the oldest one and start a new set, so that a single drive never gets filled up by one backup set. I'm looking for backup software that can back up Windows in incremental sets and doesn't get tripped up by swapped-out drives. Is there a better solution?

    Read the article

  • Concerns about Apache per-Vhost logging setup

    - by etienne
    I'm both the senior developer and the sysadmin in my company, so I'm trying to deal with the needs of both roles. I've set up our Apache box, which deals with 30-50 domains at the moment (and hopefully will grow larger) and hosts both production and development sites, with this directory structure:

      domains/
      domains/domain.ext/                                   # FTPS chroot for user domain.ext
      domains/domain.ext/public                             # the DocumentRoot of http://domain.ext
      domains/domain.ext/logs
      domains/domain.ext/subdomains/sub.domain.ext
      domains/domain.ext/subdomains/sub.domain.ext/public   # DocumentRoot of http://sub.domain.ext

    Each domain.ext vhost runs with its own dedicated user and group via mpm-itk, the umask is 027, and the logs are stored via a piped sudo command, like this:

      ErrorLog "| /usr/bin/sudo -u nobody -g domain.ext tee -a domains/domain.ext/logs/sub.domain.ext_error.log"
      CustomLog "| /usr/bin/sudo -u nobody -g domain.ext tee -a domains/domain.ext/logs/sub.domain.ext_access.log" combined

    Now, I've read a lot about not letting logs out of a very restricted directory, but the developers often need to take a quick look at a particular subdomain's error log, and I don't really want to give them admin rights to look into /var/log. Having the logs available in the FTP account is really handy during development. Do you think this setup is viable and safe enough? To me it looks good, but I'm concerned about three security issues:

    1. Is the sudo pipe enough to deal with symlink exploits? Any catches I'm missing?
    2. Log DoS: the logs are on the same partition as all the domains. We have hundreds of gigs, but still, if someone manages a disk-space DoS, everything will break. Any workaround? Will a short-interval logrotate suffice? (A sketch follows below.)
    3. File descriptor limits: AFAIK the default limit for Apache on Ubuntu Server is currently 8192, which should be plenty to handle 2 log files per subdomain. Is it? Am I missing something?

    I hope to read some thoughts on the matter!
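
    On concern 2, a per-site logrotate rule is the usual answer; it runs outside Apache, so the piped sudo/tee loggers do not need restarting as long as copytruncate is used. A minimal sketch, with the path prefix and retention purely illustrative:

      # /etc/logrotate.d/vhosts -- the /srv prefix is hypothetical; adjust to where domains/ really lives
      /srv/domains/*/logs/*.log {
          daily
          rotate 14
          compress
          delaycompress
          missingok
          notifempty
          # truncate in place so the long-running "sudo ... tee -a" pipes keep appending to the same file
          copytruncate
      }

    That bounds growth per day but not a burst within a single day; putting domains/*/logs on its own partition (or under a filesystem quota) is the stronger guarantee against a disk-space DoS.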

    Read the article

  • Deploying concrete5 on nginx

    - by Nithin
    I have a concrete5 site that works 'out of the box' on an Apache server. However, I am having a lot of trouble running it under nginx. The following is the nginx configuration I am using:

      server {
          root /home/test/public;
          index index.php;

          access_log /home/test/logs/access.log;
          error_log /home/test/logs/error.log;

          location / {
              # First attempt to serve request as file, then
              # as directory, then fall back to index.html
              try_files $uri $uri/ index.php;
              # Uncomment to enable naxsi on this location
              # include /etc/nginx/naxsi.rules
          }

          # pass the PHP scripts to FastCGI server listening on unix socket
          location ~ \.php($|/) {
              fastcgi_pass unix:/tmp/phpfpm.sock;
              fastcgi_split_path_info ^(.+\.php)(/.+)$;
              fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
              fastcgi_param PATH_INFO $fastcgi_path_info;
              include fastcgi_params;
          }

          location ~ /\.ht {
              deny all;
          }
      }

    I am able to get the homepage, but I am having problems with the inner pages: they display an "Access denied" message. Possibly the rewrite is not working; in effect I think nginx is querying and trying to execute PHP files directly instead of going through the concrete5 dispatcher. I am totally lost here. Thank you for your help, in advance.
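
    A hedged guess, not something established in the thread: the last argument of try_files is meant to be an internal redirect to an absolute URI, so "index.php" without a leading slash (and without the original query string) is a common reason pretty URLs break, and "Access denied" is what PHP-FPM typically prints when it is handed a script path it refuses to execute. The usual nginx front-controller form looks like this (sketch only):

      location / {
          # Fall back to the front controller as an absolute internal URI, keeping the query string
          try_files $uri $uri/ /index.php?$query_string;
      }

    If concrete5 is generating path-style URLs (/index.php/some/page), it is also worth checking security.limit_extensions in the php-fpm pool configuration, which by default refuses anything whose script path does not end in .php.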

    Read the article

  • Spots appear in a rectangle area on screen, ubuntu gnome 13.04, nvidia driver

    - by frozen-flame
    I am using Ubuntu GNOME 13.04 with the nvidia-310 driver installed. My GPU is a GeForce GTX 650. Strange spots frequently appear on screen, with the following traits:

      - The spots are restricted to one or two rectangular areas at any instant.
      - When typing, the pattern of spots changes: it may grow, or all the spots may disappear when a key is pressed. Mouse movement also influences it.
      - The problem lasts for the rest of the session; the only way I can get rid of it is to reboot. If it is going to appear, it can be seen as soon as I reach the desktop.
      - At the same time, the "power off" option disappears from the top-right menu of GNOME 3.
      - Occasionally, half of the screen becomes black.
      - I never had such a problem using Windows 7 on the same computer, nor on Ubuntu with the Nouveau driver.

    I googled a lot: similar conditions are described, but with no confirmed solution, and an uninstall-reinstall strategy does not work. Any clue towards solving this will be appreciated.

    Read the article

  • Ubuntu lucid, maverick high iowait

    - by netom
    I'm using Ubuntu, and I've got the same problem with both Lucid and Maverick. From time to time, especially a few minutes after boot, iowait goes to 50-100% and the box is unusable: everything that tries to access the disk freezes. I have the following setup:

      Hard disk:
        Model Family:     Western Digital Caviar Green family
        Device Model:     WDC WD15EADS-00P8B0
        Serial Number:    WD-WMAVU0391287
        Firmware Version: 01.00A01
        User Capacity:    1,500,301,910,016 bytes

    I have a quad-core Intel Core 2 Q6600 processor and 4G of memory. When the high iowait occurs, usually 4 processes are active:

      kdmflush (two processes)
      jbd2/dm-0-8
      jbd2/db-1-8

    and a few starving user processes, of course. I know this from top and iotop. Any suggestions about why this is happening? There are a lot of Q&As about Linux and high iowait, but none of them has helped so far. I even tweaked the hard disk not to park its heads every 8 seconds (the load cycle count is already at 50334!), but nothing; the problem persists. Thank you in advance.
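
    Not from the original post, but a few extra measurements usually narrow this down to either the disk itself or journal/writeback behaviour. A short sketch of the usual tools (packages: sysstat, iotop, smartmontools):

      # Per-device latency and utilisation: high await/%util with little throughput suggests seek storms or a failing disk
      iostat -x 1 10
      # Only show processes actually doing I/O at that moment
      iotop -o
      # Check reallocated/pending sector counts as well as the load-cycle counter
      smartctl -A /dev/sda
      # See whether the freezes line up with large amounts of dirty memory being flushed
      grep -i dirty /proc/meminfo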

    Read the article

  • Cisco QoS Guidance

    - by Kyle Brandt
    I have a 10M connection to the Internet that is hooked into a 100M port. I am getting started with QoS, and am hoping for a little guidance on setting it up on a Cisco 3825 router. Right now I am going forward with the idea that I have to implement it on my router, because the provider can't provide QoS for me. How I envision it working is that QoS will drop or queue packets on my router, and that will help prevent a situation where the provider has to start dropping a lot of packets. Right now all I am tasked with is making sure that one of the 3 LANs gets a certain slice (say 3M for Gig Lan1) of the 10M Internet connection (but ideally this will be more flexible in the future).

      10M Internet on 100M port on HWIC-4ESW
                +-----------------------+
                |                       |
      Gig Lan1  |      Cisco 3825       |  Lan3 on HWIC-4ESW
                |                       |
                +-----------------------+
                        Gig Lan2

    I need to learn more about QoS, but having a target technology and maybe an example configuration will help me wrap my head around the reading I am doing a little more. Which Cisco QoS technology do you recommend for this particular situation? Do you have a basic sample config of how this might work? Right now the 10M line is not congested, so this is more about having something in place in case it starts to become mildly congested in the future. I do have VoIP at one location connected to this one over the Internet, going through a VPN tunnel. Everything else between this location and the other offices is on a separate MPLS network.
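
    One common answer to this situation (offered only as a sketch, not as the thread's conclusion) is hierarchical shaping with CBWFQ: a parent policy shapes everything leaving the WAN port to the real 10 Mbit/s, so that your router rather than the provider is the one queuing, and a child policy guarantees Gig Lan1 its 3 Mbit/s during congestion. The ACL number and interface name below are placeholders:

      ! Classify traffic coming from the Gig Lan1 subnet (access-list 101 is a placeholder)
      class-map match-any LAN1-TRAFFIC
       match access-group 101
      !
      ! Child policy: bandwidth guarantees, only enforced when the shaper is congested
      policy-map LAN-QUEUES
       class LAN1-TRAFFIC
        bandwidth 3000
       class class-default
        fair-queue
      !
      ! Parent policy: shape the whole port to 10 Mbit/s, then hand off to the child queues
      policy-map SHAPE-10M
       class class-default
        shape average 10000000
        service-policy LAN-QUEUES
      !
      interface FastEthernet0/1
       service-policy output SHAPE-10M

    If the VoIP traffic matters most, the same child policy is where a strict-priority class for it would go.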

    Read the article

  • Debugging a Drobo that chokes Windows 7x64 When Plugged In

    - by Pridkett
    I've had a love-hate relationship with my Drobo for a long time. After two years of using it on a Linux box, I moved it over to a Windows 7 machine, where it seemed to work just fine for a long time - but under very light usage, mainly backups that never actually happened. Recently I began using it for additional backup services (through CrashPlan, which is great). This means the Drobo gets a lot more usage, and also that something interesting happens: the Drobo can choke my system on startup. Here's what I mean:

      - Start the computer without the Drobo plugged in, CrashPlan and Drobo Dashboard services disabled: 105s
      - Start the computer with the Drobo plugged in, CrashPlan disabled, Drobo Dashboard enabled: 250s (one CPU at 100% for a very long time, Drobo churning)
      - Start the computer with the Drobo plugged in, CrashPlan and Drobo Dashboard disabled: 250s (one CPU at 100% for a very long time, Drobo churning)
      - Start the computer with the Drobo plugged in, CrashPlan and Drobo Dashboard enabled: 300s (one CPU at 100% for a very long time, Drobo churning)

    If I yank the Drobo's USB plug, the CPU usage drops to nothing very quickly. The slow startup in the fourth scenario is because CrashPlan is trying desperately to load things from the H: drive before it gives up, so I've disabled it for the time being. So here's my question: what on earth is going on when I plug the Drobo in? I've fired up Process Explorer and see that the System process is hogging the CPU; specifically, it's an ntoskrnl.exe/KdPollBreakIn thread that's going ape. Is this something that's wrong with the Drobo? With Windows? Any idea how to find out? If it matters, here's the tech info: Athlon 64 X2 4400, 2GB RAM, Win7 Ultimate, Drobo USB (2x1TB, 2x320GB).
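
    A hedged guess based on that thread name: KdPollBreakIn is the kernel-debugger routine that polls for a break-in request, and it only runs like that when boot-time kernel debugging is enabled, which some USB device/driver combinations aggravate. Checking and disabling it costs one command each, from an elevated command prompt:

      rem Is kernel debugging enabled for the current boot entry?
      bcdedit /enum {current}
      rem If the output lists "debug  Yes", turn it off and reboot
      bcdedit /debug off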

    Read the article

  • Looking for equivalent of ProxyPassReverseMatch in Apache to fix missing trailing forward slash issue

    - by Alex Man
    I have two web servers, www.example.com and www.userdir.com. I'm trying to make www.example.com the front-end proxy server for requests of the form http://www.example.com/~username, such as http://www.example.com/~john/, so that it sends an internal request of http://www.userdir.com/~john/ to www.userdir.com. I can achieve this in Apache with:

      ProxyPass /~john http://www.userdir.com/~john
      ProxyPassReverse /~john http://www.userdir.com/~john

    The ProxyPassReverse is necessary because without it a request like http://www.example.com/~john (without the trailing forward slash) will be redirected to http://www.userdir.com/~john/, and I want my users to stay in the example.com space. Now, my problem is that I have a lot of users and I cannot list all their user names in httpd.conf. So I use:

      ProxyPassMatch ^(/~.*)$ http://www.userdir.com$1

    but there is no such thing as ProxyPassReverseMatch in Apache. Without it, whenever the trailing forward slash is missing from the URL, the user is redirected to www.userdir.com, and that's not what I want. I also tried the following to add the trailing forward slash:

      RewriteCond %{REQUEST_URI} ^/~[^./]*$
      RewriteRule ^/(.*)$ http://www.userdir.com/$1/ [P]

    but then pages render with broken images and CSS, because they are linked to http://www.example.com/images/image.gif when they should be http://www.example.com/~john/images/image.gif. I have been googling for a long time and still can't figure out a good solution for this. I would really appreciate it if anyone could shed some light on this issue. Thank you!
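
    One possible way to keep the generic match (a sketch, not tested against this exact setup): let the front end add the missing trailing slash itself before proxying, so the back end never has to issue a redirect, and use a single prefix ProxyPassReverse to translate any Location headers that do come back:

      RewriteEngine On
      # Redirect /~john to /~john/ on www.example.com itself
      RewriteRule ^/(~[^/]+)$ /$1/ [R=301,L]
      # Proxy everything under /~user/ to the back-end server
      RewriteRule ^/(~.+)$ http://www.userdir.com/$1 [P]
      # Map any back-end redirects into the example.com namespace
      ProxyPassReverse / http://www.userdir.com/

    Because the browser is redirected to the trailing-slash form before any proxying happens, relative image and CSS links then resolve under /~john/ as intended.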

    Read the article

  • Samba PDC share slow with LDAP backend

    - by hmart
    The scenario: I have a SUSE SLES 11.1 SP1 machine acting as Samba master PDC with an LDAP backend. One share holds the database files for a client-server application. I log XP and Windows 7 machines on to the local domain (example.local); the login is a little slow but works. The client computers run an executable which opens, reads and writes the database files on the server share.

    The problem: when running Samba with the LDAP passdb backend, the client application runs VERY slowly, with a maximum transfer rate of about 2.5 Mbit per second. If I disable LDAP, the client app's speed increases 20x, with a transfer rate of 50 Mbit/sec, and it runs smoothly. I'm testing with just two users and two machines, so concurrency and the size of the LDAP directory shouldn't be the problem here.

    The suspects: LDAP, and the smb.conf [global] section configuration.

    The question: what can I do? I've googled a lot, but still have no answer. The slow smb.conf WITH LDAP:

      [global]
          workgroup = zmartsoft.local
          passdb backend = ldapsam:ldap://127.0.0.1
          printing = cups
          printcap name = cups
          printcap cache time = 750
          cups options = raw
          map to guest = Bad User
          logon path = \\%L\profiles\.msprofile
          logon home = \\%L\%U\.9xprofile
          logon drive = P:
          usershare allow guests = Yes
          add machine script = /usr/sbin/useradd -c Machine -d /var/lib/nobody -s /bin/false %m$
          domain logons = Yes
          domain master = Yes
          local master = Yes
          netbios name = server
          os level = 65
          preferred master = Yes
          security = user
          wins support = Yes
          idmap backend = ldap:ldap://127.0.0.1
          ldap admin dn = cn=Administrator,dc=zmartsoft,dc=local
          ldap group suffix = ou=Groups
          ldap idmap suffix = ou=Idmap
          ldap machine suffix = ou=Machines
          ldap passwd sync = Yes
          ldap ssl = Off
          ldap suffix = dc=zmartsoft,dc=local
          ldap user suffix = ou=Users
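
    Purely as something to test, not a confirmed fix: with an LDAP passdb backend, Samba by default resolves users and groups through a long chain of getpwnam()/NSS calls on every session setup, and two things are commonly suggested for exactly this symptom -- the ldapsam:trusted option, and caching the NSS lookups with nscd. A minimal smb.conf sketch:

      [global]
          # Assumes the LDAP entries carry complete POSIX account data, letting Samba
          # answer user/group lookups with direct LDAP searches instead of many NSS calls
          ldapsam:trusted = yes

    If that helps, ldapsam:editposix is the related option worth reading up on before enabling anything further.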

    Read the article
