Search Results

Search found 25579 results on 1024 pages for 'complex event processing'.

  • Apache https is slow

    - by raucous12
    Hey, I've set Apache up to use SSL with a self-signed certificate. With https (KeepAlive on), I can get over 3000 requests per second. However, with https (KeepAlive off), I can only get 13 requests per second. I know there is supposed to be a bit of overhead, but this seems abnormal. Can anyone suggest how I might go about debugging this? Here is the ab log for https:

        Server Software:        Apache/2.2.3
        Server Hostname:        127.0.0.1
        Server Port:            443
        SSL/TLS Protocol:       TLSv1/SSLv3,DHE-RSA-AES256-SHA,4096,256
        Document Path:          /hello.html
        Document Length:        29 bytes
        Concurrency Level:      5
        Time taken for tests:   30.49425 seconds
        Complete requests:      411
        Failed requests:        0
        Write errors:           0
        Total transferred:      119601 bytes
        HTML transferred:       11919 bytes
        Requests per second:    13.68 [#/sec] (mean)
        Time per request:       365.565 [ms] (mean)
        Time per request:       73.113 [ms] (mean, across all concurrent requests)
        Transfer rate:          3.86 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:      190  347   74.3    333    716
        Processing:     0   14   24.0      1    166
        Waiting:        0   11   21.6      0    165
        Total:        191  361   80.8    345    716

        Percentage of the requests served within a certain time (ms)
          50%    345
          66%    377
          75%    408
          80%    421
          90%    468
          95%    521
          98%    578
          99%    596
         100%    716 (longest request)
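
    Nearly all the time is in the Connect column, which points at the TLS handshake itself: with KeepAlive off, every request pays a full DHE-RSA handshake with 4096-bit parameters. Two things worth trying, sketched below with placeholder paths (assumes Apache 2.2 with mod_ssl and the shmcb session cache available): enable an SSL session cache so returning clients can resume sessions instead of renegotiating (this helps real browsers more than it helps ab), and re-run ab with a non-DHE cipher to confirm the key exchange is the bottleneck.

        # In the SSL config (path and cache size are illustrative):
        SSLSessionCache        shmcb:/var/cache/mod_ssl/scache(512000)
        SSLSessionCacheTimeout 300

        # If your ab build supports -Z, compare a plain-RSA cipher:
        ab -n 100 -c 5 -Z AES256-SHA https://127.0.0.1/hello.html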

  • wireless to wired internet

    - by Mark T
    My wife uses an old computer which doesn't really like having an internet connection via a USB wireless adapter. I've tried several, and none have been satisfactory. The connection gets dropped often and it is frustrating for her. A wireless card also had the same trouble. Repositioning the computer made no difference. (Two laptops, nearby, connect just fine and stay up, so it isn't the router.) However, the computer always worked well when I had an Ethernet cable connected to it. I know there is a box which will connect to my wireless network and provide an Ethernet cable connection. But the terminology used is so complex, I can't tell what it is I really need. In case that wasn't clear, here it is in different words: What I need is just the opposite of a wireless router. My wireless router takes my cable modem's Ethernet connection and makes it available to wireless clients. What I want is a box which is a wireless client to my wireless router and provides an Ethernet cable connection that I can connect to any device. I need to know the right name for a box with such capabilities. If you know of some inexpensive examples, that would also be helpful. I'm running a wireless G network with a Linksys Router.

  • Combat server downtime by duplicating server and re-routing when main server is down

    - by Wasim
    I have a CentOS server which at times either crashes or gets attacked with DDoS. At the moment I have an off-site backup which is filled with 1.7TB of data. I'm currently paying as much for the backup as I am for the server, and I'm looking for advice from experienced people on the best way to proceed from here. Would it be a viable solution to ditch the off-site backup and instead purchase an additional server which is an exact duplicate of the first, so that if the first server goes down, users are re-routed to the second server without noticing the first is even down? This would create an automatic backup of the first server (albeit not off-site) and remove the need for the expensive off-site backup. Is the above a true solution to pricey backup, or is off-site backup absolutely necessary? How would I go about doing this (obviously it's pretty complex, so links to some reading material or just the terminology of the procedure would be great)? Appreciate the help and advice.
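
    For what it's worth, a warm-standby setup has two halves: keeping the second server's data current, and re-routing traffic when the first dies. The data half can be as simple as scheduled rsync plus database dumps; a minimal sketch (hostname and paths are placeholders, and it assumes SSH keys are already in place):

        #!/bin/bash
        # Cron sketch: mirror the web root and a database dump to the
        # standby box every few minutes.
        rsync -az --delete /var/www/ standby.example.com:/var/www/
        mysqldump --all-databases | gzip | \
            ssh standby.example.com 'cat > /srv/failover/all-dbs.sql.gz'

    The routing half is usually DNS failover or a floating IP. One caveat: this protects against downtime, not data loss - anything destructive replicates to the standby too, which is the usual argument for keeping at least a small off-site copy.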

  • How to make Thunderbird play nice with Google mail

    - by Christi
    Thunderbird and Gmail aren't exactly the best of friends. Gmail's tags mean that Thunderbird often downloads multiple copies of a single mail. Anything tagged in Gmail will appear in a folder related to that tag, in the "All Mail" folder, and possibly in the "Inbox" and "Sent Mail" folders too. Thus a mail with multiple tags could potentially be stored more than four times in a local Thunderbird cache. This makes searching difficult, and is obviously wasteful of disk space. The best solution I have come up with is as follows. First, operate a zero-inbox policy (i.e. use the inbox for processing live mail only and archive everything else), which eliminates the extra copy in the inbox. Second, configure Thunderbird not to sync the "Sent Mail" folder - this is a bit of a pain, since I actually find it quite useful to be able to look through just the mails I've sent, but a search can duplicate this functionality. In this way, most of the duplicates are removed, and only mail with tags is stored locally more than once. Ideally, however, I'd like only one copy of each mail to be stored locally. I am surprised Thunderbird doesn't store mail by some sort of hashing algorithm to prevent precisely this problem - but it wouldn't be compatible with the way the folders are mirrored in a local directory structure, I suppose. Can anyone think of a better way to get Thunderbird to cache a Google mail account locally and efficiently?

  • Server Hosting + AWS

    - by ledy
    Since my dedicated servers are hosted at a "normal" hosting service, I wonder if there is a really cheap way to extend the server farm with AWS instances. It seems an efficient and flexible solution for data storage, and for resources for occasional data processing too. However, it might be very inefficient to mix two data centres, transferring data from the current web hoster to Amazon and vice versa. In my case, the traffic for this continuous data exchange looks expensive, and the delay in moving the data back to the hoster introduces lag. What are best practices for mixing non-AWS and AWS systems? E.g.: how to move the hoster's data to AWS as log-file storage to run Urchin analysis, and/or port the log-file data into a BigTable-style store for heavier analysis there? And after working with the data: how to bring it back to the hoster and use it with the web servers there? I am not going to move the whole server farm to Amazon, only "separate" parts or tasks, provided the transfer/exchange does not lead to increased cost.
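
    The usual cheap pattern for this shape of workload is: compress the logs, push them to S3, process them inside AWS (S3-to-EC2 transfer within a region is free), and pull back only the small result set. A sketch of the shipping step (bucket name and paths are placeholders; assumes the AWS CLI is installed and configured):

        # Ship only rotated, compressed logs, so the paid leg of the
        # transfer stays as small as possible.
        aws s3 sync /var/log/apache2/archive/ \
            "s3://example-log-bucket/$(hostname)/" \
            --exclude '*' --include '*.gz'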

  • Splitting an HTTP request into multiple byte-range requests

    - by redpola
    I have arrived at the unusual situation of having two completely independent Internet connections to my home. This has the advantage of redundancy etc., but the drawback that both connections max out at about 6Mb/s. So one individual outbound HTTP request is directed by my "intelligent gateway" (TP-LINK ER6120) out over one or the other connection for its lifetime. This works fine for complex web pages and utilises both external connections fine. However, single-HTTP-request downloads are limited to the maximum rate of one of the two connections. So I'm thinking, surely I can set up some kind of proxy server to direct all my HTTP requests to. For each incoming HTTP request, the proxy server would issue multiple byte-range requests for the desired data and manage the reassembly and delivery of that data to the client's request. I can see this has some overhead, and also some edge cases where there will be blocking problems waiting for data. I also imagine webmasters of single servers would rather I didn't hit them with 8 byte-range requests instead of one request. How can I achieve this HTTP request deconstruction/reconstruction? Or am I just barking mad?
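
    The transparent-proxy part is the hard bit, but the mechanism itself is just HTTP Range requests, which can be demonstrated with curl (a sketch: the URL is a placeholder, and the server must advertise Accept-Ranges: bytes):

        # Fetch one file as two parallel halves and reassemble it.
        url=http://example.com/big.iso
        size=$(curl -sI "$url" | tr -d '\r' | \
               awk 'tolower($1) == "content-length:" {print $2}')
        half=$((size / 2))
        curl -s -r "0-$((half - 1))"       "$url" -o part1 &
        curl -s -r "$half-$((size - 1))"   "$url" -o part2 &
        wait
        cat part1 part2 > big.iso && rm -f part1 part2

    For plain downloads (as opposed to a general-purpose proxy), segmented downloaders such as aria2 already do this splitting automatically, and each connection can be routed out a different link by the gateway.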

  • How to make Windows command prompt treat single quote as though it is a double quote?

    - by mark
    My scenario is simple - I am copying script samples from the Mercurial online book (at http://hGBook.red-bean.com) and pasting them into a Windows command prompt. The problem is that the samples in the book use single-quoted strings. When a single-quoted string is passed on the Windows command prompt, the latter does not recognize that everything between the single quotes belongs to one string. For example, the following command:

        hg commit -m 'Initial commit'

    cannot be pasted as is into a command prompt, because the latter treats 'Initial commit' as two strings - 'Initial and commit'. I have to edit the command after pasting and it is annoying. Is it possible to instruct the Windows command prompt to treat single quotes similarly to double ones?

    EDIT: Following the reply by JdeBP I have done a little research. Here is the summary. The Mercurial entry point looks like so (it is a Python program):

        def run():
            "run the command in sys.argv"
            sys.exit(dispatch(request(sys.argv[1:])))

    So, I have created a tiny Python program to mimic the command-line processing used by Mercurial:

        import sys
        print sys.argv[1:]

    Here is the Unix console log:

        [hg@Quake ~]$ python 1.py "1 2 3"
        ['1 2 3']
        [hg@Quake ~]$ python 1.py '1 2 3'
        ['1 2 3']
        [hg@Quake ~]$ python 1.py 1 2 3
        ['1', '2', '3']

    And here is the respective Windows console log:

        C:\Work>python 1.py "1 2 3"
        ['1 2 3']
        C:\Work>python 1.py '1 2 3'
        ["'1", '2', "3'"]
        C:\Work>python 1.py 1 2 3
        ['1', '2', '3']

    One can clearly see that Windows does not treat single quotes as double quotes. And this is the essence of my question.

  • Oracle: 1 Large Server vs. 2 Smaller Servers?

    - by nvahalik
    We are in the planning stages of setting up our production Oracle 10gR2 environment. Our budget gives us the ability to buy 2 processor licenses of Oracle DB Standard Edition. We have minimal experience with Oracle, so I'll defer to anyone who has used it. We are trying to decide if we should set up a single dual quad-core box or 2 individual quad-core boxes in a RAC configuration. Our DB right now is about 60 GB, and at our peak we'll have up to 150 concurrent users. Most of the big stuff is done via batch processing at night. My gut tells me that having 2 boxes in a RAC configuration can't be a bad thing, because it provides a true hardware failover solution. The DB would be stored in a shared LUN on a SAN via iSCSI. Plus, if we ever need to add capacity, we already have boxes in place that can be upgraded with extra procs (I assume with zero downtime, since it's set up in a RAC config) if we add extra licenses, or with RAM. Does RAC have any performance penalties? Will it add extra latency? Is there any true advantage to having dual-processor boxes running these systems? If we build out the Oracle boxes with special hardware - hardware iSCSI cards, TOE NICs - will these boxes be solid? We are deploying on 64-bit Windows. So what would you do? One box or two?

  • Wildcard subdomain setup... changing host IP throws off client A records... what to do...

    - by Joe
    Here is the current setup (in a nutshell). The site is set up with a wildcard subdomain, so *.website.com is accessible. Clients can then domain-map their own domains with an A record to the server IP address, and it will translate them to the appropriate *.website.com with redirections and environment variables in .htaccess. Everything is working perfectly... but now comes the problem. The site has grown larger than a single DQC (dual quad-core) Xeon server can handle at peak times. Looking at cloud options seems tempting, but clients are pointing their domains to a single IP address with the A record (our server). Now, this was probably bad planning from the start, but the question is: if this were to be done today, how would we set it up so that clients use a CNAME, perhaps, to point their domains to our server rather than an A record? And, if that is not possible for the root domain, how can we then use multiple IP addresses on our side to handle the incoming HTTP requests? Complex enough? Hope I've explained it well!

  • How to change the MAC address in Win 8 to spoof a Roku Player through a WiFi splash page?

    - by luser droog
    My Linux laptop died yesterday and now I can't watch TV. Let me explain. I use a Roku player to stream Netflix shows to my television, and a year or two ago the internet service provided in my apartment complex added a splash page to get through the router and onto the net. After not too many days, I remembered that internet devices identify themselves with a MAC address. So I delved into the manpage of ifconfig and discovered that I could persuade my laptop to pretend to be the Roku player, connect, click through the splash page, disconnect, and change it back. This would allow the Roku to connect for about 24 hours, when I would have to do it again. But the laptop died yesterday during my smoke break, so during lunch I ran to OfficeMax and got a new one. But I don't know where to begin looking for where to change the MAC address in Windows 8 (assuming it's possible). I know I can try dual-boot, or a keychain OS, or possibly other things to resurrect my old method. But is it possible to get Windows to do it?
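
    For reference, the old Linux routine was roughly the following (a sketch: the interface name and MAC address are placeholders for the Roku's actual address):

        # Clone the Roku's MAC, connect and click through the splash
        # page, then revert.
        ifconfig wlan0 down
        ifconfig wlan0 hw ether 00:0d:4b:aa:bb:cc
        ifconfig wlan0 up
        # ...authenticate through the splash page, then restore the
        # original address the same way.

    On Windows, where the driver allows it, the equivalent knob is the adapter's "Network Address" property (Device Manager, network adapter, Advanced tab).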

  • VPN with VLANs? [closed]

    - by Craig
    As usual, I'm sure I'm in way over my head on this one. My networking skills are limited, so bear with me if you will. What I have are a few testing servers at my house as well as at a friend's house that I want to link together so they can see each other (VPN, right? I've done those before). We want to be able to see all the servers and work with them from either location. All the servers also need to be able to see each other. But we don't want to see each other's PCs, printers, PS3s, etc. How do we pull that trick off? Multiple VLANs?... subnets?... what? If hardware matters, I have an old PC I was planning on loading pfSense onto, because my current el-cheapo router doesn't support VPN. The VPN linking the houses is about the only thing I'm sure on. Beyond that, I'm lost. I'm not a complete noob, but like I said, I'm not so sharp with the more complex networking. I do however read well... So use lots of descriptive words and feel free to link away to long dry articles if necessary. :-)

  • Adding a transaction ID to ruby-on-rails logs

    - by Blue Warrior NFB
    We have a RoR app (Rails version 3.2.15 right now). As it has been getting busier, the log files it's producing are becoming less and less useful for troubleshooting. When they come in like this, it's not a problem:

        Started GET "/accounts/28088166/kittens/22894/rendered_png?file_id=5d3eaec77954a489b5ddd75143091767&kitten_store_id=9970569bbacf7b6dbeb4eb9295960d69&size=large" for 172.16.202.30 at 2013-11-12 13:45:00 +0000
        Processing by KittenController#rendered_png as HTML
          Parameters: {"file_id"=>"5d3eaec77954a489b5ddd75143091767", "kitten_store_id"=>"9970569bbacf7b6dbeb4eb9295960d69", "size"=>"large", "kitten_cam_id"=>"280941", "id"=>"kjlak357aw479607t"}
        Rendered text template (0.0ms)
        Sent data  (1.8ms)
        Completed 200 OK in 1037.4ms (Views: 1.4ms | ActiveRecord: 98.4ms)

    Short request, quickly assembled, all the relevant log lines in one block. However, not all of our code renders in 1037ms. There are a few calls that can exceed several seconds, and during that time several of these quicker ones can come in. When that happens, it's very, very hard to identify which log lines belong to which GET:

        Sent data  (4.1ms)
        Completed 200 OK in 767.4ms (Views: 3.2ms | ActiveRecord: 72.2ms)
        Completed 200 OK in 2338.0ms (Views: 0.2ms | ActiveRecord: 0.0ms)

    Ooookaaaay... which goes to what? Is it possible to add something like a transaction ID to these log lines? The log-spam would still be interspersed, but at least grep-magic would give me the unified entries that I need.
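
    Unless I'm misreading the 3.2 release notes, Rails 3.2 ships exactly this as tagged logging: setting config.log_tags = [ :uuid ] in config/environments/production.rb prefixes every log line with the per-request UUID (the symbols are called as methods on the ActionDispatch request object). That gives precisely the grep handle described above; worth verifying the option against the exact minor version in use, but 3.2.15 should have it.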

  • Nginx and Wordpress side-by-side with static directory alias?

    - by user117161
    I'm an Nginx novice, but I have it set up with WordPress Multisite (subdirectories) and php-fpm, and it's working great as is. This lets me set up WordPress sites off the web root:

        domain.com/site1  - a WordPress network single site, which renders as expected
        domain.com/site2  - ditto, etc.

    Concurrently, I can easily create static files in the web root that don't conflict or interact with WordPress, and they are also rendered normally:

        domain.com/hello.html        - rendered normally
        domain.com/hello.php         - rendered normally, including PHP processing
        domain.com/static/hello.php  - rendered normally (as long as "static" isn't a WP single-site name)

    What I'd like to do, and this is where I'm out of my depth with nginx.conf, is create a root directory domain.com/static and put static sites in there:

        domain.com/static/site3
        domain.com/static/site4

    and have Nginx check each request that comes into the root before handing off to WordPress. So a request comes in for domain.com/site3; Nginx checks whether it exists in the /static folder (domain.com/static/site3); if so, it serves that content while maintaining the root URI (domain.com/site3, with content from domain.com/static/site3); if not, it lets WordPress check whether /site3 is a WordPress single network site, as it does now, and the process continues normally.

    In nginx.conf, in the server section, I start with this try_files rule:

        location / {
            try_files $uri $uri/ /index.php?q=$uri&$args;
        }

    I then include a bunch of WordPress-specific rules as identified at http://codex.wordpress.org/Nginx under the subdirectory section. I can see that rewrite rules might take care of it easily, but in my experimentation I've only achieved a bunch of looping (/static/static/static, etc.) and managed to bypass WordPress when the looping stopped. Sorry if this is a very long-winded way of asking a simple question, but I'm definitely learning some of this stuff for the first time. Thanks!
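
    One way to express "look under /static first, then fall through to WordPress" is to put the /static prefix at the front of the existing try_files chain. A sketch (untested alongside the codex rules; assumes the static sites are served by index.html files):

        location / {
            # Try the URI under /static (file, then directory index),
            # then as a real file in the web root, then hand off to
            # WordPress. The browser URL stays domain.com/site3.
            try_files /static$uri /static$uri/index.html
                      $uri $uri/ /index.php?q=$uri&$args;
        }

    Because try_files only rewrites the path internally, it avoids the redirect loops that rewrite rules tend to produce here.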

  • Despeckle line art

    - by Dour High Arch
    We have a number of line-art charts unfortunately saved as JPEGs. They are now riddled with distracting compression artifacts, or "speckles". Is there any way of removing these? I do not have the original files and it will be very difficult to recreate them. I am running Windows 7 and tried Paint.NET; none of the filters help. Posterize washes out all the colors and leaves the speckles. Blur makes text unreadable. Noise Reduction wrecks the antialiasing of curved lines and perversely enhances the speckles, making them look like checkerboards. Yes, I have Googled for software to do this; there are many programs that advertise despeckling but, after my experience with Paint.NET, I do not want to experiment with applications that show no before-and-after images. The only example I have seen that does what I want is from a Photoshop tutorial. I have dozens of files, and the tutorial requires considerable manual fine-tuning. I would prefer to automate or batch-process this task. Commercial apps are fine, but I do not want to spend over $600 and learn a complex program for a single task.
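
    Before spending money, it may be worth a pass with ImageMagick, which is free and scriptable for batch work. A sketch (the filter choices are starting points to test on copies, not a known-good recipe for these particular charts):

        # Batch despeckle; write lossless PNGs so no new JPEG
        # artifacts are introduced. -despeckle is ImageMagick's
        # speckle filter; the -level stretch pushes near-white
        # compression noise to pure white.
        mkdir -p cleaned
        for f in *.jpg; do
            convert "$f" -despeckle -despeckle -level 5%,95% \
                "cleaned/${f%.jpg}.png"
        done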

  • SQL queries break our game! (Back-end server is at capacity)

    - by TimH
    We have a Facebook game that stores all persistent data in a MySQL database that is running on a large Amazon RDS instance. One of our tables is 2GB in size. If I run any query on that table that takes more than a couple of seconds, any SQL action performed by our game will fail with the error:

        HTTP/1.1 503 Service Unavailable: Back-end server is at capacity

    This obviously brings down our game! I've monitored CPU usage on the RDS instance during these periods, and though it does spike, it doesn't go much over 50%. Previously we were on a smaller instance size and it did hit 100%, so I'd hoped just throwing more CPU capacity at the problem would solve it. I now think it's an issue with the number of open connections. However, I've only been working with SQL for 8 months or so, so I'm no expert on MySQL configuration. Is there perhaps some configuration setting I can change to prevent these queries from overloading the server, or should I just not be running them whilst our game is up? I'm using MySQL Workbench to run the queries. Here's an example:

        SELECT * FROM BlueBoxEngineDB.Transfer
         WHERE Amount = 1000 AND FromUserId = 4 AND Status = 'Complete';

    As you can see, it's not overly complex. There are only 5 columns in the table. Any help would be very much appreciated - thanks!
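
    If there is no index covering those WHERE columns, a query like that scans the whole 2GB table, and on MySQL (particularly with MyISAM's table-level locks) that alone can starve every other query while it runs. A first check, as a sketch (the column order is a guess from the query shown, so verify with EXPLAIN):

        -- Does the query use an index at all?
        EXPLAIN SELECT * FROM BlueBoxEngineDB.Transfer
         WHERE Amount = 1000 AND FromUserId = 4 AND Status = 'Complete';

        -- If not, add a composite index matching the predicates:
        CREATE INDEX idx_transfer_lookup
            ON BlueBoxEngineDB.Transfer (FromUserId, Status, Amount);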

  • System Issues and Major Malfunctions after Failed Hibernation Exit

    - by Sarah Seguin
    I have an HP G71-340US that went into hibernation mode for a while, and when I tried coming out of it, I got an error message:

        Your computer cannot come out of hibernation.
        Status: 0xc000009a
        Info: A fatal error occurred processing the restoration data.
        File: \hiberfil.sys
        Any information that was not saved before the computer went
        into hibernation will be lost. Enter=Continue

    So I hit continue and it ran soooo super slow. It was seriously crawling. Finally I gave up and turned it off manually (i.e. press and hold the power button). It's been a week or two since then, and EVERY SINGLE TIME I have tried to do ANYTHING it takes forever. When I say forever, I literally mean it takes 5-7 minutes to load the internet, then the page itself, then to click a link, and so on and so forth. Eventually everything just goes "not responding" and I have to give up (4-6 HOURS later). I also cannot access my thumb/jump drives once I've managed to load Windows. I was going to try running Malwarebytes in case of a virus, but Windows Explorer develops errors and goes "not responding" on me. Currently I'm running chkdsk and nearly every file is coming back unreadable. I've let it run the last 2 hours straight and I'm only at 6 percent with around 500+ errors and still going. Yes, I've taken logs of the errors via cell phone camera and patience. A week or two prior to this happening I had to change out the hard drive due to blunt-force trauma next to the mouse. OH! Running on Windows 7 :) And I've tried loading the computer in safe mode and it makes absolutely no difference. Any and all help would be appreciated. I really don't know what to do from here and I'm kind of freaking out. I've Googled different parts of the error and things that I've done/seen, and there are so many different answers/topics that I thought it best to just post the question.

  • Minecraft server hosting hardware specifications [on hold]

    - by Andrew Wright
    I am planning on purchasing a server to rent out Minecraft game servers, largely to friends. I am planning on purchasing a 128GB RAM server to save on colocation costs (as I am likely to need more than 32GB and would otherwise have to rent 2U of space...). I am hoping for some advice about the processing power needed to deal with this amount of RAM. The servers will be run in a shared environment on Linux in a VM to make backups easier. The server I have in mind is dual-CPU. I have been considering, at the low end, dual Xeon E5-2609V2 quad-core 2.5GHz, and at the high end, dual Xeon E5-2650V2 eight-core 2.6GHz. The differences between these are 6.4 GT/s vs 8 GT/s, and £3000 for the lower-spec server vs £4300 for the higher. I was hoping for advice on whether it is worth paying for the extra/higher-speed processor or whether I would be wasting my money. Thank you for any help - I appreciate that this is not directly related to professional system administration.

  • Recovering a broken NTFS filesystem?

    - by OverTheRainbow
    A much-needed Windows Update broke a Vista laptop that was running fine until then: after booting up, Windows displays "Please wait..." but it never goes anywhere. I waited for a couple of hours; there is a bit of disk activity, but it didn't work out in the end. I booted with the Vista DVD and chose "Repair your computer", which said that there was nothing wrong :-/ Next, I booted it up with a Linux USB keydrive and ran GParted 0.8.1 (which includes ntfsresize v2011.4.12AR.4 libntfs-3g), which displays a bunch of warnings for the NTFS partition where the Vista system is located, such as:

        ntfs_mst_post_read_fixup: magic: 0x00000000 size: 1024 usa_ofs: 0 usa_count: 65535: Invalid argument
        Record 16 has no FILE magic (0x0)

    Next, I ran ntfsfix /dev/sda2, which said:

        Mounting volume... OK
        Processing of $MFT and $MFTMirr completed successfully.
        NTFS volume version is 3.1.
        NTFS partition /dev/sda2 was processed successfully.

    Next, I rebooted Vista, which ran CHKDSK before rebooting. But I'm still getting nowhere with "Please wait..." Before I copy the user's data to another host and reinstall Vista from a DVD, does someone know what else I could try? Thank you.

    Edit: In case someone else has the same issue... After the BIOS, hit F8 and choose "Repair your computer", followed by "Toshiba HDD Recovery". In addition to a 1.5GB partition labelled "WinRE", the hard disk contains a second partition labelled "Data" from which the application will fetch a system image and reinstall it in the "Vista" partition. Make sure you copy your data out of the system partition before doing this.

  • Concatenating gzipped Apache logs

    - by markdrayton
    We rotate and compress our Apache logs each day, but it's become apparent that this isn't frequent enough. An uncompressed log is about 6G, which is getting close to filling our log partition (yep, we'll make it bigger in the future!) as well as taking a lot of time and CPU to compress each day. We have to produce a gzipped log for each day for our stats processing. Obviously we could move our logs to a partition with more space, but I also want to spread the compression overhead throughout the day. Using Apache's rotatelogs we can rotate and compress the log more often - hourly, say - but how can I concatenate all the hourly compressed logs into a running compressed log for the day, without decompressing the previous logs? I don't want to uncompress 24 hours' worth of data and recompress it, because that has all the disadvantages of our current solution. Gzip doesn't seem to offer any append or concatenate option, but perhaps I've missed something obvious. This question suggests straight shell concatenation "works" in that the archive can be decompressed, but the fact that gzip -l doesn't work on the result seems a bit dodgy. Alternatively, perhaps this is still a bad way to do things. Other suggestions are welcome - our only constraints are our relatively small log partition and the need to provide a daily compressed log.
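
    Plain concatenation is in fact documented gzip behaviour, not a trick: the format allows multiple members per file, gunzip and zcat decompress all of them in order, and the stated casualty is exactly that gzip -l reports only the last member (see ADVANCED USAGE in gzip(1)). So the hourly job can look like this sketch (paths and the rotation naming are placeholders):

        # Compress the hour just rotated, append it as a new member
        # to the running daily archive, then drop the hourly file.
        hour_log=/var/log/apache/access.$(date +%Y%m%d.%H).log
        day_gz=/var/log/apache/access.$(date +%Y%m%d).log.gz
        gzip -9 "$hour_log"                # writes ${hour_log}.gz
        cat "${hour_log}.gz" >> "$day_gz"
        rm -f "${hour_log}.gz"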

  • amazon ec2-medium apache requests per second terrible

    - by TheDayIsDone
    EDITED -- test running from localhost now to rule out network... I have a c1.medium using EBS. When I do an Apache benchmark and I'm just printing a "hello" for the test from localhost - no database hits - it's very slow. I can repeat this test many times with the same results. Any thoughts? Thanks in advance.

        ab -n 1000 -c 100 http://localhost/home/test/

        Benchmarking localhost (be patient)
        Completed 100 requests
        Completed 200 requests
        Completed 300 requests
        Completed 400 requests
        Completed 500 requests
        Completed 600 requests
        Completed 700 requests
        Completed 800 requests
        Completed 900 requests
        Completed 1000 requests
        Finished 1000 requests

        Server Software:        Apache/2.2.23
        Server Hostname:        localhost
        Server Port:            80
        Document Path:          /home/test/
        Document Length:        5 bytes
        Concurrency Level:      100
        Time taken for tests:   25.300 seconds
        Complete requests:      1000
        Failed requests:        0
        Write errors:           0
        Total transferred:      816000 bytes
        HTML transferred:       5000 bytes
        Requests per second:    39.53 [#/sec] (mean)
        Time per request:       2530.037 [ms] (mean)
        Time per request:       25.300 [ms] (mean, across all concurrent requests)
        Transfer rate:          31.50 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        0    7   21.0      0     73
        Processing:    81 2489  665.7   2500   4057
        Waiting:       80 2443  654.0   2445   4057
        Total:         85 2496  653.5   2500   4057

        Percentage of the requests served within a certain time (ms)
          50%   2500
          66%   2651
          75%   2842
          80%   2932
          90%   3301
          95%   3506
          98%   3762
          99%   3838
         100%   4057 (longest request)
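
    One cheap check on shared instance types like c1.medium is CPU steal while the benchmark runs; if the hypervisor is throttling the instance, no amount of Apache tuning will show it:

        # Run alongside ab and watch the last column ('st'):
        # sustained double-digit steal means the host, not Apache,
        # is the limit.
        vmstat 5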

  • Cannot Install Windows 7 SP1 (64-bit)

    - by Clever Human
    I have tried every way I know how to get Windows 7 SP1 to install. It fails every time. Below is what looks like the relevant contents of the CBS.log file. If there are further details that would help, or more information I can gather, I will get it.

        2011-08-15 10:32:52, Info CBS Startup: Package: Package_for_KB976902~31bf3856ad364e35~amd64~~6.1.1.17514 completed startup processing, new state: Installed, original: Installed, targeted: Installed. hr = 0x80070490
        2011-08-15 10:32:52, Info CBS WER: Generating failure report for package: Package_for_KB976932~31bf3856ad364e35~amd64~~6.1.1.17514, status: 0x80070490, failure source: CBS Other, start state: Partially Installed, target state: Installed, client id: SP Coordinater Engine
        2011-08-15 10:32:52, Info CBS Failed to query DisableWerReporting flag. Assuming not set... [HRESULT = 0x80070002 - ERROR_FILE_NOT_FOUND]
        2011-08-15 10:32:52, Info CBS Failed to add %windir%\winsxs\pending.xml to WER report because it is missing. Continuing without it...
        2011-08-15 10:32:52, Info CBS Failed to add %windir%\winsxs\pending.xml.bad to WER report because it is missing. Continuing without it...
        2011-08-15 10:32:52, Info CBS SQM: Reporting package change completion for package: Package_for_KB976932~31bf3856ad364e35~amd64~~6.1.1.17514, current: Partially Installed, original: Partially Installed, target: Installed, status: 0x80070490, failure source: CBS Other, failure details: "(null)", client id: SP Coordinater Engine, initiated offline: False, execution sequence: 517, first merged sequence: 517
        2011-08-15 10:32:52, Info CBS SQM: Upload requested for report: PackageChangeEnd_Package_for_KB976932~31bf3856ad364e35~amd64~~6.1.1.17514, session id: 101457924, sample type: Standard
        2011-08-15 10:32:52, Info CBS SQM: Ignoring upload request because the sample type is not enabled: Standard

    I have downloaded the service pack and run it from the EXE, I have installed it from Windows Update, and I have run every troubleshooter I could find. Nothing has worked so far. Any advice would be appreciated.
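
    For what it's worth, 0x80070490 is ERROR_NOT_FOUND, which in CBS logs usually points at corruption in the component store. The standard repair on Windows 7 before retrying SP1 is Microsoft's System Update Readiness Tool (KB947821, also known as CheckSUR), which scans and repairs the servicing store; its own log under %windir%\Logs\CBS should then name anything it fixed.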

  • Tunnel network requests with Windows 7

    - by mark
    I have a Windows 7 64-bit Pro client in a private LAN behind a Netgear WGR614v7 router. I also have a remote Debian server machine outside. I'd like to tunnel all traffic (or specified ports/protocols) through this outside server, so when I'm on the Windows machine and I request serverfault.com, it would appear to come not from the WGR614v7's public IP but from the server. It's not only about HTTP traffic; it's basically about everything I'd like to tunnel: other TCP ports, even UDP, etc. It must be transparent to the applications, i.e. they shouldn't be aware of this. All their requests should just appear as coming from the server, with the tunnel between them taking care of the packets. I'm aware of e.g. PuTTY and forwarding individual ports or using it as a SOCKS proxy, but not many applications support this, and the support in Windows itself looks non-existent to me. I might add it should be something "reasonably" easy to set up. I've heard about PPTP, but I'm unsure about its security implications (by design). Should I go for a VPN? There seem to be two common IPsec solutions for Linux (Openswan and strongSwan); why would I pick one over the other? I also fear that setting up a VPN might be quite complex; OTOH maybe it's the only sane way to do things right? Or is OpenVPN sufficient? I'm seeking open (source) solutions. What other options do I have, or which direction should I head?
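
    The transparency requirement (all ports, UDP included, no per-application configuration) is what rules out SOCKS and points at a routed VPN. With OpenVPN that part comes down to one directive plus NAT on the Debian side; as a sketch (the 10.8.0.0/24 tunnel subnet and eth0 are assumptions):

        # In the OpenVPN server config, push a default route so the
        # client sends *all* traffic through the tunnel:
        #     push "redirect-gateway def1"
        # Then on the Debian box, forward and masquerade it:
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE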

  • Why would my network slow down?

    - by monkthemighty
    The network at my work has about 40 computers on it and quite a few printers. When a lot of people are working, the network slows down. I can test the ping between my computer and the router and it will keep rising, sometimes to the point that it times out. The router we are using runs Ubuntu on an Atom processor and has 4GB of RAM. When the network slows, the process ksoftirqd uses most if not all of the processing power. I have found that ksoftirqd is a kernel thread that handles deferred interrupt (softirq) work. When the network slows down I have also captured packets on the router with tshark and looked at them in Wireshark on my laptop. The captures show a lot of packets flagged TCP Dup ACK and TCP Retransmission. The destinations of the duplicate ACKs and retransmissions cover most of the computers on the network, but some receive far more than others. What could this problem be caused by?
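
    One way to turn a capture into a suspect list is to count retransmissions per destination (a sketch: capture.pcap is a placeholder, and older tshark builds take -R rather than -Y):

        # Rank hosts by retransmitted segments addressed to them; the
        # top entries are the likeliest bad links or switch ports.
        tshark -r capture.pcap -Y tcp.analysis.retransmission \
            -T fields -e ip.dst | sort | uniq -c | sort -rn | head

    Retransmissions concentrated on a few hosts tend to point at duplex mismatches, bad cabling, or a failing switch port rather than at the router itself.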

  • upload process times out for different time zones

    - by shilezi
    I have an in-house (NY) app that clients can upload files to. Usually uploads go pretty quickly for most clients, but this particular client in the UK always has problems with uploads. I'm not sure if they see any errors, but since we log all exceptions, we see this:

        Error: System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown. ---> System.Net.WebException: The operation has timed out
           at System.Web.Services.Protocols.WebClientProtocol.GetWebResponse(WebRequest request)
           at System.Web.Services.Protocols.HttpWebClientProtocol.GetWebResponse(WebRequest request)
           at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
           at ...

    This shouldn't even be an issue, because we noticed they had this problem before, so we bumped these settings up to maxRequestLength="2097151" and executionTimeout="14400". Investigating the error further, I read that it could be a thread timeout, since the default is 20 minutes:

        A worker process with process id of '...' serving application pool '...' was shutdown due to inactivity. Application Pool timeout configuration was set to 20 minutes. A new worker process will be started when needed.

    The problem is I am not entirely sure that really is the case, as all other clients, mostly North American, have no issues - but then their uploads don't seem to go beyond 80MB, and the UK client has done a 700MB upload before that I know of. We have tested 750MB before, and the whole process took about 15 minutes for upload and processing. Any help on what the real issue here might be? Thanks.
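
    Worth noting: the quoted recycle message describes the application pool's idle timeout, which shuts a worker down after 20 idle minutes; that would explain a slow first request after a quiet period, not a failure mid-upload. For one long transfer, the more likely candidates are executionTimeout (already raised, though it only applies when compilation debug="false"), idle limits on proxies or firewalls along the UK route, and plain elapsed time - a 700MB upload on a slow transatlantic link can exceed windows an 80MB one never touches, so it may be worth checking whether the failures correlate with duration rather than with file size.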

  • Looking for a powershell script that can pull a file from a set of PCs and FTP it

    - by DangeRuss
    I'm looking to write a script (preferably PowerShell) that will essentially copy a file from a bunch of PCs and FTP it to a server. The structure of the environment is that we have a file on multiple PCs (around 50 or so) that needs to be placed on a server. Sometimes one of the PCs may be turned off, so the script would first need to ensure the PC is up and running (maybe a ping result), then it would need to go into a directory on that PC, pull a file off of it, rename the file, place it into a source directory, then remove the original. Naming convention doesn't matter, but a date/time stamp would be easiest. Ideally, it would be best to first move all the files to a source directory to save on FTP bandwidth, but since the files are all named the same, they must be renamed during the move. Move, not copy, because the directory needs to be empty so the file can be re-created the next day. Once moved to the source directory, all the files need to be FTP'd to a server for processing. After all of this, we need to know which PCs on the list did not respond so we can manually retrieve the file, so the script should output a file (txt is fine) listing the PCs that were offline. Everything is on one domain and the script will be run from a server with admin creds. Thank you!
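
    As a starting point, here is a minimal PowerShell sketch of that flow; every name below (the PC list, the share path, the FTP host and credentials) is a placeholder to adapt:

        # Ping each PC; move and rename its file over the admin
        # share; record the ones that don't respond.
        $pcs     = Get-Content .\pcs.txt
        $offline = @()
        foreach ($pc in $pcs) {
            if (Test-Connection -ComputerName $pc -Count 2 -Quiet) {
                $stamp = Get-Date -Format 'yyyyMMdd-HHmmss'
                Move-Item "\\$pc\c$\exports\data.csv" ".\staging\$pc-$stamp.csv"
            } else {
                $offline += $pc
            }
        }
        $offline | Out-File .\offline.txt

        # Upload everything staged, then clear the staging directory.
        $wc = New-Object System.Net.WebClient
        $wc.Credentials = New-Object System.Net.NetworkCredential('user','pass')
        Get-ChildItem .\staging | ForEach-Object {
            $wc.UploadFile("ftp://ftp.example.com/inbound/$($_.Name)", $_.FullName)
            Remove-Item $_.FullName
        }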
