Search Results

Search found 7107 results on 285 pages for 'processing efficiency'.


  • Server 2008 email on Event variables

    - by Jeff Miles
    One of the new features of Server 2008 is the ability to attach a task to a specific event in the event logs. One of the actions available is to send an email through an SMTP server. This is working great; however, it would be ideal if the Event contents could be placed in the message body. I have tried using $eventdescription and %eventdescription%, but those are just shots in the dark. Any amount of googling produces no results. Does anyone know if this is possible? Update: Sparks' suggestion below is a step in the right direction, I believe; however, that method doesn't seem to work for all values. For example, I can pull the RecordID, Severity and Channel as shown, but I can't use the same method to retrieve the EventID, or, most importantly, the description. Here's the raw XML from one event: [Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"] [System] [Provider Name="DFSR" /] [EventID Qualifiers="16384"]4412[/EventID] [Level]4[/Level] [Task]0[/Task] [Keywords]0x80000000000000[/Keywords] [TimeCreated SystemTime="2009-05-14T18:18:09.000Z" /] [EventRecordID]45692[/EventRecordID] [Channel]DFS Replication[/Channel] [Computer]servername.domain.com[/Computer] [Security /] [/System] [EventData] [Data]9046C3F4-843E-4A53-B941-4B20764072E5[/Data] [Data]D:\departments\Geomatics\Plan Quality\Data Processing\CG3533017 2009-05-13 KT FIXED[/Data] [Data]D:\departments[/Data] [Data]{26D5F604-E603-4F87-8EC3-DE9A945DA8FD}-v927199[/Data] [Data]Departments[/Data] [Data]swg.ca\files\departments[/Data] [Data]B8242CE2-F5EB-47DA-BA5B-1DD2F7EE3AB9[/Data] [Data]DFAA7A54-66CB-4C31-81A0-0F861382C32C[/Data] [Data]CG3533017 2009-05-13-{26D5F604-E603-4F87-8EC3-DE9A945DA8FD}-v927199[/Data] [/EventData] [/Event] I have tried using a ValueQuery for EventData, but it returns no data.
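
    For what it's worth, here is a hedged sketch of how the ValueQueries approach can be extended to the EventData section (the value names such as eventData1 and the task name are placeholders, and the XPath indexes assume the unnamed Data elements shown in the raw XML above). The rendered description text is not stored in the event XML itself, so the closest substitute is pulling the individual Data values by position. In the task XML exported with schtasks /query /tn TaskName /xml, the block would sit inside the EventTrigger element:

        <ValueQueries>
          <!-- System fields, as in the existing suggestion -->
          <Value name="eventRecordID">Event/System/EventRecordID</Value>
          <Value name="eventChannel">Event/System/Channel</Value>
          <!-- positional queries against the unnamed EventData/Data elements -->
          <Value name="eventData1">Event/EventData/Data[1]</Value>
          <Value name="eventData2">Event/EventData/Data[2]</Value>
        </ValueQueries>

    The named values can then be referenced as $(eventRecordID), $(eventData1) and so on in the action's message body, and the modified task re-imported with schtasks /create /xml.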

    Read the article

  • Apache https is slow

    - by raucous12
    Hey, I've set Apache up to use SSL with a self-signed certificate. With https (KeepAlive on), I can get over 3000 requests per second. However, with https (KeepAlive off), I can only get 13 requests per second. I know there is supposed to be a bit of an overhead, but this seems abnormal. Can anyone suggest how I might go about debugging this? Here is the ab log for https: Server Software: Apache/2.2.3 Server Hostname: 127.0.0.1 Server Port: 443 SSL/TLS Protocol: TLSv1/SSLv3,DHE-RSA-AES256-SHA,4096,256 Document Path: /hello.html Document Length: 29 bytes Concurrency Level: 5 Time taken for tests: 30.49425 seconds Complete requests: 411 Failed requests: 0 Write errors: 0 Total transferred: 119601 bytes HTML transferred: 11919 bytes Requests per second: 13.68 [#/sec] (mean) Time per request: 365.565 [ms] (mean) Time per request: 73.113 [ms] (mean, across all concurrent requests) Transfer rate: 3.86 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 190 347 74.3 333 716 Processing: 0 14 24.0 1 166 Waiting: 0 11 21.6 0 165 Total: 191 361 80.8 345 716 Percentage of the requests served within a certain time (ms) 50% 345 66% 377 75% 408 80% 421 90% 468 95% 521 98% 578 99% 596 100% 716 (longest request)
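
    With KeepAlive off every request pays a fresh TCP connect plus a full SSL handshake, and the DHE-RSA-AES256-SHA/4096 exchange shown in the ab header is expensive, so a big drop is expected; the question is how big. Two things worth checking, as a hedged mod_ssl sketch (the cache path is a distro-dependent assumption, and ab itself may still negotiate a full handshake per connection even with the server-side session cache on):

        # session cache so returning clients can resume instead of renegotiating
        SSLSessionCache        shmcb:/var/cache/mod_ssl/scache(512000)
        SSLSessionCacheTimeout 300
        # for comparison only: force a non-DHE cipher to measure how much of the
        # cost is the 4096-bit DH exchange
        SSLCipherSuite         AES256-SHA

    Comparing ab runs with and without the DHE cipher should show whether the handshake, rather than Apache itself, accounts for the 13 requests per second.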

    Read the article

  • Do I need a helper column, or can I do this with a formula?

    - by dwwilson66
    I am using this formula =IF((LEFT($B26,2)="<p"),0,IF($B26="",0,IF($F26<>"",0,(FIND("""../",$B26))))) to parse data similar to the following. <nobr>&nbsp;&nbsp;&nbsp;&nbsp;contractor information</nobr><br> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<a href="../City_Electrical_Inspectors.htm"><b> City Electrical Inspectors</b></a><br> <nobr>&nbsp;&nbsp;&nbsp;&nbsp;<a href="../City_Electrical_Inspectors.htm"><b>inspection</b></a></nobr><br> My problem comes in cases such as the first line, in which the line is neither a new paragraph nor a link, and my FIND returns an error of #VALUE! I'd like to create an IF test to scan the line for the existence of the pattern in my FIND statement before processing that statement. I figured that looking for an error condition may be the way to go. However, the only way I can envision this is as a self-referential formula, similar to the following pseudocode. IF(ISERROR($L26)=TRUE,$L26=0,L$26=the-result-of-the-formula-above) Can this be done with a formula or do I need to use a new helper column? Thanks.
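
    One way to do this without a helper column, assuming a version of Excel that has IFERROR (2007 or later), is to wrap only the FIND in IFERROR so the #VALUE! case collapses to 0; on older versions the same shape works with IF(ISERROR(FIND(...)),0,FIND(...)):

        =IF(LEFT($B26,2)="<p",0,IF($B26="",0,IF($F26<>"",0,IFERROR(FIND("""../",$B26),0))))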

    Read the article

  • How to make Thunderbird play nice with Google mail

    - by Christi
    Thunderbird and Gmail aren't exactly the best of friends. Gmail's tags mean that Thunderbird often downloads multiple copies of a single mail. Anything tagged in Gmail will appear in a folder related to that tag, the "all mail" folder, and possibly the "inbox" and "sent mail" folders too. Thus a mail with multiple tags could potentially be stored more than four times in a local Thunderbird cache. This can make searching difficult, and is obviously wasteful of disk space. The best solution I have come up with is as follows. First, operate a zero-inbox policy (i.e. use the inbox for processing live mail only and archive everything else), which eliminates the extra copy in the inbox. Secondly, configure Thunderbird not to sync the "Sent Mail" folder - this is a bit of a pain, since I actually find it quite useful to be able to look through just the mails I've sent, but a search can duplicate this functionality. In this way, most of the duplicates are removed, and only mail with tags is stored locally more than once. Ideally, however, I'd only like one copy of each mail to be stored locally. I am surprised Thunderbird doesn't store mail by some sort of hashing algorithm to prevent precisely this problem - but it wouldn't be compatible with the way the folders are mirrored in a local directory structure, I suppose. Can anyone think of a better way to get Thunderbird to cache a Google mail account locally efficiently?

    Read the article

  • Server Hosting + AWS

    - by ledy
    Since my dedicated servers are hosted at a "normal" hosting service, I wonder if there is a really cheap way to extend the server farm with AWS instances. E.g. it seems to be an efficient and flexible solution for data storage and resources for occasional data processing, too. However, it might be very inefficient to mix two data centres, transferring data from the current web hoster to Amazon and vice versa. In my case, the traffic for this continuous data exchange seems expensive, and the delay in moving the data back to the hoster leads to a lag. What are best practices for mixing non-AWS and AWS systems? E.g.: how to move the hoster's data to AWS as log file storage to run Urchin analysis, and/or port the log file data into a BigTable for exhaustive analysis there. After working with the data: how to bring it back to the hoster and use the data with the webservers there? I am not going to move the whole server farm to Amazon, only "separate" parts or tasks, if the transfer/exchange does not lead to increased cost.

    Read the article

  • How to make Windows command prompt treat single quote as though it is a double quote?

    - by mark
    My scenario is simple - I am copying script samples from the Mercurial online book (at http://hGBook.red-bean.com) and pasting them in a Windows command prompt. The problem is that the samples in the book use single-quoted strings. When a single-quoted string is passed on the Windows command prompt, the latter does not recognize that everything between the single quotes belongs to one string. For example, the following command: hg commit -m 'Initial commit' cannot be pasted as is in a command prompt, because the latter treats 'Initial commit' as two strings - 'Initial and commit'. I have to edit the command after paste and it is annoying. Is it possible to instruct the Windows command prompt to treat single quotes similarly to double quotes? EDIT Following the reply by JdeBP I have done a little research. Here is the summary: Mercurial's entry point looks like so (it is a python program): def run(): "run the command in sys.argv" sys.exit(dispatch(request(sys.argv[1:]))) So, I have created a tiny python program to mimic the command line processing used by Mercurial: import sys print sys.argv[1:] Here is the Unix console log: [hg@Quake ~]$ python 1.py "1 2 3" ['1 2 3'] [hg@Quake ~]$ python 1.py '1 2 3' ['1 2 3'] [hg@Quake ~]$ python 1.py 1 2 3 ['1', '2', '3'] [hg@Quake ~]$ And here is the respective Windows console log: C:\Work>python 1.py "1 2 3" ['1 2 3'] C:\Work>python 1.py '1 2 3' ["'1", '2', "3'"] C:\Work>python 1.py 1 2 3 ['1', '2', '3'] C:\Work> One can clearly see that Windows does not treat single quotes as double quotes. And this is the essence of my question.

    Read the article

  • SQL Server Replication Backup

    - by user18039
    Hi, We have a new system that runs on SQL Server 2008 R2 64-bit. There is a primary on-line transactional processing (OLTP) database that accepts a high volume of updates from several thousand Point of Sale systems at stores around the country. In order to protect this vital function, I have decided to introduce a dedicated reporting database server - from which multiple users will run some pretty complex reports. I realise that there were a number of choices, but I decided to use Transactional Replication as the mechanism for copying the data from the OLTP database to the new reporting database - one-way replication. The solution has worked well in test. I'm now being asked what changes need to be made to the backup policy to cover the architectural changes. I have read pages such as MSDN: Strategies for Backing Up and Restoring Snapshot and Transactional Replication, but I think these are overkill for my solution. In fact, my current thinking is that we simply need to continue making backups of the OLTP data and logs. If the reporting db or any of the system replication (e.g. distribution) databases fail, then it's no big deal - we can clear it all down and then re-create the replication. I realise that taking a complete snapshot of the OLTP would be time-consuming (approx 5 hours), but I'd be more relaxed about this than trying to restore backups of the various data and log files in the correct sequence. My view is that the complex strategies set out in the MSDN article would only be the way to go for a more complex replication solution than I have, e.g. if there were multiple subscribers with 2-way replication. Would you agree? I'd be grateful for any advice. Many thanks, Rob.
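
    For what it's worth, "continue making backups of the OLTP data and logs" can stay exactly as simple as it sounds - a hedged sketch, with the database name and paths as placeholders:

        -- full backup of the publisher (OLTP) database plus routine log backups;
        -- WITH COMPRESSION assumes SQL Server 2008 R2 Standard or later
        BACKUP DATABASE OltpDb TO DISK = N'E:\Backups\OltpDb_full.bak' WITH INIT, COMPRESSION;
        BACKUP LOG OltpDb TO DISK = N'E:\Backups\OltpDb_log.trn';

    The distribution and subscriber databases are then treated as disposable, as described above, and simply rebuilt by re-initialising replication after a failure.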

    Read the article

  • Oracle: 1 Large Server vs. 2 Smaller Servers?

    - by nvahalik
    We are in the planning stages of setting up our production Oracle 10gR2 environment. Our budget gives us the ability to buy 2 processor licenses of Oracle DB Standard Edition. We have minimal experience with Oracle so I'll defer to anyone who has used it. We are trying to decide if we should set up a single dual quad-core box or 2 individual quad-core boxes in a RAC configuration. Our DB right now is about 60 GB, and at our peak, we'll have up to 150 concurrent users. Most of the big stuff is done via batch processing at night. My gut tells me that having 2 boxes in a RAC configuration can't be a bad thing because it provides a true hardware failover solution. DB stored in a shared LUN on a SAN via iSCSI. Plus if we ever need to add capacity, we already have boxes in place that can be upgraded with extra procs (I assume with zero downtime, since it's set up in a RAC config) if we add extra licenses, or RAM. Does RAC have any performance penalties? Will it add extra latency? Is there any true advantage for having dual processor boxes running these systems? If we build out the Oracle boxes with special hardware: hardware iSCSI cards, TOE NICs, will these boxes be solid? We are deploying on 64-bit Windows. So what would you do? One box or two?

    Read the article

  • Nginx and Wordpress side-by-side with static directory alias?

    - by user117161
    I'm an Nginx novice, but I have it set up with Wordpress Multisite (subdirectories) and php-fpm, and it's working great as is. This lets me set up Wordpress sites off the web root: domain.com/site1 - a Wordpress network single site, which renders as expected. domain.com/site2 - ditto etc. Concurrently, I can easily create static files in the web root that don't conflict or interact with Wordpress, and they are also rendered normally. domain.com/hello.html - rendered normally domain.com/hello.php - rendered normally, including php processing domain.com/static/hello.php - rendered normally (as long as "static" isn't a WP single site name) What I'd like to do, and this is where I'm out of my depth with nginx.conf, is create a root directory domain.com/static, put static sites in there (domain.com/static/site3, domain.com/static/site4), and have Nginx check each request that comes into the root before handing off to Wordpress. So a request comes in for domain.com/site3; Nginx checks whether it exists in the /static folder, i.e. domain.com/static/site3; if static content exists there, it serves that content while maintaining the root URI (domain.com/site3 is served with the content from domain.com/static/site3); if not, it lets Wordpress check if /site3 is a Wordpress single network site, as it does now, and the process continues normally. In nginx.conf, in the server section, I start with this try_files rule: location / { try_files $uri $uri/ /index.php?q=$uri&$args; } I then include a bunch of Wordpress-specific rules as identified at http://codex.wordpress.org/Nginx under the subdirectory section. I can see that rewrite rules might take care of it easily, but in my experimentation I've only achieved a bunch of looping (/static/static/static, etc.) and managed to bypass Wordpress if the looping stopped. Sorry if this is a very long-winded way of asking a simple question, but I'm definitely learning some of this stuff for the first time. Thanks!
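
    Assuming the static directory lives directly under the same web root, one hedged way to get the "check /static first" behaviour without rewrites is to put the /static-prefixed paths at the front of the existing try_files list (paths in try_files are resolved against root, so no redirect loop is involved; the directory fallback may need an index directive to match):

        location / {
            # look for the request under /static first, then fall back to WordPress
            try_files /static$uri /static$uri/ $uri $uri/ /index.php?q=$uri&$args;
        }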

    Read the article

  • Adding a transaction ID to ruby-on-rails logs

    - by Blue Warrior NFB
    We have a RoR app (Rails version 3.2.15 right now). As it has been getting busier, the log-files it's producing are becoming less and less useful for troubleshooting. When they come in like this, it's not a problem: Started GET "/accounts/28088166/kittens/22894/rendered_png?file_id=5d3eaec77954a489b5ddd75143091767&kitten_store_id=9970569bbacf7b6dbeb4eb9295960d69&size=large" for 172.16.202.30 at 2013-11-12 13:45:00 +0000 Processing by KittenController#rendered_png as HTML Parameters: {"file_id"=>"5d3eaec77954a489b5ddd75143091767", "kitten_store_id"=>"9970569bbacf7b6dbeb4eb9295960d69", "size"=>"large", "kitten_cam_id"=>"280941", "id"=>"kjlak357aw479607t"} Rendered text template (0.0ms) Sent data (1.8ms) Completed 200 OK in 1037.4ms (Views: 1.4ms | ActiveRecord: 98.4ms) Short request, quickly assembled, all the relevant log-lines are in one block. However, not all of our code renders in 1037ms. There are a few calls that can exceed several seconds, and during that time several of these quicker ones can come in. When that happens, it's very, very hard to identify which log-lines belong to which GET. Sent data (4.1ms) Completed 200 OK in 767.4ms (Views: 3.2ms | ActiveRecord: 72.2ms) Completed 200 OK in 2338.0ms (Views: 0.2ms | ActiveRecord: 0.0ms) Ooookaaaay... which goes to what? Is it possible to add something like a transaction-ID to these log-lines? The log-spam would be interspersed, but at least grep-magic would give me the unified entries that I need.
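
    Rails 3.2 already ships tagged logging for exactly this, so a minimal sketch (in config/application.rb or the relevant environment file) is just:

        # prefix every log line written during a request with that request's UUID
        config.log_tags = [ :uuid ]
        # tags can also be computed from the request, e.g. to add the client IP:
        # config.log_tags = [ :uuid, lambda { |req| req.ip } ]

    With this in place, the interleaved Completed/Sent data lines each carry the same tag as their Started GET line, so a grep on the UUID pulls out one request's lines.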

    Read the article

  • System Issues and Major Malfunctions after Failed Hibernation Exit

    - by Sarah Seguin
    I have an HP G71-340US that went into hibernation mode for a while, and when I tried coming out of it, I got an error message: Your computer cannot come out of hibernation. Status: 0xc000009a Info: A fatal error occurred processing the restoration data. File: \hiberfil.sys Any information that was not saved before the computer went into hibernation will be lost. enter=continue So I hit continue and it ran soooo super slow. It was seriously crawling. Finally I gave up and turned it off manually (i.e. press and hold the button). It's been a week or two since then and EVERY SINGLE TIME I have tried to do ANYTHING it takes forever. When I say forever, I literally mean it takes 5-7 minutes to load the internet, then the page itself, then to click a link, and so on and so forth. Eventually everything just goes not responding and I have to give up (4-6 HOURS later). I also cannot access my thumb/jump drives once I've managed to load Windows. I was going to try running Malwarebytes in case of a virus, but Windows Explorer develops errors and goes not responding on me. Currently I'm running ScanDisk/chkdsk and just about every file is coming back unreadable. I let it run the last 2 hours straight in chkdsk and I'm only at 6 percent with around 500+ errors and still going. Yes, I've taken logs of the errors via cell phone camera and patience. A week or two prior to this happening I had to change out the hard drive due to blunt force trauma next to the mouse. OH! Running on Windows 7 :) And I've tried loading the computer in safe mode and it makes absolutely no difference. Any and all help would be appreciated. I really don't know what to do from here and I'm kind of freaking out. I've googled different parts of the error and things that I've done/seen, and there are so many different answers/topics that I thought it best to just post the question.

    Read the article

  • Recovering a broken NTFS filesystem?

    - by OverTheRainbow
    A much-needed Windows Update broke a Vista laptop that was running fine until then: After booting up, Windows displays "Please wait..." but it never goes anywhere. I waited for a couple of hours, there is a bit of disk activity, but it didn't work out in the end. I booted with the Vista DVD, chose "Repair your computer" which said that there was nothing wrong :-/ Next, I booted it up with a Linux USB keydrive, and ran Gparted 0.8.1 (which includes ntfsresize v2011.4.12AR.4 libntfs-3g) which displays a bunch of warnings for the NTFS partition where the Vista system is located such as: ntfs_mst_post_read_fixup: magic: 0x00000000 size: 1024 usa_ofs: 0 usa_count: 65535: Invalid argument Record 16 has no FILE magic (0x0) Next, I ran ntfsfix /dev/sda2, which said: Mounting volume... OK Processing of $MFT and $MFTMirr completed successfully. NTFS volume version is 3.1. NTFS partition /dev/sda2 was processed successfully. Next, I rebooted Vista, which did a CHKDSK, before rebooting. But I'm still getting nowhere with "Please wait..." Before I copy the user's data to another host and reinstall Vista from a DVD, does someone know what I could try? Thank you. Edit: In case someone else has the same issue... After the BIOS, hit F8 and choose "Repair your computer", followed by "Toshiba HDD Recovery". In addition to a 1,5GB partition labelled "WinRE", the hard disk contains a second partition labeled "Data" from which the application will fetch a system image and reinstall it in the "Vista" partition. Make sure you copy your data out of the system partition before doing this.

    Read the article

  • Minecraft server hosting hardware specifications [on hold]

    - by Andrew Wright
    I am planning on purchasing a server to rent out Minecraft game servers, largely to friends. I am planning on purchasing a 128GB RAM server to save on colocation costs (as I am likely to need more than 32GB and would have to rent 2U of space...) I am hoping for some advice about the processing power needed to deal with this level of RAM. The servers will be run in a shared environment on Linux in a VM to make backups easier. The server I have in mind is dual CPU. I have been considering, at the low end, dual Xeon E5-2609V2 Quad-Core 2.5GHz, and at the high end dual Xeon E5-2650V2 Eight-Core 2.6GHz. The difference between these is 6.4 GT/s versus 8 GT/s, and £3000 for the lower-spec server versus £4300 for the higher spec. I was hoping I could get advice about whether it is worth paying for the extra/higher-speed processor or whether I would be wasting my money. Thank you for any help - I appreciate that this is not directly related to professional system administration.

    Read the article

  • Concatenating gzipped Apache logs

    - by markdrayton
    We rotate and compress our Apache logs each day, but it's become apparent that this isn't frequent enough. An uncompressed log is about 6G, which is getting close to filling our log partition (yep, we'll make it bigger in the future!) as well as taking a lot of time and CPU to compress each day. We have to produce a gzipped log for each day for our stats processing. Obviously we could move our logs to a partition with more space, but I also want to spread the compression overhead throughout the day. Using Apache's rotatelogs we can rotate and compress the log more often -- hourly, say -- but how can I concatenate all the hourly compressed logs into a running compressed log for the day, without decompressing the previous logs? I don't want to uncompress 24 hours' worth of data and recompress it because that has all the disadvantages of our current solution. Gzip doesn't seem to offer any append or concatenate option, but perhaps I've missed something obvious. This question suggests straight shell concatenation "works", in that the archive can be decompressed, but the fact that gzip -l doesn't work seems a bit dodgy. Alternatively, perhaps this is still a bad way to do things. Other suggestions are welcome -- our only constraints are our relatively small log partitions and the need to provide a daily compressed log.
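
    The gzip format explicitly allows multiple members in one file: a daily archive built by appending each hourly archive decompresses in full with zcat/gunzip, and only the per-member figures reported by gzip -l are misleading (it reports on the last member, which is why the linked test looked dodgy). A hedged sketch, with file names and paths as placeholders:

        # append the freshly rotated hourly archive onto the running daily archive
        hourly=/var/log/apache/access_log.$(date +%Y%m%d-%H).gz
        daily=/var/log/archive/access_log.$(date +%Y%m%d).gz
        cat "$hourly" >> "$daily" && rm "$hourly"
        # sanity check: zcat reads every member even though gzip -l only reports the last
        zcat "$daily" | wc -l

    If the stats tooling only ever decompresses the file (rather than trusting gzip -l), this spreads the compression cost across the day without ever re-compressing.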

    Read the article

  • amazon ec2-medium apache requests per second terrible

    - by TheDayIsDone
    EDITED -- test running from localhost now to rule out network... I have a c1.medium using EBS. When I do an Apache benchmark and I'm just printing a "hello" for the test from localhost - no database hits - it's very slow. I can repeat this test many times with the same results. Any thoughts? Thanks in advance. ab -n 1000 -c 100 http://localhost/home/test/ Benchmarking localhost (be patient) Completed 100 requests Completed 200 requests Completed 300 requests Completed 400 requests Completed 500 requests Completed 600 requests Completed 700 requests Completed 800 requests Completed 900 requests Completed 1000 requests Finished 1000 requests Server Software: Apache/2.2.23 Server Hostname: localhost Server Port: 80 Document Path: /home/test/ Document Length: 5 bytes Concurrency Level: 100 Time taken for tests: 25.300 seconds Complete requests: 1000 Failed requests: 0 Write errors: 0 Total transferred: 816000 bytes HTML transferred: 5000 bytes Requests per second: 39.53 [#/sec] (mean) Time per request: 2530.037 [ms] (mean) Time per request: 25.300 [ms] (mean, across all concurrent requests) Transfer rate: 31.50 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 0 7 21.0 0 73 Processing: 81 2489 665.7 2500 4057 Waiting: 80 2443 654.0 2445 4057 Total: 85 2496 653.5 2500 4057 Percentage of the requests served within a certain time (ms) 50% 2500 66% 2651 75% 2842 80% 2932 90% 3301 95% 3506 98% 3762 99% 3838 100% 4057 (longest request)

    Read the article

  • Cannot Install Windows 7 SP1 (64-bit)

    - by Clever Human
    I have tried every way I know how to get Windows 7 SP1 to install. It fails every time. Below is what looks like the relevant contents of the CBS.Log file. If there are further details that would help or more information I can gather, I will get it. 2011-08-15 10:32:52, Info CBS Startup: Package: Package_for_KB976902~31bf3856ad364e35~amd64~~6.1.1.17514 completed startup processing, new state: Installed, original: Installed, targeted: Installed. hr = 0x80070490 2011-08-15 10:32:52, Info CBS WER: Generating failure report for package: Package_for_KB976932~31bf3856ad364e35~amd64~~6.1.1.17514, status: 0x80070490, failure source: CBS Other, start state: Partially Installed, target state: Installed, client id: SP Coordinater Engine 2011-08-15 10:32:52, Info CBS Failed to query DisableWerReporting flag. Assuming not set... [HRESULT = 0x80070002 - ERROR_FILE_NOT_FOUND] 2011-08-15 10:32:52, Info CBS Failed to add %windir%\winsxs\pending.xml to WER report because it is missing. Continuing without it... 2011-08-15 10:32:52, Info CBS Failed to add %windir%\winsxs\pending.xml.bad to WER report because it is missing. Continuing without it... 2011-08-15 10:32:52, Info CBS SQM: Reporting package change completion for package: Package_for_KB976932~31bf3856ad364e35~amd64~~6.1.1.17514, current: Partially Installed, original: Partially Installed, target: Installed, status: 0x80070490, failure source: CBS Other, failure details: "(null)", client id: SP Coordinater Engine, initiated offline: False, execution sequence: 517, first merged sequence: 517 2011-08-15 10:32:52, Info CBS SQM: Upload requested for report: PackageChangeEnd_Package_for_KB976932~31bf3856ad364e35~amd64~~6.1.1.17514, session id: 101457924, sample type: Standard 2011-08-15 10:32:52, Info CBS SQM: Ignoring upload request because the sample type is not enabled: Standard I have downloaded the service pack and ran it from the EXE, I have installed it from Windows Update, I have ran all the "troubleshooting" trouble shots I could find. Nothing has worked so far. Any advice would be appreciated.

    Read the article

  • Why would my network slow down?

    - by monkthemighty
    The network at my work has about 40 computers on it and quite a few printers. When there are a lot of people working, the network will be slow. I can test the ping between my computer and the router and it will keep rising, sometimes to the point that it times out. The router we are using is running Ubuntu on an Atom processor and it has 4GB of RAM. When the network slows, the process ksoftirqd will be using most if not all of the processing power. I have found that ksoftirqd is a process that handles IRQ requests. Also, when the network slows down, I have captured packets on the router using tshark and looked at them using Wireshark on my laptop. The capture shows a lot of packets with TCP Dup ACK and TCP Retransmissions. The destinations of the TCP Dup ACKs and TCP retransmissions are most of the computers on the network, but there are some that get far more than others. What could this problem be caused by?

    Read the article

  • upload process times out for different time zones

    - by shilezi
    I have an in-house (NY) app that clients can upload files to. Usually uploads go pretty quickly for most clients, but this particular client in the UK always has problems with uploads. We're not sure if they get any errors, but since we log all exceptions, we see this... Error: System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown. ---> System.Net.WebException: The operation has timed out at System.Web.Services.Protocols.WebClientProtocol.GetWebResponse(WebRequest request) at System.Web.Services.Protocols.HttpWebClientProtocol.GetWebResponse(WebRequest request) at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters) at ... This shouldn't even be an issue, because we noticed they had had this issue before, so we bumped these up to maxRequestLength="2097151" executionTimeout="14400". Investigating this error further, I read that it could be a thread timeout, since the default is 20 minutes. A worker process with process id of serving application pool was shutdown due to inactivity. Application Pool timeout configuration was set to 20 minutes. A new worker process will be started when needed. The problem is I am not entirely sure if that really is the case, as all other clients, mostly North American, have no issues - but then their uploads don't seem to go beyond 80MB, and the UK client has done a 700MB upload before that I know of. We have tested 750MB before and the whole process took about 15 minutes for upload and processing. Any help on what the real issue here might be? Thanks.
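
    For reference, the maxRequestLength/executionTimeout pair only governs the receiving ASP.NET application; the stack trace above is the calling side timing out, and the generated SOAP proxy has its own Timeout (inherited from WebClientProtocol, default 100,000 ms, i.e. 100 seconds). A hedged sketch, with UploadService standing in for whatever the generated proxy class is actually called:

        // raise the client-side limit on the web service call that performs the upload
        var svc = new UploadService();
        svc.Timeout = 60 * 60 * 1000;   // milliseconds; the default is 100000 (100 s)
        // Timeout = System.Threading.Timeout.Infinite (-1) removes the limit entirely

    That would explain why large, slow transatlantic uploads from the UK client fail while smaller North American ones finish inside the window.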

    Read the article

  • FTP Scripting Capabilities

    - by lmg
    I am looking for an FTP client that will allow me to do the following: include a GUI for setting up a number of FTP connections; support FTPS; run unattended on Windows Server 2008; retry failed transactions; support email; support custom scripts. I need to pull files from a few different servers and there are certain calculations that need to be done depending on which server the files come from. I've looked at SmartFTP and it looks like pretty much what I need except I can't get it to run as a Windows Scheduled Task (I currently have some support threads open in their forum). I've also looked at a few other FTP clients (Filezilla, RoboFTP, and AutoFTP (you can find the Windows 7 BSOD in this one!)) that haven't had the capabilities I'm looking for. Right now, I'm looking at WS_FTP and its scripting capabilities. It appears I can create a script to run as a scheduled task, but I can't add the script to a file transfer task. Does anyone have suggestions on how I can do post-transfer processing on the files or better yet how to integrate scripting into the file transfer task? I'm also open to other suggestions for FTP clients as well if you have them! If I can't find a suitable FTP client, custom scripting will just have to do the trick.

    Read the article

  • Looking for a powershell script that can pull a file from a set of PC's and FTP

    - by DangeRuss
    I'm looking to write a script (preferably PowerShell) that will essentially copy a file from a bunch of PCs and FTP it to a server. The structure of the environment is that we have a file on multiple PCs (around 50 or so) that needs to be placed on a server. Sometimes one of the PCs may be turned off, so the script would first need to ensure the PC is up and running (maybe a ping result), then it would need to go into a directory on that PC, pull a file off of it, rename the file, place it into a source directory, then remove the file. Naming convention doesn't matter, but a date/time stamp would be easiest. Ideally, it would be best to first move all the files to a source directory to save on FTP bandwidth, but since the files will be named the same, the files must be renamed during the move process. Move, not copy, because the directory needs to be empty so the file can be re-created the next day. Once moved to the source directory, all the files need to be FTP'd to a server for processing. After all of this, we need to know which PCs on the list did not respond so we can manually retrieve the file, so the script should output a file (txt is fine) that shows which PCs were offline. Everything is on one domain and the script will be run from a server with admin creds. Thank you!
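
    A hedged sketch of the shape this could take - the computer list, share path, file name and FTP details are all placeholders:

        # read the list of PCs, move each file to a staging directory with a timestamped name
        $computers = Get-Content 'C:\scripts\computers.txt'
        $staging   = 'C:\staging'
        $offline   = @()
        foreach ($pc in $computers) {
            if (Test-Connection -ComputerName $pc -Count 2 -Quiet) {
                $stamp = Get-Date -Format 'yyyyMMdd-HHmmss'
                # move (not copy) so the source directory is left empty for the next day
                Move-Item "\\$pc\c$\data\report.csv" (Join-Path $staging "$pc-$stamp.csv")
            } else {
                $offline += $pc    # remember PCs that did not answer the ping
            }
        }
        $offline | Out-File (Join-Path $staging 'offline.txt')

        # FTP everything in the staging directory to the server
        $wc = New-Object System.Net.WebClient
        $wc.Credentials = New-Object System.Net.NetworkCredential('ftpuser', 'ftppass')
        Get-ChildItem $staging -Filter '*.csv' | ForEach-Object {
            $wc.UploadFile("ftp://ftp.example.com/incoming/$($_.Name)", $_.FullName)
        }

    Error handling around the Move-Item and the upload (try/catch, logging) would be worth adding before trusting it unattended.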

    Read the article

  • What are the right questions to ask when deciding whether to use Chef or Puppet?

    - by John Feminella
    I am about to start a new project which will, in part, require deploying many identical nodes of approximately three different classes: Data nodes, which will run sharded instances of MongoDB. Application nodes, which will run instances of a Ruby on Rails application and an older ASP.NET MVC application. Processing nodes, which will run jobs requested by the application nodes. All the nodes will run on instances of Ubuntu 10.04, though they will have different packages installed. I have some familiarity with Chef from previous projects, though I don't consider myself an expert. In an effort to do due diligence, I have been investigating alternative possibilities. We have a number of folks in-house who are long-time Puppet users, and they have encouraged me to take a look. I am having trouble evaluating both choices, though. Chef and Puppet share much of the same domain terminology -- packages, resources, attributes, and so on -- and they have a common history that stems from taking different approaches to the same problem. So in some sense they are very similar. But much of the comparison information I've found, like this article, is a little outdated. If you were starting this project today, what questions would you ask yourself to decide whether you should use Chef or Puppet for configuration management? (Note: I don't want an answer to the question "Should I use Chef or Puppet?")

    Read the article

  • Strange 3-second tcp connection latencies (Linux, HTTP)

    - by user25417
    Our webservers with static content are experiencing strange 3 second latencies occasionally. Typically, an ApacheBench run ( 10000 requests, concurrency 1 or 40, no difference, but keepalive off) looks like this: Connection Times (ms) min mean[+/-sd] median max Connect: 2 10 152.8 3 3015 Processing: 2 8 34.7 3 663 Waiting: 2 8 34.7 3 663 Total: 4 19 157.2 6 3222 Percentage of the requests served within a certain time (ms) 50% 6 66% 7 75% 7 80% 7 90% 9 95% 11 98% 223 99% 225 100% 3222 (longest request) I have tried many things: - Apache2 2.2.9 with worker or prefork MPM, no difference (with KeepAliveTimeout 10-15) - Nginx 0.6.32 - various tcp parameters (net.core.somaxconn=3000, net.ipv4.tcp_sack=0, net.ipv4.tcp_dsack=0) - putting the files/DocumentRoot on tmpfs - shorewall on or off (i.e. empty iptables or not) - AllowOverride None is on for /, so no .htaccess checks (verified with strace) - the problem persists whether the webservers are accessed directly or through a Foundry load balancer Kernel is 2.6.32 (Debian Lenny backports), but it occurred with 2.6.26 also. IPv6 is enabled, but not used. Does the issue look familiar to anyone? Help/suggestions are much appreciated. It sounds a bit like a SYN,ACK packet getting lost or ignored.
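
    Three seconds is the classic initial SYN retransmission timeout, so max connect times of ~3015 ms do fit a first SYN (or its SYN,ACK reply) being dropped and the retry succeeding. Two quick checks, as a hedged sketch (interface and port are assumptions):

        # kernel counters for dropped SYNs / listen queue overflows
        netstat -s | grep -i -E 'listen|SYN'
        # capture bare SYNs on the web port and look for ~3 s gaps between retries
        tcpdump -ni eth0 'tcp port 80 and tcp[tcpflags] & (tcp-syn|tcp-ack) == tcp-syn'

    Seeing the retransmitted SYN on the wire but no SYN,ACK from the server would point at the listen backlog or a conntrack/iptables drop rather than at Apache or Nginx themselves.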

    Read the article

  • Linux Has Become Very Slow Dealing With Large Data

    - by Kohjah Breese
    Last year I bought a computer for around $1,800, so it is relatively high-end. When I first got it I was particularly pleased at how quickly it dealt with large MySQL queries, imports and exports. But somewhere along the way something has gone wrong and I am not sure how to diagnose the problem. Any job that involves processing large amounts of data, e.g. gzipping a file of c. 1GB+, UPDATEs on large MySQL tables, etc., has become very slow. I just performed an intensive ALTER statement on a 240,000,000-row table on a remote server, which is lower spec. This took about 10 minutes. However, performing the same query on a 167,000,000-row table on my computer went fine until it hit 860MB. Now it is only writing about 1MB every 15 seconds. Does anyone have any advice on debugging what the issue is? I am using Linux Mint (based on Ubuntu 12.04). The home partition is encrypted, which really slows down gzip. I have noticed the swap is barely used, but am not sure if that is because there is more than enough RAM. The filesystem is ext4. The MySQL server is on a separate hard drive, but it was fine when I first installed it. Other than the above issues, there are no other problems with it. I am going to install a fresh Ubuntu on the 4th hard drive to see if that is any different.

    Read the article

  • How many iptables block rules is too many

    - by mhost
    We have a server with a Quad-Core AMD Opteron Processor 2378. It acts as our firewall for several servers. I've been asked to block all IPs from China. In a separate network, we have some small VPS machines (256MB and 512MB). I've been asked to block China on those VPSs as well. I've looked online and found lists which require 4500 block rules. My question is: will putting in all 4500 rules be a problem? I know iptables can handle far more rules than that; what I am concerned about is that, since these are blocks that shouldn't have access to any port, I need to put these rules before any allow rules. This means all legitimate traffic needs to be compared against all those rules before getting through. Will the traffic be noticeably slower after implementing this? Will those small VPSs be able to handle processing that many rules for every new packet (I'll put an established allow before the blocks)? My question is not "How many rules can iptables support?", it's about the effect that these rules will have on load and speed. Thanks.
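
    The established-first rule mentioned above already keeps ongoing connections off the long chain, so only new connections walk the block list. For the small VPSs, an ipset set match collapses the ~4500 networks into a single iptables rule with a hash lookup instead of a linear scan - a hedged sketch (set name and zone file are placeholders; older ipset releases spell these commands slightly differently):

        # let established/related traffic bypass the block list entirely
        iptables -I INPUT 1 -m state --state ESTABLISHED,RELATED -j ACCEPT
        # load the country list into a hash:net set and block it with one rule
        ipset create china hash:net
        while read net; do ipset add china "$net"; done < cn.zone
        iptables -I INPUT 2 -m set --match-set china src -j DROP

    With plain iptables rules the per-packet cost of a new connection grows linearly with the number of rules; with a set match it is effectively constant, which matters far more on the 256MB/512MB guests than on the Opteron firewall.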

    Read the article

  • Vim: How to join multiples lines based on a pattern?

    - by ryz
    I want to join multiple lines in a file based on a pattern that both lines share. This is my example: {101}{}{Apples} {102}{}{Eggs} {103}{}{Beans} {104}... ... {1101}{}{This is a fruit.} {1102}{}{These things are oval.} {1103}{}{You have to roast them.} {1104}... ... I want to join the lines {101}{}{Apples} and {1101}{}{This is a fruit.} to one line {101}{}{Apples}{1101}{}{This is a fruit.} for further processing. Same goes for the other lines. As you can see, both lines share the number 101, but I have no idea how to pull this off. Any Ideas? /EDIT: I found a "workaround": First, delete all preceding "{1" characters from group two in VISUAL BLOCK mode with C-V (or similar shortcut), then sort all lines by number with :%sort n, then join every second line with :let @q = "Jj" followed by 500@q. This works, but leaves me with {101}{}{Apples} 101}{}{This is a fruit.}. I would then need to add the missing characters "{1" in each line, not quite what I want. Any help appreciated.
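
    A hedged Vim script sketch of the pattern-based join, assuming the first block's keys are exactly three digits and each partner line starts with "{1" plus the same digits; it appends each partner's text in place and then deletes the partner lines (the 3-digit assumption would need adjusting for other key widths):

        " for each {NNN} line, find the matching {1NNN} line below and append its text
        :g/^{\d\{3}}{/ let n = matchstr(getline('.'), '^{\zs\d\+') | let t = search('^{1' . n . '}', 'nW') | if t > 0 | call setline('.', getline('.') . getline(t)) | endif
        " then remove the now-merged {1NNN} lines
        :g/^{1\d\{3}}{/d

    Unlike the sort-based workaround, this keeps the original order and leaves the leading {1 of the second group intact.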

    Read the article
