Search Results

Search found 3618 results on 145 pages for 'huge'.


  • how to split a pcap file into a set of smaller ones

    - by facha
    Hi everyone, I have a huge pcap file (generated by tcpdump). When I try to open it in Wireshark, the program becomes unresponsive. Is there a way to split the file into a set of smaller ones so I can open them one by one? The traffic captured in the file was generated by two programs on two servers, so I can't split it using tcpdump's 'host' or 'port' filters. I've also tried the Linux 'split' command :-) but with no luck: Wireshark wouldn't recognize the resulting format.
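
    For reference, one possible approach, assuming Wireshark's companion tool editcap is installed (it ships with Wireshark); file names are placeholders:

      # Split huge.pcap into chunks of 100,000 packets each, keeping the pcap format
      editcap -c 100000 huge.pcap chunk.pcap

      # Or split into one output file per 60 seconds of capture time
      editcap -i 60 huge.pcap chunk.pcap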

    Read the article

  • How to convert PowerPoint presentations into a Kindle/E-reader friendly form?

    - by Shiki
    I have a lot of documents in .ppt and .pptx (blame the co-workers). I would like to read them on the way home or elsewhere, when I have a little time to catch up on things. One thing I could do is combine the documents into one file, but saving that as a PDF, even the smaller version (according to Office 2010), results in a huge file, and PDF is hardly readable on a Kindle. I would need something free and easy on the device, like .epub. Is there such a thing? (Manually I could copy all the images and text down into new presentations, save those, and convert them, but that would just take a lot of time.)
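
    One hedged sketch of a command-line route, assuming LibreOffice and calibre are installed and a PDF-to-MOBI conversion is acceptable; file names are placeholders:

      # Batch-convert the presentations to PDF without opening a GUI
      libreoffice --headless --convert-to pdf --outdir ./pdf *.ppt *.pptx

      # Convert a resulting PDF to the Kindle's native MOBI format with calibre's CLI
      ebook-convert ./pdf/slides.pdf ./slides.mobi

    Text reflow from PDF sources is imperfect, so slide-heavy decks may still read poorly on the device.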

    Read the article

  • Strange IIS/Asp.net Exception Message

    - by Element
    I have a standard ASP.NET 2.0 application running on IIS 6. I have noticed some strange exception messages in the logs. They seem to be caused by random spam bots trying to submit forms. They are strange because the request string is huge and all the exception details in the event manager are mangled: they have been replaced with %21, %22, etc., as seen in the screenshot. Is this some kind of exploit, or just a bug in the ASP.NET exception handler/logger? UPDATE: I traced the requests causing this strange log event to a bug in IE8 that makes it request scriptresource.axd?d={html from page}, as described in these links: MS Connect, SO - Invalid Webresource.axd, SO - IE8 Dropping Memory Pages. I am still not sure why these requests would break the IIS log event as seen above; they are just long strings of gibberish being sent to the server. Maybe someone reading this can shed some light on it.

    Read the article

  • Converting date format in formulas in Excel

    - by Casebash
    I have a column of dates in the following format: ddd mmm dd hh m:s "EST" yyyy. In another cell in another sheet, I wish to have the dates in the format dd/m/y. How can I do this? I already tried the DATEVALUE function. Seeing as the positions are fixed, I started using the RIGHT and MID functions to extract components to put into the DATE function. Unfortunately, I didn't know of a way to convert the three-letter string to a month without writing a huge IF block. UPDATE: I managed to convert the string using MONTH(1&THREE_LETTER_DATE). I am still curious whether there is a better solution, though.
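
    For reference, the same trick rolled into a single formula; a sketch assuming the raw string sits in A1, the positions are fixed as described, and the locale accepts strings like "1Jan" as dates:

      =TEXT(DATE(RIGHT(A1,4), MONTH(1&MID(A1,5,3)), MID(A1,9,2)), "dd/m/yy")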

    Read the article

  • Diagnosing high CPU wait

    - by Will
    I have a monitoring server running icinga/collectd/graphite with about 50 hosts. I have noticed high load and sluggish performance on the box. If you take a look at top, you'll see:

      Cpu(s): 0.6%us, 0.2%sy, 0.0%ni, 7.6%id, 23.4%wa, 0.0%hi, 0.2%si, 0.0%st

    Notice the HUGE %wa value, which as far as I know means a network or disk bottleneck. ifconfig shows no dropped packets and there's not a ton of bandwidth in use, so that leaves disk issues, right? There's not a lot of disk writing going on either: iotop reports we're only writing a little over 1 MB per second, and the RAID tool reports everything is A-OK with write caching enabled. How do I go about figuring out how to fix this? UPDATE: iostat -x output is:

      avg-cpu:  %user  %nice %system %iowait  %steal  %idle
                 0.62   0.10    0.31    9.65    0.00  89.31

      Device:  rrqm/s  wrqm/s    r/s    w/s   rsec/s  wsec/s avgrq-sz avgqu-sz   await  svctm  %util
      sda        0.21   33.34  83.55  16.54  1599.94  399.07    19.97    43.21  416.98   3.71  37.13
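
    A hedged starting point for narrowing down the culprit, assuming the sysstat package is installed:

      # Per-device latency over time; await far above svctm suggests a deep request queue
      iostat -x 1 10

      # Per-process disk reads/writes and iowait
      pidstat -d 1

      # Processes currently stuck in uninterruptible (disk) sleep
      ps -eo state,pid,cmd | grep '^D'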

    Read the article

  • Replacing a W2K3 Domain Controller - what do I need to know?

    - by Marko Carter
    I have a network of around 70 machines, currently with two DCs, both running Windows Server 2003 (DC0 & DC1). DC0 is a five-year-old PowerEdge 1850 that has recently become increasingly flaky; in the past fortnight it has fallen over twice. I want to replace this machine, but I'm cautious, as there is huge scope for this sort of thing to go wrong. The way I imagine doing this is building a new machine, doing a DCPROMO, and running three domain controllers for a month or so until I'm happy that everything is working as it should be, before retiring the old machine. Particular areas of concern are the replication of roles from the current controllers (GP settings, for instance) and the ramifications of switching off the machine that has, up until now, been the 'primary'. If there are compelling reasons to use Server 2008 I'm willing to do so, but I don't know if this would cause problems with my existing 2003 machines. Any advice on best practice or previous experiences would be most welcome.
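
    For what it's worth, a few standard health checks before and after the DCPROMO; these assume the Windows Support Tools are installed on the DCs:

      rem Show which DC currently holds each FSMO role
      netdom query fsmo

      rem Verbose health report for the domain controllers
      dcdiag /v

      rem Replication status summary across all DCs
      repadmin /replsummary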

    Read the article

  • On a Mac, how can I find all files on a NTFS partition that have the same name, given case insensitivity?

    - by SCdF
    Here's the deal: I have a huge mess of files on an external drive formatted as NTFS, and I wish to copy all of them onto my MacBook Pro. NTFS, like sane filesystems, is case sensitive; HFS is not. Somewhere in the mess of tens of thousands of files and directories there are one or more 'duplicates' in the eyes of HFS, and these are preventing me from copying the entire directory of data onto my Mac. (Mac OS X rather unhelpfully throws a general error explaining the problem, but not the exact file. It also doesn't give you the option to skip.) What is the best approach to solve this? Does anyone know a tool that can find files and directories that have the same case-insensitive name?
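
    One hedged approach from Terminal, assuming the drive is mounted at /Volumes/External (adjust the path): print every path and flag any pair that collides once lowercased:

      find /Volumes/External -print | awk '{
          key = tolower($0)
          if (key in seen) { print seen[key]; print $0 }
          else seen[key] = $0
      }'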

    Read the article

  • vim command palette, similar to sublime text

    - by user137369
    In Sublime Text, we can press ⌘⇧P (Cmd-Shift-P) to bring up the command palette. Are there any similar tools for vim? I’ve been trying vim-ctrlp-cmdpalette, and although it works relatively well (there are some small issues), it depends on ctrlp (not a huge problem), and it looks like it won’t see much development (it has 14 commits over the course of 3 days, 5 months ago, for an “Initial experimental version”), so I was wondering if there are any known alternatives, since searching for “vim command palette” is a bit limiting; maybe there are other, more appropriate search terms.

    Read the article

  • Using LDAP as auth method for git repositories

    - by Lenni
    I want to convince my boss that we should be using git for version control. He says that it absolutely must authenticate users against our central LDAP server. I looked at the various solutions (gitweb, gitorious, ...) and couldn't really find a definitive answer on whether they support LDAP authentication. The only solution I could find a little info on was an Apache + mod_ldap setup. But that would mean the user authenticating against LDAP wouldn't necessarily be the same as the actual git user, right? (Not that this is a huge problem, but it's something that would bug me.) So, what's the best way to authenticate git users via LDAP?
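
    For the record, a minimal sketch of the Apache route using git's smart-HTTP backend; this assumes Apache with mod_authnz_ldap and git-http-backend available (paths, hostnames, and DNs are placeholders):

      SetEnv GIT_PROJECT_ROOT /srv/git
      SetEnv GIT_HTTP_EXPORT_ALL
      ScriptAlias /git/ /usr/libexec/git-core/git-http-backend/

      <Location /git/>
          AuthType Basic
          AuthName "Git repositories"
          AuthBasicProvider ldap
          AuthLDAPURL "ldap://ldap.example.com/ou=People,dc=example,dc=com?uid?sub"
          Require valid-user
      </Location>

    As noted above, the LDAP login only controls access; commit authorship still comes from each user's local git configuration.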

    Read the article

  • squid running out of sockets

    - by drscroogemcduck
    I have a setup where Squid sits in front of a Java server and acts as a reverse proxy. Recently I load tested the site, and if I fire 100 threads at it, each making a request using JMeter, I start getting errors in the load test tool like 'no route to host', even though the load test tool and the server are on the same machine. If I run the following command, where port 82 is the port my Squid server is running on:

      netstat -ann | grep 82 | wc -l

    I get 22000 or so, and most of the sockets are in TIME_WAIT. I'm thinking that maybe the huge number of sockets in the TIME_WAIT state is starving the box of resources.
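
    If TIME_WAIT exhaustion is indeed the bottleneck, a hedged mitigation on the Linux side (standard sysctls, but worth testing under load before relying on them):

      # Allow reusing TIME_WAIT sockets for new outbound connections
      sysctl -w net.ipv4.tcp_tw_reuse=1

      # Widen the ephemeral port range available for client connections
      sysctl -w net.ipv4.ip_local_port_range="1024 65535"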

    Read the article

  • SQL Server 2008 Optimization

    - by hgulyan
    I learned today that if you append OPTION (MAXDOP 0) to your query, the query can run on multiple processors, and if it's a huge query, it will perform faster. I know the general guidelines on query optimization (using indexes, selecting only the needed fields, etc.); my question is about SQL Server optimization itself, maybe changing some options in the configuration or anything else. What guidelines are there for SQL Server optimization? Thank you. P.S. I suppose this is not the right place to ask server-related questions. Should I delete it, or maybe it can be migrated to Server Fault?
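
    As a server-side counterpart to the per-query hint, a T-SQL sketch of the instance-wide setting; the value 4 is only an example:

      -- Expose advanced options so MAXDOP becomes configurable
      EXEC sp_configure 'show advanced options', 1;
      RECONFIGURE;

      -- Cap parallelism instance-wide (0 means use all available processors)
      EXEC sp_configure 'max degree of parallelism', 4;
      RECONFIGURE;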

    Read the article

  • Migrate reports from MS Access to OOo Base

    - by John Gardeniers
    I'm currently looking at upgrading our office machines from Office XP to Office 2010. For most users the standard edition is fine, but a few of us use Access. There are only a couple of standalone Access databases, but the program is used fairly extensively (mostly by myself) as a front end to MySQL. As the cost difference between the standard and pro versions of Office 2010 is about $170 (AUD), I'm looking at possible alternatives to Access. I'm no huge fan of OpenOffice but could be convinced to use it if I can find a way to migrate the many reports we currently have in Access; the data is not a problem. So far I've found nothing to suggest this is even possible or practical, but perhaps someone here knows otherwise. I'm also open to suggestions for other alternatives to Access, but it must be able to produce flexible reports easily; that is the one real strength of Access in my view. Because of its subjective nature I'm making this community wiki.

    Read the article

  • Kernel config file generator

    - by lisak
    Hey guys, could anybody please recommend some kind of kconfig generator that would trim the modules and built-in stuff not needed for the current hardware? The best I have found is this: http://lkml.org/lkml/2008/9/16/290 I don't care about compilation time or the number of modules that aren't built in; I'm concerned about performance. I don't know how much memory and runtime is wasted on huge kernels with almost everything enabled. I'm a Java developer and I don't know what most of the modules and drivers are for, so there is not much I can disable and be sure I don't screw it up. Thanks in advance.
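
    For what it's worth, the streamline_config.pl approach from that era was later merged into the kernel tree as the localmodconfig target; a hedged sketch of its use on kernels that have it:

      # Record the modules currently loaded on this machine
      lsmod > /tmp/lsmod.txt

      # Generate a .config that turns off every module not in that list
      make LSMOD=/tmp/lsmod.txt localmodconfig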

    Read the article

  • Disable Mailman Reminders

    - by VxJasonxV
    We run a mail server on OS X Server, and a few mailing lists. The password/subscription reminders used to come out at 8 AM (local), but in the past months they've moved to 5 AM, a nuisance to all involved. It would appear that Mailman has been modified by Apple, because there is no cronjob entry I can find that controls when these reminder notices go out, and I haven't found any launch agent/daemon plists that would control this either. Nor have I found anything in the Mailman configuration web pages. So... where are they?! Given our announcement-only style of use, the reminders are fairly worthless messages to begin with, and they are a huge bother when they go out to support phones.
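
    If the aim is simply to stop the reminders rather than reschedule them, a hedged sketch using Mailman's own config_list tool (the install path is an assumption; adjust per system, and repeat per list):

      # Stage the setting that disables monthly password reminders
      echo "send_reminders = 0" > /tmp/no_reminders.py

      # Apply it to one list
      /usr/share/mailman/bin/config_list -i /tmp/no_reminders.py listname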

    Read the article

  • Large number of soft page faults when assigning a TJpegImage to a TBitmap

    - by Robert Oschler
    I have a Delphi 6 Pro application that processes incoming JPEG frames from a streaming video server. The code works, but I recently noticed that it generates a huge number of soft page faults over time. After doing some investigation, the page faults appear to be coming from one particular graphics operation. Note, the uncompressed bitmaps in question are 320 x 240, or about 300 KB in size, so it's not due to the handling of large images. The number of page faults being generated isn't tolerable: over an hour it can easily top 1,000,000. I created a stripped-down test case that executes the code included below on a timer, 10 times a second. The page faults appear to happen when I try to assign the TJpegImage to a TBitmap in the GetBitmap() method. I know this because I commented out that line and the page faults do not occur. The assign() triggers a decompression operation on the part of TJpegImage as it pushes the decompressed bits into the newly created bitmap that GetBitmap() returns. When I run Microsoft's pfmon utility (page fault monitor), I get a huge number of soft page fault error lines concerning RtlFillMemoryUlong, so it appears to happen during a memory buffer fill operation. One puzzling note: the summary part of pfmon's report, where it shows which DLL caused which page fault, does not show any DLL names in the far left column. I tried this on another system and it happens there too. Can anyone suggest a fix or a workaround? Here's the code. Note, IReceiveBufferForClientSocket is a simple class object that holds bytes in an accumulating buffer.

      function GetBitmap(theJpegImage: TJpegImage): Graphics.TBitmap;
      begin
          Result := TBitmap.Create;
          Result.Assign(theJpegImage);
      end;

      procedure processJpegFrame(intfReceiveBuffer: IReceiveBufferForClientSocket);
      var
          theBitmap: TBitmap;
          theJpegStream, theBitmapStream: TMemoryStream;
          theJpegImage: TJpegImage;
      begin
          theBitmap := nil;
          theJpegImage := TJPEGImage.Create;
          theJpegStream := TMemoryStream.Create;
          theBitmapStream := TMemoryStream.Create;
          try
              // ************************ BEGIN JPEG FRAME PROCESSING
              // Load the JPEG image from the receive buffer.
              theJpegStream.Size := intfReceiveBuffer.numBytesInBuffer;
              Move(intfReceiveBuffer.bufPtr^, theJpegStream.Memory^, intfReceiveBuffer.numBytesInBuffer);
              theJpegImage.LoadFromStream(theJpegStream);
              // Convert to bitmap.
              theBitmap := GetBitmap(theJpegImage);
          finally
              // Free memory objects.
              if Assigned(theBitmap) then theBitmap.Free;
              if Assigned(theJpegImage) then theJpegImage.Free;
              if Assigned(theBitmapStream) then theBitmapStream.Free;
              if Assigned(theJpegStream) then theJpegStream.Free;
          end; // try()
      end;

      procedure TForm1.Timer1Timer(Sender: TObject);
      begin
          processJpegFrame(FIntfReceiveBufferForClientSocket);
      end;

      procedure TForm1.FormCreate(Sender: TObject);
      var
          S: string;
      begin
          FIntfReceiveBufferForClientSocket := TReceiveBufferForClientSocket.Create(1000000);
          S := loadStringFromFile('c:\test.jpg');
          FIntfReceiveBufferForClientSocket.assign(S);
      end;

    Thanks, Robert

    Read the article

  • logrotate by size outside the daily schedule

    - by Josh Smeaton
    We have a couple of applications that generate huge log files. Rotating those logs daily isn't enough, so I created the following logrotate conf:

      /var/log/ourapp/*log {
          compress
          copytruncate
          missingok
          size 200M
          rotate 10
      }

    The idea is that we can keep 2 GB of logs for this one application, no matter how quickly the files fill up. The problem, though, is that logrotate only runs once daily. AFAIK, when logrotate kicks off at 4am, it will check that the size is at least 200M and rotate the file if so. Ideally logrotate would run every minute, check the size, and rotate if it's greater. Is there a standard way to rotate based on size outside of the daily cron schedule?
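
    One common pattern, as a hedged sketch: keep the size directive and drive logrotate from cron more frequently than daily; the state-file path is an assumption, so adjust for the distro:

      # /etc/cron.d/ourapp-logrotate: check every 10 minutes; 'size 200M'
      # ensures rotation only happens once a file actually passes 200M
      */10 * * * * root /usr/sbin/logrotate -s /var/lib/logrotate/ourapp.status /etc/logrotate.d/ourapp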

    Read the article

  • Reduce Windows DNS Service caching on Windows Server

    - by Nick G
    I'm struggling with DNS caching issues on a Windows-based LAN. I've noticed that when I change a DNS record on a domain hosted by a third-party nameserver, I always seem to be the very last person to see the change happen. I can often query the domain using a service that checks propagation around the world, like www.whatsmydns.net, but I usually find that all other DNS servers are correct and it's only my own server that has the old IP, even 8-12 hours later. This is an issue for us, as we're website developers and often making changes to DNS records, so these huge delays are frustrating. It seems to be because our primary domain controller (+ Active Directory & DNS) on our LAN (which is also our local DNS server) caches records for ages, way beyond their published TTL. How can I stop the Windows DNS server from caching, or reduce the caching to only an hour or so?
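
    A hedged pair of server-side commands, assuming the dnscmd tool is available on the DC (3600 seconds = one hour):

      rem Flush the server's cache immediately
      dnscmd /clearcache

      rem Cap how long any record may sit in the server cache, regardless of its TTL
      dnscmd /config /maxcachettl 3600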

    Read the article

  • Export a single layer as an image in Photoshop

    - by wrburgess
    I have a lot of designers send me layered PSDs of their designs, and I need to break out the pieces to place on web pages. I can do a decent number of things in Photoshop, but I'm hardly efficient with it. My old way of just copying the image in a layer and pasting it into a new image seems to take forever as I screw around with cropping and such. I've got Photoshop CS5, so I don't need external software; I just need to figure out how to take a single layer, which may hold something small like an icon, and export it as a PNG or JPG. I am aware of the script called "Export Layers to Files", but it took about an hour and exported ALL of my layers to a huge number of files. I wasn't looking for a solution that broad. Is there an easy way to do this?

    Read the article

  • Deleting a large number of files on Linux eats up CPU

    - by Sanjay
    I generate more than 50 GB of cache files on my RHEL server (typical file size is 200 KB, so the number of files is huge). When I try to delete these files it takes 8-10 hours. However, the bigger issue is that the system load goes critical for those 8-10 hours. Is there any way I can keep the system load under control during the deletion? I tried using nice -n19 rm -rf * but that doesn't help with the system load. P.S. I asked the same question on superuser.com but didn't get a good enough answer, so I'm trying here.
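
    Since the load here is I/O-bound rather than CPU-bound, nice alone won't help; a hedged sketch using the I/O scheduler instead (ionice needs the CFQ scheduler, the default on RHEL of that era; the path is a placeholder):

      # Delete in the 'idle' I/O class so the job only consumes spare disk time
      ionice -c3 find /var/cache/myapp -type f -delete

      # Then clear out the now-empty directories, deepest first
      ionice -c3 find /var/cache/myapp -depth -type d -empty -delete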

    Read the article

  • URGENT: Need a Company to Fix My Plesk & IIS on windows 2008

    - by DevCompany
    Hello: I need the name of a company that can fix my server install and get my sites running again. Windows 2008, IIS 7, Plesk 9.x. The problem started when Level 2 of my hosting company adjusted an FTP user's permissions; apparently he did this outside of Plesk, and it has been nothing but permission headaches all around: connections, uploading files, etc. Now something's gone wrong where Plesk isn't even loading properly, and IIS too. I need to see if someone can fix this remotely before I give the green light to format and reinstall the server, IIS, Plesk, and domains! Looking to pay a company to get this working ASAP. This is not requested as a free job, so I need someone good who can fix it without the huge hassle of formatting and such. Need to resolve today. Post expires on Monday 04-26-2010.

    Read the article

  • rsync for copying files

    - by vinayrks
    I am migrating my old server to a new server, which I use for hosting websites. First I tried SFTP, but due to the huge number of files and connection timeouts it simply didn't work. Then I tried rsync. rsync works well, but the one problem I'm facing is that it updates existing files nicely and quickly but does not copy new files. Please help, because I still need to transfer lots of files. I am using this command:

      rsync -anv -e ssh oldserver:/path/ /path
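
    For what it's worth, the -n in -anv is rsync's --dry-run flag, which only reports what would be transferred without copying anything; a corrected form of the command above:

      # -a archive mode, -v verbose; the dry-run flag removed so new files actually copy
      rsync -av -e ssh oldserver:/path/ /path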

    Read the article

  • Reconnoiter - Anyone using it?

    - by Marco Ramos
    Reconnoiter is a new tool in the world of monitoring. It is not only a trending tool but also an alerting/fault-detection one. In my particular case, I reckon it's in its trending capabilities that Reconnoiter has huge potential. One of the premises Reconnoiter is built upon is that large RRDTool installations are very inefficient regarding I/O, and I think this is RRDTool's major problem. One of the things holding me back from switching from Cacti is, obviously, the cost of change and the learning curve. So, do any of you have experience with Reconnoiter? How's the learning curve? Was it difficult to move from RRDTool frontend applications (Cacti, Munin, Ganglia) to Reconnoiter? I'm looking forward to reading your opinions.

    Read the article

  • HP Probook 4530s great specs, but lagging. Hard Drive?

    - by Mark
    I have this laptop, which has an i3 processor, 4 GB of memory, and a 7200 RPM hard drive, so there is nothing wrong with the specs. Even when I have no applications open, simply closing and opening windows lags, as does opening the Start menu or dragging icons across the desktop; sometimes even the cursor lags. So I checked the Resource Monitor, and the processes generating disk activity are svchost, Avast (my antivirus, but not much), and System (PID 4), which is using a huge chunk. Total disk activity fluctuates between 50% and 100%.

    Read the article
