Search Results

Search found 3324 results on 133 pages for 'gb'.

  • Maximum number of memory segments that Notes can support has been exceeded

    - by Sagy
    Hi all, I am using Domino.dll to access NSF files from C# on .NET 2.0, with multiple threads reading up to four NSF files at a time. This works fine for small NSF files, but with large ones I get an OutOfMemoryException together with "Maximum number of memory segments that Notes can support has been exceeded". The exception usually occurs while I am pulling NotesDocument objects out of a large Notes view/folder in a while loop. I am already releasing each instance with Marshal.ReleaseComObject(notesDocument), but it still throws the same exception. My goal is to process up to four NSF files at a time, including large ones (possibly gigabytes in size). Kindly help if you know a solution. Thanks.
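    A minimal sketch of a release-as-you-go loop (assuming the standard Domino COM classes from the interop assembly; the method name and processing step are illustrative). The key detail is fetching the next document before releasing the current one, so only one COM wrapper per document is ever alive:

        // Sketch: walk a large view, releasing each NotesDocument COM wrapper
        // as soon as it has been processed, so RCWs do not accumulate.
        void DrainView(Domino.NotesView view)
        {
            Domino.NotesDocument doc = view.GetFirstDocument();
            while (doc != null)
            {
                Domino.NotesDocument next = view.GetNextDocument(doc); // fetch next first
                // ... read the fields you need from doc here ...
                System.Runtime.InteropServices.Marshal.ReleaseComObject(doc);
                doc = next;
            }
            GC.Collect();                      // optional: finalize any strays
            GC.WaitForPendingFinalizers();
        }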

  • socat usage for FIFO speed vs socket speed on localhost

    - by Fishy
    Hello. As per a suggestion on Stack Overflow, I want to compare IPC on a single machine using (a) TCP sockets from localhost to localhost and (b) FIFOs (between Java and C). To answer (a), I used netcat to gauge transfer speed (91 MB/s) [1]. For (b), my question: how can I test FIFO write speed using socat? My approach (where /tmp/gus is created using mkfifo on RHEL):

        dd if=/dev/zero of=/tmp/gus bs=1G count=1

    but I get:

        1073741824 bytes (1.1 GB) copied, 1.1326 seconds, 948 MB/s

    Does this mean writing to a FIFO is ~10 times faster, or is my experiment completely wrong? Thank you, Sporsi.

    [1] From machine A to B across a 1 Gbps link this number dropped to ~80 MB/s; I expected localhost to be much higher.
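    For a FIFO benchmark to mean anything, something must drain the pipe while dd writes into it; a sketch using socat as that consumer (paths and block sizes are illustrative):

        # Terminal 1: use socat to drain the FIFO into /dev/null.
        mkfifo /tmp/gus
        socat -u PIPE:/tmp/gus - > /dev/null

        # Terminal 2: time writes into the FIFO while the reader runs.
        dd if=/dev/zero of=/tmp/gus bs=1M count=1024

    Since both ends are memory copies through the kernel pipe buffer, numbers well above NIC line rate are expected; they are not directly comparable to a network link.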

  • ASP.NET MVC Route Default values

    - by Sadegh
    Hi, I defined two routes in global.asax like below:

        context.MapRoute("HomeRedirect", "",
            new { controller = "Home", action = "redirect" });
        context.MapRoute("UrlResolver", "{culture}/some",
            new { culture = "en-gb", controller = "someController", action = "someAction" },
            new { culture = new CultureRouteConstraint() });

    According to the definitions above, when a user requests mysite.com/ the Redirect action of HomeController should be called:

        public class HomeController : Controller
        {
            public ActionResult Redirect()
            {
                return RedirectToRoute("UrlResolver");
            }
        }

    I want to redirect the user to the second route defined above, so I specified default values for it and a constraint on the culture parameter. But when RedirectToRoute("UrlResolver") runs, no default values are passed to the route constraint of the second route, and "No route in the route table matches the supplied values" is shown. Update - my CultureRouteConstraint:

        public class CultureRouteConstraint : IRouteConstraint
        {
            bool IRouteConstraint.Match(HttpContextBase httpContext, Route route,
                string parameterName, RouteValueDictionary values,
                RouteDirection routeDirection)
            {
                try
                {
                    var parameter = values[parameterName] as string;
                    return someCondition(parameter);
                }
                catch
                {
                    return false;
                }
            }
        }

    Here the values parameter does not contain the culture key/value, but the route parameter does.
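    A workaround worth trying (an assumption, not a confirmed fix): supply the culture value explicitly when generating the redirect, so the constraint's Match receives a "culture" entry during URL generation:

        public ActionResult Redirect()
        {
            // Hypothetical fix: pass the culture explicitly; "en-gb" mirrors
            // the route default, which is evidently not merged in for the constraint.
            return RedirectToRoute("UrlResolver", new { culture = "en-gb" });
        }

    Another common guard (also an assumption) is to return true from Match whenever routeDirection is RouteDirection.UrlGeneration, so the constraint only filters incoming requests.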

  • Acordex image viewer throws an out-of-memory exception in a Citrix environment

    - by neha
    We have a .NET 2.0 application. In the .aspx page we call a Java applet, which in turn launches the Acordex image viewer. In the production environment, users get "out of memory" or "insufficient memory" errors when they open an image or magnify one in the Acordex viewer. Strangely, when the users log out and log back in they can see the same image without any errors. The website is hosted in a Citrix environment; we have access to this environment, but we are not able to reproduce the issue on the test servers or on local machines, and we don't know what is causing it. What should we do to troubleshoot the issue? Do we have to increase the memory allotted to users in Citrix? The server has around 4 GB of RAM, there are 10-13 simultaneous users, and the image size is at most 2 MB. The following is the code used to call the Acordex image viewer:

  • A better way of converting Codepage-1251 in RTF to Unicode

    - by blue painted
    I am trying to parse RTF (via MSEDIT) in various languages, all in Delphi 2010, in order to produce Unicode HTML. Taking Russian/Cyrillic as my starting point, I find that the overall document code page is 1252 (Western), but the Russian parts of the text are identified by the charset of the font (RUSSIAN_CHARSET, 204). So far I: 1) use AnsiString (or RawByteString) when parsing the RTF; 2) determine the code page by a lookup from the font charset (see http://msdn.microsoft.com/en-us/library/cc194829.aspx); 3) translate using a lookup table in my code (generated from http://msdn.microsoft.com/en-gb/goglobal/cc305144.aspx) - and I'm going to need one table per supported code page! There MUST be a better way than this? Preferably something supplied by the OS, and so less brittle than tables of constants.
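    One RTL/OS-backed alternative, as a sketch (assuming Delphi 2009+, where RawByteString carries a code-page tag): re-tag the raw bytes with the code page derived from the font charset, and let the assignment to a UnicodeString perform the conversion. On Windows the RTL defers to the OS conversion (MultiByteToWideChar), so no hand-made tables are needed.

        function RunToUnicode(const RawRun: RawByteString; CodePage: Word): string;
        var
          Tagged: RawByteString;
        begin
          Tagged := RawRun;
          SetCodePage(Tagged, CodePage, False); // re-tag only; no byte conversion yet
          Result := string(Tagged);             // RTL converts CodePage -> UTF-16 here
        end;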

  • Detect ASCII codes for Asian double-byte / Cyrillic character sets?

    - by jfroom
    Is it possible to detect whether a character belongs to an Asian double-byte or Cyrillic character set? Perhaps by specific code ranges? I've Googled, but found nothing at first glance. There's an RSS feed I'm tapping into that has its locale set as 'en-gb', but there are some Asian double-byte characters in the feed itself, which I need to handle differently. I'm just not sure how to detect them, since the feed's locale metadata is incorrect, and I have no access to correct the public feed.
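    Once the feed is decoded to Unicode, a block-range check is one way to flag such characters; a sketch (the ranges below cover common CJK and Cyrillic blocks only, not every script):

        // Rough classification by Unicode block (illustrative, not exhaustive).
        static bool IsCjk(char c)
        {
            return (c >= '\u4E00' && c <= '\u9FFF')    // CJK Unified Ideographs
                || (c >= '\u3040' && c <= '\u30FF')    // Hiragana and Katakana
                || (c >= '\uAC00' && c <= '\uD7AF');   // Hangul syllables
        }

        static bool IsCyrillic(char c)
        {
            return c >= '\u0400' && c <= '\u04FF';     // basic Cyrillic block
        }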

  • PGP command-line decryption: how to decrypt a file?

    - by whitman6732
    I was sent a public key in order to decrypt a PGP-encrypted file. I imported the key with:

        gpg --import publickey.asc

    and verified it with:

        gpg --list-keys

    Now I'm trying to decrypt the file. I put the passphrase in a file called pass.txt and ran this at the command line:

        gpg --passphrase-fd ../../pass.txt --decrypt encryptedfile.txt.pgp --output encryptedfile.txt

    All it says is:

        Reading passphrase from file descriptor 0 ...

    and it doesn't seem to be doing anything else; I can't tell whether it's hanging. Is it a relatively quick process? The file is large (about 2 GB). Is the syntax correct?
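    Two details stand out (an observation about gpg's CLI, not from the original post): --passphrase-fd expects a file descriptor number rather than a filename, and options such as --output belong before the --decrypt command; a corrected sketch:

        # Feed the passphrase on fd 0 (stdin); name the output before --decrypt.
        gpg --batch --passphrase-fd 0 \
            --output encryptedfile.txt \
            --decrypt encryptedfile.txt.pgp < ../../pass.txt

    A 2 GB file will still take a while to decrypt, so some minutes of apparent silence are normal.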

  • Help make sense of a KillDisk error/log

    - by user284194
    I have a hard drive that I've been trying to reformat. I tried reformatting it in the Windows XP and Windows 7 installers, and from an Ubuntu live CD with GParted. I also tried using dd to zero the drive, with no success. Finally I ran across KillDisk after a search. I tried to zero the disk again with KillDisk, and after 8 hours of zeroing I get the following errors in the log:

        ----------------------------------------Erase Session Begin---------------------------------------
        2010-03-23 19:35:54 Active@ KILLDISK for Windows Build 5.1.39 started
        Target: WDC WD2500KS-00MJB0 232.9 GB
        Located on: WDC WD2500KS-00MJB0 (Serial number: WD-WCANK9604799)
        Erase method: One Pass Zeros (1 pass)
        Passes: 1
        Bad (unwritable) sectors detected from 1701 to 488397167 on Hard Disk 1.
        Error (the handle is invalid) refreshing device Hard Disk 1.
        Error (the handle is invalid) reading sector 0 on 81h.
        2010-03-24 02:28:25 Total number of erased device(s): 0, partition(s): 0
        -----------------------------------------Erase Session End----------------------------------------

    Is the drive dead?

  • Specifying culture for HTTP request/response

    - by Akash
    I have a RESTful web service that needs to parse culture-sensitive data from the request. This data could either be in an XML body or part of the query string. Is there any accepted way of determining which culture the data is being sent in (and, by extension, the culture in which the response should be sent)? One option is simply to tell clients the culture in which all requests must be sent; a friendlier option is to let the client specify the culture. I've considered: (a) using the Accept-Language HTTP header to carry this information; (b) using the xml:lang attribute for XML POSTs, plus an extra field for query strings (e.g. ...&culture=en-GB). http://www.w3.org/International/questions/qa-accept-lang-locales warns of limitations in using the Accept-Language header, but most of the warnings center on requests originating from browsers; in my case the requests will come from other applications. All advice greatly appreciated!
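    If the header route is taken, mapping Accept-Language to a .NET culture is straightforward; a sketch (assuming ASP.NET's HttpRequest, ignoring q-values for brevity, with "en-GB" as an assumed fallback):

        // Hypothetical helper: take the first Accept-Language entry that
        // parses as a .NET culture; fall back to a default otherwise.
        using System;
        using System.Globalization;
        using System.Web;

        static class CultureResolver
        {
            public static CultureInfo Resolve(HttpRequest request)
            {
                string[] languages = request.UserLanguages; // e.g. { "en-GB", "fr;q=0.8" }
                if (languages != null)
                {
                    foreach (string lang in languages)
                    {
                        string tag = lang.Split(';')[0].Trim();
                        try { return CultureInfo.GetCultureInfo(tag); }
                        catch (ArgumentException) { /* not a known culture */ }
                    }
                }
                return CultureInfo.GetCultureInfo("en-GB"); // assumed default
            }
        }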

  • What is "Virtual Size" in Sysinternals Process Explorer?

    - by robert
    Hi. My application runs for a few hours. There is no increase in any value (VM size, memory) in Task Manager, but after a few hours I get out-of-memory errors. In Sysinternals Process Explorer I see that "Virtual Size" increases continuously, and when it reaches around 2 GB I start getting memory errors. So what kind of memory leak is that? How can I demonstrate it with code? Is it possible to reproduce it with a piece of code where none of the Task Manager memory values increase but the Virtual Size in Process Explorer does? Thanks for any suggestions.
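    That pattern matches reserved-but-uncommitted address space: reservations count toward Virtual Size but not toward working set or private bytes. A sketch that reproduces it (assuming a 32-bit process, so the user address space runs out around 2 GB):

        // Sketch: reserve (but never commit) address space. Task Manager's
        // memory columns barely move, while Process Explorer's "Virtual Size"
        // climbs until the 32-bit address space is exhausted.
        using System;
        using System.Runtime.InteropServices;

        class ReserveOnly
        {
            const uint MEM_RESERVE = 0x2000;
            const uint PAGE_NOACCESS = 0x01;

            [DllImport("kernel32.dll", SetLastError = true)]
            static extern IntPtr VirtualAlloc(IntPtr lpAddress, uint dwSize,
                uint flAllocationType, uint flProtect);

            static void Main()
            {
                while (VirtualAlloc(IntPtr.Zero, 64u * 1024 * 1024,
                                    MEM_RESERVE, PAGE_NOACCESS) != IntPtr.Zero)
                {
                    Console.WriteLine("reserved another 64 MB");
                }
                Console.WriteLine("address space exhausted");
            }
        }

    In real applications this kind of growth typically comes from heap fragmentation, thread stacks, or memory-mapped files that are reserved and never released.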

  • Free Large datasets to experiment with Hadoop

    - by Sundar
    Do you know any large datasets for experimenting with Hadoop that are free/low cost? Any related pointers/links are appreciated. Preference: at least one GB of data; production log data of a web server. A few that I have found so far:

        http://dumps.wikimedia.org/enwiki/20100130/
        http://wiki.freebase.com/wiki/Data_dumps
        http://aws.amazon.com/publicdatasets/

    Also, can we run our own crawler to gather data from sites, e.g. Wikipedia? Any pointers on how to do this are appreciated as well.

  • Newlines and carriage returns ignored in setMessageBody

    - by Magic Bullet Dave
    Am I doing something dumb? I can pre-fill an email OK, but the "\r\n" is ignored in emailBody:

        - (void)sendEventInEmail
        {
            MFMailComposeViewController *picker = [[MFMailComposeViewController alloc] init];
            picker.mailComposeDelegate = self;

            NSString *emailSubject = [eventDictionary objectForKey:EVENT_NAME_KEY];
            [picker setSubject:emailSubject];

            // Fill out the email body text
            NSString *iTunesLink = @"http://itunes.apple.com/gb/app/whats-on-reading/id347859140?mt=8"; // link to the app on iTunes
            NSString *content = [eventDictionary objectForKey:@"Description"];
            NSString *emailBody = [NSString stringWithFormat:@"%@\r\nSent using <a href = '%@'>What's On Reading</a> for the iPhone.", content, iTunesLink];
            [picker setMessageBody:emailBody isHTML:YES];

            picker.navigationBar.barStyle = UIBarStyleBlack;
            [self presentModalViewController:picker animated:YES];
            [picker release];
        }

    Regards, Dave
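    A likely cause (an inference, not from the original post): with isHTML:YES the body is rendered as HTML, and HTML collapses raw \r\n whitespace; using <br /> tags instead should restore the line breaks:

        // Hypothetical fix: an HTML body needs HTML line breaks.
        NSString *emailBody = [NSString stringWithFormat:
            @"%@<br /><br />Sent using <a href='%@'>What's On Reading</a> for the iPhone.",
            content, iTunesLink];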

  • Is Private Bytes >> Working Set normal?

    - by Jacob
    OK, this may sound weird, but here goes. There are two computers, A (Pentium D) and B (Quad Core), with almost the same amount of RAM, both running Windows XP. If I run the same code on both computers, the allocated private bytes on A never go down, resulting in a crash later on. On B it looks like private bytes are constantly deallocated, and everything looks fine. On both computers the working set is allocated and deallocated similarly. Could this be an issue with manifests or system DLLs? I'm clueless. Also, I compiled the executable on A and ran it on B, and it worked. Note: I observed the memory use with Process Explorer. Question: during execution (where we have several allocations and deallocations), is it normal for private bytes to be much bigger (1.5 GB vs 70 MB) than the working set?

  • How to avoid clobbering files when creating a tar archive

    - by Andrew Grimm
    This question notes that it is possible to overwrite files when creating a tar archive, and I'm trying to see how to avoid that situation. Normally I'd use File Roller, but the installed version is playing up a bit (using 1.1 GB of memory), and I'm not the system administrator. I looked at --confirmation and --interactive, but those only ask whether I want to add file x to the archive, not whether I want to overwrite an existing file. For example,

        tar --interactive -czvf innocent_text_file.txt foo*

    will ask me about each file, but is perfectly happy to overwrite innocent_text_file.txt. Is there any switch that acts like -i for cp? Note that I am asking about creating an archive, not extracting one. Clarification: what I'm worried about is accidentally doing something like this:

        tar -czvf *    # Don't do this!

    which would overwrite the first file listed in the glob. To avoid it, I want tar to complain if the first file mentioned already exists, the way

        cp -i *    # Don't do this!

    would check whether it would cause you to overwrite an existing file.
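    As far as I know, GNU tar has no no-clobber switch for archive creation, so a wrapper test is the usual workaround; a sketch (the archive name is illustrative):

        # Refuse to create the archive if the target already exists.
        archive=backup.tar.gz
        if [ -e "$archive" ]; then
            echo "refusing to overwrite $archive" >&2
            exit 1
        fi
        tar -czvf "$archive" foo*

    Note that the shell's noclobber option does not help here, because tar opens the output file itself rather than through a shell redirection.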

  • OutOfMemoryException Processing Large File

    - by Krip
    We are loading a large flat file, about 125 MB, into BizTalk Server 2006 (original release, not R2). We run a map against it and then take each row and call out to a stored procedure. We receive an OutOfMemoryException during orchestration processing; the Windows service restarts, uses the full 2 GB of memory, and crashes again. The server is 32-bit and set to use the /3GB switch. I've also separated the flow into three hosts: one for receive, another for orchestration, and a third for sends. Does anyone have suggestions for getting this file to process without error? Thanks, Krip

  • JVM memory initialization error after Windows update

    - by Pier Luigi
    Hi all. I have three Windows Server 2003 machines, each with 2 GB of RAM:

        Server1: Tomcat 5.5.25, Sun JVM 1.6.0_11-b03
        Server2: Tomcat 5.5.25, Sun JVM 1.6.0_14-b08
        Server3: Tomcat 6.0.18, Sun JVM 1.6.0_14-b08

    For all three servers the JVM parameters are:

        -XX:MaxPermSize=256m
        -Dcatalina.base=C:\Apache Group\apache-tomcat-5.5.25
        -Dcatalina.home=C:\Apache Group\apache-tomcat-5.5.25
        -Djava.endorsed.dirs=C:\Apache Group\apache-tomcat-5.5.25\common\endorsed
        -Djava.io.tmpdir=C:\Apache Group\apache-tomcat-5.5.25\temp
        vfprintf
        -Xms512m
        -Xmx1024m

    For some months everything worked fine. Last Friday we installed some Windows updates. After the reboot, Tomcat no longer starts, with the error:

        Error occurred during initialization of VM
        Could not reserve enough space for object heap

    We reduced -Xmx1024m to -Xmx768m and now Tomcat starts, but we need a larger max heap size. What happened to our servers? Thanks in advance.

  • Hosting solution for images for website written in PHP

    - by tomaszs
    I've written a website in PHP, and it will give users the ability to upload images. The website will have more than 100,000 users. Approximately 1,000 users will upload an image of about 50 KB, and every image will be displayed on the website 5,000 times, so the transfer is: 1k x 50 KB x 5k = 250 GB per month. So my question is: do you know any good solution (hosting, CDN network, or other) that: is paid for transfer rather than space used, with no entrance fee; has an API to upload images easily from PHP; is extremely easy to use; is good for a low budget; does not require any special, complicated registration or formalities; allows commercial use; and allows using these images in the website layout?

  • Slow insert speed in a PostgreSQL memory tablespace

    - by Prashant
    Hi, I have a requirement to store records at a rate of 10,000 records/sec into a database (with indexing on a few fields). The number of columns in one record is 25. I am doing a batch insert of 100,000 records in one transaction block. To improve the insertion rate I changed the tablespace from disk to RAM, but with that I am able to achieve only 5,000 inserts per second. I have also done the following tuning in the Postgres config: indexes: no; fsync: false; logging: disabled. Other information: tablespace in RAM; 25 columns per row (mostly integers); CPU: 4 cores at 2.5 GHz; RAM: 48 GB. I am wondering why a single insert query takes around 0.2 ms on average when the database is not writing anything to disk (since I am using a RAM-based tablespace). Is there something I am doing wrong? Help appreciated. Prashant
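    One common lever (an assumption about the setup, since the post doesn't show the statements): once storage is out of the picture, per-statement round trips and parsing often dominate, and multi-row INSERT or COPY amortizes them; a sketch against an illustrative table:

        -- Multi-row VALUES: one parse and one round trip for many rows.
        INSERT INTO samples (sensor_id, reading, recorded_at)
        VALUES (1, 20.5, now()),
               (2, 21.0, now()),
               (3, 19.8, now());

        -- COPY is faster still for bulk loads from a client stream or file.
        COPY samples (sensor_id, reading, recorded_at) FROM STDIN;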

  • How to use Django's filesizeformat

    - by Scott LaPlant
    I have a small app I'm working on where I'm trying to use Django's built-in filesizeformat. Currently, the template looks like this:

        {{ value|filesizeformat }}

    I understand I need to define this in my views.py file, but I can't seem to figure out how to do that. I've tried to use the syntax below:

        def filesizeformat(bytes):
            """
            Formats the value like a 'human-readable' file size
            (i.e. 13 KB, 4.1 MB, 102 bytes, etc).
            """
            try:
                bytes = float(bytes)
            except (TypeError, ValueError, UnicodeDecodeError):
                return u"0 bytes"
            if bytes < 1024:
                return ungettext("%(size)d byte", "%(size)d bytes", bytes) % {'size': bytes}
            if bytes < 1024 * 1024:
                return ugettext("%.1f KB") % (bytes / 1024)
            if bytes < 1024 * 1024 * 1024:
                return ugettext("%.1f MB") % (bytes / (1024 * 1024))
            return ugettext("%.1f GB") % (bytes / (1024 * 1024 * 1024))
        filesizeformat.is_safe = True

    I've then replaced 'value' with 'bytes' in the template, but this does not seem to work. Any suggestions?
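    For reference, filesizeformat is a built-in template filter, so the view does not need to redefine it; it only has to put the raw byte count into the template context, and the template applies the filter with {{ value|filesizeformat }}. A sketch (the view name, file path, and template are illustrative):

        # views.py -- hypothetical view passing a byte count to the template.
        import os
        from django.shortcuts import render_to_response

        def file_info(request):
            size = os.path.getsize('/tmp/document.txt')   # raw size in bytes
            return render_to_response('file_info.html', {'value': size})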

  • How to stop NpgsqlDataReader from blocking?

    - by Swingline Rage
    Running the following code against a large PostgreSQL table, the NpgsqlDataReader object blocks until all data is fetched:

        NpgsqlCommand cmd = new NpgsqlCommand(strQuery, _conn);
        NpgsqlDataReader reader = cmd.ExecuteReader(); // <-- takes 30 seconds

    How can I get it to behave so that it doesn't prefetch all the data? I want to step through the result set row by row without fetching all 15 GB into memory at once. I know there were issues with this sort of thing in Npgsql 1.x, but I'm on 2.0. This is against a PostgreSQL 8.3 database on XP/Vista/7. I also don't have any funky "force Npgsql to prefetch" settings in my connection string. I'm at a complete loss as to why this is happening.
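    One workaround, as a sketch (assuming a server-side cursor is acceptable; the cursor name and batch size are illustrative): declare a cursor for the query inside a transaction and FETCH it in chunks, so only one batch is ever materialized on the client.

        // Sketch: stream a large result set via a PostgreSQL cursor.
        using (var tx = _conn.BeginTransaction())
        {
            new NpgsqlCommand("DECLARE big_cur CURSOR FOR " + strQuery, _conn, tx)
                .ExecuteNonQuery();

            while (true)
            {
                var fetch = new NpgsqlCommand("FETCH 1000 FROM big_cur", _conn, tx);
                int rows = 0;
                using (var reader = fetch.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        rows++;
                        // ... process one row here ...
                    }
                }
                if (rows == 0) break;   // cursor exhausted
            }
            tx.Commit();
        }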

  • Looking for a Magnetic Card Reader with data storage

    - by Omar Sharif
    I am looking for a magnetic card reader with data storage of about 2 GB. The reader will be placed outside under a shade, exposed to temperatures from -5 C to 50 C. Its job is to swipe customer loyalty cards issued to regular customers of a gas station: each time they get gas filled, they will swipe their card to mark their presence. Swiped data will be stored in the reader and transferred at intervals to a PC in the office. The visit data will be used to award gifts or benefits to frequently visiting clients. Are any ready-made solutions available? Please advise. Omar

  • Strange messages in log file: @^@^@^@^@^@^@^@^@^@^@^@^@^@^@^...

    - by celalo
    Hello. I have an application server for network operations, written in Java and based on Apache MINA. Recently I encountered strange behavior in my log files: I noticed that a log file is full of ^@^@^@^@^@... characters. There is a vast number of these unexpected characters, so the log file grows to hundreds of GB in a couple of hours! I have no clue about this problem, and it is almost impossible to Google. What could be the reason? Is that set of characters familiar to anybody? I can give more details about the application if needed. Thanks in advance.

  • Displaying a large file in a JTextArea

    - by Sathish Gopal
    Hi all, I'm currently working on a Swing UI assignment. The work involves showing large file content in a JTextArea; the file can be as large as 2 GB. My initial idea is to lazily load content from the file: say, 1 MB of content will be shown to the user, and as the user scrolls I will retrieve the next 1 MB. All these operations will happen in a background thread (SwingWorker). I looked at the JTextArea API; the insert method takes a String and an int (the position of the insert) as parameters. This will suffice, but I'm worried about performance, because the content (1 MB at a time) will have to be converted to a String object. Is there any workaround, or any alternative/better solution for this?
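    A minimal sketch of the chunked read described above (file, offset, charset, and textArea are assumed to exist; real code would also track the scroll position and evict text that has scrolled away):

        // Hypothetical chunk loader: read about 1 MB at a given offset off the
        // EDT, then append the resulting String to the JTextArea on the EDT.
        void loadChunk(final File file, final long offset, final Charset charset,
                       final JTextArea textArea) {
            new SwingWorker<String, Void>() {
                @Override protected String doInBackground() throws Exception {
                    RandomAccessFile raf = new RandomAccessFile(file, "r");
                    try {
                        byte[] buf = new byte[1024 * 1024];
                        raf.seek(offset);
                        int n = raf.read(buf);
                        return n > 0 ? new String(buf, 0, n, charset) : "";
                    } finally {
                        raf.close();
                    }
                }
                @Override protected void done() {
                    try {
                        textArea.append(get()); // runs on the EDT
                    } catch (Exception ex) {
                        ex.printStackTrace();
                    }
                }
            }.execute();
        }

    Note that a fixed byte window can split a multi-byte character at either end, so a real implementation should align chunk boundaries on character boundaries.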

  • What's the best way to view/analyse/filter huge traces/logfiles?

    - by oliver
    This seems to be a recurring issue: we receive a bug report for our software and, with it, tons of traces or log files. Since finding errors is much easier with a visualization of the log messages/events over time, it is convenient to use a tool that can display the progression of events in a graph etc. (e.g. Wireshark, http://www.wireshark.org, for analyzing network traffic). What tool do you use for such a purpose? The problem with most tools I have used so far is that they mercilessly break down when you feed them huge data traces (> 1 GB), so some criteria for such a tool would be: it can deal with huge input files (> 1 GB); it is really fast (so you don't have to get coffee while a file is loading); and it has some sort of filtering mechanism.

  • Building a case for Solr

    - by Midhat
    Our product consists of multiple applications, all using Lucene. Two of the applications I am involved with have Lucene indexes of about 3 GB and 12 GB. Another team is building an application for which they estimate the Lucene index size to be close to 1 terabyte. New documents are added to the indexes approximately every 15 days. We do not have any apparent performance issues with the current applications. So my questions are: should we be using Solr now? When should one stop using Lucene and graduate to Solr? Are there any disadvantages or problems with using Solr? The client applications are written in ASP.NET, but I assume they will be able to use a Solr server via SolrNet.
