Search Results

Search found 3324 results on 133 pages for 'gb'.

Page 122/133 | < Previous Page | 118 119 120 121 122 123 124 125 126 127 128 129  | Next Page >

  • Delphi 2009 dbExpress and Interbase: Unicode migration steps and risks?

    - by mjustin
    Currently, our database uses Win1252 as its only character encoding. We will have to support Unicode in the database tables soon, which means we have to perform this migration for four databases and around 80 Delphi applications which run in-house in a 24/7 environment. Are there recommendations for database migrations to UTF-8 (or UNICODE_FSS) for Delphi applications? Some questions are listed below. Many thanks in advance for your answers!

    - Are there tools which help with the migration of the existing databases (sizes between 250 MB and 2 GB, no Blob fields), by dumping the data, recreating the database with UNICODE_FSS or UTF-8, and loading the data back?
    - Are there known problems with Delphi 2009, dbExpress and Interbase 7.5 related to Unicode character sets?
    - Would you recommend upgrading the databases to Interbase 2009 first? (This upgrade is planned but does not have a high priority.)
    - Can we simply migrate the database and Delphi will handle the Unicode character sets automatically, or will we have to change all character field types in every Datamodule (dfm and source code) too?
    - Which strategy would you recommend for working on the migration in parallel with the normal development and maintenance of the existing application? The application runs in-house, so development and database administration are done internally.

    Update: one problem I have found is that there are two different persistent field types for Unicode and non-Unicode character fields. For the existing database, dbExpress creates TStringField objects. For the Unicode database fields, dbExpress creates (or expects!) TWideStringField objects. This looks like a lot of work lies ahead. While we could try to avoid persistent fields (and add calculated fields at run time), we would of course prefer a solution which does not require so many changes in existing units and DFM files.

    Read the article

  • What encoding does InstallShield expect non-latin-alphabet string table entries to use?

    - by DNS
    I work on an app that gets distributed via a single installer containing multiple localizations. The build process includes a script that updates the .ism string table with translations for each supported language. This works fine for languages like French and German. But when testing the installer in, for example, Japanese, the text shows up as a series of squares. It's unlikely to be a font problem, since the InstallShield-supplied strings show up fine; only the string table entries are mangled. So the problem seems to be that the strings are in the wrong encoding. The .ism is in XML format, with UTF-8 declared as its encoding, so I assumed the strings needed to be UTF-8 encoded as well. Do they actually need to use the encoding of the target platform? Is there any concern, then, about targets having different encodings, for example Chinese systems using one GB encoding versus another? What is the right thing to do here?

    Read the article

  • C# Regex stops after first line matched

    - by JD Guzman
    OK, so I have a regex and I need it to find matches in a multiline string. This is the string I am using:

        Device Identifier: disk0
        Device Node: /dev/disk0
        Part of Whole: disk0
        Device / Media Name: OCZ-VERTEX2 Media
        Volume Name: Not applicable (no file system)
        Mounted: Not applicable (no file system)
        File System: None
        Content (IOContent): GUID_partition_scheme
        OS Can Be Installed: No
        Media Type: Generic
        Protocol: SATA
        SMART Status: Verified
        Total Size: 240.1 GB (240057409536 Bytes) (exactly 468862128 512-Byte-Blocks)
        Volume Free Space: Not applicable (no file system)
        Device Block Size: 512 Bytes
        Read-Only Media: No
        Read-Only Volume: Not applicable (no file system)
        Ejectable: No
        Whole: Yes
        Internal: Yes
        Solid State: Yes
        OS 9 Drivers: No
        Low Level Format: Not supported

    Basically I need to separate each line into two groups, with the colon as the separator. The regex I am using is:

        @"([A-Za-z0-9\(\) \-\/]+):([A-Za-z0-9\(\) \-\/]+).*"

    It does work, but it only picks up the first line, separates it into the two groups like I want, and then stops at that point. I have tried the Multiline option but it doesn't make any difference. I must admit I am new to the regex world. Any help is appreciated.
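
    A likely culprit, assuming the code calls Regex.Match (which returns only the first match) rather than Regex.Matches, is that only one match is ever requested; RegexOptions.Multiline then controls how ^ and $ anchor, not how many matches come back. A minimal sketch of iterating every per-line match (string contents shortened, names assumed):

        using System;
        using System.Text.RegularExpressions;

        class RegexMultilineSketch
        {
            static void Main()
            {
                string info = "Device Identifier: disk0\nDevice Node: /dev/disk0\nMounted: Not applicable (no file system)";

                // Multiline makes ^ and $ match at every line boundary instead of only at the ends of the whole string.
                var pattern = new Regex(@"^([A-Za-z0-9()\- /]+):\s*(.+)$", RegexOptions.Multiline);

                foreach (Match m in pattern.Matches(info))
                {
                    Console.WriteLine("{0} => {1}", m.Groups[1].Value.Trim(), m.Groups[2].Value.Trim());
                }
            }
        }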

    Read the article

  • Why does ActiveMQ hold messages that should be deleted from Topic?

    - by rauch
    I use ActiveMQ as a notification system (pub/sub model). On the server: if any changes to the data occur, the server sends the updated data (a file) to a Topic using BlobMessages. There are a few clients that subscribe to this Topic and get the updated file if it exists in the Topic. The problem is that all of the BlobMessages that were sent to the Topic are held by ActiveMQ the whole time.

        this.producer = new ProducerTool.Builder("tcp://localhost:61616?jms.blobTransferPolicy.uploadUrl=http://localhost:8161/fileserver/", "ServerProdTopic").topic(true)
                .transacted(false).durable(false).timeToLive(10000L).build();
        this.consumer = new ConsumerTool.Builder("tcp://localhost:61616", "ServerConsTopic").topic(true)
                .transacted(false).durable(false).build();
        consumer.setMessageListener(this);

    The file is sent like this:

        connection = createConnection();
        session = createSession(connection);
        producer = createProducer(session);
        BlobMessage blobMsg = ((ActiveMQSession) session).createBlobMessage(resource);
        blobMsg.setStringProperty("sourceName", resource.getName());
        producer.send(blobMsg);
        if (transacted) {
            System.out.println("Producer Committing...");
            session.commit();
        }

    Where createConnection is:

        protected Connection createConnection() throws JMSException, Exception {
            ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory(url);
            //connectionFactory.getBlobTransferPolicy().setUploadUrl("http://localhost:8161/fileserver/");
            Connection connection = connectionFactory.createConnection();
            connection.start();
            ((ActiveMQConnection) connection).setCopyMessageOnSend(false);
            return connection;
        }

    Everything that could be relevant I have set as needed: Session.AUTO_ACKNOWLEDGE; non-durable subscription; TimeToLive = 9000; JMSDeliveryMode = Non-Persistent. What I see at runtime: in the ActiveMQ directory ~/apache-activemq-5.3.0/webapps/fileserver/ there are all the files, both those that were delivered and those that were not delivered to subscribers. Why? Sometimes the server sends big files of about 1 GB, and even these files are held in that directory, even after stopping the subscribers (clients), the publisher (server) and the ActiveMQ broker.

    Read the article

  • how to force operating system to give java more memory?

    - by Denny
    Hello, I've got a problem with Java jar files and memory. I use NetBeans 6.7 to develop an application, and this application needs more memory to run because it converts other files. Whenever the application converts a 6-10 MB file, it crashes. So I set the NetBeans VM options to -Xms32m -Xmx256m, and the application can convert 6-10 MB files with no problem. I then Clean and Build the project so it makes a jar file of my application. I run the jar on my computer and use jconsole to monitor the memory. The maximum memory available to the application shows as 256 MB. But whenever I move it to some other computers, it shows 65-66 MB in jconsole, and the application crashes when it converts 6-10 MB files. So I need to use the command prompt: java -Xmx256m -jar myjar.jar to execute the jar with the maximum memory. Why can this happen? On my computer the maximum memory shows 256 MB, but on other computers 65-66 MB. Can I force the other computers to give extra maximum memory to my application? Thank you for your answer. I'm sorry for my inadequate English; if you find my question hard to understand, please let me know. Best regards, Denny. PS: FYI, the computer I used to develop the application has 2 GB of RAM; the other computers I tested on have 1-2 GB of RAM.

    Read the article

  • plupload with webpy.

    - by markus
    Hi, I have a problem. I want to upload a file with plupload using the HTML5 runtime. This is my HTML/JS code:

        jQuery(function(){
            jQuery("#uploader").pluploadQueue({
                // General settings
                runtimes : 'html5',
                name : 'file',
                url : 'http://server.name/addContent',
                max_file_size : '${maxSize}$_("GB")'
            });

            jQuery('#form_upload_file').submit(function(e) {
                var uploader = jQuery('#uploader').pluploadQueue();
                // Validate number of uploaded files
                if (uploader.total.uploaded == 0) {
                    // Files in queue, upload them first
                    if (uploader.files.length > 0) {
                        // When all files are uploaded, submit the form
                        uploader.bind('UploadProgress', function() {
                            if (uploader.total.uploaded == uploader.files.length)
                                jQuery('#form_upload_file').submit();
                        });
                        uploader.start();
                    } else
                        alert('You must at least upload one file.');
                    e.preventDefault();
                }
            });
        });

        <form id="form_upload_file" action="#" method="POST">
            <div id="uploader"></div>
            <input type="hidden" name="token" value="token" />
            <input type="hidden" name="idUser" value="$idUser" />
        </form>

    So, when I click the button to upload (the submit() method is not called), it sends an OPTIONS HTTP request to my server, and I don't know what I must do to save the file. This is my webpy code:

        def OPTIONS(self):
            web.header('Content-type', 'text/plain: charset=utf-8')
            web.header('Cache-Control', 'no-store, no-cache, must-revalidate')
            web.header('Cache-Control', 'post-check=0, pre-check=0', False)
            web.header('Pragma', 'no-cache')

        def POST(self):
            input = web.input(_unicode=False, file={})  # retrieve the inputs
            self.copy(input.file.file)
            # etc.

    Any idea? Thanks.

    Read the article

  • Performance of java on different hardware?

    - by tangens
    In another SO question I asked why my Java programs run faster on AMD than on Intel machines. But it seems that I'm the only one who has observed this. Now I would like to invite you to share the numbers of your local Java performance with the SO community. I observed a big performance difference when watching the startup of JBoss on different hardware, so I set this program as the base for the comparison. To participate, please download JBoss 5.1.0.GA and run jboss-5.1.0.GA/bin/run.sh (or run.bat). This starts a standard configuration of JBoss without any extra applications. Then look for the last line of the start procedure, which looks like this:

        [ServerImpl] JBoss (Microcontainer) [5.1.0.GA (build: SVNTag=JBoss_5_1_0_GA date=200905221634)] Started in 25s:264ms

    Please repeat this procedure until the printed time is somewhat stable and post this line together with some comments on your hardware (I used CPU-Z to get the info) and operating system, like this:

        java version: 1.6.0_13
        OS: Windows XP
        Board: ASUS M4A78T-E
        Processor: AMD Phenom II X3 720, 2.8 GHz
        RAM: 2*2 GB DDR3 (labeled 1333 MHz)
        GPU: NVIDIA GeForce 9400 GT
        disc: Seagate 1.5 TB (ST31500341AS)

    Use your votes to bring the fastest configuration to the top. I'm very curious about the results. EDIT: Up to now only a few members have shared their results. I'd really be interested in the results obtained with some other architectures. If someone works with a Mac (desktop) or runs an Intel i7 at less than 3 GHz, please start JBoss once and share your results. It will only take a few minutes.

    Read the article

  • asp.net multilingual website culture settings

    - by Hemant Kothiyal
    Hi, in an ASP.NET multilingual website in English (UK) and Swedish, I have three resource files:

    1. en-GB.resx
    2. sv-SE.resx
    3. a culture-neutral file

    I have created one base class and all pages inherit from that class. There I write the following lines to set the UI culture and culture:

        Thread.CurrentThread.CurrentUICulture = new CultureInfo(CultureInfo.CurrentUICulture.Name);
        Thread.CurrentThread.CurrentCulture = new CultureInfo(CultureInfo.CurrentCulture.Name);

    Question: suppose my browser language is Swedish (sv-SE); then this code works, because it finds the CurrentUICulture and CurrentCulture values as sv-SE. Now suppose the browser language is Swedish (sv) only; in that case the values will be set as CurrentUICulture = sv and CurrentCulture = sv-SE. The problem is that the user then sees all text from the culture-neutral resource file, which I kept in English, while all decimal separators, currency and so on appear in Swedish. It looks confusing to the user. What would be the right approach? I am thinking of the following solutions; please correct me:

    1. I can create an extra resource file for sv as well.
    2. I check the value of CurrentUICulture in the base class and, if it is sv, replace it with sv-SE.

    Which one is the right approach, or is there any other good way of doing this?
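
    If option 2 is the route taken, the mapping from a neutral culture to a specific one does not have to be hard-coded: CultureInfo.CreateSpecificCulture("sv") returns the default specific culture (sv-SE). A minimal sketch of a base page doing this, with class and member names assumed rather than taken from the actual project:

        using System.Globalization;
        using System.Threading;

        public class LocalizedBasePage : System.Web.UI.Page
        {
            protected override void InitializeCulture()
            {
                // First browser language, e.g. "sv" or "sv-SE"; fall back to en-GB if none is sent.
                string browserCulture = (Request.UserLanguages != null && Request.UserLanguages.Length > 0)
                    ? Request.UserLanguages[0].Split(';')[0]
                    : "en-GB";

                // CreateSpecificCulture turns a neutral culture ("sv") into its default
                // specific culture ("sv-SE"), so resource lookup and number/currency
                // formatting stay consistent with each other.
                CultureInfo specific = CultureInfo.CreateSpecificCulture(browserCulture);

                Thread.CurrentThread.CurrentUICulture = specific;
                Thread.CurrentThread.CurrentCulture = specific;

                base.InitializeCulture();
            }
        }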

    Read the article

  • PayPal return URL

    - by Sam
    Here's the code for my PayPal button:

        <form action="https://www.sandbox.paypal.com/cgi-bin/webscr" method="post">
            <input type="hidden" name="cmd" value="_xclick">
            <input type="hidden" name="business" value="[email protected]">
            <input type="hidden" name="lc" value="GB">
            <input type="hidden" name="button_subtype" value="products">
            <input type="hidden" name="no_note" value="1">
            <input type="hidden" name="no_shipping" value="1">
            <input type="hidden" name="rm" value="0">
            <input type="hidden" name="return" value="http://www.example.com">
            <input type="hidden" name="item_name" value="My Item">
            <input type="hidden" name="amount" value="25.00">
            <input type="hidden" name="currency_code" value="GBP">
            <input type="hidden" name="bn" value="PP-BuyNowBF:proceed_btn.gif:NonHosted">
            <input type="hidden" name="item_number" value="4BD9569402CDE">
            <input type="image" src="http://www.example.com/image.gif" border="0" name="submit" alt="PayPal - The safer, easier way to pay online.">
            <img alt="" border="0" src="https://www.paypal.com/en_GB/i/scr/pixel.gif" width="1" height="1">
        </form>

    Is it possible to add the item_number to the return URL? For example, after completing the payment within PayPal the user gets sent back to http://www.example.com?item_number=4BD9569402CDE

    Read the article

  • Python halts while iteratively processing my 1GB csv file

    - by Dan
    I have two files:

    - metadata.csv: contains an ID, followed by vendor name, a filename, etc.
    - hashes.csv: contains an ID, followed by a hash

    The ID is essentially a foreign key of sorts, relating file metadata to its hash. I wrote this script to quickly extract out all hashes associated with a particular vendor. It craps out before it finishes processing hashes.csv.

        stored_ids = []

        # this file is about 1 MB
        entries = csv.reader(open(options.entries, "rb"))
        for row in entries:
            # row[2] is the vendor
            if row[2] == options.vendor:
                # row[0] is the ID
                stored_ids.append(row[0])

        # this file is 1 GB
        hashes = open(options.hashes, "rb")

        # I iteratively read the file here,
        # just in case the csv module doesn't do this.
        for line in hashes:
            # not sure if stored_ids contains strings or ints here...
            # this probably isn't the problem though
            if line.split(",")[0] in stored_ids:
                # if it's one of the IDs we're looking for, print the file and hash to STDOUT
                print "%s,%s" % (line.split(",")[2], line.split(",")[4])

        hashes.close()

    This script gets about 2000 entries through hashes.csv before it halts. What am I doing wrong? I thought I was processing it line by line. PS: the csv files are the popular HashKeeper format and the files I am parsing are the NSRL hash sets. http://www.nsrl.nist.gov/Downloads.htm#converter

    UPDATE: working solution below. Thanks everyone who commented!

        entries = csv.reader(open(options.entries, "rb"))
        stored_ids = dict((row[0], 1) for row in entries if row[2] == options.vendor)

        hashes = csv.reader(open(options.hashes, "rb"))
        matches = dict((row[2], row[4]) for row in hashes if row[0] in stored_ids)

        for k, v in matches.iteritems():
            print "%s,%s" % (k, v)

    Read the article

  • jQuery server ping slowly but surely filling memory?

    - by danspants
    I use the following piece of code to test whether our server is running while the user is on a page. I've also started adding other functions that grab small amounts of data that are constantly changing and are to be relayed to the user (files waiting for download, messages, reports etc). I've noticed recently that if I leave any page open (all pages contain the same function), the browser takes up more and more system memory, which I can only attribute to this regular task (overnight it reached 1.6 GB). Is there some way of clearing out the data that is being accumulated? Is this normal behaviour? As far as I can tell, every time I call the function it should overwrite the previously retrieved data.

        function testServer(){
            jQuery.ajax({
                type: "HEAD",
                url: "/media/d_arrow_blue.png",
                error: function(msg) {
                    jQuery.jGrowl("Server Disconnected");
                }
            });

            // retrieves count of files awaiting download - move to separate function
            jQuery.get("/get_files/", {"type": "count"}, function(data) {
                jQuery("#downloadList").children("div").text(data);
            });
        };

        jQuery().doTimeout(6000, function() {
            testServer();
            return true;
        });

    Read the article

  • Can a large transaction log cause cpu hikes to occur

    - by Simon Rigby
    Hello all, I have a client with a very large database on SQL Server 2005. The total space allocated to the db is 15 GB, with roughly 5 GB to the db and 10 GB to the transaction log. Just recently a web application that is connecting to that db is timing out. I have traced the actions on the web page and examined the queries that execute whilst these web operations are performed. There is nothing untoward in the execution plan. The query itself uses multiple joins but completes very quickly. However, the db server's CPU spikes to 100% for a few seconds. The issue occurs when several simultaneous users are working on the system (when I say multiple, read about 5). Under this load, timeouts start to occur. I suppose my question is: can a large transaction log cause issues with CPU performance? There is about 12 GB of free space on the disk currently. The configuration is a little out of my hands, but the db and log are both on the same physical disk. I appreciate that the log file is massive and needs attending to, but I'm just looking for a heads up as to whether this may cause CPU spikes (i.e. trying to find the correlation). The timeouts are a recent thing and this app has been responsive for a few years (i.e. it's a recent manifestation). Many thanks.

    Read the article

  • jmap -histo is missing a lot of memory

    - by ripper234
    I have a JVM with 12 gigs of total RAM, out of which 7 GB is allocated to the old generation. There seems to be some memory leak, because almost the entire old gen is full, and will not release when I schedule a GC (the process is not doing anything else at that time). A jmap -histo dump only reveals less than 1 gigabyte worth of objects. Where are the missing 6 gigs? What better tool do you propose for diagnosing this? Here is the top of the jmap output:

        num     #instances         #bytes  class name
        ----------------------------------------------
          1:        429853       68725736  <constMethodKlass>
          2:        429853       51594040  <methodKlass>
          3:         37503       49611368  <constantPoolKlass>
          4:         37503       31109576  <instanceKlassKlass>
          5:        191716       28019968  [C
          6:         32573       26933152  <constantPoolCacheKlass>
          7:         86158       13789560  [I
          8:         53532       11244232  [B
          9:           284       10507216  [J
         10:        137608        7210664  <symbolKlass>
         11:        203072        6498304  java.lang.String
         12:         10132        5219512  <methodDataKlass>
         13:         39694        4128176  java.lang.Class
         14:         55713        3792816  [S
         15:         61816        3141936  [[I
         16:         90109        2883488  java.util.HashMap$Entry

    Read the article

  • UNIX-style RegExp replace running extremely slowly under Windows. Help? EDIT: Negative lookahead assertion?

    - by John Sullivan
    I'm trying to run a Unix-style regexp on every log file in a 1.12 GB directory, then replace the matched pattern with ''. A test run on a 4 MB file took about 10 minutes, but worked. Obviously something is murdering performance by several orders of magnitude.

    Find: ^(?!.*155[0-2][0-9]{4}\s.*).*$ -- NOTE: match any line NOT starting 155[0-2]NNNN, where N is a digit 0-9.
    Replace with: ''.

    Is there some justifiable reason for my regexp to take this long to replace matching text, or is the program I am using (this is Windows / a program called "grepWin") most likely poorly optimized? Thanks.

    UPDATE: I am noticing that searching for ^(155[0-2]).*$ takes ~7 seconds in a 5.6 MB file with 77 matches. Adding the negative lookahead assertion, ?!, so that the regexp becomes ^(?!155[0-2]).*$, causes it to take at least 5-10 minutes; granted, there will be thousands and thousands of matches. Should the negative lookahead assertion, and/or a large quantity of matches, be this detrimental to performance?
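
    As a point of comparison rather than a fix for grepWin itself, here is a minimal sketch (file names assumed) of the inverted approach: stream each file line by line and keep only the lines that do start with the key, which is equivalent to replacing every non-matching line with '' but never needs a lookahead at all:

        using System.IO;
        using System.Text.RegularExpressions;

        class FilterLogSketch
        {
            static void Main()
            {
                // Lines to keep start with 155[0-2] followed by four digits and whitespace.
                var keep = new Regex(@"^155[0-2][0-9]{4}\s", RegexOptions.Compiled);

                using (var reader = new StreamReader("input.log"))
                using (var writer = new StreamWriter("filtered.log"))
                {
                    string line;
                    while ((line = reader.ReadLine()) != null)
                    {
                        if (keep.IsMatch(line))
                            writer.WriteLine(line);
                    }
                }
            }
        }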

    Read the article

  • Qt/C++, Problems with large QImage

    - by David Günzel
    I'm pretty new to C++/Qt and I'm trying to create an application with Visual Studio C++ and Qt (4.8.3). The application displays images using a QGraphicsView, and I need to change the images at pixel level. The basic code is (simplified):

        QImage* img = new QImage(img_width, img_height, QImage::Format_RGB32);
        while (do_some_stuff) {
            img->setPixel(x, y, color);
        }
        QGraphicsPixmapItem* pm = new QGraphicsPixmapItem(QPixmap::fromImage(*img));
        QGraphicsScene* sc = new QGraphicsScene;
        sc->setSceneRect(0, 0, img->width(), img->height());
        sc->addItem(pm);
        ui.graphicsView->setScene(sc);

    This works well for images up to around 12000x6000 pixels. The weird thing happens beyond this size. When I set img_width = 16000 and img_height = 8000, for example, the line img = new QImage(...) returns a null image. The image data should be around 512,000,000 bytes, so it shouldn't be too large, even on a 32-bit system. Also, my machine (Win 7 64-bit, 8 GB RAM) should be capable of holding the data. I've also tried this version:

        uchar* imgbuf = (uchar*) malloc(img_width * img_height * 4);
        QImage* img = new QImage(imgbuf, img_width, img_height, QImage::Format_RGB32);

    At first, this works. The img pointer is valid and calling img->width(), for example, returns the correct image width (instead of 0, in case the image pointer is null). But as soon as I call img->setPixel(), the pointer becomes null and img->width() returns 0. So what am I doing wrong? Or is there a better way of modifying large images at pixel level? Regards, David

    Read the article

  • Email Tracking - GMail

    - by Abs
    Hello all, I am creating my own email tracking system for email marketing tracking. I have been able to determine each person's email client by using the HTTP referrer, but for some reason Gmail does not send an HTTP_REFERER at all! So I am trying to find another way of identifying when Gmail requests a transparent image from my server. I get the following headers from print_r($_SERVER):

        DOCUMENT_ROOT = /usr/local/apache/htdocs
        GATEWAY_INTERFACE = CGI/1.1
        HTTP_ACCEPT = */*
        HTTP_ACCEPT_CHARSET = ISO-8859-1,utf-8;q=0.7,*;q=0.3
        HTTP_ACCEPT_ENCODING = gzip,deflate,sdch
        HTTP_ACCEPT_LANGUAGE = en-GB,en-US;q=0.8,en;q=0.6
        HTTP_CONNECTION = keep-alive
        HTTP_COOKIE = __utmz=156230011.1290976484.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); __utma=156230011.422791272.1290976484.1293034866.1293050468.7
        HTTP_HOST = xx.xxx.xx.xxx
        HTTP_USER_AGENT = Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/534.10 (KHTML, like Gecko) Chrome/8.0.552.237 Safari/534.10
        PATH = /bin:/usr/bin
        QUERY_STRING = i=MTA=
        REDIRECT_STATUS = 200
        REMOTE_ADDR = xx.xxx.xx.xxx
        REMOTE_PORT = 61296
        REQUEST_METHOD = GET

    Is there anything of use in that list? Or is there something else I can do to actually get the HTTP referrer? If not, how are other ESPs managing to find out whether Gmail was used to view an email? By the way, I'd appreciate it if we can hold back on whether this is ethical or not, as many ESPs do this already; I just don't want to pay for their service and I want to do it internally. Thanks all for any implementation advice.

    Update: Just thought I would update this question and make it clearer in light of the bounty. I would like to find out when a user opens my email when it is sent to a Gmail inbox. Assume I have the usual transparent image tracking and the user does not block images. I would like to do this with the single request and the header details I get when the transparent image is requested.

    Read the article

  • Read from one large file and write to many (tens, hundreds, or thousands) files in Java?

    - by Rudiger
    I have a large-ish file (4-5 GB compressed) of small messages that I wish to parse into approximately 6,000 files by message type. Messages are small; anywhere from 5 to 50 bytes depending on the type. Each message starts with a fixed-size type field (a 6-byte key). If I read a message of type '000001', I want to append its payload to 000001.dat, etc. The input file contains a mixture of messages; I want N homogeneous output files, where each output file contains only the messages of a given type. What's an efficient, fast way of writing these messages to so many individual files? I'd like to use as much memory and processing power as it takes to get it done as fast as possible. I can write compressed or uncompressed files to the disk. I'm thinking of using a hashmap with a message type key and an outputstream value, but I'm sure there's a better way to do it. Thanks!
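
    The map-of-open-streams idea mentioned at the end is workable; the sketch below shows the shape of it in C# (in Java the same pattern is a HashMap from the 6-byte key to a BufferedOutputStream), with the payload format and file names assumed. With ~6,000 distinct types, OS limits on open file handles are the main thing to watch:

        using System.Collections.Generic;
        using System.IO;
        using System.Text;

        class MessageSplitterSketch
        {
            static void Main()
            {
                // One buffered writer per 6-byte message type key, opened lazily.
                var writers = new Dictionary<string, BinaryWriter>();

                using (var input = new BinaryReader(File.OpenRead("messages.dat")))
                {
                    while (input.BaseStream.Position < input.BaseStream.Length)
                    {
                        string type = Encoding.ASCII.GetString(input.ReadBytes(6));
                        byte[] payload = ReadPayload(input); // the real record format is not specified

                        BinaryWriter writer;
                        if (!writers.TryGetValue(type, out writer))
                        {
                            writer = new BinaryWriter(new BufferedStream(File.Create(type + ".dat")));
                            writers[type] = writer;
                        }
                        writer.Write(payload);
                    }
                }

                foreach (var writer in writers.Values)
                    writer.Dispose();
            }

            // Placeholder: assumes each payload is prefixed by a single length byte (5-50 bytes).
            static byte[] ReadPayload(BinaryReader input)
            {
                byte length = input.ReadByte();
                return input.ReadBytes(length);
            }
        }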

    Read the article

  • Culture Sensitive GetHashCode

    - by user114928
    Hi, I'm writing a C# application that will process some text and provide basic query functions. In order to ensure the best possible support for other languages, I am allowing the users of the application to specify the System.Globalization.CultureInfo (via the "en-GB" style code) and also the full range of collation options using the System.Globalization.CompareOptions flags enum. For regular string comparison I'm then using a combination of: a) the String.Compare overload that accepts the culture and options, and b) for some bulk processes, caching the byte data (KeyData) from CompareInfo.GetSortKey (the overload that accepts the options) and using a byte-by-byte comparison of the KeyData. This seemed fine (although please comment if you think these two methods shouldn't be mixed), but then I had reason to use the HashSet<T> class, which only has an overload accepting an IEqualityComparer<T>. MS documentation seems to suggest that I should use StringComparer (which implements both IEqualityComparer<string> and IComparer<string>), but this only seems to support the "IgnoreCase" option from CompareOptions and not "IgnoreKanaType", "IgnoreSymbols", "IgnoreWidth" etc. I'm assuming that a StringComparer that ignores these other options could produce different hash codes for two strings that might be considered the same under my other comparison options. I'd therefore get incorrect results from my application. My only thought at the moment is to create my own IEqualityComparer<string> that generates a hash code from the SortKey.KeyData and checks equality using the String.Compare overload. Any suggestions?
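
    That last idea can be sketched roughly as follows; this is a minimal illustration, assuming the user-chosen CultureInfo and CompareOptions are passed in, not a drop-in replacement for StringComparer:

        using System.Collections.Generic;
        using System.Globalization;

        // Equality and hashing both driven by the culture's collation, so strings that
        // compare equal under the chosen CompareOptions also hash identically.
        public sealed class SortKeyEqualityComparer : IEqualityComparer<string>
        {
            private readonly CompareInfo compareInfo;
            private readonly CompareOptions options;

            public SortKeyEqualityComparer(CultureInfo culture, CompareOptions options)
            {
                this.compareInfo = culture.CompareInfo;
                this.options = options;
            }

            public bool Equals(string x, string y)
            {
                if (ReferenceEquals(x, y)) return true;
                if (x == null || y == null) return false;
                return compareInfo.Compare(x, y, options) == 0;
            }

            public int GetHashCode(string obj)
            {
                if (obj == null) return 0;
                byte[] key = compareInfo.GetSortKey(obj, options).KeyData;
                // Simple combine over the sort-key bytes.
                unchecked
                {
                    int hash = 17;
                    foreach (byte b in key)
                        hash = hash * 31 + b;
                    return hash;
                }
            }
        }

    A HashSet<string> can then be constructed with new HashSet<string>(new SortKeyEqualityComparer(culture, options)), so strings that collate identically under the chosen options land in the same bucket.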

    Read the article

  • Temporary storage for keeping data between program iterations?

    - by mr.b
    I am working on an application that works like this:

    1. It fetches data from many sources, resulting in a pool of about 500,000-1,500,000 records (depends on time/day).
    2. The data is parsed.
    3. Part of the data is processed in a way that compares it to pre-existing data (read from the database), calculations are made, and the results are stored in the database. The resulting dataset that has to be stored in the database is, however, much smaller in size (compared to the original data set), and ranges from 5,000-50,000 records. This process almost always updates existing data, and perhaps adds a few more records.
    4. Then the data from step 2 should be kept somehow, somewhere, so that the next time data is fetched, there is a data set which can be used to perform the calculations without touching the pre-existing data in the database.

    I should point out that this data can be lost; it's not irreplaceable (the key information can be read from the database if needed), but it would speed up the process next time. Application components can (and will) be run off different computers (in the same network), so the storage has to be reachable from multiple hosts. I have considered using memcached, but I'm not quite sure whether I should, because one record is usually no smaller than 200 bytes, and if I have 1,500,000 records, I guess it would amount to over 300 MB of memcached cache... But that doesn't seem scalable to me: what if the data was 5x that amount? What if it were to consume 1-2 GB of cache only to keep data in between iterations (which could easily happen)? So, the question is: which temporary storage mechanism would be most suitable for this kind of processing? I haven't considered using MySQL temporary tables, as I'm not sure if they can persist between sessions and be used by other hosts on the network... Any other suggestions? Something I should consider?
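
    For scale, one low-tech option not raised in the question is a flat snapshot file on a network share that every host can reach; the sketch below is purely illustrative, with the record shape (a string key and a numeric value) and the share path assumed:

        using System.Collections.Generic;
        using System.IO;

        // Illustrative only: a binary snapshot of the intermediate data between iterations,
        // written to a share any host can read before the next run. Losing it is acceptable.
        static class IntermediateCache
        {
            const string CachePath = @"\\fileserver\cache\intermediate.bin"; // hypothetical share

            public static void Save(Dictionary<string, double> records)
            {
                using (var w = new BinaryWriter(File.Create(CachePath)))
                {
                    w.Write(records.Count);
                    foreach (var kv in records)
                    {
                        w.Write(kv.Key);
                        w.Write(kv.Value);
                    }
                }
            }

            public static Dictionary<string, double> Load()
            {
                var records = new Dictionary<string, double>();
                if (!File.Exists(CachePath)) return records; // the cache is allowed to be missing
                using (var r = new BinaryReader(File.OpenRead(CachePath)))
                {
                    int count = r.ReadInt32();
                    for (int i = 0; i < count; i++)
                        records[r.ReadString()] = r.ReadDouble();
                }
                return records;
            }
        }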

    Read the article

  • Dataflow Pipeline holding on to memory

    - by Jesse Carter
    I've created a Dataflow pipeline consisting of 4 blocks (including one optional block) which is responsible for receiving a query object from my application across HTTP, retrieving information from a database, doing an optional transform on that data, and then writing the information back in the HTTP response. In some testing I've been pulling down a significant amount of data from the database (570 thousand rows), which is stored in a List object and passed between the different blocks, and it seems like even after the final block has completed, the memory isn't being released. RAM usage in Task Manager will spike to over 2 GB and I can observe several large spikes as the List hits each block. The signatures for my blocks look like this:

        private TransformBlock<HttpListenerContext, Tuple<HttpListenerContext, QueryObject>> m_ParseHttpRequest;
        private TransformBlock<Tuple<HttpListenerContext, QueryObject>, Tuple<HttpListenerContext, QueryObject, List<string>>> m_RetrieveDatabaseResults;
        private TransformBlock<Tuple<HttpListenerContext, QueryObject, List<string>>, Tuple<HttpListenerContext, QueryObject, List<string>>> m_ConvertResults;
        private ActionBlock<Tuple<HttpListenerContext, QueryObject, List<string>>> m_ReturnHttpResponse;

    They are linked as follows:

        m_ParseHttpRequest.LinkTo(m_RetrieveDatabaseResults);
        m_RetrieveDatabaseResults.LinkTo(m_ConvertResults, tuple => tuple.Item2 is QueryObjectA);
        m_RetrieveDatabaseResults.LinkTo(m_ReturnHttpResponse, tuple => tuple.Item2 is QueryObjectB);
        m_ConvertResults.LinkTo(m_ReturnHttpResponse);

    Is it possible to set up the pipeline so that once each block is done with the list it no longer holds on to it, and so that once the entire pipeline has completed the memory is released?
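
    One knob worth checking, offered as a sketch rather than a diagnosis of this particular pipeline: Dataflow blocks keep processed messages in their internal buffers until a linked target accepts them, and with unbounded buffers those large List<string> payloads can sit there. Setting BoundedCapacity when constructing the blocks limits how many payloads can be queued inside the pipeline at once (the block bodies below are stand-ins):

        using System;
        using System.Collections.Generic;
        using System.Threading.Tasks.Dataflow;

        class BoundedPipelineSketch
        {
            static void Main()
            {
                // BoundedCapacity = 1 means a block accepts a new message only after the
                // previous one has moved on, so big List<string> payloads are not queued
                // up inside the pipeline's internal buffers.
                var blockOptions = new ExecutionDataflowBlockOptions { BoundedCapacity = 1 };
                var linkOptions = new DataflowLinkOptions { PropagateCompletion = true };

                var retrieve = new TransformBlock<string, List<string>>(
                    query => new List<string> { "row for " + query },   // stand-in for the database call
                    blockOptions);
                var respond = new ActionBlock<List<string>>(
                    rows => Console.WriteLine("{0} rows", rows.Count),  // stand-in for the HTTP response
                    blockOptions);

                retrieve.LinkTo(respond, linkOptions);

                retrieve.Post("select ...");
                retrieve.Complete();
                respond.Completion.Wait();
            }
        }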

    Read the article

  • How to format the Facebook news feed description of grouped app posts?

    - by JcFx
    I have an app which posts a message like this to a user's Facebook timeline: This is working fine, but if I post a few times, my posts get grouped on my news feed, and I get this: What settings should I use to control the way this news report appears? Instead of 'All about periods' and the page link in the box at the top, I'd like 'Body iQ Quiz' and the app description. Where would I set these values? And is it possible to make the grouped report say 'Jay cee Effex shared a link via Body iQ Quiz', the way the original post does? I'm posting from the Facebook AS3 API, and my post code looks like this:

        var auth:FacebookAuthResponse = Facebook.getAuthResponse();
        var token:String = auth.accessToken;
        var user:String = auth.uid;
        var values:Object = {
            access_token: token,
            name: "Body iQ Quiz",
            picture: "http://a7.sphotos.ak.fbcdn.net/hphotos-ak-snc6/282950_427728213914009_630526316_n.jpg",
            link: "http://www.lil-lets.co.uk/en-GB/Wellbeing",
            description: "Women, how well do we know our bodies? Click here to find out what your Body iQ is.",
            message: result.FacebookBody + " " + result.FacebookTitle
        };
        Facebook.api("/" + user + "/feed", handleSubmitFeed, values, URLRequestMethod.POST);

    ...but I'm not sure if this is something I can fix in code, or if the app configuration needs tweaking? NOTE: Some users report getting the latter format in their news feed even with a single post (I can't reproduce this), so perhaps grouping is a red herring, and the real question is how to format the news feed report of a timeline post?

    Read the article

  • Ultra-Portable Laptop or Tablet PC for Development and Sketching

    - by Nelson LaQuet
    I am a software developer who primarily writes in PHP, [X]HTML, CSS, Javascript, C# and C++. I use Eclipse for web development, Visual Studio 2008 for C++ and C# work, TortoiseSVN, a Subversion server for local repositories, SQL Server Express, Apache and MySQL. I also use Office 2007 for word processing and spreadsheets, and Vista Ultimate 64 is my primary operating system. The only other things I do on my laptop are watch movies, surf the internet and listen to music. I currently have an Acer Aspire 5100 (1.4 GHz AMD Turion X2, 2 GB of RAM and a 15.4" screen). This thing does not cut it in performance or portability, and in addition, my DVD drive failed. And before anybody posts about Vista: I have had XP Professional 32 on it for the last two years, and recently upgraded to Vista 64. It is actually faster (with Aero disabled) than XP, so it is not the OS that is causing the laptop to be slow. I usually sketch a lot, for explaining things, developing user interfaces and software architecture. Because of my requirements, I was thinking about a Lenovo X61 Tablet PC. It outperforms my current laptop, is significantly more portable, and... is a tablet. My question is: do any other software developers use this (or other tablets) for programming? Does it help to be able to sketch on the computer itself? And is it capable of being a good development machine? Will it handle the software listed above? If not, what is the best ultra-portable laptop that is good for programming? Or are ultra-portable laptops even good for programming? I could manage with my 15.4" screen, but am spoiled by my two 19" monitors at my home desktop and my job's workstation.

    Read the article

  • How to manipulate *huge* amounts of data

    - by Alejandro
    Hi there! I'm having the following problem. I need to store huge amounts of information (~32 GB) and be able to manipulate it as fast as possible. I'm wondering what's the best way to do it (combinations of programming language + OS + whatever you think is important). The structure of the information I'm using is a 4D array (NxNxNxN) of double-precision floats (8 bytes). Right now my solution is to slice the 4D array into 2D arrays and store them in separate files on the HDD of my computer. This is really slow and the manipulation of the data is unbearable, so this is no solution at all! I'm thinking of moving to a supercomputing facility in my country and storing all the information in RAM, but I'm not sure how to implement an application to take advantage of it (I'm not a professional programmer, so any book/reference will help me a lot). An alternative solution I'm thinking of is to buy a dedicated server with lots of RAM, but I don't know for sure if that will solve the problem. So right now my ignorance doesn't let me choose the best way to proceed. What would you do if you were in this situation? I'm open to any idea. Thanks in advance!
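
    One option that sits between "slice into many files" and "buy enough RAM for everything" is a single memory-mapped file, where the OS pages the array in and out on demand and the code indexes it as if it were in memory. A minimal sketch, with the array size and indexing scheme assumed (shown in C#; equivalent facilities exist on most platforms):

        using System;
        using System.IO;
        using System.IO.MemoryMappedFiles;

        class MappedArraySketch
        {
            static void Main()
            {
                const int N = 256;                              // illustrative: 256^4 doubles = 32 GiB backing file
                long bytes = (long)N * N * N * N * sizeof(double);

                using (var map = MemoryMappedFile.CreateFromFile(
                           "array4d.bin", FileMode.OpenOrCreate, "array4d", bytes))
                using (var view = map.CreateViewAccessor(0, bytes))
                {
                    // Flatten (i, j, k, l) into a single byte offset into the mapped file.
                    Func<int, int, int, int, long> offset =
                        (i, j, k, l) => ((((long)i * N + j) * N + k) * N + l) * sizeof(double);

                    view.Write(offset(1, 2, 3, 4), 42.0);
                    Console.WriteLine(view.ReadDouble(offset(1, 2, 3, 4)));
                }
            }
        }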

    Read the article

  • how to make war file take up less memory

    - by Myy
    I need help on how to decrease the memory usage of my web app, so I can fit more apps onto my web server. I'm building a Java web app with JSF 2.0, developing in Eclipse Helios and running on an Apache Tomcat server. I have a dedicated virtual server with a Tomcat as well, where I deploy these war files. The web app is about 35 MB in size (it has a lot of jars and such), but when I deploy it to my Tomcat web server, I can see it takes about 300 MB of RAM. Is this normal? My dedicated server only has 2 GB of RAM, of which I normally have 1 GB to use, so as soon as I deploy 3 apps I get an OOM error. I've gotten a PermGen OOM and an out of swap memory error; to fix this I upped MaxPermSize to about a gig and restarted the server to get back some swap space. I also tried deploying smaller, older apps (about 15 MB) and they take up way less memory. With 1 GB of RAM I want to be able to fit more apps into my web server without getting any OOM errors. Now, I found this Stack Overflow question; can that be applied to my case? And if so, which are the common folders in the Tomcat server? Has anyone done this before, or do you have a different, more effective, not so complicated approach? Any ideas and/or comments are more than appreciated. Thanks! Myy

    Read the article

< Previous Page | 118 119 120 121 122 123 124 125 126 127 128 129  | Next Page >