Search Results

Search found 20904 results on 837 pages for 'disk performance'.

Page 653 of 837

  • Building a case for Solr

    - by Midhat
    Our product consists of multiple applications, all using Lucene. Two of the applications I am involved with have Lucene indexes of about 3 GB and 12 GB. Another team is building an application for which they estimate the Lucene index size to be close to 1 terabyte. New documents are added to the indexes roughly every 15 days. We do not have any apparent performance issues with the current applications. So my questions are: Should we be using Solr now? When should one stop using Lucene and graduate to Solr? Are there any disadvantages or problems with using Solr? The client applications are written in ASP.NET, but I assume they will be able to talk to a Solr server using SolrNet.
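
    For reference, a minimal sketch of what the ASP.NET client side might look like with SolrNet; the document class, field names, and server URL below are illustrative assumptions, not details from the question:

        using Microsoft.Practices.ServiceLocation;
        using SolrNet;
        using SolrNet.Attributes;

        public class IndexedDoc
        {
            [SolrUniqueKey("id")]
            public string Id { get; set; }

            [SolrField("title")]
            public string Title { get; set; }
        }

        public static class SearchExample
        {
            public static void Run()
            {
                // Once at application startup: point SolrNet at the Solr core.
                Startup.Init<IndexedDoc>("http://solr-server:8983/solr");

                // Later, e.g. per request: resolve the operations object and query.
                var solr = ServiceLocator.Current.GetInstance<ISolrOperations<IndexedDoc>>();
                var results = solr.Query(new SolrQuery("title:performance"));
                // results.Count and each result's Title are then available to the caller.
            }
        }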

    Read the article

  • Very long strings as primary keys in a database for caching

    - by Bill Zimmerman
    I am working on a web app that allows users to create dynamic PDF files based on what they enter into a form (it is not very structured data). The idea is that User 1 enters several words (an arbitrary number of words, practically capped of course), for example: A B C D E. There is no such string in the database, so I was thinking: store this string as a primary key in a MySQL database (it could be around 50-100k of text, but usually probably fewer than 200 words); generate the PDF file and create a link to it in the database; when the next user requests A B C D E, I can just serve the file instead of recreating it each time (a simple cache). The PDF is CPU-intensive to generate, so I am trying to cache as much as I can. My questions are: Does anyone have alternative ideas to my approach? What will the database performance be like? Is there a better way to design the schema than using the input string as the primary key?
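
    If the raw input string turns out to be an awkward primary key, one alternative (an assumption on my part, not something from the question) is to key the cache on a fixed-length hash of the input and keep the original text in an ordinary column. A minimal C# sketch of deriving such a key:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        public static class CacheKey
        {
            // Maps an arbitrarily long input string to a fixed 64-character key,
            // suitable for an indexed CHAR(64) primary key column.
            public static string FromInput(string input)
            {
                using (var sha = SHA256.Create())
                {
                    byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(input));
                    return BitConverter.ToString(hash).Replace("-", "").ToLowerInvariant();
                }
            }
        }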

    Read the article

  • How does NTFS actually work with a B-tree?

    - by bakra
    To improve performance, NTFS directories use a special data management structure called a B-tree. The "B-tree" concept here refers to a "tree of storage units" that hold the contents of an individual directory. What I don't understand is: where on the disk is this tree stored? It's surely not created every time we reboot; that would take a lot of time. And since it's a tree (a dynamic data structure), unlike an array it will grow, so space needs to be allocated every time it grows. So how is this dynamic metadata stored?

    Read the article

  • Using Windows Media Foundation

    - by Martin Beckett
    OK, so my new gig is high-performance video (think Google Street View, but movies) - the hard work is all embedded capture and image processing, but: I was looking at the new MS video offering for displaying content, Windows Media Foundation. Is anyone actually using this? There are no books on the topic. The only documentation is a developer team blog with a single entry 9 months old. I thought we had got past having to learn an MS API by spying on the COM control messages! Is it just another wrapper around the same old ActiveX control?

    Read the article

  • How to SET ARITHABORT ON for connections in Linq To SQL

    - by Laurence
    By default, the SQL connection option ARITHABORT is OFF for OLE DB connections, which I assume LINQ to SQL is using. However, I need it to be ON. The reason is that my DB contains some indexed views, and any insert/update/delete operations against tables that are part of an indexed view fail if the connection does not have ARITHABORT ON. Even selects against the indexed view itself fail if the WITH (NOEXPAND) hint is used (which you have to use in SQL Server Standard Edition to get the performance benefit of the indexed view). Is there somewhere in the data context where I can specify that I want this option ON? Or somewhere in code I can do it? I have managed a clumsy workaround, but I don't like it: I have to create a stored procedure for every select/insert/update/delete operation, and in this proc first run SET ARITHABORT ON, then exec another proc which contains the actual select/insert/update/delete. In other words, the first proc is just a wrapper for the second. It doesn't work to just put SET ARITHABORT ON above the select/insert/update/delete code.
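
    A hedged sketch of one way to issue the SET from code, assuming a LINQ to SQL context class named MyDataContext (hypothetical); whether the setting survives connection pooling across units of work is something to verify:

        using System.Data.Linq;

        public static class IndexedViewWriter
        {
            public static void SaveWithArithAbort(string connectionString)
            {
                using (var db = new MyDataContext(connectionString))   // hypothetical generated context
                {
                    // Open the connection explicitly so the SET statement and the
                    // subsequent work share the same physical connection.
                    db.Connection.Open();
                    db.ExecuteCommand("SET ARITHABORT ON;");

                    // ... inserts/updates/deletes against tables behind the indexed view ...
                    db.SubmitChanges();
                }
            }
        }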

    Read the article

  • Best way to handle SQL Server fulltext index updates

    - by tlianza
    I have a fulltext index which doesn't need to be immediately up to date. I'd like to spare myself the I/O (when I do bulk updates, I see a ton of I/O related to the index) and do the index updates during low-usage times (nightly, perhaps even weekly). It seems there are two ways to go about this: turn off change tracking (SET CHANGE_TRACKING OFF) and add a timestamp field to the indexed table, so that you can run ALTER FULLTEXT INDEX ON <table> START INCREMENTAL POPULATION; or enable change tracking but set it to MANUAL, so that you can run ALTER FULLTEXT INDEX ON <table> START UPDATE POPULATION when you need it updated. Is there a preferred method? I couldn't tell from this overview whether there was a performance benefit one way or the other. Tom
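
    As an illustration of the second option, a sketch of kicking off the manual population from a nightly .NET job; the table name and connection string are placeholders:

        using System.Data.SqlClient;

        public static class FulltextMaintenance
        {
            public static void RunNightlyPopulation(string connectionString)
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(
                    "ALTER FULLTEXT INDEX ON dbo.Documents START UPDATE POPULATION;", conn))
                {
                    conn.Open();
                    cmd.CommandTimeout = 0;   // population can take a while; don't time out
                    cmd.ExecuteNonQuery();
                }
            }
        }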

    Read the article

  • Dispelling the UIImage imageNamed: FUD

    - by Roger Nolan
    I see a lot of people saying imageNamed is bad but equal numbers of people saying the performance is good - especially when rendering UITableViews. See this SO question for example or this article on iPhoneDeveloperTips.com UIImage's imageNamed method used to leak so it was best avoided but has been fixed in recent releases. I'd like to understand the caching algorithm better in order to make a reasoned decision about where I can trust the system to cache my images and where I need to go the extra mile and do it myself. My current basic understanding is that it's a simple NSMutableDictionary of UIImages referenced by filename. It gets bigger and when memory runs out it gets a lot smaller. For example, does anyone know for sure that the image cache behind imageNamed does not respond to didReceiveMemoryWarning? It seems unlikely that Apple would not do this. If you have any insight into the caching algorithm, please post it here.

    Read the article

  • Java Swing: how to add an image to a JPanel?

    - by Leonel
    I have a JPanel to which I'd like to add JPEG and PNG images that I generate on the fly. All the examples I've seen so far in the Swing tutorials, especially the Swing examples, use ImageIcons. I'm generating these images as byte arrays, and they are usually larger than the common icons used in the examples, at 640x480. Is there any (performance or other) problem with using the ImageIcon class to display an image that size in a JPanel? What's the usual way of doing it? How do you add an image to a JPanel without using the ImageIcon class? Edit: A more careful examination of the tutorials and the API shows that you cannot add an ImageIcon directly to a JPanel. Instead, they achieve the same effect by setting the image as the icon of a JLabel. This just doesn't feel right...

    Read the article

  • Can I get the stack traces of all threads in my C# app?

    - by Drew Shafer
    I'm debugging an apparent concurrency issue in a largish app that I hack on at work. The bug in question only manifests on certain lower-performance machines after running for many (12+) hours, and I have never reproduced it in the debugger. Because of this, my debugging tools are basically limited to analyzing log files. C# makes it easy to get the stack trace of the thread throwing the exception, but I'd like to additionally get the stack traces of every other thread currently executing in my AppDomain at the time the exception was thrown. Is this possible?
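
    For what it's worth, a hedged sketch of the classic in-process approach: it assumes the app registers its own worker threads, and it relies on Thread.Suspend/Resume and the StackTrace(Thread, bool) constructor, both of which are obsolete, .NET Framework-only, and only really suitable for best-effort diagnostic logging:

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;
        using System.Threading;

        public static class ThreadStackDumper
        {
            private static readonly List<Thread> Workers = new List<Thread>();

            // Call this wherever the app spins up a long-lived worker thread.
            public static void Register(Thread t)
            {
                lock (Workers) Workers.Add(t);
            }

            // Call this from the exception handler to log every registered thread's stack.
            public static void DumpAll(Action<string> log)
            {
                lock (Workers)
                {
                    foreach (var t in Workers)
                    {
                        if (!t.IsAlive) continue;
                        t.Suspend();                            // obsolete and risky: may deadlock
                        var trace = new StackTrace(t, true);    // .NET Framework only
                        t.Resume();
                        log(t.Name + Environment.NewLine + trace);
                    }
                }
            }
        }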

    Read the article

  • C#, WinForms: Which view type for periodically updated list?

    - by rdoubleui
    I have an application that periodically polls a web service (about every 10 seconds). In my application logic I have a List<Message> holding the messages. All messages have an id and might be received out of order; therefore the class implements the IComparable interface. What WinForms control would be a good fit for being regularly updated (with the items in order)? I plan to hold the last 500 messages. Should I sort the list and then update the whole form? Or is data binding appropriate (concerning performance)?
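
    A minimal sketch of the data-binding route, assuming a hypothetical Message class that exposes an int Id: a BindingList<Message> bound to a DataGridView, so the grid tracks list changes instead of the whole form being rebuilt:

        using System.Collections.Generic;
        using System.ComponentModel;
        using System.Windows.Forms;

        public class MessageView
        {
            // Message is the app's existing item type (assumed to expose an int Id).
            private readonly BindingList<Message> messages = new BindingList<Message>();

            public MessageView(DataGridView grid)
            {
                grid.DataSource = messages;   // grid rows now track the list
            }

            // Called on the UI thread after each poll.
            public void Merge(IEnumerable<Message> newlyReceived)
            {
                foreach (var m in newlyReceived)
                {
                    // Insert so the list stays ordered by Id even across out-of-order batches.
                    int i = 0;
                    while (i < messages.Count && messages[i].Id < m.Id) i++;
                    messages.Insert(i, m);
                }

                while (messages.Count > 500)  // keep only the most recent 500
                    messages.RemoveAt(0);
            }
        }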

    Read the article

  • Looking for the most painless non-RDBMS storage method in C#

    - by NateD
    I'm writing a simple program that will run entirely client-side. (Desktop programming? Do people still do that?) I need a simple way to store trivial amounts of data in a structured form, but I really don't see any need to use a database system. What's more, some of the data needs to be serialized and passed around to different users, like some kind of "file" or perhaps a "document". (Has anyone ever done that before?) So, I've looked at using .NET DataSets, LINQ, and direct XML manipulation, and they all seem like they would get the job done, but I would like to know, before I dive into any of them, if there's one method that is generally regarded as easier to code than the others. As I said, the amount of data to be stored is trivial; even if one hundred people all used the same machine we're not talking about more than 10 MB, so performance is not as large a concern as codeability/maintainability. Thank you all in advance!
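
    For a sense of how small the direct-serialization option can be, a sketch using XmlSerializer with a hypothetical document type (names are illustrative):

        using System.Collections.Generic;
        using System.IO;
        using System.Xml.Serialization;

        public class UserDocument                 // hypothetical payload type
        {
            public string Title { get; set; }
            public List<string> Entries { get; set; }
        }

        public static class DocumentStore
        {
            private static readonly XmlSerializer Serializer = new XmlSerializer(typeof(UserDocument));

            public static void Save(UserDocument doc, string path)
            {
                using (var fs = File.Create(path))
                    Serializer.Serialize(fs, doc);
            }

            public static UserDocument Load(string path)
            {
                using (var fs = File.OpenRead(path))
                    return (UserDocument)Serializer.Deserialize(fs);
            }
        }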

    Read the article

  • Stream (.NET) handling best-practices

    - by Jader Dias
    The question is titled with the word "Stream" because the question below is a concrete example of a more general doubt I have about streams. I have a problem with two possible solutions, and I want to know which is better: (1) I download a file, save it to disk (2 min), then read it and write the contents to the DB (+2 min). (2) I download a file and write the contents directly to the DB (3 min). If the write to the DB fails, I'll have to download again in the second case, but not in the first case. Which is best? Which would you use?
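
    A rough C# sketch of the first option (download to a temporary file, then load into the DB), so a failed DB write never forces a second download; the URL, table, and column names are placeholders:

        using System.Data.SqlClient;
        using System.IO;
        using System.Net;

        public static class FileLoader
        {
            public static void DownloadThenStore(string sourceUrl, string connectionString)
            {
                string tempPath = Path.GetTempFileName();

                // Step 1: persist the download to disk first.
                using (var client = new WebClient())
                    client.DownloadFile(sourceUrl, tempPath);

                // Step 2: push the saved bytes into the database; if this fails,
                // the file is still on disk and can be retried without re-downloading.
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand("INSERT INTO Files (Content) VALUES (@content)", conn))
                {
                    conn.Open();
                    cmd.Parameters.AddWithValue("@content", File.ReadAllBytes(tempPath));
                    cmd.ExecuteNonQuery();
                }
            }
        }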

    Read the article

  • Arbitrarily typed data in a Django model

    - by Dmitry Shevchenko
    I have a model, say, Item. I want to store an arbitrary number of attributes on it, like title, description, release_date. And I want them to be not just strings but to have a Python type: string, boolean, datetime, etc. What are my options here? An EAV pattern with a separate name-value table won't work because all values would share the same DB column type. JSONField can probably help, but it doesn't know about datetime, for example. Also, I was looking at PickleField; it fits perfectly, but I'm a bit concerned about performance.

    Read the article

  • Concatenation Operator

    - by Chaitanya
    This might be a silly question, but it struck me, and here I ask.

        <?php
        $x = "Hi";
        $y = " There";
        $z = $x . $y;
        $a = "$x$y";
        echo "$z" . "<br />" . "$a";
        ?>

    $z uses the traditional concatenation operator provided by PHP; $a produces the same string without it. My questions: by not using the concatenation operator, does it affect performance? If it doesn't, why have the concatenation operator at all? Why have two ways of doing it when one does the work?

    Read the article

  • SDK for writing DVDs

    - by Matt Warren
    I need to add DVD writing functionality to an application I'm working on. However, it needs to be able to write out files that are being grabbed "live" from a camera over a long period of time. I can't wait until all the files are captured before I start writing them to the DVD; I need to write them out in chunks as I go along. I've looked at IMAPI v2, but the main problem seems to be that you need to point it at all the files you plan to write out to disk before you start the burning process. I know it has the concept of "sessions", which means you can write to the DVD in several parts before you finally "close" it. But I was wondering if there were any other DVD writing SDKs that allow you to be constantly writing files to a DVD, and in particular files that are only in memory. It would be more efficient if I didn't have to write the captured images out to hard disk before they are burned to DVD. The solution needs to work under .NET on Windows XP and Vista.

    Read the article

  • setting up/installing/configuring nginx LEMP stack on fresh VPS server

    - by grant tailor
    I need some help setting up, installing, and configuring an nginx LEMP stack on a fresh new VPS I have. The specs of the CentOS 5.7 VPS are 2GB DDR3 ECC RAM (4GB burst), 1 core at 1.5GHz (3GHz burst), 100GB RAID 10 storage, and unmetered bandwidth at 100Mbps, all for a whopping $25/month (unbeatable, yeah I know :). Anyway, I have followed this LEMP stack guide on Linode (I will also need MySQL and PHP): http://library.linode.com/lemp-guides/centos-5 but basically what I want is to be able to host multiple websites on this web server after everything is set up. I am used to using the DirectAdmin control panel on another server and want to have things set up so I can host multiple websites... mostly WordPress and Drupal themes. Let's say 10 websites on this nginx web server. So can someone please help me with what I need to do to take "full" advantage of nginx's power and performance, while being able to easily manage these multiple websites (WordPress and Drupal themes)? Thanks.

    Read the article

  • touches event handler for UIImageView

    - by madmik3
    I am just getting started with iPhone development and can't seem to find the answer to what I want to do. It seems like I should be able to programmatically create a UIImageView and then set up an event handler for its touch events. In C# I would have something that looks like:

        Button b = new Button();
        b.Click += myHandler;   // my handler code

    Right now I have this:

        CGRect myImageRect = CGRectMake(0.0f, 0.0f, 141.0f, 151.0f);
        UIImageView *myImage = [[UIImageView alloc] initWithFrame:myImageRect];
        myImage.userInteractionEnabled = YES;
        [myImage setImage:[UIImage imageNamed:@"myImage.png"]];
        myImage.opaque = YES; // explicitly opaque for performance
        [self.view addSubview:myImage];
        [myImage release];

    What do I need to do to override the touch events? Thanks.

    Read the article

  • Add Core Data Index to certain Attributes via migration

    - by steipete
    For performance reasons, I want to set the Indexed flag on some of my entities' attributes. I created a new Core Data model version to perform the changes. Core Data detects the changes and migrates my model to the new version; however, no indexes are generated. If I recreate the database from scratch, the indexes are there. I checked with SQLite Browser, both on the iPhone and in the Simulator. The problem only occurs if a database in the prior format is already there. Is there a way to manually add the indexes? Write some SQL for that? Or am I missing something? I have already done some more critical migrations with no problems, but those missing indexes are bugging me. Thanks for helping!

    Read the article

  • Writing temporary data from R

    - by Shane
    I want to write some temporary data to disk in an R package, and I want to be sure that it can run on every OS without assuming the user has admin rights. Is there an existing R function that can provide a path to a temporary directory on all major OS's? Or a way to reference a user's home directory? Otherwise, I was thinking of trying this: Sys.getenv("temp") I presume that I can't expect people to have write access to their R locations, otherwise I could reference a path within the package directory: .find.package("package.name").

    Read the article

  • Cloud Agnostic Architecture?

    - by Dave
    Hi, I'm doing some architecture work on a new solution which will initially run in Windows Azure. However, I'd like the solution (or at least the architecture/design) to be cloud-agnostic (to whatever extent is realistic). Has anyone done any work on this front or seen any good white papers/blog posts? Our high-level architecture will consist of a payload being sent to a web service (WCF, for instance); this will be dumped on a queue (for argument's sake), and a worker process will grab messages off this queue and process them. There will be a database of customer information which we'd ideally like to keep out of the cloud; however, there are obvious performance considerations. Keen to hear others' thoughts. Cheers, Dave

    Read the article

  • Tokenizer for full-text

    - by user72185
    This should be an ideal case of not reinventing the wheel, but so far my search has been in vain. Instead of writing one myself, I would like to use an existing C++ tokenizer. The tokens are to be used in an index for full-text searching. Performance is very important; I will parse many gigabytes of text. Edit: Please note that the tokens are to be used in a search index. Creating such tokens is not an exact science (AFAIK) and requires some heuristics. This has been done a thousand times before, and probably in a thousand different ways, but I can't even find one of them :) Any good pointers? Thanks!

    Read the article

  • How can I calculate data for a boxplot (quartiles, median) in a Rails app on Heroku? (Heroku uses PostgreSQL)

    - by hadees
    I'm trying to calculate the data needed to generate a box plot, which means I need to figure out the 1st and 3rd quartiles along with the median. I have found some solutions for doing it in PostgreSQL; however, they seem to depend on either PL/Python or PL/R, neither of which Heroku appears to have enabled for its PostgreSQL databases. In fact, I ran "select lanname from pg_language;" and only got back "internal". I also found some code to do it in pure Ruby, but that seems somewhat inefficient to me. I'm rather new to box plots, PostgreSQL, and Ruby on Rails, so I'm open to suggestions on how I should handle this. There is a possibility of having a lot of data, which is why I'm concerned with performance; however, if the solution ends up being too complex, I may just do it in Ruby, and if my application gets big enough to warrant it, get my own PostgreSQL instance that I can host somewhere else. Note: since I was only able to post one link (because I'm new), I decided to share a pastie with some relevant information.

    Read the article

  • Multiple ParticleSystems in cocos2d

    - by Mattias Akerman
    I'm wondering which road I should take with ParticleSystem. In this particular case I want to create 1-20 small explosions at the same time, but with different positions. Right now I'm creating a new ParticleSystem for each explosion and then releasing it, but of course this is very hard on performance. My question is: is there a way to create one ParticleSystem with multiple emitting sources? If not, should I create an array of ParticleSystems in init and then use a free one when an explosion is needed? Or is there another approach I haven't thought of?

    Read the article

  • Portable C++ library for IPC (processes and shared memory), Boost vs ACE vs Poco?

    - by user363778
    Hi, I need a portable C++ library for doing IPC. I have used fork() and SysV shared memory until now, but this limits me to Linux/Unix. I found out that there are 3 major C++ libraries that offer a portable solution (including Windows and Mac OS X). I really like Boost and would like to use it, but I need processes, and it seems that this is still only an experimental branch!? I have never heard of ACE or POCO before, and thus I am stuck: I do not know which one to choose. I need fork(), sleep() (usleep() would be great), and shared memory, of course. Performance and documentation are also important criteria. Thanks for your help!

    Read the article

  • Tables with no Primary Key

    - by Matt Hamilton
    I have several tables whose only unique data is a uniqueidentifier (a Guid) column. Because guids are non-sequential (and they're client-side generated so I can't use newsequentialid()), I have made a non-primary, non-clustered index on this ID field rather than giving the tables a clustered primary key. I'm wondering what the performance implications are for this approach. I've seen some people suggest that tables should have an auto-incrementing ("identity") int as a clustered primary key even if it doesn't have any meaning, as it means that the database engine itself can use that value to quickly look up a row instead of having to use a bookmark. My database is merge-replicated across a bunch of servers, so I've shied away from identity int columns as they're a bit hairy to get right in replication. What are your thoughts? Should tables have primary keys? Or is it ok to not have any clustered indexes if there are no sensible columns to index that way?

    Read the article
