Search Results

Search found 9124 results on 365 pages for 'big sal'.


  • Eclipse PDT "tips" ?

    - by Pascal MARTIN
    Hi! (Yes, this is a quite open-ended, general and subjective question -- it's by design, because I want the tips you think are great!) I'm using Eclipse PDT 2.1 to work in PHP, for both small and big projects -- and I've been doing so for quite some time now, actually (since before 1.0 stable, if I remember well)... I was wondering if any of you know "tips" to be more efficient. Let me explain in more detail: I know about plugins like Aptana (a better editor for JS/CSS), Subversive (for SVN access), RSE, Filesync, integrating Xdebug's debugger, ... What I mean by "tips" is more the little things you discovered one day and have used all the time since -- things that allow you to be more efficient in your PHP projects. Some examples of "tips" that come to my mind, and that I already know and use:
    - Ctrl+Space to open the list of suggestions for function / variable names
    - Ctrl+Shift+R (Navigate > Open Resource) to open a popup which shows only the files whose names contain what you type, i.e. quick opening of files. This one might be the perfect example: it is not often known by coworkers, and they find it as useful as I do, so I guess there are lots of other things like it that I don't know myself ^^
    - Ctrl+M to switch to full-screen view for the editor (instead of double-clicking the tab bar)
    - Shift+F2 while on a function name, to open its page of the PHP manual in a browser
    (Mac users: use Command instead of Control.) I guess you get the point; but I'm really open to any suggestion (be it Eclipse-related in general, or more PHP/PDT-specific) that can help me be more efficient :-) Anyway, thanks in advance for your help!

    Read the article

  • Debugged Program Window Won't Close

    - by Marc Bernier
    Hi, I'm using VS 2008 on a 64-bit XP machine. I'm debugging a 32-bit C++ DLL via a console program. The DLL and EXE projects are contained in the same SLN so that I can modify the DLL as I test. Every once in a while I kill the program with Debug | Stop Debugging (Shift-F5). VS stops the program, but the console window stays open! If I'm sitting at a breakpoint and hit Shift-F5, it terminates properly, but if the program is running full-tilt when I stop it, I often see this instead. The big problem is that I can't close these zombie windows. Using End Task in Task Manager does nothing (no message, no nothing). When I shut down the machine, it can't because of the orphaned windows, and I have to resort to actually turning off the power. I think this is connected to having the DLL and EXE projects in the same SLN: for months I worked on this project in 2 VS instances, one for the DLL and the other for the EXE, continually jumping back and forth between the windows as I worked. This problem never happened until I put the two projects into a single SLN. The single SLN works a lot better, but this anomaly is very irritating. Any ideas, anyone? UPDATE: After a bit of searching (here), it appears to be related to one of the updates from last Tuesday (KB977165 or KB978037). Thank you Microsoft for your excellent pre-release testing.

    Read the article

  • Ruby: Why is Array.sort slow for large objects?

    - by David Waller
    A colleague needed to sort an array of ActiveRecord objects in a Rails app. He tried the obvious Array.sort! but it seemed surprisingly slow, taking 32s for an array of 3700 objects. So, just in case it was these big fat objects slowing things down, he reimplemented the sort by sorting an array of small objects, then reordering the original array of ActiveRecord objects to match - as shown in the code below. Tada! The sort now takes 700ms. That really surprised me. Does Ruby's sort method end up copying objects about the place rather than just references? He's using Ruby 1.8.6/7.

        def self.sort_events(events)
          event_sorters = Array.new(events.length) { |i| EventSorter.new(i, events[i]) }
          event_sorters.sort!
          event_sorters.collect { |es| events[es.index] }
        end

        private

        # Class used by sort_events
        class EventSorter
          attr_reader :sqn
          attr_reader :time
          attr_reader :index

          def initialize(index, event)
            @index = index
            @sqn = event.sqn
            @time = event.time
          end

          def <=>(b)
            @time != b.time ? @time <=> b.time : @sqn <=> b.sqn
          end
        end

    Read the article

  • What image processing Library should I use

    - by Swippen
    I have been reading "What is the best image manipulation library?" and have tried a few libraries, and am now looking for input on what is best for our needs. I will start by describing our current setup and problems. We have a system that needs to resize and crop a large number of images from big original images. We handle 50,000+ images every day on 2 powerful servers. Today we use ImageGlue from WebSupergoo, but we don't like it at all: it is slow and hangs the service now and then (that's in another unanswered Stack Overflow question). We have a threaded Windows service that uses the Microsoft ThreadPool to resize as much as possible on the 8-core machines. I have tried AForge and it went very well: it was loads faster and never crashed. But I had quality problems with a few images, due to which algorithms I used of course, so it can be tweaked. Still, I want to widen our view to see if that's the right way to go. So:
    - It needs to be C#/.NET and run in a Windows service (since we won't change the rest of the service, only the image handling).
    - It needs to handle a threaded environment well.
    - It needs to be fast, since today it is too slow.
    - We also want good quality and small file size, since the images are later displayed on a web page with lots of visitors.
    So we have strong demands on getting good quality at a fast pace, and secondarily on keeping file sizes low, even if that can be adjusted a bit with compression. Any comments or suggestions on which library to use?

    Read the article

  • Rails: creating a custom data type, to use with generator classes and a bunch of questions related to it

    - by Shyam
    Hi, After being productive with Rails for some weeks, I have learned some tricks and gained some experience with the framework. About 10 days ago, I figured out it is possible to build a custom data type for migrations by adding some code in the table definition. Also, after learning a bit about floating point numbers (and how evil they are) vs integers, the money gem and other possible solutions, I decided I didn't WANT to use the money gem, but instead to learn more about programming and find a solution myself. Some suggestions said that I should be using integers, one for the whole numbers and one for the cents. When playing in script/console, I discovered how easy it is to work with calculations and arrays. But I am talking too much (the reason is to give sufficient background). Right now, while playing with the scaffold generator (yes, I use it, because I like the way I can quickly set up a prototype while I am still researching my objectives), I would like to use a DRY method. In my opinion, I should build a custom "object" that can hold two variables (Fixnum), one for the whole part and one for the cents. In my big dream, I would be able to do the following:

        script/generate scaffold Cake name:string description:text cost:mycustom

    where mycustom would create two integer columns (one for the whole part, one for the cents). Right now I could do this with:

        script/generate scaffold Cake name:string description:text cost_w:integer cost_c:integer

    I also had the idea of creating a "cost model", which would hold two integer columns and add a cost_id column to my scaffold. But wouldn't that be an extra table that causes some kind of performance penalty? And wouldn't it defy the purpose of the Cake model in the first place, because the costs are an attribute of individual Cake entries? The reason I would want such functionality is that I am thinking of having multiple "costs" inside my Rails application. Thank you for your feedback, comments and answers! I hope my message comes across as understandable; my apologies for incorrect grammar or weird sentences, as English is not my native language.

    Read the article

  • Midiplayer stops playing sounds after 16 notes.

    - by user349673
    Hi. I am currently programming a piano keyboard editor, much like the one you can find in Cubase, Logic, Reason etc. I have this big grid, a double array new int[13][9], which makes it 13 rows, 9 columns. The first column [0-12][0] is the keyboard; at the top there's "high C" (MIDI note 72) and at the bottom there's "low C" (MIDI note 60). That column is an array of JButtons, and when you press for example "low C", note 60 is played by the Synthesizer. I've gotten this to work pretty well so far, but one problem I have is that I can only play 16 notes in a row; then it's like the Synthesizer shuts down or something. Do you guys have ANY idea of what the problem is? A bit of the code:

        import java.awt.*;
        import javax.swing.*;
        import java.awt.event.*;
        import java.util.*;
        import javax.sound.midi.*;

        public void actionPerformed(ActionEvent ae) {
            for (int i = 0; i < 13; i++) {
                if (o == instr[i]) {   // instr is the button array
                    SpelaTangent(i);
                }
            }
        }

        public void SpelaTangent(int tangent) {
            int[] klaviatur = new int[13];
            for (int i = 0; i < 13; i++) {
                klaviatur[i] = (72 - i);
            }
            try {
                Synthesizer synth = MidiSystem.getSynthesizer();
                synth.open();
                final MidiChannel[] mc = synth.getChannels();
                Instrument[] instrument = synth.getDefaultSoundbank().getInstruments();
                synth.loadInstrument(instrument[1]);
                mc[0].noteOn(klaviatur[tangent], 350);
                mc[0].noteOff(klaviatur[tangent], 350);
            } catch (MidiUnavailableException e) {}
        }

    Help is very much appreciated!
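    One adjustment worth sketching (an assumption on my part, not a confirmed fix): open the Synthesizer once when the editor starts and reuse its channel for every keypress, instead of calling MidiSystem.getSynthesizer() and open() inside SpelaTangent(), since each unclosed Synthesizer keeps holding audio resources. A minimal Java sketch; the class and method names are mine:

        import javax.sound.midi.*;

        public class KeyboardSound {
            private final Synthesizer synth;
            private final MidiChannel channel;

            public KeyboardSound() throws MidiUnavailableException {
                synth = MidiSystem.getSynthesizer();
                synth.open();                        // open once and keep it open
                channel = synth.getChannels()[0];
            }

            // Called from the button handler; velocity must be in the 0-127 range.
            public void play(int midiNote) {
                channel.noteOn(midiNote, 90);
            }

            public void stop(int midiNote) {
                channel.noteOff(midiNote);
            }

            public void close() {
                synth.close();                       // release the device when the editor exits
            }
        }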

    Read the article

  • Handling user interface in a multi-threaded application (or being forced to have a UI-only main thread)

    - by Patrick
    In my application, I have a 'logging' window, which shows all the logging, warnings and errors of the application. Last year my application was still single-threaded, so this worked [quite] well. Now I am introducing multithreading. I quickly noticed that it's not a good idea to update the logging window from different threads. After reading some articles on keeping the UI in the main thread, I made a communication buffer, into which the other threads add their logging messages, and from which the main thread takes the messages and shows them in the logging window (this is done in the message loop). Now, in one part of my application, memory usage increases dramatically, because the separate threads generate lots of logging messages and the main thread cannot empty the communication buffer quickly enough. After a while the memory decreases again (once the other threads have finished their work and the main thread gradually empties the communication buffer). I solved this by putting a maximum size on the communication buffer, but then I run into a problem in the following situation:
    - the main thread has to perform a complex action
    - the main thread takes some parts of the action and lets separate threads execute them
    - while the separate threads are executing their logic, the main thread processes the results from the other threads, and continues with its own work once the other threads are finished
    The problem is that in this situation, if the other threads perform logging, there is no UI message loop, so the communication buffer is filled but not emptied. I see two solutions:
    - require the main thread to do regular polling of the communication buffer
    - only perform user interface logic in the main thread (no other logic)
    I think the second solution seems the best, but it may not be that easy to introduce in a big application (in my case it performs mathematical simulations). Are there any other solutions or tips? Or is one of the two proposed the best, easiest, most pragmatic solution? Thanks, Patrick
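    As an illustration of the bounded-buffer idea (a sketch in Java, since the question does not name a language; class and method names are mine): workers block briefly when the buffer is full, which caps memory, and the UI thread drains in batches from its message loop.

        import java.util.List;
        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;

        class LogBuffer {
            // Fixed capacity keeps memory bounded even when producers outrun the UI thread.
            private final BlockingQueue<String> queue = new ArrayBlockingQueue<String>(10000);

            // Called from worker threads; blocks (back-pressure) when the buffer is full.
            void post(String message) throws InterruptedException {
                queue.put(message);
            }

            // Called periodically from the UI/main thread, e.g. from a timer in the message loop.
            void drainTo(List<String> target) {
                queue.drainTo(target, 500);   // cap the batch so the UI stays responsive
            }
        }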

    Read the article

  • Why do we need advanced knowledge of mathematics & physics for programming?

    - by Sumeet
    Guys, I was very good at mathematics and physics in school and college. Right now I am a programmer. Even in college I was engrossed in computers and programming all the time, as I liked it very much. But I have always felt the lack of advanced mathematics and physics in all the work (programs) I have done. Programming has never asked me for any of the advanced mathematics and physics knowledge I was so good at. It asks for optimized loops and different programming technologies, which were never covered in advanced mathematics and physics. Yet at the time of selection for a big college, that kind of advanced knowledge is required. Over time I have lost touch with all those facts and concepts (advanced mathematics and physics). And now, after 5 years on the job, I find it hard to work out differentiation, integration and trigonometry, which sometimes makes me feel like I wasted time on those concepts because they are never used. (At that time I already knew that I was going to be a programmer.) If one wants to be a programmer, why is all this advanced knowledge required? One could get by with elementary knowledge and a bit more. You never get to think like the scientists and R&D people from your schools and colleges when being a programmer. Just think about it and let me know your thoughts. I must be wrong somewhere in what I think, but I am not able to figure out where..? Regards, Sumeet

    Read the article

  • Pseudo-quicksort time complexity

    - by Ord
    I know that quicksort has O(n log n) average time complexity. A pseudo-quicksort (which is only a quicksort when you look at it from far enough away, with a suitably high level of abstraction) that is often used to demonstrate the conciseness of functional languages is as follows (given in Haskell):

        quicksort :: Ord a => [a] -> [a]
        quicksort [] = []
        quicksort (p:xs) = quicksort [y | y <- xs, y < p] ++ [p] ++ quicksort [y | y <- xs, y >= p]

    Okay, so I know this thing has problems. The biggest problem with this is that it does not sort in place, which is normally a big advantage of quicksort. Even if that didn't matter, it would still take longer than a typical quicksort because it has to do two passes of the list when it partitions it, and it does costly append operations to splice it back together afterwards. Further, the choice of the first element as the pivot is not the best choice. But even considering all of that, isn't the average time complexity of this quicksort the same as the standard quicksort? Namely, O(n log n)? Because the appends and the partition still have linear time complexity, even if they are inefficient.
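    For reference, a sketch of the standard average-case argument (assuming the pivot is equally likely to land at each rank, i.e. random input): the constant-factor costs of the two partition passes and the appends only enlarge the linear term, so the recurrence and its solution are unchanged.

        T(n) = Θ(n) + (1/n) * Σ_{k=0}^{n-1} [ T(k) + T(n-1-k) ]
             = Θ(n) + (2/n) * Σ_{k=0}^{n-1} T(k)
             = O(n log n)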

    Read the article

  • architecture and tools for a remote control application?

    - by slothbear
    I'm working on the design of a remote control application. From my iPhone or a web browser, I'll send a few commands. Soon my home computer will perform the commands and send back results. I know there are remote desktop apps, but I want something programmable, something simpler, and something that I wrote. My current direction is to use Amazon Simple Queue Service (SQS) as the message bus. The iPhone places some messages in a queue. My local Java/JRuby program notices the messages on the queue, performs the work and sends back status via a different queue. This will be a very low-volume application. At $1.00 for a million requests (plus a handful of data transfer charges), Amazon SQS looks a lot more affordable than having my own server of any type. And super reliable, that's important for me too. Are there better/standard toolkits or architectures for this kind of remote control? Cost is not a big issue, but I prefer the tons I learn by doing it myself. I'm moderately concerned about security, but doubt it will be a problem. The list of commands recognized will be very short, and only recognized in specific contexts. No "erase hard drive" stuff. update: I'll probably distribute these programs to some other people who want the same function, but who don't have Amazon SQS accounts. For now, they'll use anonymous access to my queues, with random 80-character queue names.
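    A minimal sketch of the queue round-trip with the AWS SDK for Java (the queue names, polling interval and command handling here are placeholders I made up; credential setup and error handling are omitted):

        import com.amazonaws.services.sqs.AmazonSQS;
        import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
        import com.amazonaws.services.sqs.model.Message;

        public class RemoteControlWorker {
            public static void main(String[] args) throws InterruptedException {
                AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
                String commandQueue = sqs.getQueueUrl("home-commands").getQueueUrl();
                String resultQueue = sqs.getQueueUrl("home-results").getQueueUrl();

                while (true) {
                    for (Message m : sqs.receiveMessage(commandQueue).getMessages()) {
                        String result = handle(m.getBody());                    // run the command locally
                        sqs.sendMessage(resultQueue, result);                   // report status back
                        sqs.deleteMessage(commandQueue, m.getReceiptHandle());  // acknowledge
                    }
                    Thread.sleep(5000);   // simple polling; SQS long polling would cut empty receives
                }
            }

            private static String handle(String command) {
                return "ok: " + command;   // placeholder for the real command dispatch
            }
        }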

    Read the article

  • Core Data: migrating entities with self-referential properties

    - by Dan
    My Core Data model contains an entity, Shape, that has two self-referential relationships, which means four properties. One pair is a one-to-many relationship (Shape.containedBy <- Shape.contains) and the other is a many-to-many relationship (Shape.nextShapes <<- Shape.previousShapes). It all works perfectly in the application, so I don't think self-referencing relationships are a problem in general. However, when it comes to migrating the model to a new version, Xcode fails to compile the automatically generated mapping model, with this error message:

        2009-10-30 17:10:09.387 mapc[18619:607] *** Terminating app due to uncaught exception 'NSInvalidArgumentException',
        reason: 'Unable to parse the format string "FUNCTION($manager ,'destinationInstancesForSourceRelationshipNamed:sourceInstances:' , 'contains' , $source.contains) == 1"'
        *** Call stack at first throw:
        (
            0   CoreFoundation     0x00007fff80d735a4 __exceptionPreprocess + 180
            1   libobjc.A.dylib    0x00007fff83f0a313 objc_exception_throw + 45
            2   Foundation         0x00007fff819bc8d4 _qfqp2_performParsing + 8412
            3   Foundation         0x00007fff819ba79d +[NSPredicate predicateWithFormat:arguments:] + 59
            4   Foundation         0x00007fff81a482ef +[NSExpression expressionWithFormat:arguments:] + 68
            5   Foundation         0x00007fff81a48843 +[NSExpression expressionWithFormat:] + 155
            6   XDBase             0x0000000100038e94 -[XDDevRelationshipMapping valueExpressionAsString] + 260
            7   XDBase             0x000000010003ae5c -[XDMappingCompilerSupport generateCompileResultForMappingModel:] + 2828
            8   XDBase             0x000000010003b135 -[XDMappingCompilerSupport compileSourcePath:options:] + 309
            9   mapc               0x0000000100001a1c 0x0 + 4294973980
            10  mapc               0x0000000100001794 0x0 + 4294973332
        )
        terminate called after throwing an instance of 'NSException'
        Command /Developer/usr/bin/mapc failed with exit code 6

    'contains' is the name of one of the self-referential properties. Anyway, the really big problem is that I can't even look at this mapping property, as Xcode crashes as soon as I select the entity mapping when viewing the mapping model. So I'm a bit lost about where to go from here. I really can't remove the self-referential properties, so I'm thinking I have to manually create a mapping model that compiles? Any ideas? Cheers

    Read the article

  • Easiest way to programmatically create email accounts for use by web app?

    - by cadwag
    I have a web app that creates groups. Each group gets its own discussion board. I would like to add the feature of allowing users to send emails to their "group" within the web app to start a new discussion, or to reply to an email from the "group" to make a new post in an already ongoing discussion. For example, to start a new discussion a user would send:

        From: [email protected]
        To: [email protected]
        Subject: Hey guys! Meet up on Tuesday?
        Body: Yes? No?

    All members of the group would receive an email:

        From: [email protected]
        Subject: Hey guys! Meet up on Tuesday?
        Body: Yes? No?
        Reply-To: group1@ example.com

    And the app would start a new discussion with:

        Author: Bill Fake
        Subject: Hey guys! Meet up on Tuesday?
        Body: Yes? No?

    This is a pretty standard feature for Google Groups and other big sites. So how do us mere mortals go about implementing this? Is there an easy way? Or do I:
    1. Install Postfix
    2. Write scripts to create new accounts for each new group
    3. Access the server periodically via POP3 (or IMAP?) to retrieve the email messages sent to each account
    4. Parse the messages for content
    If it's the latter, did I miss a step?
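    For step 3, a minimal JavaMail sketch of polling one group's mailbox over POP3 (the host, credentials and folder handling are placeholders; it assumes the mail server delivers each group's mail to its own account):

        import java.util.Properties;
        import javax.mail.Folder;
        import javax.mail.Message;
        import javax.mail.MessagingException;
        import javax.mail.Session;
        import javax.mail.Store;

        public class GroupMailPoller {
            public static void main(String[] args) throws MessagingException {
                Session session = Session.getInstance(new Properties());
                Store store = session.getStore("pop3");
                store.connect("mail.example.com", "group1", "secret");   // placeholder host and credentials

                Folder inbox = store.getFolder("INBOX");
                inbox.open(Folder.READ_ONLY);
                for (Message msg : inbox.getMessages()) {
                    // Hand the sender, subject and body over to the discussion-board code.
                    System.out.println(msg.getFrom()[0] + " : " + msg.getSubject());
                }
                inbox.close(false);
                store.close();
            }
        }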

    Read the article

  • Any diff/merge tool that provides a report (metrics) of conflicts?

    - by cad
    CONTEXT: I am preparing a big C# merge using Visual Studio 2008 and TFS. I need to create a report with the files and the number of collisions (total changes and conflicts) for each file (and in total, of course). PROBLEM: I cannot do it, for two reasons (the first one is solved):
    1. Using TFS merge I can see the file comparison, but I cannot export the list of conflicting files; I can only try to resolve the conflicts. (I have solved problem 1 using Beyond Compare, which allows me to export the file list.)
    2. Using TFS merge I can only get the number of conflicts by opening each file manually, but I have more than 800 files (and will probably have to repeat this in the near future, so doing it manually is not an option).
    There are dozens of file comparison tools (http://en.wikipedia.org/wiki/Comparison_of_file_comparison_tools), but I am not sure which one (if any) could give me these metrics. I have also read several forums and questions here, but they are more general (which diff tool is better) and I am looking for a very specific report. So my questions are:
    - Is Visual Studio 2010 (still using TFS 2008) capable of doing such reports/exports?
    - Is there any tool that provides this kind of metrics? (Right now I am trying Beyond Compare.)

    Read the article

  • Designing a class in such a way that it doesn't become a "God object"

    - by devoured elysium
    I'm designing an application that will allow me to draw some functions on a graphic. Each function will be drawn from a set of points that I will pass to this graphic class. There are different kinds of points, all inheriting from a MyPoint class. Some kinds of points will just be printed on the screen as they are, others can be ignored, others added, so there is some logic associated with them that can get complex. How to actually draw the graphic is not the main issue here. What bothers me is how to structure the code so that this GraphicMaker class doesn't become the so-called God object. It would be easy to do something like this:

        class GraphicMaker {
            ArrayList<Point> points = new ArrayList<Point>();

            public void AddPoint(Point point) {
                points.add(point);
            }

            public void DoDrawing() {
                foreach (Point point in points) {
                    if (point is PointA) {
                        // some logic here
                    } else if (point is PointXYZ) {
                        // ...etc
                    }
                }
            }
        }

    How would you do something like this? I have a feeling the correct way would be to put the drawing logic on each Point object (so each child class of Point would know how to draw itself), but two problems arise:
    - There will be kinds of points that need to know all the other points that exist in the GraphicObject class in order to know how to draw themselves.
    - I can make a lot of the methods/properties of the Graphic class public, so that all the points have a reference to the Graphic class and can apply whatever logic they want, but isn't that a big price to pay for not wanting a God class?
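    One way to sketch the polymorphic variant without making the whole Graphic class public (the names below are mine, and this is only one possible shape): let each point draw itself against a narrow, read-only context interface, so points get exactly the access they need and nothing more; GraphicMaker would implement the interface and simply loop over the points calling draw(this).

        import java.util.List;

        interface DrawingContext {
            List<Point> allPoints();       // read-only view, so a point can inspect its neighbours
            void plot(int x, int y);       // hypothetical drawing primitive owned by the graphic
        }

        abstract class Point {
            abstract void draw(DrawingContext context);
        }

        class PointA extends Point {
            @Override
            void draw(DrawingContext context) {
                context.plot(0, 0);        // PointA-specific logic lives here, not in GraphicMaker
            }
        }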

    Read the article

  • Total row count for pagination using JPA Criteria API

    - by ThinkFloyd
    I am implementing "Advanced Search" kind of functionality for an Entity in my system such that user can search that entity using multiple conditions(eq,ne,gt,lt,like etc) on attributes of this entity. I am using JPA's Criteria API to dynamically generate the Criteria query and then using setFirstResult() & setMaxResults() to support pagination. All was fine till this point but now I want to show total number of results on results grid but I did not see a straight forward way to get total count of Criteria query. This is how my code looks like: CriteriaBuilder builder = em.getCriteriaBuilder(); CriteriaQuery<Brand> cQuery = builder.createQuery(Brand.class); Root<Brand> from = cQuery.from(Brand.class); CriteriaQuery<Brand> select = cQuery.select(from); . . //Created many predicates and added to **Predicate[] pArray** . . select.where(pArray); // Added orderBy clause TypedQuery typedQuery = em.createQuery(select); typedQuery.setFirstResult(startIndex); typedQuery.setMaxResults(pageSize); List resultList = typedQuery.getResultList(); My result set could be big so I don't want to load my entities for count query, so tell me efficient way to get total count like rowCount() method on Criteria (I think its there in Hibernate's Criteria).

    Read the article

  • How do I know which Object I clicked?

    - by Nick
    Here's the deal: I'm working on a personal portfolio in AS3 and I've run into a problem which I can't seem to find a logical answer to. I want everything (well, most of it) to be editable through an XML file, including my menu. My menu is just a Sprite with some text on it and a Tweener tween, no big deal. But I forgot to think of a way to determine which menu item I have clicked. This is in my Main.as:

        private function xmlLoaded(e:Event):void {
            xml = e.target.xml;
            menu = new Menu(xml);
            menu.x = 0;
            menu.y = stage.stageHeight / 2 - menu.height / 2;
            addChild(menu);
        }

    In Menu.as:

        public function Menu(xml:XML) {
            for each (var eachMenuItem:XML in xml.menu.item) {
                menuItem = new MenuItem(eachMenuItem);
                menuItem.y += yPos;
                addChild(menuItem);
                yPos += menuItem.height + 3;
            }
        }

    And in my MenuItem.as everything works - I have a fancy tween when I hover over it - but when I click a menu item, I want something to appear, of course. How do I know which one I clicked? I've tried pushing everything into an array, but that didn't work out well (or maybe I'm doing it wrong). Also tried a global counter, but that's not working either because the value will always be the number of items in my XML file. Also tried e.currentTarget in my click function, but when I trace that, all of them are "Object Sprite".. I need something so I can give each a unique "name"? Thanks in advance!

    Read the article

  • How can a software agency deliver quality software/win projects?

    - by optician
    I currently work for a bespoke software agency. Does anyone have any experience of how to win well-priced work? It seems there is so much competition from offshore/bedroom programming teams that pricing is extremely competitive these days. I feel it is very different from a software product company or an internal IT department in terms of budget. As someone else said before, we only ever really get to version 1.0 of a lot of our software; unless the client is big enough, it doesn't make business sense to spend ages making the software the best we can. It's like we are doing the same quality of work as internal IT. Also, a lot of our clients are not technically minded and therefore will not pay for things they don't understand. Because our company does not have the money to turn down work, we often end up taking on complicated work for far too little money. I have got a lot better at managing change and keeping tight specs etc. It is still hard.

    Read the article

  • PHP package manager

    - by Mathias
    Hey, does anyone know of a package manager library for PHP (like apt or yum for Linux distros) apart from PEAR? I'm working on a system which should include a package management system for module management. I managed to get a working solution using PEAR, but using the PEAR client for anything other than managing a PEAR installation is not really optimal, as it's not designed for that. I would have to modify/extend it (e.g. to implement actions on installation/upgrade, or to move PEAR-specific files like lockfiles away from the system root), and especially the CLI client code is quite messy and PHP4. So maybe someone has suggestions for:
    - an alternative PEAR client library which is easy to use and extend (the server side has some nice implementations like Pirum and pearhub)
    - completely different package management systems written in PHP (ideally including dependency tracking and different channels)
    - some general ideas on how to implement such a PM system (yes, I'm still tinkering with the idea of implementing such a system from scratch)
    I know that big systems like Magento and symfony use PEAR for their package management. Magento uses a hacked version of the original PEAR client (which I'd like to avoid); symfony's implementation seems quite integrated with the framework, but would be a good starting point for at least writing the client from scratch. Anyway, if anybody has suggestions: please :)

    Read the article

  • WYSIWYG with Qt - font size woes

    - by Rob
    I am creating a custom Qt widget that mimics an A4 printed page and am having problems getting fonts to render at the correct size. My widget uses QPainter::setViewport and QPainter::setWindow to mimic the A4 page, using units of 10ths of a millimetre, which enables me to draw easily. However, attempting to create a font at a specific point size doesn't seem to work, and using QFont::setPixelSize isn't accurate. Here is some code:

        View::View(QWidget *parent)
            : QWidget(parent), printer(new QPrinter)
        {
            printer->setPaperSize(QPrinter::A4);
            printer->setFullPage(true);
        }

        void View::paintEvent(QPaintEvent*)
        {
            QPainter painter(this);
            painter.setWindow(0, 0, 2100, 2970);
            painter.setViewport(0, 0, printer->width(), printer->height());

            // Draw a rect at x = 1cm, y = 1cm, 6cm wide and 1 inch high
            painter.drawRect(100, 100, 600, 254);

            // Create a 72pt (1 inch) high font
            QFont font("Arial");
            font.setPixelSize(254);
            painter.setFont(font);

            // Draw in the same box
            // The font is too large
            painter.drawText(QRect(100, 100, 600, 254), tr("Wg\u0102"));

            // Ack - the actual font size reported by the metrics is 283 pixels!
            const QFontMetrics fontMetrics = painter.fontMetrics();
            qDebug() << "Font height = " << fontMetrics.height();
        }

    So I'm asking for a 254-high font (1 inch, 72 points) and it's too big, and sure enough, when I query the font height via QFontMetrics it is 283 high. Does anyone else know how to use font sizes in points when using custom mapping modes like this? It must be possible. Note that I cannot see how to convert between logical/device points either (i.e. the Win32 DPtoLP/LPtoDP equivalents).

    Read the article

  • Which MySQL Fork/Version to Pick??

    - by Drew
    As most of you know, Sun acquired MySQL (and later Oracle acquired Sun), and during these acquisitions there was a lot of FUD in the MySQL community, which resulted in the creation of various forks. Today we have MySQL from MySQL, Percona (XtraDB) MySQL, OurDelta MySQL, MariaDB and Drizzle, to name a few. Which brings us to the source of the problem: we are in the process of upgrading our databases (hardware/software) and I would like to know which one of the forks I should go with. Each has its own set of pros/cons. We are currently using MySQL 5.0.x from MySQL on Linux on an 8-core machine. Our new hardware is a monster with 32 cores and 32GB of memory, connecting to fast NetApp storage via FC. I would like to stick with MySQL from MySQL, but I have heard horror stories about how badly MySQL 5.1 performs on many cores. I have also heard that MySQL 5.4 performs better on multi-core machines, but that it's still not production-ready. In addition, I have heard a lot of good things about Percona builds. This is what I know so far:
    - MySQL 5.1 from MySQL: reliable choice, but doesn't scale well on a big machine
    - Percona: scales well, good backing company; I don't have much experience with it
    - MariaDB: don't know much about it, besides that it was founded by original MySQL developers (including Monty)
    - OurDelta: don't know much
    - Drizzle: mostly optimized for cloud computing
    I would like to know the general opinion on this problem. Which build/version should I go with? How are you guys picking your builds/versions? Thanks!

    Read the article

  • Creating a tar file with checksums included

    - by wazoox
    Here's my problem: I need to archive a lot (up to 60 TB) of big files (usually 30 to 40 GB each) to tar files. I would like to make checksums (md5, sha1, whatever) of these files before archiving; however, not reading every file twice (once for checksumming, twice for tar'ing) is more or less a necessity to achieve very high archiving performance (LTO-4 wants 120 MB/s sustained, and the backup window is limited). So I need some way to read a file, feeding a checksumming tool on one side and building a tar to tape on the other side, something along the lines of:

        tar cf - files | tee tarfile.tar | md5sum -

    Except that I don't want the checksum of the whole archive (this sample shell code does just that) but a checksum for each individual file in the archive. I've studied the GNU tar, Pax and Star options. I've looked at the source of Archive::Tar. I see no obvious way to achieve this. It looks like I'll have to hand-build something in C or similar to achieve what I need. Perl/Python/etc. simply won't cut it performance-wise, and the various tar programs miss the necessary "plugin architecture". Does anyone know of any existing solution to this before I start code-churning?
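    To illustrate the single-pass idea (a Java sketch rather than one of the tools asked about; the archive stream is left abstract and would be a tar entry stream heading to tape): each file is read exactly once, its bytes go to the archive while a digest is updated on the side.

        import java.io.FileInputStream;
        import java.io.InputStream;
        import java.io.OutputStream;
        import java.security.DigestInputStream;
        import java.security.MessageDigest;

        public class ChecksummingCopier {
            // Copies one file into the (already positioned) archive stream and returns its MD5,
            // without a second read pass over the file.
            static byte[] copyWithDigest(String path, OutputStream archive) throws Exception {
                MessageDigest md5 = MessageDigest.getInstance("MD5");
                try (InputStream in = new DigestInputStream(new FileInputStream(path), md5)) {
                    byte[] buffer = new byte[1 << 20];
                    int n;
                    while ((n = in.read(buffer)) != -1) {
                        archive.write(buffer, 0, n);
                    }
                }
                return md5.digest();
            }
        }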

    Read the article

  • How to approach parallel processing of messages?

    - by Dan
    I am redesigning the messaging system for my app to use Intel Threading Building Blocks and am stumped trying to decide between two possible approaches. Basically, I have a sequence of message objects and, for each message type, a sequence of handlers. For each message object, I apply each handler registered for that message object's type. The sequential version would be something like this (pseudocode):

        for each message in message_sequence                          <- SEQUENTIAL
            for each handler in (handler_table for message.type)
                apply handler to message                              <- SEQUENTIAL

    The first approach I am considering processes the message objects in turn (sequentially) and applies the handlers concurrently.
    Pros:
    - predictable ordering of messages (i.e., we are guaranteed a FIFO processing order)
    - (potentially) lower latency of processing each message
    Cons:
    - more processing resources available than handlers for a single message type (bad parallelization)
    - bad use of processor cache, since message objects need to be copied for each handler to use
    - large overhead for small handlers
    The pseudocode of this approach would be as follows:

        for each message in message_sequence                          <- SEQUENTIAL
            parallel_for each handler in (handler_table for message.type)
                apply handler to message                              <- PARALLEL

    The second approach is to process the messages in parallel and apply the handlers to each message sequentially.
    Pros:
    - better use of processor cache (keeps the message object local to all handlers which will use it)
    - small handlers don't impose as much overhead (as long as there are other handlers also to be run)
    - more messages are expected than there are handlers, so the potential for parallelism is greater
    Cons:
    - unpredictable ordering - if message A is sent before message B, they may both be processed at the same time, or B may finish processing before all of A's handlers are finished (order is non-deterministic)
    The pseudocode is as follows:

        parallel_for each message in message_sequence                 <- PARALLEL
            for each handler in (handler_table for message.type)
                apply handler to message                              <- SEQUENTIAL

    The second approach has more advantages than the first, but non-deterministic ordering is a big disadvantage. Which approach would you choose and why? Are there any other approaches I should consider (besides the obvious third approach: parallel messages and parallel handlers, which has the disadvantages of both and no real redeeming factors as far as I can tell)? Thanks!

    Read the article

  • What's the recommended implementation for hashing OLE Variants?

    - by Barry Kelly
    OLE Variants, as used by older versions of Visual Basic and pervasively in COM Automation, can store lots of different types: basic types like integers and floats, more complicated types like strings and arrays, and all the way up to IDispatch implementations and pointers in the form of ByRef variants. Variants are also weakly typed: they convert the value to another type without warning, depending on which operator you apply and what the current types are of the values passed to the operator. For example, comparing two variants, one containing the integer 1 and another containing the string "1", for equality will return True. So assuming that I'm working with variants at the underlying data level (e.g. VARIANT in C++ or TVarData in Delphi - i.e. the big union of different possible values), how should I hash variants consistently so that they obey the right rules? Rules:
    - Variants that hash unequally should compare as unequal, both in sorting and direct equality.
    - Variants that compare as equal for both sorting and direct equality should hash as equal.
    - It's OK if I have to use different sorting and direct comparison rules in order to make the hashing fit.
    The way I'm currently working is: I'm normalizing the variants to strings (if they fit) and treating them as strings; otherwise I'm working with the variant data as if it were an opaque blob, and hashing and comparing its raw bytes. That has some limitations, of course: numbers 1..10 sort as [1, 10, 2, ... 9] etc. This is mildly annoying, but it is consistent and it is very little work. However, I do wonder if there is an accepted practice for this problem.

    Read the article

  • How can I "pack()" a printable Java Swing component?

    - by Jonas
    I have implemented a Java Swing component that implements Printable. If I add the component to a JFrame and call this.pack() on the JFrame, it prints perfectly. But if I don't add the component to a JFrame, just a blank page is printed. This code gives a great printout:

        final PrintablePanel p = new PrintablePanel(pageFormat);
        new JFrame() {{ getContentPane().add(p); this.pack(); }};
        job.setPrintable(p, pageFormat);
        try {
            job.print();
        } catch (PrinterException ex) {
            System.out.println("Fail");
        }

    But this code gives a blank page:

        final PrintablePanel p = new PrintablePanel(pageFormat);
        // new JFrame() {{ getContentPane().add(p); this.pack(); }};
        job.setPrintable(p, pageFormat);
        try {
            job.print();
        } catch (PrinterException ex) {
            System.out.println("Fail");
        }

    I think that this.pack() is the big difference. How can I do pack() on my printable component so that it prints fine, without adding it to a JFrame? The panel uses several LayoutManagers. I have tried p.validate() and p.revalidate(), but it's not working. Any suggestions? Or do I have to add it to a hidden JFrame before I print the component?
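    One thing worth trying (an assumption on my part, not a verified fix): give the panel a size and run the layout pass by hand, recursively, before printing -- roughly what pack() would otherwise trigger through the frame. The helper name is mine:

        import java.awt.Component;
        import java.awt.Container;

        final class OffscreenLayout {
            // Recursively lays out a component tree that is not showing on screen.
            static void layoutComponent(Component c) {
                c.doLayout();
                if (c instanceof Container) {
                    for (Component child : ((Container) c).getComponents()) {
                        layoutComponent(child);
                    }
                }
            }
        }

        // Usage before printing:
        //   p.setSize(p.getPreferredSize());
        //   OffscreenLayout.layoutComponent(p);
        //   job.setPrintable(p, pageFormat);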

    Read the article

  • Liskov Substition and Composition

    - by FlySwat
    Let's say I have a class like this:

        public sealed class Foo
        {
            public void Bar()
            {
                // Do Bar Stuff
            }
        }

    And I want to extend it to add something beyond what an extension method could do... My only option is composition:

        public class SuperFoo
        {
            private Foo _internalFoo;

            public SuperFoo()
            {
                _internalFoo = new Foo();
            }

            public void Bar()
            {
                _internalFoo.Bar();
            }

            public void Baz()
            {
                // Do Baz Stuff
            }
        }

    While this works, it is a lot of work... however I still run into a problem:

        public void AcceptsAFoo(Foo a)

    I can pass in a Foo here, but not a SuperFoo, because C# has no idea that SuperFoo truly does qualify in the Liskov Substitution sense... This means that my extended class via composition is of very limited use. So the only way to fix it is to hope that the original API designers left an interface lying around:

        public interface IFoo
        {
            void Bar();
        }

        public sealed class Foo : IFoo
        {
            // etc
        }

    Now I can implement IFoo on SuperFoo (which, since SuperFoo already mirrors Foo's methods, is just a matter of changing the signature):

        public class SuperFoo : IFoo

    And in a perfect world, the methods that consume Foo would consume IFoos:

        public void AcceptsAFoo(IFoo a)

    Now C# understands the relationship between SuperFoo and Foo, thanks to the common interface, and all is well. The big problem is that .NET seals lots of classes that would occasionally be nice to extend, and they don't usually implement a common interface, so API methods that take a Foo would not accept a SuperFoo and you can't add an overload. So, for all the composition fans out there: how do you get around this limitation? The only thing I can think of is to expose the internal Foo publicly, so that you can pass it on occasion, but that seems messy.

    Read the article
