Search Results

Search found 3722 results on 149 pages for 'choice'.

  • Is there any danger in committing to a component library such as SmartGwt or Swing?

    - by Banang
    Since February this year I have been working on an app that's built using SmartGWT components. Generally, I find the components very nice to work with, and the fact that they're open source and free to use is just fantastic. However, I can't seem to shake the feeling that it's not a durable way of developing, though I can't quite explain why. Maybe it's because I know that any minute now the team developing it could decide to stop, which would leave me and my team in a bit of a pickle, but I'm sure there must be something more. I have been trying to find ways of explaining this feeling to myself, but to no avail. Therefore I turn to you, dear community: can you come up with a good reason why committing to building your app (one that's supposed to be around for many more years to come) on a component library such as SmartGWT is a bad idea? Is there any reason I should have developed the components myself? Or did I make the right choice when deciding not to reinvent the wheel and just go for what was readily available?

  • Object-oriented GUI development in python

    - by ptabatt
    Hey guys, new programmer here. I have an assignment for class and I'm stuck. What I need to do is create a GUI that gives someone a basic arithmetic problem in one box, asks the person to answer it, evaluates the answer, and tells you if you're right or wrong. Basically, what I have is this:

    [code]
    class Lesson(Frame):
        def __init__(self, parent=None):
            Frame.__init__(self, parent)
            self.pack()
            Lesson.make_widgets(self)

        def make_widgets(self):
            Label(self, text="").pack(side=TOP)
            ent = Entry(self)
            self.a = randrange(1, 10)
            self.b = randrange(1, 10)
            self.expr = choice(["+", "-"])
            ent.insert(END, str(self.a) + str(self.expr) + str(self.a))
    [/code]

    I've broken this down into many little steps, and what I'm trying to do right now is insert a default random expression into the first Entry widget. When I run this code, I just get a blank Label. Why is that? How can I put something like "7+7" into the box? If you absolutely need background to the problem, it's question #3 on this link: http://reed.cs.depaul.edu/lperkovic/csc242/homeworks/Homework8.html Thanks for all help in advance.
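
    For what it's worth, the Entry above is created but never packed, so it can never appear, and the Label shows nothing because its text is "". A minimal working sketch (an assumption on my part, using Python 2's Tkinter and the random imports the snippet relies on, with self.b as the second operand):

    [code]
    from Tkinter import Frame, Label, Entry, TOP, END
    from random import randrange, choice

    class Lesson(Frame):
        def __init__(self, parent=None):
            Frame.__init__(self, parent)
            self.pack()
            self.make_widgets()

        def make_widgets(self):
            Label(self, text="Solve:").pack(side=TOP)
            ent = Entry(self)
            ent.pack(side=TOP)  # a widget only shows once a geometry manager places it
            self.a = randrange(1, 10)
            self.b = randrange(1, 10)
            self.expr = choice(["+", "-"])
            # build e.g. "7+3" from both operands
            ent.insert(END, str(self.a) + self.expr + str(self.b))

    if __name__ == "__main__":
        Lesson().mainloop()
    [/code]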

  • Advice Please: SQL Server Identity vs Unique Identifier keys when using Entity Framework

    - by c.batt
    I'm in the process of designing a fairly complex system. One of our primary concerns is supporting SQL Server peer-to-peer replication. The idea is to support several geographically separated nodes. A secondary concern has been using a modern ORM in the middle tier. Our first choice has always been Entity Framework, mainly because the developers like to work with it. (They love the LINQ support.)

    So here's the problem: with peer-to-peer replication in mind, I settled on using uniqueidentifier with a default value of newsequentialid() for the primary key of every table. This seemed to provide a good balance between avoiding key collisions and reducing index fragmentation. However, it turns out that the current version of Entity Framework has a very strange limitation: if an entity's key column is a uniqueidentifier (GUID), then it cannot be configured to use the default value (newsequentialid()) provided by the database. The application layer must generate the GUID and populate the key value.

    So here's the debate:

    1. Abandon Entity Framework and use another ORM:
       - use NHibernate and give up LINQ support, or
       - use linq2sql and give up future support (not to mention getting bound to SQL Server).
    2. Abandon GUIDs and go with another PK strategy.
    3. Devise a method to generate sequential GUIDs (COMBs?) at the application layer.

    I'm leaning towards option 1 with linq2sql (my developers really like linq2[stuff]) and option 3. That's mainly because I'm somewhat ignorant of alternate key strategies that support the replication scheme we're aiming for while also keeping things sane from a developer's perspective. Any insight or opinion would be greatly appreciated.
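
    On option 3, a rough sketch of the COMB idea, Jimmy Nilsson's trick of overwriting the last six bytes of a random GUID with a timestamp so that generated values sort roughly by creation time (SQL Server compares those bytes first); the helper below is a hypothetical illustration, not an EF or BCL API:

    [code]
    using System;

    public static class CombGuid
    {
        public static Guid NewComb()
        {
            byte[] bytes = Guid.NewGuid().ToByteArray();
            long millis = DateTime.UtcNow.Ticks / TimeSpan.TicksPerMillisecond;
            byte[] stamp = BitConverter.GetBytes(millis);
            if (BitConverter.IsLittleEndian)
                Array.Reverse(stamp);           // big-endian so later times compare higher
            Array.Copy(stamp, 2, bytes, 10, 6); // low 6 bytes of the timestamp
            return new Guid(bytes);
        }
    }
    [/code]

    The application layer can then populate the key itself (e.g. new Entity { Id = CombGuid.NewComb() }), which is exactly what EF's limitation forces anyway.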

  • How can I have sub-elements of a complex/mixed type with unrestricted order and count?

    - by mbmcavoy
    I am working with XML where some elements will contain text with additional markup. This is similar to this example at W3Schools. However, I need the markup tags to be able to appear in any order and possibly more than once. To modify their example for illustration:

    [code]
    <letter>
      Dear Mr.<name>John Smith</name>.
      Your order <orderid>1032</orderid> will be shipped on <shipdate>2001-07-13</shipdate>.
      Thank you,
      <name>Bob Adams</name>
    </letter>
    [/code]

    None of the options presented by W3Schools (on the page following the linked example) allow this XML, due to the second <name> element. Their explanation of the "indicators" and my testing are consistent:

    - <xs:sequence> - violates the element order
    - <xs:choice> - more than one kind of element is used
    - <xs:all> - maxOccurs is restricted to "1"

    This seems like it should be basic; after all, XHTML allows such things. How do I define my schema to allow this?
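
    For reference, a sketch of the usual fix: a repeatable xs:choice inside a mixed complex type, which lets the inline elements appear in any order and any number of times (the types for orderid and shipdate are assumptions based on the sample data):

    [code]
    <xs:element name="letter">
      <xs:complexType mixed="true">
        <xs:choice minOccurs="0" maxOccurs="unbounded">
          <xs:element name="name" type="xs:string"/>
          <xs:element name="orderid" type="xs:positiveInteger"/>
          <xs:element name="shipdate" type="xs:date"/>
        </xs:choice>
      </xs:complexType>
    </xs:element>
    [/code]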

  • javascript toolkit for offline webapps

    - by anjanb
    Hi all, we're building a survey webapp which will let the user add new records to the survey when offline, and will upload them when the browser reconnects with the server. We've identified that this will need offline storage, and hence Google Gears seems to be an obvious choice (we understand that Adobe Flash has offline storage, but we're not sure if that is the best way). I am aware of the Dojo Offline javascript toolkit, which uses Google Gears for the underlying functionality. However, Dojo Offline is not part of the Dojo toolkit after version 1.3 (currently Dojo is at 1.4.2). The Google Gears toolkit is currently frozen except for critical vulnerability fixes (it has not been updated for almost the last year) because they think HTML5 is the way to go. Hence, we're looking for a higher abstraction on top of the Google Gears engine TODAY, AND which will (in the future) switch the underlying engine to HTML5 if the browser supports the HTML5 standards. We'd love to use Dojo, but they have discontinued Dojo Offline; we'd prefer something that will be maintained for some time. Which are possible good strategies and JS toolkits/libraries to use for building this webapp? Please advise.
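
    To make the abstraction idea concrete, here is a sketch of a tiny storage facade that prefers HTML5 localStorage and falls back to the Gears database; the facade itself is made up for illustration, only the two underlying APIs are real:

    [code]
    function createRecordStore() {
      if (window.localStorage) {                    // HTML5 path
        return {
          save: function (key, value) { localStorage.setItem(key, JSON.stringify(value)); },
          load: function (key) { return JSON.parse(localStorage.getItem(key)); }
        };
      }
      if (window.google && google.gears) {          // Gears fallback
        var db = google.gears.factory.create('beta.database');
        db.open('survey-store');
        db.execute('CREATE TABLE IF NOT EXISTS records (k TEXT PRIMARY KEY, v TEXT)');
        return {
          save: function (key, value) {
            db.execute('INSERT OR REPLACE INTO records VALUES (?, ?)', [key, JSON.stringify(value)]);
          },
          load: function (key) {
            var rs = db.execute('SELECT v FROM records WHERE k = ?', [key]);
            var v = rs.isValidRow() ? JSON.parse(rs.field(0)) : null;
            rs.close();
            return v;
          }
        };
      }
      throw new Error('no offline storage available');
    }
    [/code]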

  • Animation is slow on iPhone

    - by Anthony Chan
    I'm developing an app that displays images and changes them according to the user's actions. I've created a subclass of UIView to contain an image, an index number and an array of image names. The code is like this:

    [code]
    @interface CustomPic : UIView {
        UIImageView *pic;
        NSInteger index;
        NSMutableArray *picNames; // an array of NSString
    }
    [/code]

    And in the implementation part, it has a method to change the image using a dissolve effect:

    [code]
    - (void)nextPic {
        index++;
        if (index >= [picNames count]) {
            index = 0;
        }
        UIImageView *temp = pic;
        pic.image = [UIImage imageNamed:[picNames objectAtIndex:index]];
        temp.alpha = 0;
        [UIView beginAnimations:nil context:nil];
        [UIView setAnimationCurve:UIViewAnimationCurveEaseOut];
        [UIView setAnimationDuration:0.25];
        pic.alpha = 0;
        temp.alpha = 1;
        [UIView commitAnimations];
    }
    [/code]

    In the viewController, there are several CustomPics which change their images depending on the user's choice. The images change as expected, with the fade in/out effect, but the animation performance is really bad. I've tested it on an iPhone 3G, and Instruments shows that the animation runs at only 2-3 FPS! I tried many approaches to simplify and modify the code, but with no luck. Is there something wrong in my code or in my concept? Thanks for any help. P.S. all the images are 320*480 PNGs with a max size of 15KB.
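
    One detail worth noting: temp and pic above point at the same UIImageView, so the new image replaces the old one before the fade starts, and the two alpha animations fight over a single view. A sketch of a single-view cross-fade using a CATransition instead (assumes the QuartzCore framework is linked and imported):

    [code]
    #import <QuartzCore/QuartzCore.h>

    - (void)nextPic {
        index = (index + 1) % [picNames count];
        CATransition *fade = [CATransition animation];
        fade.type = kCATransitionFade;  // dissolve from the layer's old contents to the new
        fade.duration = 0.25;
        [pic.layer addAnimation:fade forKey:nil];
        pic.image = [UIImage imageNamed:[picNames objectAtIndex:index]];
    }
    [/code]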

  • alternative to #include within namespace { } block

    - by Jeff
    Edit: I know that method 1 is essentially invalid and will probably use method 2, but I'm looking for the best hack, or a better solution to mitigate rampant, mutable namespace proliferation. I have multiple class or method definitions in one namespace that have different dependencies, and would like to use the fewest namespace blocks or explicit scopings possible, while grouping #include directives with the definitions that require them as best as possible. I've never seen any indication that any preprocessor could be told to exclude namespace {} scoping from #include contents, but I'm here to ask if something similar to this is possible (see the bottom for an explanation of why I want something dead simple):

    [code]
    // NOTE: apple.h, etc., contents are *NOT* intended to be in namespace Foo!

    // would prefer something like this:
    namespace Foo {
    #include "apple.h"
        B *A::blah(B const *x) { /* ... */ }
    #include "banana.h"
        int B::whatever(C const &var) { /* ... */ }
    #include "blueberry.h"
        void B::something() { /* ... */ }
    } // namespace Foo

    // ... over this:
    #include "apple.h"
    #include "banana.h"
    #include "blueberry.h"
    namespace Foo {
        B *A::blah(B const *x) { /* ... */ }
        int B::whatever(C const &var) { /* ... */ }
        void B::something() { /* ... */ }
    } // namespace Foo

    // ... or over this:
    #include "apple.h"
    namespace Foo {
        B *A::blah(B const *x) { /* ... */ }
    } // namespace Foo
    #include "banana.h"
    namespace Foo {
        int B::whatever(C const &var) { /* ... */ }
    } // namespace Foo
    #include "blueberry.h"
    namespace Foo {
        void B::something() { /* ... */ }
    } // namespace Foo
    [/code]

    My real problem is that I have projects where a module may need to be branched, but have coexisting components from the branches in the same program. I have classes like FooA, etc., that I've called Foo::A in the hopes of being able to branch less painfully as Foo::v1_2::A, where some program may need both a Foo::A and a Foo::v1_2::A. I'd like "Foo" or "Foo::v1_2" to show up only once per file, as a single namespace block, if possible. Moreover, I tend to prefer to locate blocks of #include directives immediately above the first definition in the file that requires them. What's my best choice, or alternatively, what should I be doing instead of hijacking the namespaces?

  • Conditional column values in NSTableView?

    - by velocityb0y
    I have an NSTableView that binds via an NSArrayController to an NSMutableArray. What's in the array are derived classes; the first few columns of the table are bound to properties that exist on the base class. That all works fine. Where I'm running into a problem is a column that should only be populated if the row maps to one specific subclass. The property that column is meant to display only exists in that subclass, since it makes no sense in terms of the base class. The user will know, from the first two columns, why the third column's cell is populated/editable or not. The binding on the third column's value is on arrangedObjects, with a model key path of something like "foo.name", where foo is the property on the subclass. However, this doesn't work, as the other subclasses in the hierarchy are not key-value coding compliant for foo. It seems like my only choice is to make foo a property on the base class so everybody responds to it, but this clutters up the interfaces of the model objects. Has anyone come up with a clean design for this situation? It can't be uncommon. (I'm a relative newcomer to Cocoa and I'm just learning the ins and outs of bindings.)
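
    One pattern that may help, sketched as an assumption rather than a documented recipe for this exact setup: let the base class absorb the unknown key instead of raising, so rows whose class lacks foo simply show an empty cell:

    [code]
    // In the base model class: keep KVC from throwing NSUndefinedKeyException
    // for subclasses that do not declare a 'foo' property.
    - (id)valueForUndefinedKey:(NSString *)key {
        if ([key isEqualToString:@"foo"]) {
            return nil;  // empty cell for rows without 'foo'
        }
        return [super valueForUndefinedKey:key];
    }
    [/code]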

  • Technology and language for a stable Digital Audio Workstation development

    - by Kill KRT
    Hi, I'm designing a cross-platform (Windows/Linux/OS X) application, something like a digital audio workstation. I'd like to create software where users have a fully featured sequencer (multiple tracks with automation) and where it is possible to create instruments using a visual language (as in Pure Data/Max MSP). Ehm... I know that I've already posted a question about a related issue, but in order to decide which technology I should use, I think I'd better do more investigation. I'm quite an expert user of audio trackers (Renoise, Protracker, ...) and sequencers (FL Studio, Cubase 5), but I have never tried to develop even a basic audio tracker. I know just the basic theory of mixing sound and how a DSP basically works. My questions are:

    1. Where can I find a good tutorial/guide/book about this issue?
    2. Do you think using C# (with NAudio) could dramatically reduce performance? I know C++ would be the best choice, but I find C# so elegant and easy to build and port, while C++ is so powerful and fast, but has too many #defines and other things that are bad for my taste! ;-)

    Thank you.

  • How are declared private ivars different from synthesized ivars?

    - by lemnar
    I know that the modern Objective-C runtime can synthesize ivars. I thought that synthesized ivars behaved exactly like declared ivars that are marked @private, but they don't. As a result, some code compiles only under the modern runtime that I expected would work on either. For example, a superclass:

    [code]
    @interface A : NSObject {
    #if !__OBJC2__
    @private
        NSString *_c;
    #endif
    }
    @property (nonatomic, copy) NSString *d;
    @end

    @implementation A
    @synthesize d=_c;
    - (void)dealloc {
        [_c release];
        [super dealloc];
    }
    @end
    [/code]

    and a subclass:

    [code]
    @interface B : A {
    #if !__OBJC2__
    @private
        NSString *_c;
    #endif
    }
    @property (nonatomic, copy) NSString *e;
    @end

    @implementation B
    @synthesize e=_c;
    - (void)dealloc {
        [_c release];
        [super dealloc];
    }
    @end
    [/code]

    A subclass can't have a declared ivar with the same name as one of its superclass's declared ivars, even if the superclass's ivar is private. This seems to me like a violation of the meaning of @private, since the subclass is affected by the superclass's choice of something private. What I'm more concerned about, however, is how I should think about synthesized ivars. I thought they acted like declared private ivars, but without the fragile base class problem. Maybe that's right, and I just don't understand the fragile base class problem. Why does the above code compile only in the modern runtime? Does the fragile base class problem exist when all superclass instance variables are private?

  • What are the advantages of using J2EE over ASP.net?

    - by m_oLogin
    We are currently planning to launch a couple of internal web projects in the future. Our company's dev teams are mostly experienced in J2EE and have worked with it for years. Today, we have the choice of launching a couple of our projects on .NET. I have checked out a couple of sources on the net, and it seems like the "J2EE vs ASP.NET" combat brings out as much discord as the well-worn "Apple vs Microsoft" or "free Eclipse vs Visual Studio"... Nevertheless, I have been quite impressed with ASP.NET's ability to create great things with huge simplicity (for example, the ASP.NET AJAX demos). No more tons of XML to play with, no more tons of frameworks to configure (we usually use the famous struts/spring/hibernate combo)... It just seems to me that ASP.NET has some good advantages over J2EE, but then again, I may speak from ignorance. What I want to know is this:

    1. What are the real advantages of using J2EE over ASP.NET?
    2. Is there anything that cannot be done in ASP.NET that can be done in J2EE?
    3. Once the frameworks are all in place and configured, is it faster to develop apps in J2EE than it is in .NET?
    4. Are applications generally easier to maintain in J2EE than in ASP.NET?
    5. Is it worth it for some developers to leave their J2EE knowledge to one side and move on to ASP.NET if it does exactly the same thing?

  • Is Hashtable really that fast?

    - by Costa
    Hi. s[0]*31^(n-1) + s[1]*31^(n-2) + ... + s[n-1] is the hash function of a Java string; I assume the rest of the languages use something similar or close to this implementation. Suppose we have a hashtable and a list, each of 50 elements, where each element is a 7-char string: ABCDEF1, ABCDEF2, ABCDEF3, ..., ABCDEFn. Assume each bucket of the hashtable contains 5 strings (I think this function will put one string per bucket, but let us assume it is 5). If we call col.Contains("ABCDEFn"), each string comparison will do 6 character comparisons and discover the difference on the 7th. The hashtable will take around 70 operations (multiplications and additions) to get the hash code and to compare with the 5 strings in the bucket, and BANG, it's found. For the list it will take around 300 comparisons to find it. In the case where there are only 10 elements, the list will take around 70 operations but the hashtable will take around 50 operations, and note that hashtable operations are more time-consuming (they are multiplications). I conclude that HybridDictionary in .NET is probably the best choice for most cases that require a hashtable of unknown size, because it will let me use a list until the list grows beyond 10 elements. I still need something like a HashSet rather than a Dictionary of keys and values; I wonder why there is no HybridSet! So what do you think? Thanks
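
    For anyone trying this out, a quick sketch of the HybridDictionary being described (it lives in System.Collections.Specialized and switches internally from a small list-based dictionary to a hashtable as it grows):

    [code]
    using System;
    using System.Collections.Specialized;

    class Demo
    {
        static void Main()
        {
            var col = new HybridDictionary();
            for (int i = 1; i <= 50; i++)
            {
                col["ABCDEF" + i] = i;  // small counts use a ListDictionary, larger ones a Hashtable
            }
            Console.WriteLine(col.Contains("ABCDEF42"));  // True
        }
    }
    [/code]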

  • How should I organize complex SQL views in Rails?

    - by Benjamin Oakes
    I manage a research database with Ruby on Rails. The data that is entered is primarily used by scientists who prefer to have all the relevant information for a study in one single massive table for use in their statistics software of choice. I'm currently presenting it as CSV, as it's very straightforward to do and compatible with the tools people want to use.

    I've written many views (the SQL kind, not the Rails HTML/ERB kind) to make the output they expect a reality. Some of these views are quite large and have a fair amount of complexity behind them. I wrote them in SQL because there are many calculations and comparisons that are more easily done with SQL. They're currently loaded into the database straight from a file named views.sql. To get the requested data, I do a select * from my_view;.

    The views.sql file is getting quite large. Part of the problem is that we're still figuring out what the data we collect means, so there's a lot of changes being made to the views all the time, and a ton of them are being created. Many of them need to be repeatable. I've recently run into issues organizing and testing these views. Rails works great for user interface stuff and business logic, but I'm not aware of much existing structure for handling the reporting we require. Some options I've thought of:

    - Should I move them into the most relevant models somehow? Several of the views interact with each other, which makes this situation more complex than just doing a single find_by_sql, so I don't know if they should only be part of the model.
    - Perhaps they should be treated as a "view" in the MVC sense? (That is, they could be moved into app/views/ and live alongside the HTML, perhaps as files named something like my_view.csv.sql which return CSV.)

    How would you deal with a complex reporting problem like this?
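
    One pattern worth sketching here (an assumption, not something from the question): give each report a read-only ActiveRecord model backed by its SQL view, so the views get names, a home under app/models, and can be exercised by ordinary model tests. The model and view names below are hypothetical:

    [code]
    # app/models/study_report.rb
    class StudyReport < ActiveRecord::Base
      set_table_name "my_view"  # back the model with the SQL view

      def readonly?
        true  # rows come from a view; they are never written
      end
    end

    # usage, e.g. in a controller action that streams CSV:
    #   rows = StudyReport.all
    [/code]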

  • Using an ORM with a database that has no defined relationships?

    - by Ahmad
    Consider a database (MSSQL 2005) that consists of 100+ tables which have primary keys defined to a certain degree. There are 'relationships' between tables, however these are not enforced with foreign key constraints. Consider the following simplified example of the typical types of tables I am dealing with. There are clear relations between the User, City and Province tables. However, the key issues are the inconsistent data types in the tables and the naming conventions:

    [code]
    User:
        UserRowId     [int] PK
        Name          [varchar(50)]
        CityId        [smallint]
        ProvinceRowId [bigint]

    City:
        CityRowId       [bigint] PK
        CityDescription [varchar(100)]

    Province:
        ProvinceId   [int] PK
        ProvinceDesc [varchar(50)]
    [/code]

    I am considering a rewrite of the application (in ASP.NET MVC) that uses this data source, similar in design to the MVC Storefront. However, I am going through a proof-of-concept phase, and this is one of the stumbling blocks I have come across:

    1. What are my options in terms of ORM choice that can be easily used, and why?
    2. Should I even be considering an ORM? (The reason I ask is that most explanations and tutorials work with relatively cleanly designed existing databases, or newly created ones, compared to mine. I am thus having a very hard time trying to find a way forward with this problem.)
    3. There is a huge amount of existing SQL queries; would a data mapper (e.g. IBatis.NET) be more suitable, since we could easily modify them to work and reuse the investment already made?

    I have found this question on SO which indicates to me that an ORM can be used; however, I get the impression that this is a question of mapping? Note: at the moment the object model is not clearly defined, as it was non-existent. The existing system did almost everything in SQL, or consisted of overly complicated and numerous queries to complete functionality. I am pretty much a noob with zero experience of ORMs and MVC, so this is an awesome learning curve I am on.
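
    For what it's worth, most mappers can declare a relationship the database never did; a sketch of an NHibernate hbm.xml fragment for the User-City-Province shape above (class and property names are made up for illustration):

    [code]
    <class name="User" table="User">
      <id name="UserRowId" column="UserRowId" type="int" />
      <property name="Name" column="Name" />
      <!-- no FK in the database, but the mapping can still join on the columns -->
      <many-to-one name="City" class="City" column="CityId" />
      <many-to-one name="Province" class="Province" column="ProvinceRowId" />
    </class>
    [/code]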

  • Which plugin framework to use for native C++/Win32

    - by Kerido
    Hi everybody. I have an extensible product that allows 3rd-party developers to extend it. The aspects that can be extended are documented, and interfaces are provided in the SDK. Currently I'm using COM, and I'm getting pretty comfortable with it. I especially like the ability to provide interface versioning in a unified manner. I consider it to be a requirement, because you never know what you're going to need in the future. Just to be precise, here's an example. Let's suppose I have an interface representing a particular feature:

    [code]
    class IFeature {
    public:
        virtual void DoFeatureTask() = 0;
    };
    [/code]

    Then, after the interface is already documented (and someone may have used it in plugin code), I realize I need more from this feature. Maybe there is an option I need to provide. I just define the second version:

    [code]
    class IFeature2 {
    public:
        virtual void DoFeatureTask(int theOption) = 0;
    };
    [/code]

    I don't mean I intend to have lots of versions, but it just may happen. In COM, because every interface is associated with a GUID, I can query a preferred implementation, determine its presence, and finally fall back to a legacy one. But after glancing through C++/COM-related questions, I noticed many recommendations against COM. So maybe it's not the best choice and I'm just too old-school. Can you advise on an alternative?
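
    A sketch of the same query-and-fall-back idea without COM, using a hand-rolled interface id; every name here is made up for illustration:

    [code]
    enum InterfaceId { IID_IFeature = 1, IID_IFeature2 = 2 };

    class IPlugin {
    public:
        virtual ~IPlugin() {}
        // Return a pointer to the requested interface, or 0 if unsupported.
        virtual void* QueryInterface(InterfaceId iid) = 0;
    };

    // Host side: prefer the newer interface, fall back to the legacy one.
    void RunFeature(IPlugin &plugin) {
        if (void *p = plugin.QueryInterface(IID_IFeature2)) {
            static_cast<IFeature2*>(p)->DoFeatureTask(42);
        } else if (void *q = plugin.QueryInterface(IID_IFeature)) {
            static_cast<IFeature*>(q)->DoFeatureTask();
        }
    }
    [/code]

    Versioning then amounts to assigning a new id per interface revision, which mirrors what the GUID gives you in COM without the registry and marshalling machinery.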

  • DotNetOpenAuth / WebSecurity Basic Info Exchange

    - by Jammer
    I've gotten a good number of OAuth logins working on my site now. My implementation is based on the WebSecurity classes, with amendments to the code to suit my needs (I pulled the WebSecurity source into mine). However, I'm now facing a new set of problems. In my application I have opted to make the user's email address the login identifier of choice. It's naturally unique and suits this use case. However, the OAuth "standards" strike again: some providers will return your email address as "username" (Google), some will return the display name (Facebook). As it stands, I see two options given my particular scenario:

    Option 1: Pull even more framework source code into my solution until I can chase down where the OpenIdRelyingParty class is actually interacted with (via the DotNetOpenAuth.AspNet facade) and make additional information requests from the OpenID providers.

    Option 2: When a user first logs in using an OpenID provider, display a kind of "complete registration" form that requests the missing info based on the provider selected.

    Option 2 is the most immediate and probably the quickest to implement, but it also includes some code smells, through having to do something different based on the provider selected. Option 1 will take longer but will ultimately make things more future-proof. I will need to perform richer interactions down the line, so this also has an edge in that regard. The more I get into the code, the more it seems that the WebSecurity class itself is actually very limiting, as it hides lots of useful DotNetOpenAuth functionality in the name of making integration easier. Andrew (the author of DNOA) has said that the Attribute Exchange stuff happens in the OpenIdRelyingParty class, but I cannot see from the DotNetOpenAuth.AspNet source code where this class is used, so I'm unsure which source would need to be pulled into my code to enable the functionality I need. Has anyone completed something similar?

  • Possible to capture all events in a web browser?

    - by David
    I am working on a pet project and am at the research stage.

    Quick summary: I am trying to intercept all form submits, onclicks, and every single keydown. My library of choice is either jQuery, or jQuery + Prototype. I figure I can batch up the events into a queue/stack and send them back to the server in timed batches to keep performance relatively stable.

    Concerns: Form submits and changes would be relatively simple, something like:

    [code]
    $("form :input").bind("change", function() { /* ... record event ... */ });
    [/code]

    But how do I ensure I get precedence over the application's handlers? I have a habit of putting return false on a lot of my form handlers when there is a validation issue, and as I understand it, that effectively stops the event in its tracks.

    My project: For my smaller remote clients I will put their products onto a VPS or run it in my home data center. Currently I use a basic authentication system: given a username/password they see the website and then hopefully send me somewhat sane notes on what is broken or should be tweaked. As a better solution, I've written a simple proxy web server that does the above but allows me to have one DNS entry and then, depending on credentials, it makes an internal request, relaying headers and rewriting URLs as needed. Every single html/text or application/* request is compressed and shoved into a sqlite table so I can partially replay what they've done. Now I am shifting to the frontend and would like to capture clicks, keydowns, and submits on everything on the page.
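
    On the precedence concern, one property worth leaning on: a capture-phase listener on document fires before the target's own handlers, and a return false in an application handler stops bubbling, not capturing. A sketch in plain DOM (the batching endpoint is left out):

    [code]
    var queue = [];

    function record(e) {
      queue.push({ type: e.type, tag: e.target && e.target.tagName, time: new Date().getTime() });
    }

    // 'true' = capture phase: runs on the way down, before bubble-phase
    // handlers that might return false or call stopPropagation().
    document.addEventListener('submit', record, true);
    document.addEventListener('click', record, true);
    document.addEventListener('keydown', record, true);

    // Flush the queue to the server on an interval.
    setInterval(function () {
      if (queue.length) {
        var batch = queue.splice(0, queue.length);
        // e.g. POST the serialized batch via XMLHttpRequest
      }
    }, 5000);
    [/code]

    (Old IE uses attachEvent and has no capture phase, so a bubble-phase fallback or library shim would be needed there.)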

  • wxPython: How to handle event binding and Show() properly.

    - by Gopal
    Hi all, I'm just starting out with wxPython and this is what I would like to do:

    a) Show a Frame (with a Panel inside it) and a button on that panel.
    b) When I press the button, a dialog box pops up (where I can select from a choice).
    c) When I press OK on the dialog box, the dialog box should be destroyed, but the original Frame+Panel+button are still there.
    d) If I press that button again, the dialog box will reappear.

    My code is given below. Unfortunately, I get the reverse effect. That is:

    a) The selection dialog box shows up first (i.e., without clicking on any button, since the top-level frame+button is never shown).
    b) When I click OK on the dialog box, then the frame with the button appears.
    c) Clicking on the button again has no effect (i.e., the dialog box does not show up again).

    What am I doing wrong? It seems that as soon as the frame is initialized (even before .Show() is called), the dialog box is initialized and shown automatically. I am doing this using Eclipse+Pydev on Windows XP with Python 2.6.

    ============File:MainFile.py===============
    [code]
    import wx
    import MyDialog  # This is implemented in another file: MyDialog.py

    class TopLevelFrame(wx.Frame):
        def __init__(self, parent, id):
            wx.Frame.__init__(self, parent, id, "Test", size=(300, 200))
            panel = wx.Panel(self)
            button = wx.Button(panel, label='Show Dialog', pos=(130, 20), size=(60, 20))
            # Bind EVENTS --> HANDLERS.
            button.Bind(wx.EVT_BUTTON, MyDialog.start(self))

    # Run the main loop to start the program.
    if __name__ == '__main__':
        app = wx.PySimpleApp()
        TopLevelFrame(parent=None, id=-1).Show()
        app.MainLoop()
    [/code]

    ============File:MyDialog.py===============
    [code]
    import wx

    def start(parent):
        inputbox = wx.SingleChoiceDialog(None, 'Choose Fruit', 'Selection Title',
                                         ['apple', 'banana', 'orange', 'papaya'])
        if inputbox.ShowModal() == wx.ID_OK:
            answer = inputbox.GetStringSelection()
        inputbox.Destroy()
    [/code]
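
    A sketch of the likely culprit, offered as a reading note: Bind expects a callable, but MyDialog.start(self) calls start right away, at bind time (which is why the dialog appears before the frame), and then binds its return value, None. Binding a function instead restores the intended behaviour:

    [code]
    # Pass a handler; the event object arrives as an argument when the button is clicked.
    button.Bind(wx.EVT_BUTTON, lambda event: MyDialog.start(self))
    [/code]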

  • Create new or update existing entity at one go with JPA

    - by Alex R
    I have a JPA entity that has a timestamp field and is distinguished by a complex identifier field. What I need is to update the timestamp in an entity that has already been stored, and otherwise create and store a new entity with the current timestamp. As it turns out, the task is not as simple as it seems at first sight. The problem is that in a concurrent environment I get a nasty "Unique index or primary key violation" exception. Here's my code:

    [code]
    // Load existing entity, if any.
    Entity e = entityManager.find(Entity.class, id);
    if (e == null) {
        // Could not find entity with the specified id in the database, so create new one.
        e = entityManager.merge(new Entity(id));
    }
    // Set current time...
    e.setTimestamp(new Date());
    // ...and finally save entity.
    entityManager.flush();
    [/code]

    Please note that in this example the entity identifier is not generated on insert; it is known in advance. When two or more threads run this block of code in parallel, they may simultaneously get null from the entityManager.find(Entity.class, id) call, so they will attempt to save two or more entities at the same time with the same identifier, resulting in the error. I think there are a few solutions to the problem:

    1. Sure, I could synchronize this code block with a global lock to prevent concurrent access to the database, but would that be the most efficient way?
    2. Some databases support a very handy MERGE statement that updates an existing row or creates a new one if none exists. But I doubt that OpenJPA (the JPA implementation of my choice) supports it.
    3. Even if JPA does not support SQL MERGE, I can always fall back to plain old JDBC and do whatever I want with the database. But I don't want to leave a comfortable API and mess with a hairy JDBC+SQL combination.
    4. There is a magic trick to fix it using the standard JPA API only, but I don't know it yet.

    Please help.
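
    For illustration, one common shape for this, sketched under the assumption that a unique-key violation is acceptable as a race signal (the exact exception class varies by provider, and after the failure a real application would retry in a fresh transaction):

    [code]
    try {
        // Optimistic path: assume the row does not exist yet.
        Entity e = new Entity(id);
        e.setTimestamp(new Date());
        entityManager.persist(e);
        entityManager.flush();  // force the INSERT so a violation surfaces here
    } catch (PersistenceException race) {
        // Another thread won the insert race: reload and just touch the timestamp.
        Entity e = entityManager.find(Entity.class, id);
        e.setTimestamp(new Date());
    }
    [/code]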

  • Which MySQL Fork/Version to Pick??

    - by Drew
    As most of you know, Sun acquired MySQL (and later Oracle acquired Sun), and during these acquisitions there was a lot of FUD in the MySQL community, which resulted in the creation of various forks. Today we have MySQL from MySQL, Percona (XtraDB) MySQL, OurDelta MySQL, MariaDB and Drizzle, to name a few. Which brings us to the source of the problem. We are in the process of upgrading our databases (hardware/software) and I would like to know which one of the forks I should go with. Each has its own set of pros/cons. We are currently using MySQL 5.0.x from MySQL/Linux on an 8-core machine. Our new hardware is a monster with 32 cores and 32GB of memory, connecting to fast NetApp storage via FC. I would like to stick with MySQL from MySQL, but I have heard horror stories about how badly MySQL 5.1 performs on many cores. I have also heard that MySQL 5.4 performs better on multi-core machines, but that's still not production-ready. In addition, I have heard a lot of good things about the Percona builds. This is what I know so far:

    - MySQL 5.1 from MySQL: reliable choice, but doesn't scale well on a big machine
    - Percona: scales well, good backing company; I don't have much experience with it
    - MariaDB: don't know much about it, besides that it was founded by original MySQL developers (including Monty)
    - OurDelta: don't know much
    - Drizzle: mostly optimized for cloud computing

    I would like to know the general notion about this problem. Which build/version should I go with? How are you guys picking your builds/versions? Thanks!

  • Tool to detect use/abuse of String.Concat (where StringBuilder should be used)

    - by Mark Rushakoff
    It's common knowledge that you shouldn't use a StringBuilder in place of a small number of concatenations:

    [code]
    string s = "Hello";
    if (greetingWorld) {
        s += " World";
    }
    s += "!";
    [/code]

    However, in loops of a significant size, StringBuilder is the obvious choice:

    [code]
    string s = "";
    foreach (var i in Enumerable.Range(1, 5000)) {
        s += i.ToString();
    }
    Console.WriteLine(s);
    [/code]

    Is there a tool that I can run on either raw C# source or a compiled assembly to identify where in the source code String.Concat is being called? (If you're not familiar, s += "foo" is mapped to String.Concat in the IL output.) Obviously, I can't realistically search through an entire project and evaluate every += to identify whether the lvalue is a string. Ideally, it would only point out calls inside a for/foreach loop, but I would even put up with all the false positives of noting every String.Concat. Also, I'm aware that there are some refactoring tools that will automatically refactor my code to use StringBuilder, but I am only interested in identifying the Concat usage at this point. I routinely run Gendarme and FxCop on my code, and neither of those tools identifies what I've described.
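
    A sketch of a do-it-yourself scan with Mono.Cecil (assuming the Cecil library is referenced; this flags every String.Concat call site, loops or not, and skips nested types for brevity):

    [code]
    using System;
    using Mono.Cecil;
    using Mono.Cecil.Cil;

    class ConcatFinder
    {
        static void Main(string[] args)
        {
            var assembly = AssemblyDefinition.ReadAssembly(args[0]);
            foreach (var module in assembly.Modules)
            foreach (var type in module.Types)
            foreach (var method in type.Methods)
            {
                if (!method.HasBody) continue;
                foreach (Instruction ins in method.Body.Instructions)
                {
                    var callee = ins.Operand as MethodReference;
                    if (callee != null
                        && callee.DeclaringType.FullName == "System.String"
                        && callee.Name == "Concat")
                    {
                        Console.WriteLine("{0}::{1} at IL_{2:x4}",
                            type.FullName, method.Name, ins.Offset);
                    }
                }
            }
        }
    }
    [/code]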

  • Clone existing structs with different alignment in Visual C++

    - by Crend King
    Is there a way to clone an existing struct with different member alignment in Visual C++? Here is the background: I use a 3rd-party library which uses several structs. To fill up the structs, I pass the address of struct instances to some functions. Unfortunately, the functions only return an unaligned buffer, so the data of some members is always wrong. /Zp is out of the question, since it breaks the other parts of the program. I know #pragma pack modifies the alignment of the following struct, but I would like to avoid copying the structs into my code, because the definitions in the library might change in the future. Sample code:

    [code]
    // test.h:
    struct am_aligned {
        BYTE data1[10];
        ULONG data2;
    };
    [/code]

    [code]
    // test.cpp:
    #include "test.h"

    // typedef alignment(1) struct am_aligned am_unaligned;

    int APIENTRY wWinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                          LPTSTR lpCmdLine, int nCmdShow)
    {
        char buffer[20] = {};
        for (int i = 0; i < sizeof(unaligned_struct); i++) {
            buffer[i] = i;
        }
        am_aligned instance = *(am_aligned*) buffer;
        return 0;
    }
    [/code]

    Consider am_aligned to be defined in the library header file. am_unaligned is my custom declaration, and only effective in test.cpp. The commented line does not work, of course. instance.data2 is 0x0f0e0d0c, while 0x0d0c0b0a is desired. Thanks for help!
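
    One trick worth sketching, with the big caveat that it only works if the header can be expanded a second time (no include guard in the way, or a dedicated translation unit): re-declare the library structs under 1-byte packing and copy member-by-member. The unaligned namespace and the load helper are made up for illustration:

    [code]
    #include <cstring>

    #pragma pack(push, 1)
    namespace unaligned {
        #include "test.h"  // am_aligned declared again, now without padding
    }
    #pragma pack(pop)

    am_aligned load(const char *buffer)
    {
        const unaligned::am_aligned *raw =
            reinterpret_cast<const unaligned::am_aligned *>(buffer);
        am_aligned result;
        std::memcpy(result.data1, raw->data1, sizeof(result.data1));
        result.data2 = raw->data2;  // members copied one by one into the aligned layout
        return result;
    }
    [/code]

    When the library definitions change, only the header re-expansion picks up the change; nothing is hand-copied.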

  • iPhone OpenGL ES Texture2D Masking

    - by Robert Neagu
    What's the best choice when trying to mask a texture, as in ColorSplash or other apps like iSteam, etc.? I started learning OpenGL ES like... 4 days ago (I'm a total rookie), and tried the following approach:

    1) I created a colored texture2D, a grayscale version of the first texture, and a third texture2D called mask.
    2) I also created a texture2D for the brush... which is grayscale and opaque (brush = black = 0,0,0,1 and surroundings = white = 1,1,1,1). My intention was to create an antialiased brush with smooth edges, but I'm fine with a normal one right now.
    3) I searched for masking techniques on the internet and found this tutorial (ZeusCMD - Design and Development Tutorials: OpenGL ES Programming Tutorials - Masking). The tutorial tells me to use blending to achieve masking... first draw colored, then mask with glBlendFunc(GL_DST_COLOR, GL_ZERO), and then grayscale with glBlendFunc(GL_ONE, GL_ONE)... and this gives me something close to what I want... but not exactly what I want. The result is masked, but it's somehow over-brightened.
    4) For drawing to the mask texture I used an extra frame buffer object (FBO).

    I'm not really happy with the resulting image (the over-brightened picture), nor with the speed achieved with this method. I think the normal way would be to draw directly to the grayscale (overlay) texture2D, affecting only its alpha channel in the places where the brush hits. Is there a fast way to achieve this? I have searched a lot and never got an answer that's clear and understandable. Then, in the main draw loop I could draw only the colored texture and then blend the grayscale on top with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). I just want to learn to use OpenGL ES, and it's driving me nuts because I can't get it to work properly. Any advice or a link to a tutorial would be much appreciated.
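
    On the "affect only the alpha channel" part, a sketch of the standard trick: mask off the color channels with glColorMask while stamping the brush into the overlay's FBO (the FBO handle name is hypothetical; OES suffixes as on OpenGL ES 1.1):

    [code]
    /* Stamp the brush into the grayscale overlay, touching only its alpha. */
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, overlayFbo);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);  /* write alpha only */
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    /* ... draw the brush quad at the touch point ... */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);     /* restore */

    /* Main loop: draw the colored texture, then the overlay blended on top. */
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    [/code]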

  • Entity Framework and associations between string keys

    - by fredrik
    Hi, I am new to Entity Framework, and to ORMs for that matter. In the project I'm involved in, we have a legacy database with all its keys as strings, case-insensitive. We are converting to MSSQL and want to use EF as the ORM, but we have run into a problem. Here is an example that illustrates it: TableA has a primary string key, and TableB has a reference to that primary key. In LINQ we write something like:

    [code]
    var result = from t in context.TableB
                 select t.TableA;

    foreach (var r in result)
        Console.WriteLine(r.someFieldInTableA);
    [/code]

    Say TableA contains a primary key that reads "A", and TableB contains two rows that reference TableA but with different cases in the referencing field, "a" and "A". In our project we want both of the rows to end up in the result, but only the one with the matching case does. Using the SQL Profiler, I have noticed that both of the rows are selected. Is there a way to tell Entity Framework that the keys are case-insensitive?

    Edit: We have now tested this with NHibernate and come to the conclusion that NHibernate works with case-insensitive keys, so NHibernate might be a better choice for us. I am, however, still interested in finding out whether there is any way to change the behaviour of Entity Framework.

  • Stored Procedures In Source Control - Automate Build/Deployment Process

    - by Alex
    My company provides a large .NET service-oriented solution. The services layer interacts with a T-SQL back-end consisting of hundreds of tables and stored procedures. Our C# code is in version control (SVN), but our stored procedures and schema are not. After much lobbying of expedient upper management, I was allowed to review our (non-existent) build/deployment process to accomplish the following goals:

    1. Place schema and stored procedures under source control.
    2. Automate the build/deployment process.

    I would like to proceed per the accepted answer's strategy in this post, but I have additional questions:

    1. I would like to use Hudson as my build server. Is this a reasonable choice for a C#/SQL solution? What better alternatives should I explore?
    2. Assuming I have all triggers, stored procedures, schema, etc. under source control, and that they are scripted to individual files, how do I generate a build script which will take into account the dependencies/references between these items? (SQL Server does this automatically, but it generates one giant script.)
    3. What does the workflow of performing an update at the client look like? i.e. I have to keep existing table data. How do I roll back schema changes?
    4. I am the only programmer. Several other pseudo-technical staff like to make changes directly inside SQL Management Studio. Is it realistic to expect others to adhere to this solution, and how can I enforce it?

    Thank you in advance for your help.
