Search Results

Search found 4557 results on 183 pages for 'prefer'.

Page 153 of 183

  • Sparse (Pseudo) Infinite Grid Data Structure for Web Game

    - by Ming
    I'm considering trying to make a game that takes place on an essentially infinite grid. The grid is very sparse: certain small regions are of relatively high density, and there are relatively few isolated non-empty cells. The amount of the grid in use is too large to implement naively, but probably smallish by "big data" standards (I'm not trying to map the Internet or anything like that). This needs to be easy to persist. Here are the operations I may want to perform (reasonably efficiently) on this grid:

    - Ask for some small rectangular region of cells and all their contents (a player's current neighborhood)
    - Set individual cells or blit small regions (the player is making a move)
    - Ask for the rough shape or outline/silhouette of some larger rectangular region (a world map or region preview)
    - Find some regions with approximately a given density (player spawning location)
    - Approximate shortest path through gaps of at most some small constant number of empty spaces per hop (it's OK to be a bad approximation often, but not OK to keep heading the wrong direction searching)
    - Approximate convex hull for a region

    Here's the catch: I want to do this in a web app. That is, I would prefer to use existing data storage (perhaps a relational database) and relatively few external dependencies (preferably avoiding the need for a persistent process). What advice can you give me on actually implementing this? How would you do it if the web-app restrictions weren't in place, and how would you modify that approach if they were? Thanks a lot, everyone!
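
    By way of illustration: a common shape for this kind of sparse grid is to bucket cells into fixed-size chunks and persist one row per non-empty chunk. A minimal C# sketch, assuming a hypothetical chunks(cx, cy, payload) table and a 16x16 chunk size (none of this is prescribed by the question):

        using System.Collections.Generic;

        // Hypothetical chunked layout: only non-empty 16x16 chunks exist,
        // each stored as one row keyed by (cx, cy) with a composite index.
        static class GridChunks
        {
            public const int Size = 16;

            public static long ChunkCoord(long cell)
            {
                // Floor division (not truncation) so negative coordinates map correctly.
                return cell >= 0 ? cell / Size : (cell - Size + 1) / Size;
            }

            // Chunk keys overlapping a query rectangle: a neighborhood fetch
            // becomes one small BETWEEN/IN query over these keys.
            public static IEnumerable<long[]> ChunksIn(long x0, long y0, long x1, long y1)
            {
                for (long cx = ChunkCoord(x0); cx <= ChunkCoord(x1); cx++)
                    for (long cy = ChunkCoord(y0); cy <= ChunkCoord(y1); cy++)
                        yield return new[] { cx, cy };
            }
        }

    Storing a per-chunk cell count alongside the payload would also let the density and silhouette queries be answered per chunk, without unpacking individual cells.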

    Read the article

  • Project management: Implementing custom errors in VS compilation process

    - by David Lively
    Like many architects, I've developed coding standards through years of experience, and I expect my developers to adhere to them. This is especially a problem with the crowd that believes three or four years of experience makes you a senior-level developer. Approaching this as a training and code review issue has generated limited success. So, I was thinking that it would be great to be able to add custom compile-time errors to the build process to more strictly enforce this and other guidelines.

    For instance, we use stored procedures for ALL database access, which provides procedure-level security, DB encapsulation (the table structure is hidden from the app), and other benefits. (Note: I am not interested in starting a debate about this.) Some developers prefer inline SQL or parameterized queries, and that's fine - on their own time and their own projects. I'd like a way to add a compilation check that finds, say, anything that looks like

        string sql = "insert into some_table (col1,col2) values (@col1, @col2);"

    and generates an error or, in certain circumstances, a warning, with a message like

        Inline SQL and parameterized queries are not permitted.

    Or, if they use the var keyword

        var x = new MyClass();

    the message would be

        Variable definitions must be explicitly typed.

    Do Visual Studio and MSBuild provide a way to add this functionality? I'm thinking that I could use a regular expression to find unacceptable code and generate the correct error, but I'm not sure what, from a performance standpoint, is the best way to integrate this into the build process. We could add a pre- or post-build step to run a custom EXE, but how can I return line- and file-specific errors? Also, I'd like this to run after compilation of each file, rather than post-link. Is a regex the best way to perform this type of pattern matching, or should I go crazy and run the code through a C# parser, which would allow node-level validation via the parse tree? I'd appreciate suggestions and tales of prior experience.
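
    On the line- and file-specific errors point: Visual Studio and MSBuild pick up tool output of the form file(line): error CODE: message and route it to the Error List, so a pre-build step running a custom EXE can report clickable errors that way. A hedged sketch, with a deliberately naive rule regex and a made-up error code:

        using System;
        using System.IO;
        using System.Text.RegularExpressions;

        // Hypothetical pre-build checker: scans .cs files under the directory
        // given in args[0] and prints MSBuild-style diagnostics for inline SQL.
        class StyleCheck
        {
            static readonly Regex InlineSql = new Regex(
                @"""\s*(insert\s+into|select\s+|update\s+|delete\s+from)",
                RegexOptions.IgnoreCase);

            static int Main(string[] args)
            {
                int failures = 0;
                foreach (string file in Directory.GetFiles(
                             args[0], "*.cs", SearchOption.AllDirectories))
                {
                    string[] lines = File.ReadAllLines(file);
                    for (int i = 0; i < lines.Length; i++)
                    {
                        if (!InlineSql.IsMatch(lines[i])) continue;
                        // "file(line): error CODE: message" is the shape that
                        // MSBuild and VS parse into a clickable error entry.
                        Console.WriteLine("{0}({1}): error SQL001: Inline SQL is not permitted.",
                                          file, i + 1);
                        failures++;
                    }
                }
                return failures == 0 ? 0 : 1; // a non-zero exit code fails the build
            }
        }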

    Read the article

  • How to use Data aware controls "correctly"?

    - by lyborko
    Hi, I would like to ask experienced users: do you prefer to use data-aware controls to add, insert, delete and edit data in the DB, or do you favor doing it manually? I developed some DB applications in which, for the sake of a "user friendly policy", I ran into a complicated web of table events (AfterInsert, AfterEdit, After... and BeforeEdit, BeforeInsert, Before...). After that it was quite nasty work to debug the application. Aware of this risk, in a later application I tried to avoid this problem, so I paid increased attention to writing the code well, readably and comprehensibly. Everything seemed all right from the beginning, but as I needed to handle some preprocessing before sending and loading data etc., I ran into the same problems again, "slowly and inevitably". Sometimes I could not use data-aware controls anyway, and what seemed to be a "cool" feature of a DAControl at the beginning turned into an obstacle in the end. I "had to" write special routines for non-data-aware controls in order to make them behave as if they were data-aware. Then I asked myself: why on earth should I use data-aware controls? Is it better to base an application's architecture on non-data-aware controls? It requires more time to write bug-proof code, of course, but is it worth it? I do not know... It has happened to me several times, like a jinx: paradise at the beginning, hell at the end... I do not know if I am using the wrong method to write DB programs, or if there is some standard common practice for how to proceed. Or is this a common problem for everybody? Thanks for your advice and experiences.

    Read the article

  • Destructors not called when native (C++) exception propagates to CLR component

    - by Phil Nash
    We have a large body of native C++ code, compiled into DLLs. Then we have a couple of DLLs containing C++/CLI proxy code to wrap the C++ interfaces. On top of that we have C# code calling into the C++/CLI wrappers. Standard stuff, so far. But we have a lot of cases where native C++ exceptions are allowed to propagate to the .NET world, and we rely on .NET's ability to wrap these as System.Exception objects; for the most part this works fine. However, we have been finding that destructors of objects in scope at the point of the throw are not being invoked when the exception propagates!

    After some research we found that this is a fairly well-known issue. However, the solutions/workarounds seem less consistent. We did find that if the native code is compiled with /EHa instead of /EHsc the issue disappears (at least it did in our test case). However, we would much prefer to use /EHsc, as we translate SEH exceptions to C++ exceptions ourselves and we would rather allow the compiler more scope for optimisation. Are there any other workarounds for this issue - other than wrapping every call across the native-managed boundary in a (native) try-catch-throw (in addition to the C++/CLI layer)?

    Read the article

  • Which source control paradigm and solution to embed in a custom editor application?

    - by Greg Harman
    I am building an application that manages a number of custom objects, which may be edited concurrently by multiple users (using different instances of the application). These objects have an underlying serialized representation, and my plan is to persist them (through my application UI) in an external source control system. Of course this implies that my application can check the current version of an object for updates, provide a merging interface for each object, etc. My question is what source control paradigm(s) and specific solution(s) to support, and why. The way I (perhaps naively) see the source control world, there are three general paradigms:

    1. Single repository, locked access (MS SourceSafe)
    2. Single repository, concurrent access (CVS/SVN)
    3. Distributed (Mercurial, Git)

    I haven't heard of anyone using #1 for quite a number of years, so I am planning to disregard this case altogether (unless I get a compelling argument otherwise). However, I'm at a loss as to whether to support #2 or #3, and which specific implementations. I'm concerned that the usage paradigms are subtly different enough that I can't adequately capture basic operations in a single UI. The last bit of information I should convey is that this application is intended to be deployed in a commercial setting, where a source control system may already be in use. I would prefer not to support more than one solution unless it's really a deal-breaker, so wide adoption in a corporate setting is a plus.

    Read the article

  • Complex web forms and JavaScript

    - by Casey
    I need to create a few data-heavy, complicated forms. Currently, the information is being entered into a spreadsheet, but the users will need to enter it into an online form from which it will be saved to a database. The problem is that the business users currently using the spreadsheet aren't going to want to use the online application if it isn't as easy as entering the information into the spreadsheet. This is further complicated in that the information they are entering into the spreadsheet is represented by three different DB tables, where one "object" is composed of the other two. I would prefer not to make them go through multiple forms. Some of what I have been thinking about:

    - Use of autocomplete where possible
    - Hiding/removing form fields dynamically
    - Possibly a wizard-style page flow??

    I've been googling for other data-heavy web forms but can't seem to really find any good examples. I am familiar with jQuery and prototype.js and have also tried googling for frameworks designed for data-heavy applications, but didn't come up with anything. Any thoughts? Thanks.

    Read the article

  • Disable Eclipse warning about generated html?

    - by Chadwick
    When developing Flex projects, Eclipse gives warnings about the default index.html file generated by Flex Builder. The file is in the 'target' folder (or "generated artifacts" folder - yes, I'm also using Maven). Can I eliminate or disable this warning? The code which generates the warning is below, though I would definitely prefer not changing the html - as I say, this is the template suggested by Adobe. Eclipse warns of "Undefined attribute name (xxx)" for scroll on the body tag, and for most of the embed attributes. There is no DOCTYPE declaration in the html file.

        <html lang="en">
        ...
        <body scroll="no">
        ...
        <embed src="myswf.swf" quality="high" bgcolor="#869ca7"
               width="100%" height="100%" name="myswf-flex" align="middle"
               play="true" loop="false" quality="high"
               allowScriptAccess="sameDomain"
               type="application/x-shockwave-flash"
               pluginspage="http://www.adobe.com/go/getflashplayer">
        </embed>
        ...

    Read the article

  • Java SWT: wrapping syncExec and asyncExec to clean up code

    - by jonescb
    I have a Java application using SWT as the toolkit, and I'm getting tired of all the ugly boilerplate code it takes to update a GUI element. Just to set a disabled button to be enabled, I have to go through something like this:

        shell.getDisplay().asyncExec(new Runnable() {
            public void run() {
                buttonOk.setEnabled(true);
            }
        });

    I prefer keeping my source code as flat as I possibly can, but I need a whopping 3 indentation levels just to do something simple. Is there some way I can wrap it? I would like a class like:

        public class UIUpdater {
            public static void updateUI(Shell shell, *function_ptr*) {
                shell.getDisplay().asyncExec(new Runnable() {
                    public void run() {
                        // Execute function_ptr
                    }
                });
            }
        }

    which can be used like so:

        UIUpdater.updateUI(shell, buttonOk.setEnabled(true));

    Something like this would be great for hiding that horrible mess SWT seems to think is necessary to do anything. As I understand it, Java cannot do function pointers, but Java 7 will have something called closures, which should be what I want. In the meantime, is there anything at all I can do to pass a function pointer or callback to another function to be executed? As an aside, I'm starting to think it'd be worth the effort to redo this application in Swing, so that I don't have to put up with this ugly crap and the non-cross-platformyness of SWT.

    Read the article

  • Lost with Hibernate - OneToMany resulting in the one being pulled back many times

    - by Andy
    I have this DB design:

        CREATE TABLE report (
            ID MEDIUMINT PRIMARY KEY NOT NULL AUTO_INCREMENT,
            user MEDIUMINT NOT NULL,
            created TIMESTAMP NOT NULL,
            state INT NOT NULL,
            FOREIGN KEY (user) REFERENCES user(ID) ON UPDATE CASCADE ON DELETE CASCADE
        );

        CREATE TABLE reportProperties (
            ID MEDIUMINT NOT NULL,
            k VARCHAR(128) NOT NULL,
            v TEXT NOT NULL,
            PRIMARY KEY( ID, k ),
            FOREIGN KEY (ID) REFERENCES report(ID) ON UPDATE CASCADE ON DELETE CASCADE
        );

    and this Hibernate mapping:

        @Table(name="report")
        @Entity(name="ReportEntity")
        public class ReportEntity extends Report {

            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            @Column(name="ID")
            private Integer ID;

            @Column(name="user")
            private Integer user;

            @Column(name="created")
            private Timestamp created;

            @Column(name="state")
            private Integer state = ReportState.RUNNING.getLevel();

            @OneToMany(mappedBy="pk.ID", fetch=FetchType.EAGER)
            @JoinColumns(
                @JoinColumn(name="ID", referencedColumnName="ID")
            )
            @MapKey(name="pk.key")
            private Map<String, ReportPropertyEntity> reportProperties =
                new HashMap<String, ReportPropertyEntity>();
        }

    and

        @Table(name="reportProperties")
        @Entity(name="ReportPropertyEntity")
        public class ReportPropertyEntity extends ReportProperty {

            @Embeddable
            public static class ReportPropertyEntityPk implements Serializable {

                /**
                 * long#serialVersionUID
                 */
                private static final long serialVersionUID = 2545373078182672152L;

                @Column(name="ID")
                protected int ID;

                @Column(name="k")
                protected String key;
            }

            @EmbeddedId
            protected ReportPropertyEntityPk pk = new ReportPropertyEntityPk();

            @Column(name="v")
            protected String value;
        }

    I have inserted one report and 4 properties for that report. Now when I execute this:

        this.findByCriteria(
            Order.asc("created"),
            Restrictions.eq("user", user.getObject(UserField.ID))
        );

    I get back the report 4 times, instead of just once with a Map containing the 4 properties. I'm not great at Hibernate, to be honest - I prefer straight SQL - but I must learn, and I can't see what it is that's wrong... Any suggestions?

    Read the article

  • Git Workflow With Capistrano

    - by jerhinesmith
    I'm trying to get my head around a good Git workflow using Capistrano. I've found a few good articles, but I'm either not grasping completely what they're suggesting (likely) or they're somewhat lacking. Here's kind of what I had in mind so far, but I get caught up on when to merge back into the master branch (i.e. before moving to stage? after?) and on trying to hook it into Capistrano for deployments:

    1. Make sure you're up to date with all the changes made on the remote master branch by other developers:

        git checkout master
        git pull

    2. Create a new branch that pertains to the particular bug you're trying to fix:

        git checkout -b bug-fix-branch

    3. Make your changes:

        git status
        git add .
        git commit -m "Friendly message about the commit"

    So, this is usually where I get stuck. At this point, I have a master branch that is healthy and a new bug-fix-branch that contains my (untested - other than unit tests) changes. If I want to push my changes to stage (through cap staging deploy), do I have to merge my changes back into the master branch? I'd prefer not to, since it seems like master should be kept free of untested code. Do I even deploy from master (or should I be tagging a release first and then modifying my production.rb file to deploy from that tag)? git-deployment seems to address some of these workflow issues, but I can't seem to find out how on earth it actually hooks into cap staging deploy and cap production deploy. Thoughts? I assume there's likely a canonical way to do this, but I either can't find it or I'm too new to Git to recognize that I have found it. Help!

    Read the article

  • Choosing a method for a webservice

    - by Wrikken
    I'm asked to set up a new webservice which should be easily usable from as many languages as possible (PHP, .NET, Java, etc.). Of course rolling my own can be done, accepting different content types (XML / x-www-form-urlencoded (normal POST) / JSON / etc.), but an existing method or mechanism would of course be preferred, cutting down the time the consumers of the service spend on development. The webservice does accept modifications/sets (it is not only simple data retrieval), but those will most likely be quite a lot fewer than gets (we estimate about 2.5% sets, 97.5% gets). The term webservice here indicates that the protocol should go over HTTP, and that it cannot be implemented entirely client-side (JavaScript in the end user's browser, etc.), as it needs specific user authentication. Both gets and sets are pretty light on the parameter count (usually 1 to 4). Methods like REST (which I'd prefer for gets only), XML-RPC and SOAP (might be a bit overkill, but has the advantage of explicitly defined methods and returns) are the usual suspects. In your opinion/experience, what is the most widely 'spoken' and most easily implementable protocol across different languages (seen from the consumers' viewpoint) which could fulfill this need?

    Read the article

  • A regex I have working in Ruby doesn't in PHP; what could the cause be?

    - by Alex R
    I do not know Ruby. I am trying to use the following regex, which was generated by Ruby (namely by http://www.a-k-r.org/abnf/ running on the grammar given in RFC 1738), in PHP. It fails to match in PHP, but it successfully matches in Ruby. Does anyone see what differences between PHP's and Ruby's handling of regexes might explain this discrepancy?

        http:\/\/(?:(?:(?:(?:[0-9a-z]|[0-9a-z](?:[\x2d0-9a-z]?)*[0-9a-z])\x2e)?)*(?:[a-z]|[a-z](?:[\x2d0-9a-z]?)*[0-9a-z])|\d+\x2e\d+\x2e\d+\x2e\d+)(?::\d+)?(?:\/(?:(?:[!\x24'-\x2e0-9_a-z]|%[0-9a-f][0-9a-f]|[&:;=@])?)*(?:(?:\/(?:(?:[!\x24'-\x2e0-9_a-z]|%[0-9a-f][0-9a-f]|[&:;=@])?)*)?)*(?:\x3f(?:(?:[!\x24'-\x2e0-9_a-z]|%[0-9a-f][0-9a-f]|[&:;=@])?)*)?)?/i

    Since you all love regexes so much, how about an alternate solution: given the ABNF in an RFC, I want a way (in PHP) to check whether an arbitrary string is in the grammar. APG fails to compile on a 64-bit system, VTC is not Free, and I have not found any other such tools. I would also prefer not to use a regex, but it's the closest I've come to success.

    Read the article

  • What is the preferred way in C++ for converting a builtin type (int) to bool?

    - by Martin
    When programming with Visual C++, I think every developer is used to seeing the warning

        warning C4800: 'BOOL' : forcing value to bool 'true' or 'false'

    from time to time. The reason, obviously, is that BOOL is defined as int, and directly assigning any of the built-in numerical types to bool is considered a bad idea. So my question is now: given any built-in numerical type (int, short, ...) that is to be interpreted as a boolean value, what is the/your preferred way of actually storing that value into a variable of type bool?

    Note: While mixing BOOL and bool is probably a bad idea, I think the problem will inevitably pop up, whether on Windows or somewhere else, so I think this question is neither Visual C++ nor Windows specific. Given

        int nBoolean;

    I prefer this style:

        bool b = nBoolean ? true : false;

    The following might be alternatives:

        bool b = !!nBoolean;
        bool b = (nBoolean != 0);

    Is there a generally preferred way? Rationale? I should add: since I only work with Visual C++, I cannot really say whether this is a VC++-specific question or whether the same problem pops up with other compilers. So it would be interesting to specifically hear from g++ users how they handle the int-to-bool case.

    Regarding standard C++: as David Thornley notes in a comment, the C++ Standard does not require this behavior. In fact it seems to explicitly allow it, so one might consider this a VC++ weirdness. To quote the N3029 draft (which is what I have around atm):

        4.12 Boolean conversions [conv.bool]
        A prvalue of arithmetic, unscoped enumeration, pointer, or pointer to
        member type can be converted to a prvalue of type bool. A zero value,
        null pointer value, or null member pointer value is converted to false;
        any other value is converted to true. (...)

    Read the article

  • Which are the RDBMS that minimize the server roundtrips? Which RDBMS are better (in this area) than

    - by user193655
    When latency is high ("when pinging the server takes time"), server roundtrips make the difference. Now I don't want to focus on the roundtrips created in programming, but on the roundtrips that occur "under the hood" in the DB engine - the roundtrips that are 100% dependent on how the RDBMS itself is written. I have been told that Firebird has more roundtrips than MySQL, but this is the only information I have. I currently use MS SQL, but I'd like to change RDBMS (because I use the Express Editions, and in my scenario they are quite limiting from a performance point of view). So, to make a wise choice, I would like to include this point in "my RDBMS comparison feature matrix" to understand which is the best RDBMS to choose as an alternative to MS SQL. The claim above would make me prefer MySQL to Firebird (for the roundtrip concept, not in general), but can anyone add information? And where does MS SQL stand? Is anyone able to "rank" the roundtrip performance of the main RDBMSs, or at least of MS SQL, MySQL, PostgreSQL and Firebird? (I am not interested in Oracle since it is not free, and if I have to change I would change to a free RDBMS.) Anyway, MySQL (as mentioned several times on Stack Overflow) has an unclear future and a not-100%-free license, so my final choice will probably fall on PostgreSQL or Firebird.

    Additional info: you could answer my question by making a simple list like MSSQL: 3; MySQL: 1; Firebird: 2; PostgreSQL: 2 (where 1 is good, 2 average, 3 bad). Of course, if you can post some links where roundtrips per RDBMS are compared, that would be great.

    Read the article

  • Left/Right/Inner joins using C# and LINQ

    - by Keith Barrows
    I am trying to figure out how to do a series of queries to get the updates, deletes and inserts segregated into their own calls. I have 2 tables, one in each of 2 databases. One is a read-only feeds database and the other is the T-SQL read/write production source. There are a few key columns in common between the two. To set up, I am doing this:

        List<model.AutoWithImage> feedProductList =
            _dbFeed.AutoWithImage.Where(a => a.ClientID == ClientID).ToList();

        List<model.vwCompanyDetails> companyDetailList =
            _dbRiv.vwCompanyDetails.Where(a => a.ClientID == ClientID).ToList();

        foreach (model.vwCompanyDetails companyDetail in companyDetailList)
        {
            List<model.Product> productList = _dbRiv.Product.Include("Company")
                .Where(a => a.Company.CompanyId == companyDetail.CompanyId).ToList();
        }

    Now that I have a (source) list of products from the feed, and an existing (target) list of products from my prod DB, I'd like to do 3 things:

    1. Find all SKUs in the feed that are not in the target
    2. Find all SKUs that are in both and are active feed products, and update the target
    3. Find all SKUs that are in both and are inactive, and soft-delete them from the target

    What are the best practices for doing this without running a double loop? I'd prefer a LINQ to Objects solution, as I already have my objects. EDIT: BTW, I will need to transfer info from feed rows to target rows in the first 2 instances, and just set a flag in the last instance. TIA
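
    By way of illustration, here is a hedged LINQ to Objects sketch of the three buckets, joining on SKU. The SKU and Active member names are assumptions (the question doesn't show the row types), and productList stands for the target rows of the company being processed:

        // Requires: using System.Linq; using System.Collections.Generic;
        var targetSkus = new HashSet<string>(productList.Select(p => p.SKU));

        // 1. In the feed but not in the target -> inserts
        var toInsert = feedProductList.Where(f => !targetSkus.Contains(f.SKU)).ToList();

        // 2. In both, active in the feed -> updates (pairs expose both rows,
        //    so feed values can be copied onto the target row)
        var toUpdate = (from f in feedProductList
                        join p in productList on f.SKU equals p.SKU
                        where f.Active
                        select new { Feed = f, Target = p }).ToList();

        // 3. In both, inactive in the feed -> soft deletes (just flag these)
        var toDelete = (from f in feedProductList
                        join p in productList on f.SKU equals p.SKU
                        where !f.Active
                        select p).ToList();

    The HashSet lookup and the hash-based join keep each pass roughly linear in the number of rows, which avoids the double loop.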

    Read the article

  • What are all the optimization tricks that you know for ASP.NET code?

    - by Aristos
    After some time of doing a lot of programming on ASP.NET, I discovered the very big speed difference between string and StringBuilder. I know that this is very common and well known, but I just mention it as a starting point.

    The second thing that I have found speeds up code is to use const, and not static, to declare my configuration constant values (especially the strings). With const, the compiler does not create a new object; it just places the value at the point where you ask for it. With a static declaration, a new object is created and kept in memory.

    My third trick: when I search for strings, I use hash values, and not the strings themselves. For example, if I need a List<string> SomeValues in which to place strings that I will need to search, I prefer to use a List<int> SomeHashValues, and I use the hash value to locate the strings.

    My fourth thought, which I was wondering about, is whether it is better to place big strings on one line, or to separate them across different lines with the + symbol so they are easier to read. I made some tests and saw that the compiler does a good job if you split the string across many lines using the + symbol.

    What other tricks/tips do you know and use in your programming to make it run faster, and maybe use less memory? Well, I know that sometimes, to make something run faster, you need more memory, more cache. My priority is speed. Because speed counts.
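
    On the first point, a minimal illustration of why the two differ (the loop bound is arbitrary):

        // Each += allocates a brand-new string and copies the old contents,
        // so the loop does O(n^2) copying in total.
        string s = "";
        for (int i = 0; i < 10000; i++)
            s += i.ToString();

        // StringBuilder appends into a growing internal buffer and
        // materializes the final string once.
        var sb = new System.Text.StringBuilder();
        for (int i = 0; i < 10000; i++)
            sb.Append(i);
        string result = sb.ToString();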

    Read the article

  • Java try finally variations

    - by Petr Gladkikh
    This question has nagged me for a while, but I have not found a complete answer to it yet (e.g. this one is for C#: http://stackoverflow.com/questions/463029/initializing-disposable-resources-outside-or-inside-try-finally). Consider the two following Java code fragments:

        Closeable in = new FileInputStream("data.txt");
        try {
            doSomething(in);
        } finally {
            in.close();
        }

    and the second variation:

        Closeable in = null;
        try {
            in = new FileInputStream("data.txt");
            doSomething(in);
        } finally {
            if (null != in) in.close();
        }

    The part that worries me is that the thread might be interrupted between the moment the resource is acquired (e.g. the file is opened) and the moment the resulting value is assigned to the respective local variable. Are there any scenarios other than the following in which the thread might be interrupted at that point?

    - An InterruptedException (e.g. via Thread#interrupt()) or OutOfMemoryError is thrown
    - The JVM exits (e.g. via kill, System.exit())
    - Hardware failure (or a bug in the JVM, to make the list complete :)

    I have read that the second approach is somewhat more "idiomatic", but IMO in the scenario above there's no difference, and in all other scenarios they are equal. So the question: what are the differences between the two? Which should I prefer if I am concerned about freeing resources (especially in heavily multi-threaded applications), and why? I would appreciate it if anyone could point me to the parts of the Java/JVM specs that support the answers.

    Read the article

  • LGPL library with plugins of varied licenses

    - by Chris
    Note: "Plugins" here refers to shared objects that are accessed via dlopen() and friends. I'm writing a library that I'm planning on releasing under the LGPL. Its functionality can be extended (supporting new audio file formats, specifically) through plugins. I'm planning on creating an exception to the LGPL for this library so that plugins can be released under any license. So far so good. I've written a number of plugins already, some of which use LGPL and some of which use GPL libraries. I'm wary of releasing them with the main library, however, due to licensing issues. The LGPL-based ones would generally be fine, but for my "any license" clause. Would distributing these LGPL-based plugins with the library require the consent of the other license holders to create this exception? Along the same lines, would the inclusion of GPL-based plugins with my library force the whole thing to go GPL? I could also release the plugins separately. The advantage, I presume, is that the plugins an d library will now not be distributed together, creating more separation. But this seems to be no different, really, in the end. Boiled down: Can I include, with my LGPL library, plugins of varied licenses? If not, is it really any different releasing them separately? And if so, there's no real need to create an exception for non-LGPL plugins, is there? It's LGPL or nothing. I'd prefer asking a lawyer, of course, but this is just a hobby and I can't afford to hire a lawyer when I don't expect or want monetary compensation. I'm just hoping others have been in similar situations and have insight.

    Read the article

  • Is there a way to set two C#.Net projects to trust one another?

    - by Eric
    I have two C#.NET projects in a single solution: ModelProject and PlugInProject. PlugInProject is a plug-in for another application, and consequently references its API. PlugInProject is used to extract data from the application it plugs into. ModelProject contains the data model of classes that are extracted by PlugInProject. The extracted data can be used independently of the application or the plug-in, which is why I am keeping PlugInProject separate from ModelProject. I want ModelProject to remain independent of PlugInProject and the application's API. In other words, I want someone to be able to access the extracted data without needing access to PlugInProject, the application, or the application's API. The problem I'm running into, though, is that PlugInProject needs to be able to create and modify classes in ModelProject. However, I'd prefer not to make these actions public to anyone using ModelProject. The extracted data should effectively be read-only, unless later modified by PlugInProject. How can I keep these projects separate but give PlugInProject exclusive access to ModelProject? Is this possible?
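
    What this describes maps closely onto C#'s friend-assembly mechanism: keep the mutating members internal in ModelProject and grant PlugInProject visibility with InternalsVisibleTo. A minimal sketch (ExtractedItem is illustrative, not from the question):

        // In ModelProject, e.g. Properties/AssemblyInfo.cs:
        using System.Runtime.CompilerServices;

        [assembly: InternalsVisibleTo("PlugInProject")]

        // A model class: publicly readable, but writable only from within
        // ModelProject itself and from the friend assembly PlugInProject.
        public class ExtractedItem
        {
            public string Name { get; internal set; }

            internal ExtractedItem(string name)
            {
                Name = name;
            }
        }

    Note that if PlugInProject is strong-named, the attribute argument must include its full public key.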

    Read the article

  • Problems with Json Serialize Dictionary<Enum, Int32>

    - by dbemerlin
    Hi, whenever I try to serialize the dictionary I get the exception:

        System.ArgumentException: Type 'System.Collections.Generic.Dictionary`2[[Foo.DictionarySerializationTest+TestEnum, Foo, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null],[System.Int32, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]]' is not supported for serialization/deserialization of a dictionary, keys must be strings or object

    My test case is:

        public class DictionarySerializationTest
        {
            private enum TestEnum { A, B, C }

            public void SerializationTest()
            {
                Dictionary<TestEnum, Int32> data = new Dictionary<TestEnum, Int32>();
                data.Add(TestEnum.A, 1);
                data.Add(TestEnum.B, 2);
                data.Add(TestEnum.C, 3);
                JavaScriptSerializer serializer = new JavaScriptSerializer();
                String result = serializer.Serialize(data); // Throws
            }

            public void SerializationStringTest()
            {
                Dictionary<String, Int32> data = new Dictionary<String, Int32>();
                data.Add(TestEnum.A.ToString(), 1);
                data.Add(TestEnum.B.ToString(), 2);
                data.Add(TestEnum.C.ToString(), 3);
                JavaScriptSerializer serializer = new JavaScriptSerializer();
                String result = serializer.Serialize(data); // Succeeds
            }
        }

    Of course I could use .ToString() whenever I enter something into the dictionary, but since it's used quite often in performance-relevant methods, I would prefer using the enum. My only solution so far is using .ToString() and converting before entering the performance-critical regions, but that is clumsy, and I would have to change my code structure just to be able to serialize the data. Does anyone have an idea how I could serialize the dictionary as <Enum, Int32>? I use System.Web.Script.Serialization.JavaScriptSerializer for serialization.
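
    A hedged variant of the workaround the question already mentions: keep the enum-keyed dictionary everywhere and project to string keys only at the serialization boundary, so .ToString() is never paid in the performance-critical path:

        // Requires: using System.Linq;

        // Serialize: project enum keys to strings at the boundary only.
        var wire = data.ToDictionary(kv => kv.Key.ToString(), kv => kv.Value);
        string json = serializer.Serialize(wire);

        // Deserialize: parse the string keys back into the enum.
        var roundTripped = serializer
            .Deserialize<Dictionary<string, Int32>>(json)
            .ToDictionary(
                kv => (TestEnum)Enum.Parse(typeof(TestEnum), kv.Key),
                kv => kv.Value);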

    Read the article

  • How do I make a web interface for a socket server

    - by mgroat
    I've got a socket server running (it's something that's basically like a chat server). Users can telnet into it, but I'd like to make a web interface. This is the first time I've ever done something like this, so I'm not really sure where to start. A few thoughts I've had:

    1. Have some server-side Python (or PHP) on my webserver which accesses the socket server. I think I know enough about sockets to have Python interact with the server, but how do I go about getting the website that the user sees to update in real time? Should I just have the website refresh every few seconds? I would prefer to do things this way if I can figure it out.

    2. Write a Java applet that interacts with the socket server, and embed the applet in the website. I would have to re-learn a language that I haven't touched in years, but my main goal here is learning, so that wouldn't be such a bad thing. The main problem I have with this is that it requires end users to have Java installed on their computers, which I'd rather not require.

    Is one of these two solutions the right way to go? Anybody know where I can find a good tutorial to get started? Edit: There are no real security concerns with exposing the server to the internet.

    Read the article

  • HTML Submit button vs AJAX based Post (ASP.NET MVC)

    - by Graham
    I'm after some design advice. I'm working on an application with a fellow developer. I'm from the WebForms world and he's done a lot with jQuery and AJAX. We're collaborating on a new ASP.NET MVC 1.0 app. He's done some pretty amazing stuff that I'm just getting my head around, and has used some third-party tools etc. for datagrids and the like. But...

    He rarely uses submit buttons, whereas I use them most of the time. He uses a button, but then attaches JavaScript to it that calls an MVC action which returns a JSON object; he then parses the object to update the datagrid. I'm not sure how he deals with server-side validation - I think he adds a message property to the JSON object. A sample scenario would be to "Save" a new record that then gets added to the gridview. The user doesn't see a postback as such, so he uses jQuery to disable the UI whilst the controller action is running. TBH, it looks pretty cool.

    The way I'd do it, however, would be to use a submit button to post back, let the ModelBinder populate a typed model class, parse that in my controller action method, apply any validation against the model, update it with the new record, then send it back to be rendered by the view. Unlike him, I don't return a JSON object; I let the view (and datagrid) bind to the new model data.

    Both solutions "work", but we're obviously taking the application down different paths, so one of us has to re-work our code - and we don't mind whose has to be done. What I'd prefer, though, is that we adopt the "industry-standard" way of doing this. I'm unsure whether my WebForms background is influencing the fact that his way just "doesn't feel right", in that a "submit" is meant to submit data to the server. Any advice at all please - many thanks.
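
    For concreteness, a hedged sketch of the two styles as MVC 1.0 controller actions; RecordModel and the action names are made up for illustration:

        // AJAX style: return JSON for client-side script to apply to the grid.
        [AcceptVerbs(HttpVerbs.Post)]
        public JsonResult SaveRecordJson(RecordModel model)
        {
            if (!ModelState.IsValid)
                return Json(new { success = false, message = "Validation failed." });
            // ... persist the record ...
            return Json(new { success = true, record = model });
        }

        // Submit-button style: full postback; the view re-renders and the
        // grid binds to the updated model on the server.
        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult SaveRecord(RecordModel model)
        {
            if (!ModelState.IsValid)
                return View(model);           // redisplay with validation errors
            // ... persist the record ...
            return RedirectToAction("Index"); // post-redirect-get
        }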

    Read the article

  • Choosing between instance methods and separate functions?

    - by StackedCrooked
    Adding functionality to a class can be done by adding a method or by defining a function that takes an object as its first parameter. Most programmers I know would choose to add an instance method. However, I sometimes prefer to create a separate function. For example, in the example code below, Area and Diagonal are defined as separate functions instead of methods. I find it better this way because I think these functions provide enhancements rather than core functionality. Is this considered a good/bad practice? If the answer is "it depends", then what are the rules for deciding between adding a method and defining a separate function?

        class Rect
        {
        public:
            Rect(int x, int y, int w, int h) :
                mX(x), mY(y), mWidth(w), mHeight(h)
            {
            }

            int x() const { return mX; }
            int y() const { return mY; }
            int width() const { return mWidth; }
            int height() const { return mHeight; }

        private:
            int mX, mY, mWidth, mHeight;
        };

        int Area(const Rect & inRect)
        {
            return inRect.width() * inRect.height();
        }

        float Diagonal(const Rect & inRect)
        {
            return std::sqrt(std::pow(static_cast<float>(inRect.width()), 2) +
                             std::pow(static_cast<float>(inRect.height()), 2));
        }

    Read the article

  • Is there any reason to use C instead of C++ for embedded development?

    - by Piotr Czapla
    Question: I have two compilers for my hardware, C++ and C89. I'm thinking about using C++ with classes but without polymorphism (to avoid vtables). The main reasons I'd like to use C++ are:

    - I prefer to use "inline" functions instead of macro definitions.
    - I'd like to use namespaces, as prefixes clutter the code.
    - I see C++ as a bit more type-safe, mainly because of templates and verbose casting.
    - I really like overloaded functions and constructors (used for automatic casting).

    Do you see any reason to stick with C89 when developing for very limited hardware (4 KB of RAM)?

    Conclusion: Thank you for your answers, they were really helpful! I thought the subject through, and I will stick with C, mainly because:

    - It is easier to predict the actual code in C, and this is really important if you have only 4 KB of RAM.
    - My team consists mainly of C developers, so the advanced features of C++ wouldn't be frequently used.
    - I've found a way to inline functions in my C compiler (C89).

    It is hard to accept one answer, as you provided so many good ones. Unfortunately I can't create a wiki and accept it, so I will choose the one answer that made me think the most.

    Read the article

  • Is there really such a thing as a char or short in modern programming?

    - by Dean P
    Howdy all, I've been learning to program for a Mac over the past few months (I have experience in other languages). Obviously that has meant learning the Objective-C language, and thus the plainer C it is predicated on. So I have stumbled upon this quote, which refers to the C/C++ language in general, not just the Mac platform:

        With C and C++, prefer use of int over char and short. The main reason
        behind this is that C and C++ perform arithmetic operations and
        parameter passing at integer level. If you have an integer value that
        can fit in a byte, you should still consider using an int to hold the
        number. If you use a char, the compiler will first convert the values
        into integer, perform the operations and then convert back the result
        to char.

    So my question: is this the case in the Mac desktop and iPhone OS environments? I understand that when talking about these environments we're actually talking about 3-4 different architectures (PPC, i386, ARM and the A4 ARM variant), so there may not be a single answer. Nevertheless, does the general principle hold that in modern 32-bit/64-bit systems, using 1-2 byte variables that don't align with the machine's natural 4-byte words doesn't provide much of the efficiency we might expect? For instance, a plain old C array of 100,000 chars is smaller than the same 100,000 ints by a factor of four, but if, during an enumeration, reading out each index involves a cast/boxing/unboxing of sorts, will we see overall lower 'performance' despite the saved memory overhead?

    Read the article
