Search Results

Search found 2838 results on 114 pages for 'considered harmful'.

Page 90/114 | < Previous Page | 86 87 88 89 90 91 92 93 94 95 96 97  | Next Page >

  • Embedding Lua functions as member variables in Java

    - by Zarion
    Although the program I'm working on is in Java, answering this from a C perspective is also fine, considering that most of this is either language-agnostic or happens on the Lua side of things.

    In the outline I have for the architecture of a game I'm programming, individual types of game objects within a particular class (e.g. creatures, items, spells, etc.) are loaded from a data file. Most of their properties are simple data types, but I'd like a few of these members to actually contain simple scripts that define, for example, what an item does when it's used. The scripts will be extremely simple, since all fundamental game actions will be exposed through an API from Java; the Lua is simply responsible for stringing a couple of these basic functions together and setting arguments.

    The question is largely about the best way to store a reference to a specific Lua function as a member of a Java class. I understand that if I store the Lua code as a string and call lua_dostring, Lua will compile the code fresh every time it's called. So the function needs to be defined somehow, and a reference to this specific function wrapped in a Java function object.

    One possibility that I've considered is, during the data loading process, when the loader encounters a script definition in a data file, it extracts this string, decorates the function name using the associated object's unique ID, calls lua_dostring on the string containing a full function definition, and then wraps the generated function name in a Java function object. A function declared in a script run with lua_dostring should still be added to the global function table, correct?

    I'm just wondering if there's a better way of going about this. I admit that my knowledge of Lua at this point is rather superficial and theoretical, so it's possible that I'm overlooking something obvious.
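
    To make that concrete, here's roughly the shape I have in mind, written as a Python sketch only because it's quick to put down (the real thing would go through the Lua C API / lua_dostring, and every name below is made up): compile each script once at load time, and store the resulting callable keyed by the object's unique ID.

        # Hypothetical sketch of "compile the script once, keep a callable per object".
        # In the real design the strings would be Lua chunks defined via lua_dostring
        # and looked up in the globals table; Python is used here only to show the flow.

        ITEM_DATA = {
            "potion_small": "heal(target, 25)",
            "scroll_fire":  "damage(target, 40); consume(item)",
        }

        API = {  # the fundamental game actions exposed to scripts
            "heal":    lambda target, n: print(f"{target} healed {n}"),
            "damage":  lambda target, n: print(f"{target} damaged {n}"),
            "consume": lambda item: print(f"{item} consumed"),
        }

        def load_scripts(data):
            """Compile each script once and wrap it in a callable keyed by object ID."""
            compiled = {}
            for object_id, source in data.items():
                code = compile(source, "<script:%s>" % object_id, "exec")
                def on_use(target, item, _code=code):
                    exec(_code, dict(API), {"target": target, "item": item})
                compiled[object_id] = on_use   # this is the "function object" member
            return compiled

        scripts = load_scripts(ITEM_DATA)
        scripts["potion_small"](target="hero", item="potion_small")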

    Read the article

  • Pass a data.frame column name to a function

    - by Kevin Middleton
    I'm trying to write a function to accept a data.frame (x) and a column from it. The function performs some calculations on x and later returns another data.frame. I'm stuck on the best-practices method to pass the column name to the function.

    The two minimal examples fun1 and fun2 below produce the desired result, being able to perform operations on x$column, using max() as an example. However, both rely on the seemingly (at least to me) inelegant (1) call to substitute() and possibly eval() and (2) the need to pass the column name as a character vector.

        fun1 <- function(x, column){
          do.call("max", list(substitute(x[a], list(a = column))))
        }

        fun2 <- function(x, column){
          max(eval((substitute(x[a], list(a = column)))))
        }

        df <- data.frame(A = 1:20, B = rnorm(10))
        fun1(df, "B")
        fun2(df, "B")

    I would like to be able to call the function as fun(df, B), for example.

    Other options I have considered but have not tried:
    - Pass column as an integer of the column number. I think this would avoid substitute(). Ideally, the function could accept either.
    - with(x, get(column)), but, even if it works, I think this would still require substitute().
    - Make use of formula() and match.call(), neither of which I have much experience with.

    Subquestion: Is do.call() preferred over eval()?

    Thanks, Kevin

    Read the article

  • Generating JavaScript in C# and subsequent testing

    - by Codebrain
    We are currently developing an ASP.NET MVC application which makes heavy use of attribute-based metadata to drive the generation of JavaScript. Below is a sample of the type of method we are writing:

        string GetJavascript<T>(string javascriptPresentationFunctionName, string inputId, T model)
        {
            return @"function updateFormInputs(value){
                $('#" + inputId + @"_SelectedItemState').val(value);
                $('#" + inputId + @"_Presentation').val(value);
            }
            function clearInputs(){
                " + helper.ClearHiddenInputs<T>(model) + @"
                updateFormInputs('');
            }
            function handleJson(json){
                clearInputs();
                " + helper.UpdateHiddenInputsWithJson<T>("json", model) + @"
                updateFormInputs(" + javascriptPresentationFunctionName + @"());
                " + model.GetCallBackFunctionForJavascript("json") + @"
            }";
        }

    This method generates some boilerplate and hands off to various other methods which return strings. The whole lot is then returned as a string and written to the output. The questions I have are:

    1) Is there a nicer way to do this other than using large string blocks? We've considered using a StringBuilder or the Response stream, but it seems quite 'noisy'. Using string.Format starts to become difficult to comprehend.
    2) How would you go about unit testing this code? It seems a little amateur just doing a string comparison looking for particular output in the string.
    3) What about actually testing the eventual JavaScript output?

    Thanks for your input!

    Read the article

  • PHP Post Count in Forum

    - by Chris
    I'm currently designing a forum application. I considered using a premade one but decided against it, as it's useful for me to learn some of the techniques. So I've written a fairly full-featured forum... great.

    One of the problems I want to solve is to include user data for each post. At the minute the post table includes the poster ID (obviously), and I added the poster's username at a later date so I didn't have to query the user DB for X number of posts in a thread. However, it's become apparent I now want to do this: usernames don't need to update retrospectively, but avatars, sigs, and especially post counts need to update actively, so data in some form needs keeping up to date somewhere.

    What would be a good way of implementing this? I obviously don't want to include any more user data on the posts DB table than necessary, but I'm struggling to find an easy way to do this short of querying the DB for each post in a thread, which is potentially going to create a lot of traffic. How have other people solved this? I've been examining the code of some other open source apps but I can't find what I'm looking for.

    Is it possible to select multiple records in one query? In which case I could build an array dynamically on each page request (e.g. 'SQL blah blah' then a foreach loop to insert the IDs). Could I join the tables each time? Do I submit a query for each post? Hmm.

    Read the article

  • Final classes in Python 3.x- something Guido isn't telling me?

    - by GlenCrawford
    This question is built on top of many assumptions. If one assumption is wrong, then the whole thing falls over. I'm still relatively new to Python and have just entered the curious/exploratory phase.

    It is my understanding that Python does not support the creation of classes that cannot be subclassed (final classes). However, it seems to me that the bool class in Python cannot be subclassed. This makes sense when the intent of the bool class is considered (because bool is only supposed to have two values: true and false), and I'm happy with that. What I want to know is how this class was marked as final. So my question is: how exactly did Guido manage to prevent subclassing of bool?

        >>> class TestClass(bool): pass

        Traceback (most recent call last):
          File "<pyshell#2>", line 1, in <module>
            class TestClass(bool):
        TypeError: type 'bool' is not an acceptable base type

    Related question: http://stackoverflow.com/questions/2172189/why-i-cant-extend-bool-in-python
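
    For comparison, the closest I can get in pure Python is a metaclass that refuses to be used as a base, sketched below. I don't think this is what bool actually does (bool is a built-in type implemented in C, and C-level types can't be subclassed unless they opt in via the Py_TPFLAGS_BASETYPE flag), so treat it only as an imitation of the behaviour I'm asking about, not the mechanism:

        # A metaclass whose instances (classes) are final. Only an imitation: the
        # real bool is a C type that does not set Py_TPFLAGS_BASETYPE, which is
        # what produces the "not an acceptable base type" error above.
        class Final(type):
            def __new__(mcls, name, bases, namespace):
                for base in bases:
                    if isinstance(base, Final):
                        raise TypeError(
                            "type '%s' is not an acceptable base type" % base.__name__)
                return super().__new__(mcls, name, bases, namespace)

        class PseudoBool(metaclass=Final):
            pass

        try:
            class Broken(PseudoBool):   # behaves like subclassing bool
                pass
        except TypeError as exc:
            print(exc)                  # type 'PseudoBool' is not an acceptable base type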

    Read the article

  • php user authentication libraries / frameworks ... what are the options?

    - by es11
    I am using PHP and the CodeIgniter framework for a project I am working on, and require a user login/authentication system. For now I'd rather not use SSL (it might be overkill, and the fact that I am using shared hosting discourages this). I have considered using OpenID but decided that since my target audience is generally not technical, it might scare users away (not to mention that it requires mirroring of login information etc.).

    I know that I could write hash-based authentication (such as sha1) since there is no sensitive data being passed (I'd compare the level of sensitivity to that of Stack Overflow). That being said, before making a custom solution, it would be nice to know if there are any good libraries or packages out there that you have used to provide semi-secure authentication. I am new to CodeIgniter, but something that integrates well with it would be preferable. Any ideas? (I'm open to criticism on my approach and open to suggestions as to why I might be crazy not to just use SSL.) Thanks in advance.

    Update: I've looked into some of the suggestions. I am curious to try out zend-auth since it seems well supported and well built. Does anyone have experience with using zend-auth in CodeIgniter (is it too bulky?), and do you have a good reference on integrating it with CI? I do not need any complex authentication schemes, just a simple login/logout/password-management authorization system. Also, dx_auth seems interesting, but I am worried that it is too buggy. Has anybody else had success with it?

    I realized that I would also like to manage guest users (i.e. users that do not login/register) in a similar way to Stack Overflow, so any suggestions that have this functionality would be great.

    Read the article

  • Best way to do interprocess communication on Mac OS X

    - by jbrennan
    I'm looking at building a Cocoa application on the Mac with a back-end daemon process (really just a mostly-headless Cocoa app, probably), along with 0 or more "client" applications running locally (although if possible I'd like to support remote clients as well; the remote clients would only ever be other Macs or iPhone OS devices).

    The data being communicated will be fairly trivial, mostly just text and commands (which I guess can be represented as text anyway), and maybe the occasional small file (an image possibly).

    I've looked at a few methods for doing this but I'm not sure which is "best" for the task at hand. Things I've considered:

    - Reading and writing to a file (…yes): very basic but not very scalable.
    - Pure sockets (I have no experience with sockets but I seem to think I can use them to send data locally and over a network), though it seems cumbersome if doing everything in Cocoa.
    - Distributed Objects: seems rather inelegant for a task like this.
    - NSConnection: I can't really figure out what this class even does, but I've read of it in some IPC search results.

    I'm sure there are things I'm missing, but I was surprised to find a lack of resources on this topic.

    Read the article

  • Web application date time localization best practice at 201x

    - by Hieu Lam
    Hi all, I have worked on various web projects where correct date/time localization was not done or considered thoroughly, so I want to ask about this very typical problem here and hear comments from experts on it.

    1. What is the correct strategy for storing a date/time value from client to server? As I understand, because of locale and timezone we have to do a conversion. I have heard about GMT and UTC time, and after doing some searching it seems that UTC is more precise, so we convert from client time to UTC+0 when saving, and when we read the value from the server back to the client, we convert from server time back to client time again? However, I see that some websites have at the bottom the sentence "All times are in UTC", "All times are in GMT", or "All times are in your local time". So maybe not all sites do the conversion back and forth, and in that case the user has to do the date/time conversion manually?

    2. How do we display the date/time conveniently to the user based on his locale and region, and how do we provide personalization of date/time values? At one time I depended on VBScript to do the display, and the format was read from the Windows regional and format settings automatically. But without VBScript, how can we determine a date/time pattern for a user of a specific locale? Do we have to store a mapping between a locale and a pattern somewhere and do the conversion on the server side?

    3. Although date/time conversion is needed in most cases, there are situations where only the date matters. For example, if my birthday is 2 Feb 1980, it should be the same for all locales and no conversion should be done. How can we address this issue?
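
    For question 1, the pattern I'm imagining is "store UTC, convert at the edges", sketched here in Python with the standard zoneinfo module purely as an illustration (the timezone names and values are just examples, not our actual stack):

        from datetime import datetime, timezone
        from zoneinfo import ZoneInfo  # Python 3.9+

        def to_storage(local_dt: datetime) -> datetime:
            """Normalize an aware client datetime to UTC before persisting it."""
            return local_dt.astimezone(timezone.utc)

        def to_display(stored_utc: datetime, user_tz: str) -> datetime:
            """Convert the stored UTC value to the user's timezone for rendering."""
            return stored_utc.astimezone(ZoneInfo(user_tz))

        # A user in Ho Chi Minh City submits 2010-05-01 09:30 local time.
        submitted = datetime(2010, 5, 1, 9, 30, tzinfo=ZoneInfo("Asia/Ho_Chi_Minh"))
        stored = to_storage(submitted)              # 2010-05-01 02:30+00:00
        shown = to_display(stored, "Europe/Paris")  # 2010-05-01 04:30+02:00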

    Read the article

  • Internal "Tee" setup

    - by RadlyEel
    I have inherited some really old VC6.0 code that I am upgrading to VS2008 for building a 64-bit app. One required feature that was implemented long, long ago is overriding std::cout so its output goes simultaneously to a console window and to a file. The implementation depended on the then-current VC98 library implementation of ostream and, of course, is now irretrievably broken with VS2008.

    It would be reasonable to accumulate all the output until program termination time and then dump it to a file. I got part of the way home by using freopen(), setvbuf(), and ios::sync_with_stdio(), but to my dismay, the internal library does not treat its buffer as a ring buffer; instead, when it flushes to the output device it restarts at the beginning, so every flush wipes out all my accumulated output.

    Converting to a more standard logging function is not desirable, as there are over 1600 usages of "std::cout <<" scattered throughout almost 60 files. I have considered overriding ostream's operator<< function, but I'm not sure if that will cover me, since there are global operator<< functions that can't be overridden. (Or can they?)

    Any ideas on how to accomplish this?

    Read the article

  • How do I write a J2EE/EJB Singleton?

    - by Bears will eat you
    A day ago my application was one EAR, containing one WAR, one EJB JAR, and a couple of utility JAR files. I had a POJO singleton class in one of those utility files, it worked, and all was well with the world:

        EAR
        |--- WAR
        |--- EJB JAR
        |--- Util 1 JAR
        |--- Util 2 JAR
        |--- etc.

    Then I created a second WAR and found out (the hard way) that each WAR has its own ClassLoader, so each WAR sees a different singleton, and things break down from there. This is not so good.

        EAR
        |--- WAR 1
        |--- WAR 2
        |--- EJB JAR
        |--- Util 1 JAR
        |--- Util 2 JAR
        |--- etc.

    So, I'm looking for a way to create a Java singleton object that will work across WARs (across ClassLoaders?). The @Singleton EJB annotation seemed pretty promising until I found that JBoss 5.1 doesn't seem to support that annotation (which was added as part of EJB 3.1). Did I miss something - can I use @Singleton with JBoss 5.1? Upgrading to JBoss AS 6 is not an option right now.

    Alternately, I'd be just as happy to not have to use EJB to implement my singleton. What else can I do to solve this problem? Basically, I need a semi-application-wide* hook into a whole bunch of other objects, like various cached data and app config info. As a last resort, I've already considered merging my two WARs into one, but that would be pretty hellish.

    *Meaning: available basically anywhere above a certain layer; for now, mostly in my WARs - the View and Controller (in a loose sense).

    Read the article

  • Maximizing the number of true concurrent / parrallel http requests in Silverlight

    - by Clems
    Hi all. I'm using SL 4 beta and my app needs to make a lot of small HTTP requests to the server. I believe that when exceeding the number of allowed concurrent requests, the subsequent requests are put in a queue. I am also aware that SL 4 has both an HTTP browser stack and an HTTP client stack, each with a different limit on the number of concurrent requests. Let's call those limits MAX_BROWSER and MAX_CLIENT.

    Also, I think I read somewhere that the number of concurrent requests is limited per domain, not overall, but I'm not sure if this applies to the HTTP client stack as well. That would mean that you CAN have MAX_BROWSER requests to domain1.com AND MAX_BROWSER requests to domain2.com at the same time. And I even believe that subdomains are considered different, so you can also have MAX_BROWSER requests to domain1.com AND MAX_BROWSER requests to sub.domain1.com at the same time. I have ownership of the services and domain names, so I could easily set up subdomains for my services.

    Given those considerations I'm trying to maximize the number of concurrent HTTP requests to my server. Here are a few questions:

    - Is it possible to use both stacks at the same time?
    - Is the subdomain/domain story true for both stacks? One? Neither?

    If so, that would mean that I could potentially have a number of concurrent requests equal to (MAX_BROWSER + MAX_CLIENT) * NUMBER_OF_DOMAINS, which would be fairly good. Is this correct? I'm kind of sharing my morning thoughts here, hoping somebody has experimented with these things. Thank you.

    Read the article

  • error detection/correction/recovery in serial protocols

    - by Jason S
    I have some designing to do for a serial protocol and am running into some questions that I figure must have been considered elsewhere. So I'm wondering if there are some recommendations for best practices in designing serial protocols. (Please either state a fact that is easily verifiable, or cite a reputable source if you make a claim.) General recommendations for websites/books are also welcome.

    In particular I have to deal with issues like:

    - parsing a stream of bytes into packets
    - verifying a packet is correct (easy with a CRC, for instance)
    - identifying reasonable types of errors that can occur (e.g. in a point-to-point serial stream, sporadic single-bit errors and dropped series of bytes are both likely, but extra phantom bytes are unlikely; whereas with a record stored in flash memory or on a disk drive the types of errors that predominate are different)
    - error correction or recovery (if I detect an error in a packet, can I correct it? If not, can I resync to the boundary of the next packet?)
    - how to make variable-length packets robust to error correction / recovery

    Any suggestions?
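
    To make the kind of thing I'm asking about concrete, here is a rough sketch (in Python, just because it's compact; the byte layout, sync byte value, and CRC choice are all invented for illustration) of a framed packet with a CRC and a parser that hunts for the next sync byte after a bad packet:

        import struct, zlib

        SYNC = 0x7E  # arbitrary start-of-packet marker for this sketch

        def frame(payload: bytes) -> bytes:
            """SYNC | 1-byte length | payload | CRC32 of (length + payload)."""
            body = struct.pack("B", len(payload)) + payload
            crc = zlib.crc32(body) & 0xFFFFFFFF
            return bytes([SYNC]) + body + struct.pack(">I", crc)

        def parse(stream: bytes):
            """Yield payloads; skip garbage and resync at the next plausible SYNC."""
            i = 0
            while i + 6 <= len(stream):          # smallest packet: sync+len+crc = 6 bytes
                if stream[i] != SYNC:
                    i += 1                       # hunt for the start of a packet
                    continue
                length = stream[i + 1]
                end = i + 2 + length + 4
                if end <= len(stream):
                    body = stream[i + 1:i + 2 + length]
                    (crc,) = struct.unpack(">I", stream[end - 4:end])
                    if zlib.crc32(body) & 0xFFFFFFFF == crc:
                        yield body[1:]           # strip the length byte
                        i = end
                        continue
                i += 1  # bad CRC or truncated: assume a false SYNC and keep scanning
                # (a real incremental parser would buffer a truncated tail instead)

        good = frame(b"hello") + frame(b"world")
        corrupted = good[:3] + b"\x00" + good[4:]   # corrupt one byte inside "hello"
        print(list(parse(corrupted)))               # [b'world']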

    Read the article

  • How can "today's date" be varied for unit testing purposes?

    - by ck
    I use VS2008 targeting the .NET 2.0 Framework, and, just in case, no I can't change this :)

    I have a DateCalculator class. Its method GetNextExpirationDate attempts to determine the next expiration date, internally using DateTime.Today as a baseline date. As I was writing unit tests, I realized that I wanted to test GetNextExpirationDate for different 'today' dates. What's the best way to do this? Here are some alternatives I've considered:

    - Expose a property/overloaded method with an argument baselineDate and only use it from the unit test. In actual client code, disregard the property/overloaded method in favour of the method that defaults baselineDate to DateTime.Today. I'm reluctant to do this as it makes the public interface of the DateCalculator class awkward.
    - Create a protected field called baselineDate that is internally set to DateTime.Today. When testing, derive a DateCalculatorForTesting from DateCalculator and set baselineDate via the constructor. It keeps the public interface clean, but still isn't great - baselineDate was made protected and a derived class is required, both solely for testing.
    - Use extension methods. I tried this after adding the ExtensionAttribute, then realized it wouldn't work because extension methods can't access private/protected variables. I initially thought this was really quite an elegant solution. :(

    I'd be interested in hearing what others think.
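
    The underlying idea I keep circling back to is "inject the clock", shown below in Python only because it's short; it's the shape of the approach (the baseline date is a supplied dependency rather than a hard-coded call to the system clock), not the .NET-specific code, and all names are made up:

        from datetime import date, timedelta

        class DateCalculator:
            def __init__(self, today=date.today):
                self._today = today          # any zero-argument callable returning a date

            def next_expiration_date(self, period_days=30):
                return self._today() + timedelta(days=period_days)

        # Production: uses the real clock.
        DateCalculator().next_expiration_date()

        # Unit test: pin "today" to a known value.
        calc = DateCalculator(today=lambda: date(2010, 2, 27))
        assert calc.next_expiration_date() == date(2010, 3, 29)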

    Read the article

  • Practical rules for premature optimization

    - by DougW
    It seems that the phrase "premature optimization" is the buzzword of the day. For some reason, iPhone programmers in particular seem to think of avoiding premature optimization as a pro-active goal, rather than the natural result of simply avoiding distraction. The problem is, the term is beginning to be applied more and more to cases that are completely inappropriate.

    For example, I've seen a growing number of people say not to worry about the complexity of an algorithm, because that's premature optimization (e.g. http://stackoverflow.com/questions/2190275/help-sorting-an-nsarray-across-two-properties-with-nssortdescriptor/2191720#2191720). Frankly, I think this is just laziness, and appalling to disciplined computer science. But it has occurred to me that maybe considering the complexity and performance of algorithms is going the way of assembly loop unrolling and other optimization techniques that are now considered unnecessary.

    What do you think? Are we at the point now where deciding between an O(n^n) and an O(n!) complexity algorithm is irrelevant? What about O(n) vs O(n*n)? What do you consider "premature optimization"? What practical rules do you use to consciously or unconsciously avoid it? This is a bit vague, but I'm curious to hear other peoples' opinions on the topic.
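
    To ground the O(n) vs O(n*n) part of the question, this is the sort of throwaway measurement I have in mind (Python, one machine, exact numbers will obviously vary):

        # Membership tests against a list scan it (O(n) each, O(n*n) overall);
        # against a set they hash (O(1) each, O(n) overall).
        import timeit

        n = 20_000
        data_list = list(range(n))
        data_set = set(data_list)

        quadratic = timeit.timeit(
            "[x in data_list for x in range(n)]", globals=globals(), number=1)
        linear = timeit.timeit(
            "[x in data_set for x in range(n)]", globals=globals(), number=1)

        print(f"list (O(n*n) overall): {quadratic:.3f}s")
        print(f"set  (O(n) overall):   {linear:.3f}s")
        # Roughly seconds vs milliseconds on one machine, and the gap keeps
        # widening as n grows, which is the point.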

    Read the article

  • MKL Accelerated Math Libraries for Java...

    - by Kaopua
    I've looked at the related threads on StackOverflow and Googled without much luck. I'm also very new to Java (I'm coming from a C# and .NET background), so please bear with me. There is so much available in the Java world that it's pretty overwhelming.

    I'm starting on a new Java-on-Linux project that requires some heavy and highly repetitious numerical calculations (i.e. statistics, FFT, linear algebra, matrices, etc.). So maximizing the performance of the mathematical operations is a requirement, as is ensuring the math is correct. Hence I have an interest in finding a Java library that perhaps leverages native acceleration such as MKL, and is proven (so commercial options are definitely a possibility here).

    In the .NET space there are highly optimized, MKL-accelerated commercial mathematical libraries such as Centerspace NMath and Extreme Optimization. Is there anything comparable in Java? Most of the math libraries I have found for Java either do not seem to be actively maintained (such as Colt) or do not appear to leverage MKL or other native acceleration (such as Apache Commons Math).

    I have considered trying to leverage MKL directly from Java myself (e.g. via JNI), but being new to Java (let alone interoperating between Java and native libraries), it seemed smarter to find a Java library that has already done this correctly, efficiently, and provenly. Again, I apologize if I am mistaken or misguided (even regarding any libraries I've mentioned) and for my ignorance of the Java offerings. It's a whole new world for me coming from the heavily commercialized Microsoft stack, so I could easily be mistaken on where to look and regarding the Java libraries I've mentioned. I would greatly appreciate any help or advice.

    Read the article

  • Need help finding a good curriculum/methodology for self-teaching to program from scratch

    - by BrotherGA2
    My friend and I have both dedicated ourselves to learning the essentials of programming by June of this year, starting from nearly no programming experience. I have done some research and have come to the conclusion that using the Python language will be the best for us, but I am open to suggestions with good reasoning behind them.

    My motives for learning programming are:
    - a potential career path
    - to be able to create programs that can solve problems and entertain, i.e. useful applications and games

    Online college lectures plus a book (which I am willing to purchase) sound like a good combination, but I do not know which would be most suitable for me.

    tl;dr: What I would like to find from the excellent people here is the following: a good, potentially best, programming course and/or book that is well structured and uses good pedagogy, so that a person dedicated to learning programming may do so by following its curriculum (or use it to develop a curriculum) over the course of a few months. Thanks!

    (I apologize if this type of question is not considered proper etiquette, but I haven't found a consensus on this, and would like some guidance beyond the research I've already done.)

    Read the article

  • Culture Sensitive GetHashCode

    - by user114928
    Hi, I'm writing a C# application that will process some text and provide basic query functions. In order to ensure the best possible support for other languages, I am allowing the users of the application to specify the System.Globalization.CultureInfo (via the "en-GB" style code) and also the full range of collation options using the System.Globalization.CompareOptions flags enum.

    For regular string comparison I'm then using a combination of:
    a) the String.Compare overload that accepts the culture and options, and
    b) for some bulk processes, caching the byte data (KeyData) from CompareInfo.GetSortKey (the overload that accepts the options) and using a byte-by-byte comparison of the KeyData.

    This seemed fine (although please comment if you think these two methods shouldn't be mixed), but then I had reason to use the HashSet<T> class, which only has an overload for IEqualityComparer<T>. MS documentation seems to suggest that I should use StringComparer (which implements both IEqualityComparer<string> and IComparer<string>), but this only seems to support the "IgnoreCase" option from CompareOptions and not "IgnoreKanaType", "IgnoreSymbols", "IgnoreWidth", etc. I'm assuming that a StringComparer that ignores these other options could produce different hash codes for two strings that might be considered the same using my other comparison options. I'd therefore get incorrect results from my application.

    My only thought at the moment is to create my own IEqualityComparer<string> that generates a hash code from the SortKey.KeyData and compares equality using the String.Compare overload. Any suggestions?

    Read the article

  • First site going live real soon. Last minute questions

    - by user156814
    I am really close to finishing up a project that I've been working on. I have done websites before, but never on my own and never a site that involved user-generated data. I have been reading up on things that should be considered before you go live, and I have some questions.

    1) Staging (deploying updates without affecting users). I'm not really sure what this would entail, since I'm sure that any type of update would affect users in some way. Does this mean some type of temporary downtime for every update? Can somebody please explain this, and a solution to it as well?

    2) Limits. I'm using the Kohana framework and I'm using the Auth module for logging users in. I was wondering if this already has some type of limit (on login attempts) built in, and if not, what would be the best way to implement this (save attempts in the database, a cookie, etc.)? If this is not what's meant by limits, can somebody elaborate?

    3) Caching. Like I said, this is my first site built around user content. Considering that, should I cache it?

    4) Back-ups. How often should I back up my (MySQL) database, and how should I back it up (MySQL export?)?

    The site is currently up, yet not finished, if anybody wants to look at it and see if something pops out that should be looked at/fixed: Clashing Thoughts.

    If there is anything else I overlooked that's not already in the list linked to above, please let me know. Thanks.

    Read the article

  • ways to avoid global temp tables in oracle

    - by Omnipresent
    We just converted our SQL Server stored procedures to Oracle procedures. The SQL Server SPs were highly dependent on session tables (INSERT INTO #table1...); these tables got converted to global temporary tables in Oracle. We ended up with around 500 GTTs for our 400 SPs.

    Now we are finding out that working with GTTs in Oracle is considered a last option because of performance and other issues. What other alternatives are there? Collections? Cursors?

    Our typical use of GTTs is like so:

    Insert into the GTT:

        INSERT INTO some_gtt_1 (column_a, column_b, column_c)
        (SELECT someA, someB, someC
           FROM TABLE_A
          WHERE condition_1 = 'YN756'
            AND type_cd = 'P'
            AND TO_NUMBER(TO_CHAR(m_date, 'MM')) = '12'
            AND (lname LIKE (v_LnameUpper || '%') OR lname LIKE (v_searchLnameLower || '%'))
            AND (e_flag = 'Y' OR it_flag = 'Y' OR fit_flag = 'Y'));

    Update the GTT:

        UPDATE some_gtt_1 a
           SET column_a = (SELECT b.data_a
                             FROM some_table_b b
                            WHERE a.column_b = b.data_b
                              AND a.column_c = 'C')
         WHERE column_a IS NULL OR column_a = ' ';

    ...and later on get the data out of the GTT. These are just sample queries; in actuality the queries are really complex, with lots of joins and subqueries.

    I have a three-part question:

    1. Can someone show how to transform the above sample queries to collections and/or cursors?
    2. Since with GTTs you can work natively with SQL, why move away from them? Are they really that bad?
    3. What should the guidelines be on when to use and when to avoid GTTs?

    Read the article

  • Making GUI applications on Linux/Windows. What languages/tools to use?

    - by Javed Ahamed
    My student group and I are trying to continue working over the summer on a project we worked on this semester, to turn it into a professional, deployable app. We originally did it in Adobe AIR, but it seems now that the computers this program will be running on will be very slow, maybe 600 MHz and 128-256 MB RAM, so Flash just isn't going to cut it. It is basically a health diagnosis application that we will be shipping out to impoverished countries.

    Now comes the real question: we are wondering what language to rebuild our application in. It has to have a good GUI builder associated with it, like the Adobe Flex/AIR GUI builder or Visual Studio's GUI builder, and the application should run on Linux primarily; if it can run on Windows, that's just a plus. We are all students, without really any outside help, so whatever we decide to build this in, there must be ample documentation available when we hit problems.

    Some things we have considered so far are Python and Glade, or C# and MonoDevelop, but again we really are not experts on any of this, which is why I am asking for help; I would rather spend the time now choosing the right tools instead of wasting time down the line when we hit a roadblock. Thanks in advance!

    Read the article

  • How to preserve order of temp table rows when inner joined with another table?

    - by Triynko
    Does an SQL Server "join" preserve any kind of row order consistently (i.e. that of the left table or that of the right table)?

    Pseudocode:

        create table #p (personid bigint);
        foreach (id in personid_list)
            insert into #p (personid) values (id);
        select id from users inner join #p on users.personid = #p.id

    Suppose I have a list of IDs that correspond to person entries. Each of those IDs may correspond to zero or more user accounts (since each person can have multiple accounts). To quickly select columns from the users table, I populate a temp table with person IDs, then inner join it with the users table. I'm looking for an efficient way to ensure that the order of the results in the join matches the order of the IDs as they were inserted into the temp table, so that the user list that's returned is in the same order as the person list as it was entered.

    I've considered the following alternatives:

    1. Using "#p inner join users", in case the left table's order is preserved.
    2. Using "#p left join users where id is not null", in case a left join preserves order and the inner join doesn't.
    3. Using "create table (rownum int, personid bigint)", inserting an incrementing row number as the temp table is populated, so the results can be ordered by rownum in the join.
    4. Using an SQL Server equivalent of the "order by order of [tablename]" clause available in DB2.

    I'm currently using option 3, and it works... but I hate the idea of using an order by clause for something that's already ordered. I just don't know if the temp table preserves the order in which the rows were inserted, or how the join operates and what order the results come out in.

    Read the article

  • What is the performance penalty of XML data type in SQL Server when compared to NVARCHAR(MAX)?

    - by Piotr Owsiak
    I have a DB that is going to keep log entries. One of the columns in the log table contains serialized (to XML) objects, and a guy on my team proposed to go with the XML data type rather than NVARCHAR(MAX). This table will have logs kept "forever" (archiving some very old entries may be considered in the future).

    I'm a little worried about the CPU overhead, but I'm even more worried that the DB can grow faster (FoxyBOA from the referenced question got a 70% bigger DB when using XML). I have read this question http://stackoverflow.com/questions/514827/microsoft-sql-server-2005-2008-xml-vs-text-varchar-data-type and it gave me some ideas, but I am particularly interested in clarification on whether the DB size increases or decreases. Can you please share your insight/experiences in that matter?

    BTW, I don't currently have any need to depend on XML features within SQL Server (there's nearly zero advantage to me in this specific case). Occasionally log entries will be extracted, but I prefer to handle the XML using .NET (either by writing a small client or using a function defined in a .NET assembly).

    Read the article

  • Legitimate uses of the Function constructor

    - by Marcel Korpel
    As repeatedly said, it is considered bad practice to use the Function constructor (also see the ECMAScript Language Specification, 5th edition, § 15.3.2.1):

        new Function ([arg1[, arg2[, … argN]],] functionBody)

    (where all arguments are strings containing argument names and the last (or only) string contains the function body).

    To recapitulate, it is said to be slow, as explained by the Opera team: "Each time […] the Function constructor is called on a string representing source code, the script engine must start the machinery that converts the source code to executable code. This is usually expensive for performance – easily a hundred times more expensive than a simple function call, for example." (Mark ‘Tarquin’ Wilton-Jones) Though it's not that bad, according to this post on MDC (I didn't test this myself using the current version of Firefox, though).

    Crockford adds that "[t]he quoting conventions of the language make it very difficult to correctly express a function body as a string. In the string form, early error checking cannot be done. […] And it is wasteful of memory because each function requires its own independent implementation." Another difference is that "a function defined by a Function constructor does not inherit any scope other than the global scope (which all functions inherit)" (MDC). Apart from this, you have to be attentive to avoid injection of malicious code when you create a new Function using dynamic contents.

    Lots of disadvantages, and it is intelligible that ECMAScript 5 discourages the use of the Function constructor by throwing an exception when using it in strict mode (§ 13.1). That said, T.J. Crowder says in an answer that "[t]here's almost never any need for the similar […] new Function(...), either, again except for some advanced edge cases."

    So, now I am wondering: what are these "advanced edge cases"? Are there legitimate uses of the Function constructor?

    Read the article

  • Passing IDisposable objects through constructor chains

    - by Matt Enright
    I've got a small hierarchy of objects that in general gets constructed from data in a Stream, but for some particular subclasses can be synthesized from a simpler argument list. In chaining the constructors from the subclasses, I'm running into an issue with ensuring the disposal of the synthesized stream that the base class constructor needs. It has not escaped me that the use of IDisposable objects this way is possibly just dirty pool (plz advise?) for reasons I've not considered, but, this issue aside, it seems fairly straightforward (and good encapsulation). Code:

        abstract class Node {
            protected Node (Stream raw)
            {
                // calculate/generate some base class properties
            }
        }

        class FilesystemNode : Node {
            public FilesystemNode (FileStream fs)
                : base (fs)
            {
                // all good here; disposing of fs not our responsibility
            }
        }

        class CompositeNode : Node {
            public CompositeNode (IEnumerable some_stuff)
                : base (GenerateRaw (some_stuff))
            {
                // rogue stream from GenerateRaw now loose in the wild!
            }

            static Stream GenerateRaw (IEnumerable some_stuff)
            {
                var content = new MemoryStream ();
                // molest elements of some_stuff into proper format, write to stream
                content.Seek (0, SeekOrigin.Begin);
                return content;
            }
        }

    I realize that not disposing of a MemoryStream is not exactly a world-stopping case of bad CLR citizenship, but it still gives me the heebie-jeebies (not to mention that I may not always be using a MemoryStream for other subtypes). It's not in scope, so I can't explicitly Dispose() it later in the constructor, and adding a using statement in GenerateRaw() is self-defeating since I need the stream returned. Is there a better way to do this?

    Preemptive strikes:
    - Yes, the properties calculated in the Node constructor should be part of the base class, and should not be calculated by (or accessible in) the subclasses.
    - I won't require that a stream be passed into CompositeNode (its format should be irrelevant to the caller).
    - The previous iteration had the value calculation in the base class as a separate protected method, which I then just called at the end of each subtype constructor, and moved the body of GenerateRaw() into a using statement in the body of the CompositeNode constructor. But the repetition of requiring that call for each constructor, and not being able to guarantee that it be run for every subtype ever (a Node is not a Node, semantically, without these properties initialized), gave me heebie-jeebies far worse than the (potential) resource leak here does.

    Read the article

  • How do I create a good evaluation function for a new board game?

    - by A. Rex
    I write programs to play board game variants sometimes. The basic strategy is standard alpha-beta pruning or similar searches, sometimes augmented by the usual approaches to endgames or openings. I've mostly played around with chess variants, so when it comes time to pick my evaluation function, I use a basic chess evaluation function.

    However, now I am writing a program to play a completely new board game. How do I choose a good or even decent evaluation function? The main challenges are that the same pieces are always on the board, so a usual material function won't change based on position, and the game has been played less than a thousand times or so, so humans don't necessarily play it well enough yet to give insight. (PS: I considered a MoGo approach, but random games aren't likely to terminate.) Any ideas?

    Game details: The game is played on a 10-by-10 board with a fixed six pieces per side. The pieces have certain movement rules and interact in certain ways, but no piece is ever captured. The goal of the game is to have enough of your pieces in certain special squares on the board. The goal of the computer program is to provide a player which is competitive with or better than current human players.
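
    The kind of thing I've been sketching so far is a hand-weighted feature sum, roughly like the toy version below (Python; the features, weights, and goal squares are invented purely to show the shape of what I mean, not a proposal for this particular game):

        # Toy static evaluation: a weighted sum of hand-picked positional features.
        # A real evaluation would also subtract the same features computed for the
        # opponent, and the weights would need tuning (by hand or by self-play).
        GOAL_SQUARES = {(9, 4), (9, 5), (8, 4), (8, 5)}   # hypothetical target squares

        WEIGHTS = {
            "pieces_on_goal":   10.0,   # pieces already on the special squares
            "distance_to_goal": -0.5,   # sum of Manhattan distances to nearest goal
            "mobility":          0.1,   # number of legal moves available
        }

        def evaluate(my_pieces, my_moves):
            """Score a position from the side-to-move's point of view."""
            features = {
                "pieces_on_goal": sum(p in GOAL_SQUARES for p in my_pieces),
                "distance_to_goal": sum(
                    min(abs(r - gr) + abs(c - gc) for gr, gc in GOAL_SQUARES)
                    for r, c in my_pieces),
                "mobility": len(my_moves),
            }
            return sum(WEIGHTS[name] * value for name, value in features.items())

        # Example: six pieces, two already home, with 14 legal moves this turn.
        pieces = [(9, 4), (8, 5), (5, 5), (4, 2), (1, 7), (0, 0)]
        print(evaluate(pieces, my_moves=range(14)))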

    Read the article
