Search Results



  • iPhone OpenGLES: Textures are consuming too much memory and the program crashes with signal "0"

    - by CustomAppsMan
    I am not sure what the problem is. My app runs fine on the simulator, but when I try to run it on the iPhone it crashes (during debugging or without) with signal "0". I am using the Texture2D.m and OpenGLES2DView.m from the examples provided by Apple. I profiled the app on the iPhone with Instruments using the Memory tracer from the Library, and when the app died the final memory consumed was about 60 MB real and 90+ MB virtual. Is there some other problem, or is the iPhone just killing the application because it has consumed too much memory? If you need any information, please say so and I will try to provide it. I am creating thousands of textures at load time, which is why the memory consumption is so high. I really can't do anything about reducing the number of pics being loaded. I was previously using just UIImage, but it was giving me really low frame rates; I read on this site that I should use OpenGL ES for higher frame rates. Also, a sub-question: is there any way to avoid using UIImage to load the PNG file before using the provided Texture class to create the texture for OpenGL ES drawing? Is there some function in OpenGL ES which will create a texture straight from a PNG file?
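
    On the sub-question: OpenGL ES itself has no file I/O, so there is no call that reads a PNG directly, but CoreGraphics can decode one without going through UIImage. A minimal sketch of that path (the helper name is made up, and error handling is omitted; assumes a current EAGL context and power-of-two image sizes):

        #include <OpenGLES/ES1/gl.h>
        #include <CoreGraphics/CoreGraphics.h>
        #include <stdlib.h>

        // Hypothetical helper: decode a PNG with CoreGraphics (no UIImage)
        // and upload it as an OpenGL ES texture.
        GLuint TextureFromPNGFile(const char *path) {
            CGDataProviderRef provider = CGDataProviderCreateWithFilename(path);
            CGImageRef image = CGImageCreateWithPNGDataProvider(
                provider, NULL, false, kCGRenderingIntentDefault);
            size_t w = CGImageGetWidth(image), h = CGImageGetHeight(image);

            // Redraw the image into a raw RGBA buffer we own.
            void *pixels = calloc(w * h, 4);
            CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
            CGContextRef ctx = CGBitmapContextCreate(pixels, w, h, 8, w * 4, rgb,
                                                     kCGImageAlphaPremultipliedLast);
            CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), image);

            GLuint tex = 0;
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)w, (GLsizei)h, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, pixels);

            CGContextRelease(ctx);
            CGColorSpaceRelease(rgb);
            CGImageRelease(image);
            CGDataProviderRelease(provider);
            free(pixels);
            return tex;
        }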

    Read the article

  • How to model in J2EE / JEE?

    - by Harry
    Let's say I have decided to go with the J(2)EE stack for my enterprise application. Now, for domain modelling (or: for designing the M of MVC), which APIs can I safely assume and use, and which should I stay away from... say, via a layer of abstraction?

    For example: should I go ahead and litter my Model with calls to the Hibernate/JPA API? Or should I build an abstraction... a persistence layer of my own, to avoid hard-coding against these two specific persistence APIs? Why I ask this: a few years ago, there was this Kodo API, which got superseded by Hibernate. If one had designed a persistence layer and coded the rest of the model against this layer (instead of littering the Model with calls to a specific vendor API), it would have allowed one to (relatively) easily switch from Kodo to Hibernate to xyz.

    Is it recommended to make aggressive use of the *QL provided by your persistence vendor in your domain model? I'm not aware of any real-world issues (like performance, scalability, portability, etc.) arising out of heavy use of an HQL-like language. Why I ask this: I would like to avoid, as much as possible, writing custom code when the same could be accomplished via a query language that is more portable than SQL.

    Sorry, but I'm a complete newbie to this area. Where could I find more info on this topic? Many thanks in advance. /HS
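
    On the abstraction question, the usual answer is a thin repository (DAO) layer: the domain model codes against an interface you own, and the vendor API is confined to one implementation class. A minimal sketch, with a hypothetical Order entity (each class would live in its own file):

        // Domain code depends only on this interface, never on JPA/Hibernate types.
        public interface OrderRepository {
            Order findById(long id);
            void save(Order order);
        }

        // The only class that touches the vendor API; swapping Kodo/Hibernate/JPA
        // means rewriting this class, not the model.
        public class JpaOrderRepository implements OrderRepository {
            private final javax.persistence.EntityManager em;

            public JpaOrderRepository(javax.persistence.EntityManager em) {
                this.em = em;
            }

            public Order findById(long id) {
                return em.find(Order.class, id);
            }

            public void save(Order order) {
                em.persist(order);
            }
        }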

    Read the article

  • Subquery making query slow

    - by Muhammad Kashif Nadeem
    Please copy and paste the following script:

        DECLARE @MainTable TABLE(MainTablePkId int)
        INSERT INTO @MainTable SELECT 1
        INSERT INTO @MainTable SELECT 2

        DECLARE @SomeTable TABLE(SomeIdPk int, MainTablePkId int, ViewedTime1 datetime)
        INSERT INTO @SomeTable SELECT 1, 1, DATEADD(dd, -10, getdate())
        INSERT INTO @SomeTable SELECT 2, 1, DATEADD(dd, -9, getdate())
        INSERT INTO @SomeTable SELECT 3, 2, DATEADD(dd, -6, getdate())

        DECLARE @SomeTableDetail TABLE(DetailIdPk int, SomeIdPk int, Viewed int, ViewedTimeDetail datetime)
        INSERT INTO @SomeTableDetail SELECT 1, 1, 1, DATEADD(dd, -7, getdate())
        INSERT INTO @SomeTableDetail SELECT 2, 2, NULL, DATEADD(dd, -6, getdate())
        INSERT INTO @SomeTableDetail SELECT 3, 2, 2, DATEADD(dd, -8, getdate())
        INSERT INTO @SomeTableDetail SELECT 4, 3, 1, DATEADD(dd, -6, getdate())

        SELECT m.MainTablePkId,
               (SELECT COUNT(Viewed) FROM @SomeTableDetail),
               (SELECT TOP 1 s2.ViewedTimeDetail
                FROM @SomeTableDetail s2
                INNER JOIN @SomeTable s1 ON s2.SomeIdPk = s1.SomeIdPk
                WHERE s1.MainTablePkId = m.MainTablePkId)
        FROM @MainTable m

    The script above is just a sample. I have a long list of columns in the SELECT and around 12+ columns in the subqueries, and my FROM clause joins around 8 tables. To fetch 2,000 records the full query takes 21 seconds; if I remove the subqueries it takes just 4 seconds. I have tried to optimize the query using the Database Engine Tuning Advisor, but adding its recommended indexes and statistics made the query time even worse. Note: as mentioned, this is test data to explain my question; the real query has many more tables, joins and columns, but without the subqueries its performance is fine. Any help is appreciated, thanks.
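
    One rewrite worth trying against the sample above (a sketch only; the real query would carry its own columns, and note that TOP 1 without an ORDER BY is nondeterministic, so I've added one here): hoist the uncorrelated COUNT into a single derived table and turn the correlated subquery into an OUTER APPLY, which the optimizer can plan as a join instead of a per-row lookup:

        SELECT m.MainTablePkId, d.TotalViewed, x.ViewedTimeDetail
        FROM @MainTable m
        CROSS JOIN (SELECT COUNT(Viewed) AS TotalViewed
                    FROM @SomeTableDetail) d          -- evaluated once, not per row
        OUTER APPLY (SELECT TOP 1 s2.ViewedTimeDetail
                     FROM @SomeTableDetail s2
                     INNER JOIN @SomeTable s1 ON s2.SomeIdPk = s1.SomeIdPk
                     WHERE s1.MainTablePkId = m.MainTablePkId
                     ORDER BY s2.ViewedTimeDetail DESC) x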

    Read the article

  • Should I go to school and get my degree in computer science?

    - by ryan
    I'll try and keep this short and simple. I've always enjoyed programming and I've been doing it since high school. Right after I graduated from high school (2002), I opted to skip college because I was offered a software engineer position. I quit a couple of years later to team up on various startup companies. However, most of them did not launch as well as expected. But it honestly did not matter to me, because I learned so much from that experience. So, fast-forwarding to today, now 25, I need a job due to this tough economic climate. Looking on Craigslist, a lot of the listings require computer science degrees. It's evident now that programming is what I want to do, because I never seem to get enough of it. But just the thought of having to push through 2 years without attending any real computer class for an Associate's at age 25 is very, very discouraging. And the thought of having to learn from the basics (Hello WOOOOORRLLLD) just does not seem exciting. I guess I have 3 questions to wrap this up: Should I just suck it up and go back to school while working at McDonalds at age 25? Is there a way I can just skip all the boring stuff and get tested on what I know? From your experience, how many jobs use computer science degrees as prerequisites? Or am I screwed and had better pray that my next startup will be the next big thing?

    Read the article

  • Implementing a Tagging System with PHP and MySQL. Caching help!!!

    - by Hamid Sarfraz
    With reference to this post: http://stackoverflow.com/questions/2122546/how-to-implement-tag-counting I have implemented the suggested 3-table tagging system completely. To count the number of articles per tag, I am using another column named tagArticleCount in the tag definition table (the other columns are tagId, tagText, tagUrl, tagArticleCount). If I implement real-time editing of this table, then whenever a user adds another tag to an article or deletes an existing tag, the tag_definition_table is updated to adjust the counter of the added/removed tag. This costs an extra query each time any modification is made (at the same time, the related link entry for the tag and article is deleted from the tagLinkTable). An alternative is to not allow any real-time editing of the counter, and instead use cron jobs to update the counter of each tag after a specified time period. Here comes the problem I want to discuss. This can be seen as caching the article count in the database. Can you please help me find a way to present the articles in a list when a tag is explored and the article counter for that tag is not up to date? For example:

    1. The counter shows 50 articles, but there are in fact 55 entries in the tag link table (which links tags and articles).
    2. The counter shows 50 articles, but there are in fact 45 entries in the tag link table.

    How would you handle these 2 scenarios? I am going to use APC to keep a cache of these counters; please consider that too in your solution. Also, please discuss the performance of real-time versus cron-driven counter updates.
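
    For the stale-counter problem, one option is a read-through cache: serve the APC copy when it exists, and recount from tagLinkTable (fixing the stored counter on the way) whenever it doesn't. A minimal sketch, assuming PDO and the table/column names described above:

        // Return the article count for a tag: APC first, then a recount from
        // tagLinkTable, refreshing the denormalized counter while we're at it.
        function tag_article_count(PDO $db, $tagId) {
            $key = "tag_count_$tagId";
            $count = apc_fetch($key, $hit);
            if (!$hit) {
                $stmt = $db->prepare(
                    'SELECT COUNT(*) FROM tagLinkTable WHERE tagId = ?');
                $stmt->execute(array($tagId));
                $count = (int)$stmt->fetchColumn();

                // We have the true value, so repair the cached DB column too.
                $upd = $db->prepare(
                    'UPDATE tag_definition_table SET tagArticleCount = ? WHERE tagId = ?');
                $upd->execute(array($count, $tagId));

                apc_store($key, $count, 300); // 5-minute TTL bounds the staleness
            }
            return $count;
        }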

    Read the article

  • Will TFS 2010 support non-contiguous merging?

    - by steve_d
    I know that merging non-contiguous changesets at once may not be a good idea. However, there is at least one situation in which merging non-contiguous changesets is (probably) not going to break anything: when there are no intervening changes on the affected individual files. (At least, it wouldn't break any worse than a series of cherry-picked merges, checked in each time; and at least this way you would discover breakage before checking in.) For instance, let's say you have a Main and a Development branch. They start out identical (e.g. after a release). They have two files, foo.cs and bar.cs. Alice makes a change in Development\foo.cs and checks it in as changeset #1001. Bob makes a change in Development\bar.cs and checks it in as #1002. Alice makes another change to Development\foo.cs and checks it in as #1003. Now we could in theory merge both changes #1001 and #1003 from dev to main in a single operation. If we try to merge at the branch level, dev-to-main, we will have to do it as two operations. In this simple, contrived example it's easy enough to merge the one file - but in the real world, where many files are involved, it's not so simple. Non-contiguous merging is one of the reasons given for why "merge by workitem" is not implemented in TFS.
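
    If it does end up as two operations, the two cherry-picks for the example would look roughly like this with tf.exe (server paths are illustrative):

        tf merge /recursive /version:C1001~C1001 $/Project/Development $/Project/Main
        tf merge /recursive /version:C1003~C1003 $/Project/Development $/Project/Main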

    Read the article

  • django multiprocess problem

    - by iKiR
    I have a django application running under lighttpd via FastCGI. The FCGI startup script looks like:

        python manage.py runfcgi socket=<path>/main.socket method=prefork \
            pidfile=<path>/server.pid \
            minspare=5 maxspare=10 maxchildren=10 maxrequests=500

    I use SQLite, so I have 10 processes which all work with the same DB. Next, I have 2 views:

        def view1(request):
            ...
            obj, created = MyModel.objects.get_or_create(id=1)
            obj.param1 = <some value>
            obj.save()

        def view2(request):
            ...
            obj, created = MyModel.objects.get_or_create(id=1)
            obj.param2 = <some value>
            obj.save()

    And if these views are executed in two different processes, I sometimes end up with a MyModel instance in the DB with id=1 and either param1 or param2 updated (but NOT both) - it depends on which process was first. (Of course, in real life the id changes, but sometimes 2 processes execute these two views with the same id.) The question is: what should I do to get an instance with both param1 and param2 updated? I need something for merging changes made in different processes. One option is to create an interprocess lock object, but then the views would execute in sequence rather than simultaneously, so I'm asking for help.
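
    A sketch of the usual fix for this lost-update pattern: have each view write only its own column with QuerySet.update(), which issues a single UPDATE statement instead of a read-modify-write of the whole row (new_value is a stand-in for <some value>):

        def view1(request):
            new_value = request.GET.get('v')       # stand-in for the real value
            MyModel.objects.get_or_create(id=1)    # make sure the row exists
            # Single UPDATE touching only param1; a concurrent writer of param2
            # can no longer clobber it with a stale full-row save().
            MyModel.objects.filter(id=1).update(param1=new_value)

        def view2(request):
            new_value = request.GET.get('v')
            MyModel.objects.get_or_create(id=1)
            MyModel.objects.filter(id=1).update(param2=new_value)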

    Read the article

  • Is NFS capable of preserving order of operations?

    - by JustJeff
    I have a diskless host 'A' that has a directory NFS mounted on server 'B'. A process on A writes to two files F1 and F2 in that directory, and a process on B monitors these files for changes. Assume that B polls for changes faster than A is expected to make them. Process A seeks to the head of the files, writes data, and flushes. Process B seeks to the head of the files and does reads. Are there any guarantees about how the order of the changes performed by A will be detected at B? Specifically, if A alternately writes to one file and then the other, is it reasonable to expect that B will notice alternating changes to F1 and F2? Or could B conceivably detect a series of changes on F1 and then a series on F2? I know there are a lot of assumptions embedded in the question. For instance, I am virtually certain that, even operating on just one file, if A performs 100 operations on the file, B may see a smaller number of changes that give the same result, due to NFS caching some of the actions on A before they are communicated to B. And of course there would be issues with concurrent file access even if NFS weren't involved and both the reading and the writing process were running on the same real file system. The reason I'm even putting the question up here is that it seems like, most of the time, the setup described above does detect the changes at B in the same order they are made at A, but occasionally some events come through in transposed order. So, is it worth trying to make this work? Is there some way to tune NFS to make it work, perhaps cache settings or something? Or is fine-grained behavior like this just too much to expect from NFS?
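
    On the tuning side, the client-side caches can at least be dialed down with standard nfs(5) mount options (a sketch; whether this makes the ordering fully reliable is exactly the open question):

        # On host A (the NFS client): disable attribute caching and force
        # synchronous writes, trading throughput for freshness.
        mount -t nfs -o noac,sync serverB:/export/shared /mnt/shared

        # Milder variant: keep the cache but cap its lifetime at one second.
        mount -t nfs -o actimeo=1 serverB:/export/shared /mnt/shared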

    Read the article

  • Deterministic floating point and .NET

    - by code2code
    How can I guarantee that floating point calculations in a .NET application (say in C#) always produce the same bit-exact result - especially when using different versions of .NET and running on different platforms (x86 vs x86_64)? Inaccuracies of floating point operations do not matter. In Java I'd use strictfp. In C/C++ and other low-level languages this problem is essentially solved by accessing the FPU/SSE control registers, but that's probably not possible in .NET. Even with control of the FPU control register, the JIT of .NET will generate different code on different platforms. Something like HotSpot would be even worse in this case...

    Why do I need it? I'm thinking about writing a real-time strategy (RTS) game which heavily depends on fast floating point math together with a lock-stepped simulation. Essentially I will only transmit user input across the network. This also applies to other games which implement replays by storing the user input.

    Not options:
    - decimals (too slow)
    - fixed-point values (too slow and cumbersome when using sqrt, sin, cos, tan, atan...)
    - updating state across the network like an FPS: sending position information for hundreds or a few thousand units is not an option

    Any ideas?
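
    One partial mitigation I'm aware of (a sketch of the idea, not a cross-JIT guarantee): the CLI spec permits extra intermediate precision, but an explicit cast to float/double is required to round a value to its declared size, so casting after every operation pins the intermediates down:

        static class DeterministicMath
        {
            // Without the inner cast, 32-bit .NET may keep a*b at 80-bit x87
            // precision while x64 rounds to 64 bits in SSE2, so the last bits
            // of the result can differ between platforms.
            static double MulAdd(double a, double b, double c)
            {
                return (double)((double)(a * b) + c); // force rounding at each step
            }
        }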

    Read the article

  • How to push further as a programmer?

    - by MaXX
    For the last, hmm, 6 months I've been reading up on programming in C. I got myself K&Rv2, BEEJ's socket guide, Expert C Programming, Linux Systems Programming, and the ISO/IEC 9899:1999 specification (real, and not a draft). After receiving them from Amazon, I got Linux installed and got to it. I'm done with K&R and about halfway through Expert C Programming, but I still feel weak as a programmer. I'm sure it takes much more than 6 months of reading to become truly skilled, but my question is this: I've done all the exercises in K&Rv2 (in chapter 1) and some in other chapters, most of which are generally really boring. How do I lift my skills and become truly great? I've invested money, time and a general lifestyle in something I truly desire, but I'm not sure exactly how to achieve it. Could someone explain to me - if I need to continuously code, what exactly am I to code? I'm pretty sure coding up hello-world programs isn't going to teach me any more than I already know. A friend of mine said to "read" (with emphasis on read) a man page a day, but reading is all I do; I want to do, but I'm not sure what! I'm interested in security, but as a novice I'm not sure what to code that would be considered enough. Ah, I hope you don't delete this post :) Thanks

    Read the article

  • Custom PHP Framework Feedback

    - by Jascha
    I've been learning OOP programming for about a year and a half now and have developed a fairly standard framework which I generally abide by. I'd love some feedback or input on how I might improve some functionality, or on things I'm overlooking.

    VIEW MODE
    1) Essentially everything starts at the index.php page. The first thing I do is require my "packages.php" file, which is basically a config file that imports all of the classes and function lists I'll be using.
    2) I have no direct communication between my index.php file and my classes; what I've done is "pretty them up" with my viewfunctions.php file, which is essentially just a conduit to the classes, so that in my HTML I can write <?php get_title('page'); ?> instead of <?php echo $pageClass->get_title('page'); ?>. Plus, I can run a couple of small boolean checks and whatnot in the view-function script to better tailor the output of the class.
    3) Any information brought in via the database starts from its corresponding class, which has direct communication with the database class - the only class that is allowed to communicate with the database (allowed in the sense that I run all of my queries with custom class code).

    INPUT MODE
    1) Any user input is sent to my userFunctions.php.
    2) My security class is then instantiated, and I send it whatever user input has been posted for verification and validation.
    3) If the input passes my security check, I then pass it to my DB class for insertion into my database.

    FEEDBACK
    I'm wondering if there are any glaringly obvious pitfalls to the general structure, or ways I can improve it. Thank you in advance for your input. I know there is really no "right" answer for this, but I imagine a couple of up-votes would be in order for some strong advice regarding building frameworks. -J
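
    For point 2 of VIEW MODE, a conduit function might look something like this (a sketch; the global and the fallback are assumptions, not the poster's actual code):

        <?php
        // viewfunctions.php - thin wrappers the templates call instead of objects.
        function get_title($page) {
            global $pageClass;                          // instantiated in packages.php
            $title = $pageClass->get_title($page);
            echo ($title !== '') ? $title : 'Untitled'; // small view-level fallback
        }
        ?>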

    Read the article

  • At what point is it worth using a database?

    - by radix07
    I have a question relating to databases and at what point it is worth diving into one. I am primarily an embedded engineer, but I am writing an application using Qt to interface with our controller. We are at an odd point where we have enough data that it would be feasible to implement a database (around 700+ items and growing) to manage everything, but I am not sure it is worth the time right now. I have no problems implementing the GUI with files generated from Excel and parsed in, but it gets tedious and hard to track even with VBA scripts. I have been playing around with converting our data into something more manageable for the application side with Microsoft Access, and that seems to be working well. If that works out, I am only a step (or several) away from using an SQL database and using the Qt library to access and modify it. I don't have much experience managing data at this level and am curious what may be the best way to approach this. So what are some of the real benefits of using a database, if any, in this case? I realize much of this can be very application-specific, but some general ideas and suggestions on how to straddle the embedded/application programming line would be helpful. Thanks
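
    For scale, the Qt side of that last step is small. A sketch using the QtSql module against SQLite (the file and table names are made up; add QT += sql to the .pro file):

        #include <QSqlDatabase>
        #include <QSqlQuery>
        #include <QVariant>
        #include <QDebug>

        // Open a local SQLite database and read controller items from it.
        bool loadItems() {
            QSqlDatabase db = QSqlDatabase::addDatabase("QSQLITE");
            db.setDatabaseName("controller.db");               // assumed file name
            if (!db.open())
                return false;

            QSqlQuery query("SELECT name, value FROM items");  // assumed schema
            while (query.next())
                qDebug() << query.value(0).toString() << query.value(1).toInt();
            return true;
        }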

    Read the article

  • How to configure an index.htm file in IIS?

    - by salvationishere
    I am running IIS 6.0 on an XP OS using VS 2008 and SQL Server 2008 (full install). I developed two web apps. Both of these I can run from IIS by setting them as the default website. However, I then tried adding an index.htm file. Real simple: all it has is two hyperlinks to these web apps. But now only the first web app works. The first web app is pure VS; the second web app modifies an AdventureWorks database table. When I click the hyperlink for the second web app, it gives me the error below. This error doesn't make sense to me, because I have the two web apps configured as two virtual directories beneath C:\inetpub\, and the index.htm file is also beneath C:\inetpub. The default website is set to home directory C:\inetpub\ with index.htm at the top of the Documents list. Also, why does the first web app work and not the second now?

        Server Error in '/AddFileToSQL' Application.
        The path '/AddFileToSQL/App_GlobalResources/' maps to a directory outside this application, which is not supported.

        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

        Exception Details: System.Web.HttpException: The path '/AddFileToSQL/App_GlobalResources/' maps to a directory outside this application, which is not supported.

        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

    Read the article

  • Why does creating a CLSID_CaptureGraphBuilder2 instance always fail on one machine?

    - by Yigang Wu
    It's a really strange issue; the machine information below is from DxDiag. There is no error reported, but creating a CLSID_CaptureGraphBuilder2 instance always fails on this machine. Creating CLSID_FilterGraph works fine. Before creating CLSID_CaptureGraphBuilder2, I call CoInitialize and create CLSID_FilterGraph. Only this machine has the error. What DLL is this interface related to, and is there any function I need to call beforehand to make it work? Thanks in advance.

        System Information
        Time of this report: 4/24/2010, 09:46:58
        Machine name: TURION
        Operating System: Windows XP Home Edition (5.1, Build 2600) Service Pack 3 (2600.xpsp_sp3_qfe.100216-1510)
        Language: Japanese (Regional Setting: Japanese)
        System Manufacturer: To Be Filled By O.E.M.
        System Model: MS-7145
        BIOS: Default System BIOS
        Processor: AMD Turion(tm) 64 Mobile Technology MT-30, MMX, 3DNow, ~1.6GHz
        Memory: 768MB RAM
        Page File: 376MB used, 1401MB available
        Windows Dir: C:\WINDOWS
        DirectX Version: DirectX 9.0c (4.09.0000.0904)
        DX Setup Parameters: Not found
        DxDiag Version: 5.03.2600.5512 32bit Unicode

        DxDiag Notes
        DirectX Files Tab: No problems found.
        Display Tab 1: No problems found.
        Sound Tab 1: No problems found.
        Sound Tab 2: No problems found.
        Music Tab: No problems found.
        Input Tab: No problems found.
        Network Tab: No problems found.
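
    For reference, a sketch of the creation sequence being described, with the HRESULTs printed so the failing machine at least reports why it failed (the logging is the only addition; link with strmiids.lib):

        #include <dshow.h>
        #include <cstdio>

        int main() {
            HRESULT hr = CoInitialize(NULL);

            IGraphBuilder *pGraph = NULL;
            hr = CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER,
                                  IID_IGraphBuilder, (void **)&pGraph);
            printf("FilterGraph: 0x%08lx\n", (unsigned long)hr);   // succeeds

            ICaptureGraphBuilder2 *pBuild = NULL;
            hr = CoCreateInstance(CLSID_CaptureGraphBuilder2, NULL,
                                  CLSCTX_INPROC_SERVER,
                                  IID_ICaptureGraphBuilder2, (void **)&pBuild);
            printf("CaptureGraphBuilder2: 0x%08lx\n", (unsigned long)hr); // inspect this

            if (pBuild) pBuild->Release();
            if (pGraph) pGraph->Release();
            CoUninitialize();
            return 0;
        }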

    Read the article

  • Best web database solution for Scala for a high-traffic site?

    - by egervari
    I am in charge of rebuilding a website that gets about 250,000 visitors a day. We'd like to use Scala, but it does not work very well with Spring (in some minor cases) and Hibernate (there is a major and very annoying mismatch here if you want to use Scala collections, which we do). The application itself is going to have about 40-50 tables. Other than Hibernate, is there an ORM that works well with Scala and is as performant and reliable as Hibernate? Does it also have the same capabilities, or are we going to run into leaky abstractions if we don't use Hibernate? It would be a big risk for us to go with a framework that is newer and doesn't seem to have a lot of industry backing... and at the same time, Hibernate is a real pain to program against when using Scala:

    1) The Java Collection <-> Scala Collection mismatch is absolutely painful. There is a lot more boilerplate and crap to write.
    2) The IDE doesn't import JavaConversions and Java interfaces automatically... so this needs to be done manually. Optimizing imports in IDEA is going to destroy all the manual work.
    3) There is also a performance cost in converting back and forth all the time in your domain objects and your DAO classes.
    4) Not to mention there needs to be a lot of casting, which produces code ugly as sin.

    I actually would love to write my own ORM that is 100% tailored to Scala, but obviously this is really outside the scope of our project for now. So what is the best approach?
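
    A sketch of the point-1 friction (Scala 2.8-era JavaConversions; the entity is made up): every collection that crosses the Hibernate boundary needs an explicit hop:

        import scala.collection.JavaConversions._

        // Hypothetical Hibernate-mapped entity: getters expose Java collections.
        trait Article { def getTags: java.util.List[String] }

        // Java List in, Scala List out: implicit wrapper, then a copy.
        def tagsOf(article: Article): List[String] =
          article.getTags.toList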

    Read the article

  • Why does Eclipse skip lines when I debug JBoss?

    - by ZeKoU
    Hi, I am trying to debug a web service call which uses JMS in the background. I have JBoss running in debug mode. What happens is that when I press F6 in Eclipse (to execute the current line) it skips certain lines. I have this method:

        @Override
        public void log(MsgPayload payload) {
        /*1*/ Date startTime = new Date();
              logger.info("Publishing with BufferedPublisher.java start time:" + startTime);
        /*3*/ publisher.send(payload);
              Date endTime = new Date();
              logger.info("Publishing with BufferedPublisher.java end time:" + endTime);
              long mills = endTime.getTime() - startTime.getTime();
              double secs = mills / 1000.0;
              logger.info("Publishing with BufferedPublisher.java total time (seconds):" + secs);
        }

    So what happens? I have a breakpoint at line 1. When I press F6 it skips that line and goes to line 3. When I press F6 again it goes to the end of the method. Half of the code is never executed?! My question is why. I am assuming my source is not properly matched to the real code that is being executed. But how do I change this? Thanks.

    Read the article

  • How to mock/stub a directory of files and their contents using RSpec?

    - by John Topley
    A while ago I asked "How to test obtaining a list of files within a directory using RSpec?" and although I got a couple of useful answers, I'm still stuck, hence a new question with some more detail about what I'm trying to do. I'm writing my first RubyGem. It has a module that contains a class method that returns an array containing a list of non-hidden files within a specified directory. Like this:

        files = Foo.bar :directory => './public'

    The array also contains an element that represents metadata about the files. This is actually a hash of hashes generated from the contents of the files, the idea being that changing even a single file changes the hash. I've written my pending RSpec examples, but I really have no idea how to implement them:

        it "should compute a hash of the files within the specified directory"
        it "shouldn't include hidden files or directories within the specified directory"
        it "should compute a different hash if the content of a file changes"

    I really don't want to have the tests dependent on real files acting as fixtures. How can I mock or stub the files and their contents? The gem implementation will use Find.find, but as one of the answers to my other question said, I don't need to test the library. I really have no idea how to write these specs, so any help much appreciated!
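
    A sketch of one way to fake the directory walk, given that the implementation uses Find.find: stub the traversal to yield made-up paths, and stub the content reads (old-style RSpec stubbing; this assumes the implementation reads contents with File.read - adjust to whatever IO call it really uses):

        describe Foo do
          it "computes a different hash if the content of a file changes" do
            Find.stub!(:find).and_yield('./public/a.txt').and_yield('./public/b.txt')

            File.stub!(:read).with('./public/a.txt').and_return('alpha')
            File.stub!(:read).with('./public/b.txt').and_return('beta')
            before = Foo.bar :directory => './public'

            # Same file list, one file's contents changed:
            File.stub!(:read).with('./public/a.txt').and_return('ALPHA')
            after = Foo.bar :directory => './public'

            after.should_not == before
          end
        end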

    Read the article

  • Why C++ is (one of) the best languages to learn first [closed]

    - by AlexV
    C++ has been one of the most used programming languages in the world for 25+ years. My first job as a programmer was in C++, and I coded in C++ every day for nearly 4 years. Now I do mostly PHP, but I will forever cherish this C++ background. C++ has helped me understand many "under the hood" features/behaviors/restrictions of many other (and different) programming languages like PHP and Delphi. I've been a full-time programmer for 6+ years now, and since I have a quite varied programming background I often get questions from "newbies" about where to start to become a "good" programmer. I think C++ is one of the best languages to start with because it gives you genuinely useful experience that will last, and it teaches you how things work under the hood. It's not the easiest one for a newbie to learn, but in my opinion it's the one that will reward the most in the long term. I would like to know your opinion on this matter to add to my arguments when I guide "newbies". After this introduction, here's my question: why is C++, for you, (one of) the best languages to learn first? Since it's subjective, I've marked this question as community wiki.

    Read the article

  • How to catch unintentional function interpositioning with GCC?

    - by SiegeX
    Reading through my book Expert C Programming, I came across the chapter on function interpositioning and how it can lead to some serious, hard-to-find bugs if done unintentionally. The example given in the book is the following:

        my_source.c:

            mktemp() { ... }

            main() {
                mktemp();
                getwd();
            }

        libc:

            mktemp() { ... }

            getwd() { ...; mktemp(); ... }

    According to the book, what happens in main() is that mktemp() (a standard C library function) is interposed by the implementation in my_source.c. Although having main() call my implementation of mktemp() is intended behavior, having getwd() (another C library function) also call my implementation of mktemp() is not. Apparently, this example was a real-life bug that existed in SunOS 4.0.3's version of lpr. The book goes on to explain that the fix was to add the keyword static to the definition of mktemp() in my_source.c, although changing the name altogether would have fixed the problem as well. This chapter leaves me with some unresolved questions that I hope you guys can answer:

    1. Should our software group adopt the practice of putting the keyword static in front of all functions that we don't want to be exposed?
    2. Does GCC have a way to warn about function interposition? We certainly don't ever intend for this to happen, and I'd like to know about it if it does.
    3. Can interposition happen with functions introduced by static libraries?

    Thanks for the help.
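
    For what it's worth, a sketch of the book's static fix (the body is a placeholder, as in the book's example): internal linkage keeps the symbol invisible outside the translation unit, so the dynamic linker never sees a definition that could interpose libc's mktemp:

        /* my_source.c */
        /* static = internal linkage: libc's getwd() keeps resolving to libc's
           own mktemp(), while code in this file still calls this version. */
        static char *mktemp(char *tmpl) {
            /* ... local implementation ... */
            return tmpl;
        }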

    Read the article

  • PHP Array saved to Text file

    - by coffeemonitor
    I've saved a response from an outside server to a text file, so I don't need to keep issuing connection requests; instead, I can use the text file for my manipulation purposes until I'm ready to reconnect. (Also, my connection requests to this outside server are limited.) Here is what I've saved to the text file, records.txt:

        Array
        (
            [0] => stdClass Object
                (
                    [id] => 552
                    [date_created] => 2012-02-23 10:30:56
                    [date_modified] => 2012-03-09 18:55:26
                    [date_deleted] => 2012-03-09 18:55:26
                    [first_name] => Test
                    [middle_name] =>
                    [last_name] => Test
                    [home_phone] => (123) 123-1234
                    [email] => [email protected]
                )
            [1] => stdClass Object
                (
                    [id] => 553
                    [date_created] => 2012-02-23 10:30:56
                    [date_modified] => 2012-03-09 18:55:26
                    [date_deleted] => 2012-03-09 18:55:26
                    [first_name] => Test
                    [middle_name] =>
                    [last_name] => Test
                    [home_phone] => (325) 558-1234
                    [email] => [email protected]
                )
        )

    There's actually more in the array, but I'm sure two entries are fine for this. Since this is a text file, and I want to pretend it is the actual outside server (sending me the same info), how do I make it a real array again? I know I need to open the file first:

        <?php
        $fp = fopen('records.txt', "r"); // open the file
        $theData = fread($fp, filesize('records.txt'));
        fclose($fp);
        echo $theData;
        ?>

    So far $theData is a string value. Is there a way to convert it back to the array it originally came in as?
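
    print_r output isn't designed to be parsed back, so a common approach (a sketch, assuming the save step can be changed and $records holds the original response) is to write the data with serialize() and restore it with unserialize():

        <?php
        // When saving the response:
        file_put_contents('records.txt', serialize($records));

        // Later, to get a real array of stdClass objects back:
        $records = unserialize(file_get_contents('records.txt'));
        echo $records[0]->first_name; // "Test"
        ?>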

    Read the article

  • List searches in SYB or uniplate (Haskell)

    - by Chris
    I have been using uniplate and SYB and I am trying to transform a list For instance type Tree = [DataA] data DataA = DataA1 [DataB] | DataA2 String | DataA3 String [DataA] deriving Show data DataB = DataB1 [DataA] | DataB2 String | DataB3 String [DataB] deriving Show For instance, I would like to traverse my tree and append a value to all [DataB] So my first thought was to do this: changeDataB:: Tree -> Tree changeDataB = everywhere(mkT changeDataB') chanegDataB'::[DataB] -> [DataB] changeDataB' <add changes here> or if I was using uniplate changeDataB:: Tree -> Tree changeDataB = transformBi changeDataB' chanegDataB'::[DataB] -> [DataB] changeDataB' <add changes here> The problem is that I only want to search on the full list. Doing either of these searches will cause a search on the full list and all of the sub-lists (including the empty list) The other problem is that a value in [DataB] may generate a [DataB], so I don't know if this is the same kind of solution as not searching chars in a string. I could pattern match on DataA1 and DataB3, but in my real application there are a bunch of [DataB]. Pattern matching on the parents would be extensive. The other thought that I had was to create a data DataBs = [DataB] and use that to transform on. That seems kind of lame, there must be a better solution.

    Read the article

  • Unit Testing - Algorithm or Sample based?

    - by ohadsc
    Say I'm trying to test a simple Set class:

        public class IntSet : IEnumerable<int>
        {
            Add(int i) {...}
            //IEnumerable implementation...
        }

    And suppose I'm trying to test that no duplicate values can exist in the set. My first option is to insert some sample data into the set and test for duplicates using my knowledge of the data I used, for example:

        //OPTION 1
        void InsertDuplicateValues_OnlyOneInstancePerValueShouldBeInTheSet()
        {
            var set = new IntSet();

            //3 will be added 3 times
            var values = new List<int> { 1, 2, 3, 3, 3, 4, 5 };
            foreach (int i in values)
                set.Add(i);

            //I know 3 is the only candidate to appear multiple times
            int counter = 0;
            foreach (int i in set)
                if (i == 3) counter++;

            Assert.AreEqual(1, counter);
        }

    My second option is to test for my condition generically:

        //OPTION 2
        void InsertDuplicateValues_OnlyOneInstancePerValueShouldBeInTheSet()
        {
            var set = new IntSet();

            //The following could even be a list of random numbers with a duplicate
            var values = new List<int> { 1, 2, 3, 3, 3, 4, 5 };
            foreach (int i in values)
                set.Add(i);

            //I am not using my prior knowledge of the sample data
            //the following line would work for any data
            CollectionAssert.AreEquivalent(new HashSet<int>(values), set);
        }

    Of course, in this example I conveniently have a set implementation to check against, as well as code to compare collections (CollectionAssert). But what if I didn't have either? This is the situation when you are testing your real-life custom business logic. Granted, testing for expected conditions generically covers more cases - but it becomes very similar to implementing the logic again (which is both tedious and useless - you can't use the same code to check itself!). Basically, I'm asking whether my tests should look like "insert 1, 2, 3 then check something about 3" or "insert 1, 2, 3 and check for something in general".

    EDIT - To help me understand, please state in your answer whether you prefer OPTION 1 or OPTION 2 (or neither, or that it depends on the case, etc.)

    Read the article

  • Best (functional?) programming language to learn coming from Mathematica

    - by Will Robertson
    As a mechanical engineering PhD student, I haven't had a great pedigree in programming as part of my “day job”. I started out in Matlab (having written some Hypercard and Applescript back in the day, and being introduced to Ada, of all things, in my 1st undergrad year), learned to program—if you can call it that—in (La)TeX, and finally discovered and fell for Mathematica. Now I'm interested in learning a "real" programming language that I can enjoy in the same sort of style as Mathematica, which tries to stress functional programming, since it seems to map more nicely to how certain kinds of mathematics can be written algorithmically. So which functional language should I learn? I guess the obvious answer is “as many as possible”, but let's start out humble and give a single, well-considered option a good crack. I've heard good things about, say, Haskell and Scala, but I wonder if (given my non–computer science background) I'd be better off starting in more “grounded” territory and going with Ruby or Python (the latter having the big advantage of being used for Sage, which I'd also like to investigate…after my PhD). Well, I guess this is pretty subjective, so perhaps I could rephrase: would it be better to start looking at Haskell (say) straight after an ad-hoc education in functional programming from Mathematica, or will I get more out of learning Python (say) first? In reference to the question "what do I want to do with it?", I guess my answer is "fun, and learning more". I've got this list of languages that I'd like to look at, and I don't know how to trim it down. And I'd rather start with something a little higher-level than C, simply so that I can be somewhat productive without having to re-invent many wheels for any code I'd like to write.

    Read the article

  • Help me convert .NET 1.1 Xml validation code to .NET 2.0 please.

    - by Hamish Grubijan
    It would be fantastic if you could help me get rid of the warnings below. I have not been able to find a good document on this. Since the warnings are concentrated in just the private void ValidateConfiguration(XmlNode section) method, hopefully this is not terribly hard to answer if you have encountered it before. Thanks!

        'System.Configuration.ConfigurationException.ConfigurationException(string)' is obsolete: 'This class is obsolete, to create a new exception create a System.Configuration!System.Configuration.ConfigurationErrorsException'

        'System.Xml.XmlValidatingReader' is obsolete: 'Use XmlReader created by XmlReader.Create() method using appropriate XmlReaderSettings instead. http://go.microsoft.com/fwlink/?linkid=14202'

        private void ValidateConfiguration(XmlNode section)
        {
            // throw if there is no configuration node.
            if (null == section)
            {
                throw new ConfigurationException("The configuration section passed within the ... class was null ... there must be a configuration file defined.", section);
            }

            //Validate the document using a schema
            XmlValidatingReader vreader = new XmlValidatingReader(new XmlTextReader(new StringReader(section.OuterXml)));

            // open stream on Resources; the XSD is set as an "embedded resource" so Resource can open a stream on it
            using (Stream xsdFile = XYZ.GetStream("ABC.xsd"))
            using (StreamReader sr = new StreamReader(xsdFile))
            {
                vreader.ValidationEventHandler += new ValidationEventHandler(ValidationCallBack);
                vreader.Schemas.Add(XmlSchema.Read(new XmlTextReader(sr), null));
                vreader.ValidationType = ValidationType.Schema;

                // Validate the document
                while (vreader.Read()) { }

                if (!_isValidDocument)
                {
                    _schemaErrors = _sb.ToString();
                    throw new ConfigurationException("XML Document not valid");
                }
            }
        }

        // Does not cause warnings.
        private void ValidationCallBack(object sender, ValidationEventArgs args)
        {
            // check what KIND of problem the schema validation reader has;
            // on FX 1.0, it gives a warning for "<xs:any...skip" sections. Don't worry
            // about those; only set validation to false for real errors.
            if (args.Severity == XmlSeverityType.Error)
            {
                _isValidDocument = false;
                _sb.Append(args.Message + Environment.NewLine);
            }
        }
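
    A sketch of the .NET 2.0 equivalents the warnings point at - XmlReaderSettings in place of XmlValidatingReader, and ConfigurationErrorsException in place of ConfigurationException - kept close to the original method (XYZ/ABC.xsd are the poster's placeholders):

        private void ValidateConfiguration(XmlNode section)
        {
            if (null == section)
            {
                throw new ConfigurationErrorsException(
                    "The configuration section was null; there must be a configuration file defined.");
            }

            using (Stream xsdFile = XYZ.GetStream("ABC.xsd"))
            using (StreamReader sr = new StreamReader(xsdFile))
            {
                // XmlReaderSettings replaces the obsolete XmlValidatingReader.
                XmlReaderSettings settings = new XmlReaderSettings();
                settings.ValidationType = ValidationType.Schema;
                settings.Schemas.Add(XmlSchema.Read(new XmlTextReader(sr), null));
                settings.ValidationEventHandler +=
                    new ValidationEventHandler(ValidationCallBack);

                using (XmlReader vreader = XmlReader.Create(
                           new StringReader(section.OuterXml), settings))
                {
                    while (vreader.Read()) { } // validate the document
                }

                if (!_isValidDocument)
                {
                    _schemaErrors = _sb.ToString();
                    throw new ConfigurationErrorsException("XML Document not valid");
                }
            }
        }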

    Read the article

  • Why is T() = T() allowed?

    - by Rimo
    I believe the expression T() creates an rvalue (by the Standard). However, the following code compiles (at least on gcc 4.0):

        class T {};

        int main() {
            T() = T();
        }

    I know technically this is possible because member functions can be invoked on temporaries, and the above is just invoking operator= on the rvalue temporary created from the first T(). But conceptually this is like assigning a new value to an rvalue. Is there a good reason why this is allowed? Edit: The reason I find this odd is that it's strictly forbidden on built-in types yet allowed on user-defined types. For example, int(2) = int(3) won't compile because that is an "invalid lvalue in assignment". So I guess the real question is: was this somewhat inconsistent behavior built into the language for a reason? Or is it there for some historical reason? (E.g. it would be conceptually more sound to allow only const member functions to be invoked on rvalue expressions, but that cannot be done because it might break some existing code.)
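
    For completeness, a sketch of how later C++ (C++11 ref-qualifiers, so not applicable to gcc 4.0) lets a class opt into the built-in-type behavior by restricting assignment to lvalues:

        struct T {
            // The trailing '&' ref-qualifier allows operator= only on lvalues.
            T& operator=(const T& other) & { return *this; }
        };

        int main() {
            T a;
            a = T();      // fine: the left side is an lvalue
            // T() = T(); // error: no viable operator= for an rvalue
        }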

    Read the article
