Search Results

Search found 11409 results on 457 pages for 'large teams'.


  • Gradual memory leak and slowdown in loop

    - by Benji XVI
    I have a simple Foundation tool that exports every frame of a movie as a .tiff file. Here is the relevant code:

        NSString* movieLoc = [NSString stringWithCString:argv[1]];
        QTMovie *sourceMovie = [QTMovie movieWithFile:movieLoc error:nil];
        int i = 0;
        while (QTTimeCompare([sourceMovie currentTime], [sourceMovie duration]) != NSOrderedSame) {
            // save image of movie to disk
            NSAutoreleasePool *arp = [[NSAutoreleasePool alloc] init];
            NSString *filePath = [NSString stringWithFormat:@"/somelocation_%d.tiff", i++];
            NSData *currentImageData = [[sourceMovie currentFrameImage] TIFFRepresentation];
            [currentImageData writeToFile:filePath atomically:NO];
            NSLog(@"%@", filePath);
            [sourceMovie stepForward];
            [arp release];
        }
        [pool drain];
        return 0;

    As you can see, to prevent a very large memory build-up from the various transparently autoreleased variables in the loop, we create and release an autorelease pool on every pass through the loop. However, over the course of stepping through a movie, the amount of memory used by the program still gradually increases, and the speed at which frames are processed drops precipitously (from ~0.5 seconds per frame at the start to ~2 seconds per frame by the 250th frame). The only thing I can think of that could be causing the gradual memory leak is a build-up of the NSAutoreleasePool objects themselves. Am I right in thinking they will only be deallocated when the outer pool is released? If so, is there a better memory management solution here? Creating a pool on every pass through the loop seems a little hacky. And if not, what is causing the slow memory leak? (It is not NSStrings, and it is much too slow to be NSImages or NSDatas.) And what could be causing the slowdown?

    Read the article

  • Is this linear search implementation actually useful?

    - by Helper Method
    In Matters Computational I found this interesting linear search implementation (it's actually my Java implementation ;-)):

        public static int linearSearch(int[] a, int key) {
            int high = a.length - 1;
            int tmp = a[high];

            // put a sentinel at the end of the array
            a[high] = key;

            int i = 0;
            while (a[i] != key) {
                i++;
            }

            // restore original value
            a[high] = tmp;

            if (i == high && key != tmp) {
                return NOT_CONTAINED;
            }
            return i;
        }

    It basically uses a sentinel, which is the searched-for value, so that you always find the value and don't have to check for array boundaries. The last element is stored in a temp variable, and then the sentinel is placed at the last position. When the value is found (remember, it is always found thanks to the sentinel), the original element is restored and the index is checked to see whether it is the last index and the last element is unequal to the searched-for value. If that's the case, -1 (NOT_CONTAINED) is returned; otherwise the index. While I find this implementation really clever, I wonder if it is actually useful. For small arrays it seems to be always slower, and for large arrays it only seems to be faster when the value is not found. Any ideas?
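
    For reference, here is a minimal, self-contained harness for experimenting with the question (the class name and test data are made up, and it is a sketch rather than a rigorous benchmark); it pairs the sentinel version above with a plain bounds-checked loop and exercises the cases where the `i == high` check matters:

        public class SentinelSearchDemo {
            static final int NOT_CONTAINED = -1;

            // plain bounds-checked linear search for comparison
            static int plainSearch(int[] a, int key) {
                for (int i = 0; i < a.length; i++) {
                    if (a[i] == key) {
                        return i;
                    }
                }
                return NOT_CONTAINED;
            }

            // sentinel version from the question, unchanged in behaviour
            static int linearSearch(int[] a, int key) {
                int high = a.length - 1;
                int tmp = a[high];
                a[high] = key;               // sentinel
                int i = 0;
                while (a[i] != key) {
                    i++;
                }
                a[high] = tmp;               // restore original value
                if (i == high && key != tmp) {
                    return NOT_CONTAINED;
                }
                return i;
            }

            public static void main(String[] args) {
                int[] a = {5, 3, 9, 7};
                // interesting cases: a hit in the middle, a hit at the last index,
                // and a miss -- the last two both end the scan with i == high
                for (int key : new int[] {9, 7, 42}) {
                    System.out.println(key + ": plain=" + plainSearch(a, key)
                                           + " sentinel=" + linearSearch(a, key));
                }
            }
        }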

    Read the article

  • @dynamic property needs setter with multiple behaviors

    - by ambertch
    I have a class that contains multiple user objects and as such has an array of them as an instance variable:

        NSMutableArray *users;

    The tricky part is setting it. I am deserializing these objects from a server via Objective Resource, and for back-end reasons users can only be returned as a long string of UIDs - what I have locally is a separate dictionary of users keyed to UIDs. Given the string uidString of comma-separated UIDs, I override the default setter and populate the actual user objects:

        @dynamic users;

        - (void)setUsers:(id)uidString {
            users = [NSMutableArray arrayWithArray:
                        [[User allUsersDictionary] objectsForKeys:
                            [(NSString*)uidString componentsSeparatedByString:@","]]];
        }

    The problem is this: I now serialize these to the database using SQLitePO, which stores the array of user objects, not the original string. So when I retrieve it from the database, the setter mistakenly treats this array of user objects as a string! What I actually want is to adjust the setter's behaviour depending on whether it gets the object from the DB or over the network. I can't just make the getter serialize back into a string without tearing up a lot of code that references this array of user objects, and I tried to detect in the setter whether I have a string or an array coming in:

        if ([uidString respondsToSelector:@selector(addObject)]) {
            // Already an array, so don't do anything - just assign
            users = uidString

    but with no success... so I'm kind of stuck - any suggestions? Thanks in advance!
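
    A minimal sketch of one way to branch the setter, assuming the incoming value is either an NSString arriving over the network or an NSArray coming back from the database; usersFromUidString: is a hypothetical helper standing in for the existing dictionary lookup. (Note also that NSMutableArray responds to addObject: - with a colon - not addObject, which may be why the respondsToSelector: test above never fires.)

        - (void)setUsers:(id)value {
            if ([value isKindOfClass:[NSString class]]) {
                // comma-separated UID string arriving over the network
                users = [self usersFromUidString:(NSString *)value];   // hypothetical helper
            } else if ([value isKindOfClass:[NSArray class]]) {
                // already-deserialized user objects coming back from the database
                users = [NSMutableArray arrayWithArray:(NSArray *)value];
            }
        }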

    Read the article

  • How to get Amazon s3 PHP SDK working?

    - by JakeRow123
    I'm trying to set up S3 for the first time and trying to run the sample file that comes with the PHP SDK that creates a bucket and attempts to upload some demo files to it. But this is the error I am getting:

        The difference between the request time and the current time is too large.

    I read in another question on SO that this is because Amazon determines a valid request by comparing the times between the server and the client: the two must be within a 15-minute span of one another. Now here is the problem. My laptop's time is 12:30 AM, June 8, 2012 at the moment. On my server I created a file called servertime.php and placed this code in it:

        <?php print strftime('%c'); ?>

    and the output is:

        Fri Jun  8 00:31:22 2012

    It looks like the day is correct, but I don't know what to make of 00:31:22. In any case, how is it possible to always make sure the time between the client and server is within a 15-minute window of one another? What if I have a user in China who wishes to upload a file on my site, which uses S3 for the CDN? Then the time difference would be over a day. How can I make sure all my users' times are within 15 minutes of my server time? What if the user is in the U.S. but the time on their machine is misconfigured? Basically, how do I get S3 bucket creation and upload to work?

    Read the article

  • Writing out sheet to text file using POI event model

    - by Eduardo Dennis
    I am using the XLSX2CSV example to parse large sheets from a workbook. Since I only need to output the data for specific sheets, I added an if statement in the process method to test for those sheets. When the condition is met I continue with the process.

        public void process() throws IOException, OpenXML4JException,
                ParserConfigurationException, SAXException {
            ReadOnlySharedStringsTable strings = new ReadOnlySharedStringsTable(this.xlsxPackage);
            XSSFReader xssfReader = new XSSFReader(this.xlsxPackage);
            StylesTable styles = xssfReader.getStylesTable();
            XSSFReader.SheetIterator iter = (XSSFReader.SheetIterator) xssfReader.getSheetsData();
            while (iter.hasNext()) {
                InputStream stream = iter.next();
                String sheetName = iter.getSheetName();
                if (sheetName.equals("SHEET1") || sheetName.equals("SHEET2") || sheetName.equals("SHEET3")
                        || sheetName.equals("SHEET4") || sheetName.equals("SHEET5")) {
                    processSheet(styles, strings, stream);
                    try {
                        System.setOut(new PrintStream(
                                new FileOutputStream("C:\\Users\\edennis.AD\\Desktop\\test\\" + sheetName + ".txt")));
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                    stream.close();
                }
            }
        }

    But I need to output a text file per sheet and am not sure how to do it. I tried to use the System.setOut() method to redirect everything from System.out into the text file, but that's not working - I just get blank files.
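
    One hedged observation from the code above: if the sheet contents are printed to System.out, the redirect takes effect only after processSheet() has already run, so the output has gone to the old stream by the time setOut() is called. A rough sketch of redirecting first and restoring stdout afterwards might look like this (wantedSheets, outputDir and the UTF-8 choice are illustrative, not from the original code; requires Java 7 for try-with-resources):

        if (wantedSheets.contains(sheetName)) {                    // hypothetical Set<String> of sheet names
            PrintStream originalOut = System.out;
            try (PrintStream sheetOut = new PrintStream(
                    new FileOutputStream(outputDir + sheetName + ".txt"), true, "UTF-8")) {
                System.setOut(sheetOut);               // redirect *before* the sheet is processed
                processSheet(styles, strings, stream);
            } finally {
                System.setOut(originalOut);            // put the real stdout back
            }
            stream.close();
        }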

    Read the article

  • Finding missing files by checksum

    - by grw
    Hi there, I'm doing a large data migration between two file systems (let's call them F1 and F2) on a Linux system, which will necessarily involve copying the data verbatim into a differently-structured hierarchy on F2 and changing the file names. I'd like to write a script to generate a list of files which are in F1 but not in F2, i.e. the ones which weren't copied by the migration script into the new hierarchy, so that I can go back and migrate them manually. Unfortunately, for reasons not worth going into, the migration script can't be modified to list the files that it doesn't migrate. My question differs from this previously answered one because I cannot rely on filenames as a comparison. I know the basic outline of the process would be:

    1. Generate a list of checksums for all files, recursing through F1.
    2. Do the same for F2.
    3. Compare the lists and generate a negative intersection of the checksums, ignoring the file names, to find files which are in F1 but not in F2.

    I'm kind of stuck getting past that stage, so I'd appreciate any pointers on which tools to use. I think I need to use the 'comm' command to compare the lists of file checksums, but since md5sum, sha512sum and the like put the file name next to the checksum, I can't see a way to get a useful comparison out of them. Maybe awk is the way to go? I'm using Red Hat Enterprise Linux 5.x. Thanks.
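
    As a rough illustration of that outline, here is a minimal Python sketch - it assumes MD5 is adequate for the purpose, that the digest maps fit in memory, and the mount points are placeholders:

        #!/usr/bin/env python
        # Sketch: list files present under F1 whose content checksum never appears under F2.
        import hashlib
        import os

        def checksums(root):
            """Map each file's MD5 digest to one example path under root."""
            sums = {}
            for dirpath, _dirnames, filenames in os.walk(root):
                for name in filenames:
                    path = os.path.join(dirpath, name)
                    h = hashlib.md5()
                    with open(path, 'rb') as f:
                        for chunk in iter(lambda: f.read(1 << 20), b''):
                            h.update(chunk)
                    sums.setdefault(h.hexdigest(), path)
            return sums

        f1 = checksums('/mnt/F1')   # placeholder mount points
        f2 = checksums('/mnt/F2')

        for digest in set(f1) - set(f2):      # the "negative intersection": in F1, not in F2
            print(f1[digest])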

    Read the article

  • Android - BitmapFactory.decodeByteArray - OutOfMemoryError (OOM)

    - by Bob Keathley
    I have read hundreds of articles about the OOM problem. Most are in regard to large bitmaps. I am doing a mapping application where we download 256x256 weather overlay tiles. Most are totally transparent and very small. I just got a crash on a bitmap stream that was 442 bytes long while calling BitmapFactory.decodeByteArray(...). The exception states:

        java.lang.OutOfMemoryError: bitmap size exceeds VM budget(Heap Size=9415KB, Allocated=5192KB, Bitmap Size=23671KB)

    The code is:

        protected Bitmap retrieveImageData() throws IOException {
            URL url = new URL(imageUrl);
            InputStream in = null;
            OutputStream out = null;
            HttpURLConnection connection = (HttpURLConnection) url.openConnection();

            // determine the image size and allocate a buffer
            int fileSize = connection.getContentLength();
            if (fileSize < 0) {
                return null;
            }
            byte[] imageData = new byte[fileSize];

            // download the file
            //Log.d(LOG_TAG, "fetching image " + imageUrl + " (" + fileSize + ")");
            BufferedInputStream istream = new BufferedInputStream(connection.getInputStream());
            int bytesRead = 0;
            int offset = 0;
            while (bytesRead != -1 && offset < fileSize) {
                bytesRead = istream.read(imageData, offset, fileSize - offset);
                offset += bytesRead;
            }

            // clean up
            istream.close();
            connection.disconnect();

            Bitmap bitmap = null;
            try {
                bitmap = BitmapFactory.decodeByteArray(imageData, 0, bytesRead);
            } catch (OutOfMemoryError e) {
                Log.e("Map", "Tile Loader (241) Out Of Memory Error " + e.getLocalizedMessage());
                System.gc();
            }
            return bitmap;
        }

    Here is what I see in the debugger: bytesRead = 442. So the bitmap data is 442 bytes. Why would it be trying to create a 23671KB bitmap and running out of memory?
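
    A hedged fragment that could slot in around the decode call to see what those bytes actually describe before committing pixel memory to them (the 256-pixel threshold is arbitrary and only for illustration; a decoded ARGB_8888 bitmap costs width * height * 4 bytes in memory regardless of how small the compressed stream is):

        // Sketch: decode just the header to learn the tile's real dimensions first.
        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inJustDecodeBounds = true;                 // header only, no pixel allocation
        BitmapFactory.decodeByteArray(imageData, 0, bytesRead, opts);
        Log.d("Map", "tile decodes to " + opts.outWidth + "x" + opts.outHeight);

        if (opts.outWidth > 256 || opts.outHeight > 256) {
            // down-sample unexpectedly large tiles instead of decoding them at full size
            opts.inSampleSize = Math.max(opts.outWidth, opts.outHeight) / 256;
        }
        opts.inJustDecodeBounds = false;
        Bitmap bitmap = BitmapFactory.decodeByteArray(imageData, 0, bytesRead, opts);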

    Read the article

  • is it better to query database or grab from file? php & mysql

    - by pfunc
    I am keeping a large number of words in a database that I want to match articles against. I was thinking it would be better to keep these words in an array and grab that array whenever needed instead of querying the database every time (since the words won't be changing that much). Is there much performance difference in doing this? And if I were to do this, how do I write a script that writes the array to a new PHP file? I tried writing the array like so:

        while ($row = mysql_fetch_assoc($query)) {
            $newArray[] = $row;
        }

        $fp = fopen('noWordsArr.php', 'w');
        fwrite($fp, $newArray);
        fclose($fp);

    But all I get in the other file is "Array". So I figured I could write this and then have a cron job hit the file every few days or so in case things have changed. But I guess if there is no performance advantage then it probably won't be necessary and I can just query the database every time I need to access the words.
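
    fwrite() needs a string, not an array, so the array has to be turned into something parseable first. A minimal sketch of one common approach using var_export() (the file name is reused from the question; everything else is illustrative):

        <?php
        // Sketch: dump the array as valid PHP source so it can be pulled back in with include.
        $code = '<?php return ' . var_export($newArray, true) . ';';
        file_put_contents('noWordsArr.php', $code);

        // Later, instead of querying the database:
        $words = include 'noWordsArr.php';   // returns the array written above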

    Read the article

  • Switch front-end's of a website after X amount of hits

    - by Derek Adair
    Sorry about the title - not sure what to call this one. A client of mine would like to redirect users to different front-ends of his eCommerce site based on a hit counter (possibly a timer?). Two important points:

    - The content is moderately different in the two sites, enough to consider them two different websites. Knowing this client, he will likely add more drastic content changes and further front-ends, so for this question consider the content to be effectively two separate sites.
    - The site has a rather large back-end, with affiliate networking, multiple payment gateways, order tracking, and several other features in the works. It is essential that the two front-ends have identical back-end functionality.

    I know that if it were just a simple CSS swap this would be as easy as an if statement running off some kind of counter stored in a DB... but the different HTML markup is throwing me for a loop. Q: How can I serve two different front-ends (HTML/CSS) based on a hit counter? Also, I don't have any clue what to tag this one as...
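
    A hedged sketch of the basic mechanism (the table name, threshold, and template paths are all invented for illustration; the real decision logic would live wherever the client's counter does): bump a persistent counter, then require whichever front-end's markup applies, with both template sets built on the same back-end includes.

        <?php
        // Sketch only: choose a front-end based on a hit counter persisted in MySQL.
        $pdo = new PDO('mysql:host=localhost;dbname=shop', 'user', 'pass');
        $pdo->exec('UPDATE site_counter SET hits = hits + 1');
        $hits = (int) $pdo->query('SELECT hits FROM site_counter')->fetchColumn();

        // Alternate front-ends every 10,000 hits; swap in a time-based rule here if a timer is used instead.
        $frontEnd = (floor($hits / 10000) % 2 == 0) ? 'frontend_a' : 'frontend_b';

        // Each front-end is a full set of templates (HTML/CSS); both share the same back-end code.
        require __DIR__ . '/templates/' . $frontEnd . '/index.php';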

    Read the article

  • Delphi - Clean TListBox Items

    - by Brad
    I want to clean a list box by checking it against two other list boxes:

    - Listbox1 would have the big list of items.
    - Listbox2 would have words I want removed from Listbox1.
    - Listbox3 would have mandatory words that have to exist for an item to remain in Listbox1.

    Below is the code I've got so far for this; it's VERY slow with large lists.

        // not very efficient
        Function checknegative ( instring : String; ListBox : TListBox ) : Boolean;
        Var
          i : Integer;
        Begin
          For I := listbox.Items.Count - 1 Downto 0 Do
          Begin
            If ExistWordInString ( instring, listbox.Items.Strings [i], [soWholeWord, soDown] ) = True Then
            Begin
              result := True; //True if in list, False if not.
              break;
            End
            Else
            Begin
              result := False;
            End;
          End;
          result := false;
        End;

        Function ExistWordInString ( aString, aSearchString : String; aSearchOptions : TStringSearchOptions ) : Boolean;
        Var
          Size : Integer;
        Begin
          Size := Length ( aString );
          If SearchBuf ( Pchar ( aString ), Size, 0, 0, aSearchString, aSearchOptions ) <> Nil Then
          Begin
            result := True;
          End
          Else
          Begin
            result := False;
          End;
        End;

    Read the article

  • Publish Git repository to SVN

    - by Ken Williams
    I and my small team work in Git, and the larger group uses Subversion. I'd like to schedule a cron job to publish our repositories' current HEADs every hour into a certain directory in the SVN repo. I thought I had this figured out, but the recipe I wrote down previously doesn't seem to be working now:

        git clone ssh://me@gitserver/git-repo/Projects/ProjX px2
        cd px2
        svn mkdir --parents http://me@svnserver/svn/repo/play/me/fromgit/ProjX
        git svn init -s http://me@svnserver/svn/repo/play/me/fromgit/ProjX
        git svn fetch
        git rebase trunk master
        git svn dcommit

    Here's what happens when I attempt it:

        % git clone ssh://me@gitserver/git-repo/Projects/ProjX px2
        Cloning into 'ProjX'...
        ...
        % cd px2
        % svn mkdir --parents http://me@svnserver/svn/repo/play/me/fromgit/ProjX
        Committed revision 123.
        % git svn init -s http://me@svnserver/svn/repo/play/me/fromgit/ProjX
        Using higher level of URL: http://me@svnserver/svn/repo/play/me/fromgit/ProjX => http://me@svnserver/svn/repo
        % git svn fetch
        W: Ignoring error from SVN, path probably does not exist: (160013): Filesystem has no item: File not found: revision 100, path '/play/me/fromgit/ProjX'
        W: Do not be alarmed at the above message git-svn is just searching aggressively for old history. This may take a while on large repositories
        % git rebase trunk master
        fatal: Needed a single revision
        invalid upstream trunk

    I could have sworn this worked previously - anyone have any suggestions? Thanks.

    Read the article

  • Get the src part of a string [duplicate]

    - by Kay Lakeman
    This question already has an answer here: Grabbing the href attribute of an A element (7 answers)

    First post ever here, and I really hope you can help. I use a database where a large piece of HTML is stored, and I just need the src part of the image tag. I already found a thread about this, but it just doesn't do the trick. My code - the original string:

        <p><img alt=\"\" src=\"http://domain.nl/cms/ckeditor/filemanager/userfiles/background.png\" style=\"width: 80px; height: 160px;\" /></p>

    How I start:

        $image = strip_tags($row['information'], '<img>');
        echo stripslashes($image);

    This returns:

        <img alt="" src="http://domain.nl/cms/ckeditor/filemanager/userfiles/background.png" style="width: 80px; height: 160px;" />

    Next step, extract the src part:

        preg_match('/< *img[^>]*src *= *["\']?([^"\']*)/i', $image, $matches);
        echo $matches;

    This last echo returns: Array. What is going wrong? Thanks in advance for your answer.
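
    Echoing an array in PHP prints the literal text "Array"; preg_match() fills $matches with the full match at index 0 and each capture group after that. A small sketch of inspecting it (variable names reused from the question):

        <?php
        // Sketch: the first capture group holds the src value.
        if (preg_match('/< *img[^>]*src *= *["\']?([^"\']*)/i', $image, $matches)) {
            echo $matches[1];   // http://domain.nl/cms/ckeditor/filemanager/userfiles/background.png
            // print_r($matches) shows the whole array: [0] full match, [1] the src group
        }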

    Read the article

  • Efficiently select top row for each category in the set

    - by VladV
    I need to select a top row for each category from a known set (somewhat similar to this question). The problem is how to make this query efficient on a large number of rows. For example, let's create a table that stores temperature recordings in several places:

        CREATE TABLE #t (
            placeId int,
            ts datetime,
            temp int,
            PRIMARY KEY (ts, placeId)
        )

        -- insert some sample data
        SET NOCOUNT ON
        DECLARE @n int, @ts datetime
        SELECT @n = 1000, @ts = '2000-01-01'
        WHILE (@n > 0)
        BEGIN
            INSERT INTO #t VALUES (@n % 10, @ts, @n % 37)
            IF (@n % 10 = 0) SET @ts = DATEADD(hour, 1, @ts)
            SET @n = @n - 1
        END

    Now I need to get the latest recording for each of the places 1, 2, 3. This way is efficient, but doesn't scale well (and looks dirty):

        SELECT * FROM (
            SELECT TOP 1 placeId, temp FROM #t WHERE placeId = 1 ORDER BY ts DESC
        ) t1
        UNION ALL
        SELECT * FROM (
            SELECT TOP 1 placeId, temp FROM #t WHERE placeId = 2 ORDER BY ts DESC
        ) t2
        UNION ALL
        SELECT * FROM (
            SELECT TOP 1 placeId, temp FROM #t WHERE placeId = 3 ORDER BY ts DESC
        ) t3

    The following looks better but works much less efficiently (30% vs 70% according to the optimizer):

        SELECT placeId, ts, temp
        FROM (
            SELECT placeId, ts, temp,
                   ROW_NUMBER() OVER (PARTITION BY placeId ORDER BY ts DESC) rownum
            FROM #t
            WHERE placeId IN (1, 2, 3)
        ) t
        WHERE rownum = 1

    The problem is that during the latter query's execution a clustered index scan is performed on #t and 300 rows are retrieved, sorted, numbered, and then filtered, leaving only 3 rows. For the former query, a single row is fetched three times. Is there a way to perform the query efficiently without lots of unions?
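
    One commonly suggested alternative, shown here only as a hedged sketch: it assumes SQL Server 2005 or later and, for it to pay off, an index leading on placeId (such as (placeId, ts DESC)) rather than the (ts, placeId) primary key above. It keeps the one-seek-per-place shape of the UNION version without repeating the subquery by hand:

        -- Sketch: one TOP (1) probe per place, driven from a small list of place IDs.
        SELECT p.placeId, t.ts, t.temp
        FROM (SELECT 1 AS placeId UNION ALL SELECT 2 UNION ALL SELECT 3) AS p
        CROSS APPLY (
            SELECT TOP (1) ts, temp
            FROM #t
            WHERE #t.placeId = p.placeId
            ORDER BY ts DESC
        ) AS t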

    Read the article

  • Multiple Solution Layout for ASP.NET Web Portal?

    - by Jared S
    At work, we've developed a custom ASP.NET web portal (very similar to iGoogle). We have "Apps" (self-contained, large web forms) and "Modules" (similar to Google Gadgets). Currently, we use a single-solution model. Right now, we have:

    - 3 core projects
    - 60 application projects
    - 80 module projects

    To reduce copying and pasting between projects, we're going to factor out common functionality (Data Access, Business Logic) into separate projects. I'd also like to introduce unit tests, which is going to increase the number of projects even more. We've already reached the point where Visual Studio is choking on the number of projects. We generally only load the 3 core projects and then whatever app's/module's project we're working on. Would a different solution structure help us out? Our number of projects is only going to increase. In general, an app or module only references the 3 core projects. Soon, apps/modules may start referencing the Data Access/Business Logic projects. But in general, apps and modules do not reference one another. So to recap, what is the best practice for solution structure when there are MANY projects that use a small number of core projects?

    Read the article

  • Spreadsheet_Excel_Writer data output is damaged

    - by dr3w
    I use Spreadsheet_Excel_Writer to generate an .xls file and it works fine until I have to deal with a large amount of data. At a certain stage it just writes some nonsense characters and stops filling certain columns, although some columns are filled right to the end (generally numeric data). I'm not quite sure how the .xls document is formed: row by row, or column by column... It is obviously not an error in a particular string, because when I cut out some data the error appears a little bit further on. I think there is no need for all of my code; here are the essentials:

        $filename = 'file.xls';
        $workbook = & new Spreadsheet_Excel_Writer();
        $workbook->setVersion(8);
        $contents =& $workbook->addWorksheet('Logistics');
        $contents->setInputEncoding('UTF-8');
        $workbook->send($filename);

        // here is the part where I write data down
        $contents->write(0, 0, 'Field A');
        $contents->write(0, 1, 'Field B');
        $contents->write(0, 2, 'Field C');

        $ROW = 1;
        foreach ($ordersArr as $key => $val) {
            $contents->write($ROW, 0, $val['a']);
            $contents->write($ROW, 1, $val['b']);
            $contents->write($ROW, 2, $val['c']);
            $ROW++;
        }

        $workbook->close();

    Read the article

  • MySQL managing catalogue views

    - by Mark Lawrence
    A friend of mine has a catalogue that currently holds about 500 rows, or 500 items. We are looking at ways to report on the catalogue, including the number of times an item was viewed and the dates on which it was viewed. His site is averaging around 25,000 page impressions per month, and if we assume for a minute that half of these were catalogue items, then we'd expect roughly 12,000 catalogue item views each month. My question is about the best way to manage item views in the database.

    The first option is to insert the catalogue ID into a table and then increment the number of times it is viewed. The advantage of this is its compact nature: there will only ever be as many rows in the table as there are catalogue items.

        `catalogue_id`, `views`

    The disadvantage is that no date information is being held, short of maintaining the last time an item was viewed.

    The second option is to insert a new row each time an item is viewed.

        `catalogue_id`, `timestamp`

    If we continue with the assumed figure of 12,000 item views, that means adding 12,000 rows to the table each month, or 144,000 rows each year. The advantage of this is that we know the number of times an item is viewed and also the dates on which it was viewed. The disadvantage is the size of the table. Is a table with 144,000 rows becoming too large for MySQL? Interested to hear any thoughts or suggestions on how to achieve this. Thanks.
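
    A hedged sketch of what the second option might look like in MySQL (the table and column names are invented for illustration), including the counting query the reports would need:

        -- Sketch: one row per view, reports aggregate on demand.
        CREATE TABLE catalogue_view (
            catalogue_id INT NOT NULL,
            viewed_at    TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
            INDEX idx_catalogue_date (catalogue_id, viewed_at)
        );

        -- record a view
        INSERT INTO catalogue_view (catalogue_id) VALUES (42);

        -- views per item for a given month
        SELECT catalogue_id, COUNT(*) AS views
        FROM catalogue_view
        WHERE viewed_at >= '2010-05-01' AND viewed_at < '2010-06-01'
        GROUP BY catalogue_id;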

    Read the article

  • application specific seed data population

    - by user339108
    Env: JBoss, (H2, MySQL, PostgreSQL), JPA, Hibernate 3.3.x

        @Id
        @GeneratedValue(strategy = IDENTITY)
        private Integer key;

    Currently our primary keys are created using the above annotation. We expect to support a large number of users (~ a million), so what key type should be used: Integer or Long, or should I use the unsigned versions of the above declarations? We have a J2EE application which needs to be populated with some seed data on installation. On purchase, the customer creates his own data on top of the application. We just want to make sure that there is enough room to ship, modify or add data in future releases. What would be the best mechanism to support this? We had looked at starting all table identifiers from a certain id (say 1000), but this mandates changing primary key generation to table- or sequence-based generators, and we have around ~100 tables. We are not sure if this is the right strategy. If we use a signed integer approach for the key, would it make sense to have the seed data start from 0 and go downwards (i.e. negative numbers), so that all customer-specific data sits at 0 and above (i.e. positive numbers)?

    Read the article

  • A generic C++ library that provides QtConcurrent functionality?

    - by Lucas
    QtConcurrent is awesome. I'll let the Qt docs speak for themselves: QtConcurrent includes functional programming style APIs for parallel list processing, including a MapReduce and FilterReduce implementation for shared-memory (non-distributed) systems, and classes for managing asynchronous computations in GUI applications.

    For instance, you give QtConcurrent::map() an iterable sequence and a function that accepts items of the type stored in the sequence, and that function is applied to all the items in the collection. This is done in a multi-threaded manner, with a thread pool equal to the number of logical CPUs on the system. There are plenty of other functions in QtConcurrent, like filter(), filteredReduced(), etc. - the standard CompSci map/reduce functions and the like.

    I'm totally in love with this, but I'm starting work on an OSS project that will not be using the Qt framework. It's a library, and I don't want to force others to depend on such a large framework as Qt. I'm trying to keep external dependencies to a minimum (it's the decent thing to do). I'm looking for a generic C++ framework that provides me with the same/similar high-level primitives that QtConcurrent does. AFAIK Boost has nothing like this (I may be wrong, though). boost::thread is very low-level compared to what I'm looking for. I know C# has something very similar with their Parallel Extensions, so I know this isn't a Qt-only idea. What do you suggest I use?

    Read the article

  • byte + byte = int... why?

    - by Robert C. Cartaino
    Looking at this C# code...

        byte x = 1;
        byte y = 2;
        byte z = x + y; // ERROR: Cannot implicitly convert type 'int' to 'byte'

    The result of any math performed on byte (or short) types is implicitly cast back to an integer. The solution is to explicitly cast the result back to a byte, so...

        byte z = (byte)(x + y); // works

    What I am wondering is why? Is it architectural? Philosophical? We have:

        int + int = int
        long + long = long
        float + float = float
        double + double = double

    So why not the following?

        byte + byte = byte
        short + short = short

    A bit of background: I am performing a long list of calculations on "small numbers" (i.e. < 8) and storing the intermediate results in a large array. Using a byte array (instead of an int array) is faster (because of cache hits). But the extensive byte casts spread through the code make it that much more unreadable.

    Read the article

  • Avoiding resource (localizable string) duplication with String.Format

    - by Hrvoje Prgeša
    I'm working on an application (.NET, but not relevant) where there is large potential for resource/string duplication - most of these strings are simple, like:

        Volume: 33
        Volume: 33 (dB)
        Volume 33 dB
        Volume (dB)
        Command - Volume: 33 (dB)

    where X, Y and unit are the same. Should I define a new resource for each of the strings, or is it preferable to use String.Format to simplify some of these, e.g.:

        String.Format("{0}: {1}", Resource.Volume, 33)
        String.Format("{0}: {1} {2}", Resource.Volume, 33, Resource.dB)
        Resource.Volume
        String.Format("{0} ({1})", 33, Resource.dB)
        String.Format("{0} ({1})", Resource.Volume, Resource.dB)
        String.Format("Command - {0}: {1} {2}", Resource.Volume, 33, Resource.dB)

    I would also define string formats like "{0}: {1}" in the resources so there would be a possibility of reordering words... I would use this approach selectively, not throughout the whole application. And how about:

        String.Format("{0}: {1}", Volume, Resource.Muted_Volume)                      // = Volume: Muted
        Resource.Muted_Volume
        String.Format("{0}: {1} (by user {2})", Volume, Resource.Muted_Volume, "xy")  // = Volume: Muted (by user xy)

    The advantage is cutting the number of resources by a factor of 4-5. Are there any hidden dangers in this approach? Could someone give me an example (language) where this would not work correctly?

    Read the article

  • What's so bad about building XML with string concatenation?

    - by wsanville
    In the thread “What’s your favorite ‘programmer ignorance’ pet peeve?”, the following answer appears with a large number of upvotes:

        Programmers who build XML using string concatenation.

    My question is, why is building XML via string concatenation (such as with a StringBuilder in C#) bad? I've done this several times in the past, as it's sometimes the quickest way for me to get from point A to point B when it comes to the data structures/objects I'm working with. So far, I have come up with a few reasons why this isn't the greatest approach, but is there something I'm overlooking? Why should this be avoided?

    - Probably the biggest reason I can think of is that you need to escape your strings manually, and most programmers will forget this. It will work great for them when they test it, but then "randomly" their apps will fail when someone throws an & symbol into their input somewhere. OK, I'll buy this, but it's really easy to prevent the problem (SecurityElement.Escape, to name one way).
    - When I do this, I usually omit the XML declaration (i.e. <?xml version="1.0"?>). Is this harmful?
    - Performance penalties? If you stick with proper string concatenation (i.e. StringBuilder), is this anything to be concerned about? Presumably, a class like XmlWriter will also need to do a bit of string manipulation...
    - There are more elegant ways of generating XML, such as using XmlSerializer to automatically serialize/deserialize your classes. OK, sure, I agree. C# has a ton of useful classes for this, but sometimes I don't want to make a class for something really quick, like writing out a log file or something. Is this just me being lazy? If I am doing something "real" this is my preferred approach for dealing with XML.
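
    For illustration, a minimal C# sketch of the two approaches mentioned above - manual escaping with string concatenation versus letting XmlWriter handle it - might look like this (the class and element names are made up):

        using System;
        using System.Security;
        using System.Text;
        using System.Xml;

        class XmlBuildDemo
        {
            static void Main()
            {
                string userInput = "Tom & Jerry <LLC>";

                // String concatenation: escaping is the caller's responsibility.
                var sb = new StringBuilder();
                sb.Append("<customer>")
                  .Append(SecurityElement.Escape(userInput))
                  .Append("</customer>");
                Console.WriteLine(sb.ToString());

                // XmlWriter: escaping happens automatically.
                var output = new StringBuilder();
                using (var writer = XmlWriter.Create(output, new XmlWriterSettings { OmitXmlDeclaration = true }))
                {
                    writer.WriteElementString("customer", userInput);
                }
                Console.WriteLine(output.ToString());
            }
        }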

    Read the article

  • Understanding REST through an example

    - by grifaton
    My only real exposure to the ideas of REST has been through Ruby on Rails' RESTful routing. This has suited me well for the kind of CRUD-based applications I have built with Rails, but consequently my understanding of RESTfulness is somewhat limited.

    Let's say we have a finite collection of Items, each of which has a unique ID and a number of properties, such as colour, shape, and size (which might be undefined for some Items). Items can be used by a client for a period of time, but each Item can only be used by one client at once. Access to Items is regulated by a server. Clients can request the temporary use of certain Items from the server. Usually, clients will only be interested in getting access to a number of Items with particular properties, rather than access to specific Items. When a client requests use of a number of Items, the server responds with a list of IDs corresponding to the request, or with a response that says that the requested Items are not currently available or do not exist. A client can make the following kinds of request:

    - Tell me how many green triangle Items there are (in total/available).
    - Give me use of 200 large red Items.
    - I have finished with Items 21, 23, 23.
    - Add 100 new red square Items.
    - Delete 50 small green Items.
    - Modify all big yellow pentagon Items to be blue.

    The toy example above is like a resource allocation problem I have had to deal with recently. How should I go about thinking about it RESTfully?

    Read the article

  • Variable not accessible within an if statment

    - by Chris
    I have a variable which holds a score for a game. The variable is accessible and correct outside of an if statement but not inside it, as shown below. score is declared at the top of main.cpp and calculated in the display function, which also contains this code:

        cout << score << endl; // works
        if (!justFinished) {
            cout << score << endl; // doesn't work, prints a large negative number
            endTime = time(NULL);
            ifstream highscoreFile;
            highscoreFile.open("highscores.txt");
            if (highscoreFile.good()) {
                highscoreFile.close();
            } else {
                std::ofstream outfile("highscores.txt");
                cout << score << endl;
                outfile << score << std::endl;
                outfile.close();
            }
            justFinished = true;
        }
        cout << score << endl; // works

    Read the article

  • Working with a list, performing arithmetic logic in Python

    - by haea ohoh
    Suppose I have made a large list of numbers, and I want to make another one which I will add, pairwise, with the first list. Here's the first list, A:

        [109, 77, 57, 34, 94, 68, 96, 72, 39, 67, 49, 71, 121, 89, 61, 84, 45, 40, 104, 68, 54, 60, 68, 62, 91, 45, 41, 118, 44, 35, 53, 86, 41, 63, 111, 112, 54, 34, 52, 72, 111, 113, 47, 91, 107, 114, 105, 91, 57, 86, 32, 109, 84, 85, 114, 48, 105, 109, 68, 57, 78, 111, 64, 55, 97, 85, 40, 100, 74, 34, 94, 78, 57, 77, 94, 46, 95, 60, 42, 44, 68, 89, 113, 66, 112, 60, 40, 110, 89, 105, 113, 90, 73, 44, 39, 55, 108, 110, 64, 108]

    And here's B:

        [35, 106, 55, 61, 81, 109, 82, 85, 71, 55, 59, 38, 112, 92, 59, 37, 46, 55, 89, 63, 73, 119, 70, 76, 100, 49, 117, 77, 37, 62, 65, 115, 93, 34, 107, 102, 91, 58, 82, 119, 75, 117, 34, 112, 121, 58, 79, 69, 68, 72, 110, 43, 111, 51, 102, 39, 52, 62, 75, 118, 62, 46, 74, 77, 82, 81, 36, 87, 80, 56, 47, 41, 92, 102, 101, 66, 109, 108, 97, 49, 72, 74, 93, 114, 55, 116, 66, 93, 56, 56, 93, 99, 96, 115, 93, 111, 57, 105, 35, 99]

    How might I implement the arithmetic addition, processing each pair of values one by one (A[0] and B[0] through A[99] and B[99]) and producing the list C (A[0] + B[0] through A[99] + B[99])?
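
    A minimal sketch of the pairwise sum, assuming both lists have the same length (A and B as given above):

        # Element-wise sum of two equal-length lists.
        C = [a + b for a, b in zip(A, B)]

        # Equivalent explicit loop, if the comprehension feels too terse:
        C = []
        for i in range(len(A)):
            C.append(A[i] + B[i])

        print(C[:5])   # [144, 183, 112, 95, 175]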

    Read the article

  • Application that depends heavily on stored procedures

    - by PieterG
    We currently have an application that depends largely on stored procedures, with heavy use of temp tables. It's an extremely large application. Facing this situation, I would like to use Entity Framework or Linq2Sql for a rewrite. I might also consider using Fluent Hibernate or Subsonic, as I've used them quite extensively in the past. I've had problems with Linq2Sql generating the return types for the stored procedures because of the use of temp tables, and I think it's cumbersome to go and change all the stored procedures from temp tables to in-memory tables. Considering the two choices I want to make, which of the two is the better route to go, and why? If my choices are extremely idiotic, please provide alternatives. Edit: The reason for the question and the change is that the data access layer is non-existent and was built 10 years ago. We currently still run into a lot of issues with it. I don't want to divulge too much, but if you saw it, your eyes would start bleeding :)

    Read the article
