Search Results

Search found 21702 results on 869 pages for 'large objects'.


  • java: <identifier> expected with ArrayList

    - by A-moc
    I have a class named Storage. Storage contains an ArrayList of special objects called Products. Each Product contains information such as name, price, etc. My code is as follows:

        class Storage {
            Product sprite = new Product("sprite", 1.25, 30);
            Product pepsi = new Product("pepsi", 1.85, 45);
            Product orange = new Product("orange", 2.25, 36);
            Product hershey = new Product("hershey", 1.50, 33);
            Product brownie = new Product("brownie", 2.30, 41);
            Product apple = new Product("apple", 2.00, 15);
            Product crackers = new Product("peanut", 3.90, 68);
            Product trailmix = new Product("trailmix", 1.90, 45);
            Product icecream = new Product("icecream", 1.65, 28);
            Product doughnut = new Product("doughnut", 2.75, 18);
            Product banana = new Product("banana", 1.25, 32);
            Product coffee = new Product("coffee", 1.30, 40);
            Product chips = new Product("chips", 1.70, 35);

            ArrayList<Product> arl = new ArrayList<Product>();

            // add initial elements to arraylist
            arl.add(sprite);
            arl.add(pepsi);
            arl.add(orange);
            arl.add(hershey);
            arl.add(brownie);
            arl.add(apple);
            arl.add(peanut);
            arl.add(trailmix);
            arl.add(icecream);
            arl.add(doughnut);
            arl.add(banana);
            arl.add(coffee);
            arl.add(chips);
        }

    Whenever I compile, I get an error message on lines 141-153 stating <identifier> expected. I know it's an elementary problem, but I can't seem to figure this out. Any help is much appreciated.
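
    The add() calls sit directly in the class body, where Java allows only declarations and initializers - that is what the compiler's <identifier> expected errors on those lines mean. (A second problem is waiting behind it: arl.add(peanut) references a variable that was declared as crackers.) A minimal sketch of one fix, moving the calls into an instance initializer block:

        import java.util.ArrayList;

        class Storage {
            Product sprite = new Product("sprite", 1.25, 30);
            Product crackers = new Product("peanut", 3.90, 68);
            // ... the remaining Product fields as above ...

            ArrayList<Product> arl = new ArrayList<Product>();

            {
                // Executable statements must live inside a method, constructor,
                // or initializer block - not directly in the class body.
                arl.add(sprite);
                arl.add(crackers); // the field is named crackers, not peanut
                // ... add the rest the same way ...
            }
        }

    A constructor would work equally well; the initializer block just keeps the adds next to the declarations.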

  • WCF Streaming not working at server

    - by Radhi
    Hi, I have used a WCF service to transfer large files in chunks to the server. For that I referenced this article: http://kjellsj.blogspot.com/2007/02/wcf-streaming-upload-files-over-http.html
    I have configured my application on IIS on my machine and it works fine there; it allows up to a 64 MB file upload. But once we published the site, it allows a maximum 30 MB file. If I try to upload more than that, I get error 404 - resource not found. Here is the binding config I have used:

        <basicHttpBinding>
          <!-- buffer: 64KB; max size: 64MB -->
          <binding name="FileTransferServicesBinding"
                   closeTimeout="00:01:00" openTimeout="00:01:00"
                   receiveTimeout="00:10:00" sendTimeout="00:01:00"
                   transferMode="Streamed" messageEncoding="Mtom"
                   maxBufferSize="65536" maxReceivedMessageSize="67108864">
            <security mode="None">
              <transport clientCredentialType="None"/>
            </security>
          </binding>
        </basicHttpBinding>

    Please suggest where I am missing anything, and if more code is required please let me know. Thanks in advance.
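
    The ~30 MB ceiling on the published server is a strong hint: IIS 7 request filtering rejects bodies over maxAllowedContentLength, which defaults to 30,000,000 bytes, and it reports the rejection as 404.13 - exactly the 404 seen here. ASP.NET separately caps uploads via httpRuntime maxRequestLength (in KB). A hedged guess at the web.config additions needed to allow 64 MB:

        <system.web>
          <!-- value in KB: 65536 KB = 64 MB -->
          <httpRuntime maxRequestLength="65536" />
        </system.web>
        <system.webServer>
          <security>
            <requestFiltering>
              <!-- value in bytes: 64 MB -->
              <requestLimits maxAllowedContentLength="67108864" />
            </requestFiltering>
          </security>
        </system.webServer>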

  • XML streaming with XProc.

    - by Pierre
    Hi all, I'm playing with XProc, the XML pipeline language, and http://xmlcalabash.com/. I'd like to find an example for streaming large XML documents. For example, given the following huge XML document:

        <Books>
          <Book><title>Book-1</title></Book>
          <Book><title>Book-2</title></Book>
          <Book><title>Book-3</title></Book>
          <!-- many many.... -->
          <Book><title>Book-N</title></Book>
        </Books>

    How should I proceed to loop (streaming) over the N documents like

        <Books>
          <Book><title>Book-x</title></Book>
        </Books>

    and treat each document with an XSLT? Is it possible with XProc?
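
    A sketch of the usual XProc shape for this, assuming an XProc 1.0 processor such as Calabash and a hypothetical book.xsl; p:for-each turns each selected Book into a standalone document and runs the subpipeline once per document (whether the processor truly streams, rather than building the whole input tree, is implementation-dependent):

        <p:declare-step xmlns:p="http://www.w3.org/ns/xproc" version="1.0">
          <p:input port="source"/>
          <p:output port="result" sequence="true"/>

          <p:for-each>
            <!-- each matched Book becomes the current document of one iteration -->
            <p:iteration-source select="/Books/Book"/>
            <p:xslt>
              <p:input port="stylesheet">
                <p:document href="book.xsl"/>
              </p:input>
              <p:input port="parameters">
                <p:empty/>
              </p:input>
            </p:xslt>
          </p:for-each>
        </p:declare-step>

    Note that each iteration's document has Book as its root (not wrapped in Books), so the stylesheet should match the bare Book element.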

  • Unable to load huge XML document (incorrectly suppose it's due to the XSLT processing)

    - by krisvandenbergh
    I'm trying to match certain elements using XSLT. My input document is very large and the source XML fails to load after processing the following code (consider especially the first line).

        <xsl:template match="XMI/XMI.content/Model_Management.Model/Foundation.Core.Namespace.ownedElement/Model_Management.Package/Foundation.Core.Namespace.ownedElement">
          <rdf:RDF>
            <rdf:Description rdf:about="">
              <xsl:for-each select="Foundation.Core.Class">
                <xsl:for-each select="Foundation.Core.ModelElement.name">
                  <owl:Class rdf:ID="@Foundation.Core.ModelElement.name" />
                </xsl:for-each>
              </xsl:for-each>
            </rdf:Description>
          </rdf:RDF>
        </xsl:template>

    Apparently the XSLT fails to load after "Model_Management.Model". The PHP code is as follows:

        if ($xml->loadXML($source_xml) == false) {
            die('Failed to load source XML: ' . $http_file);
        }

    It then fails to perform loadXML and immediately dies. I think there are two options now: 1) I should set a maximum execution time; frankly, I don't know how to do this for the built-in PHP 5 XSLT processor. 2) Think about another way to match. What would be the best way to deal with this? The input document can be found at http://krisvandenbergh.be/uml_pricing.xml
    Any help would be appreciated! Thanks.
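
    Before restructuring the XSLT, it may be worth confirming what actually kills the load - for a document that large, the default memory limit is a likelier culprit than the stylesheet. A short diagnostic sketch (limit values are illustrative):

        // Assumption: the load dies on resource limits, not malformed XML.
        ini_set('memory_limit', '512M');
        set_time_limit(300);              // maximum execution time, in seconds

        libxml_use_internal_errors(true); // collect parse errors instead of warnings
        $xml = new DOMDocument();
        if ($xml->loadXML($source_xml) === false) {
            foreach (libxml_get_errors() as $error) {
                echo $error->message, "\n"; // says *why* the parse failed
            }
        }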

  • how to get cartesian products between database and local sequences in linq?

    - by JD
    I saw this similar question here but can't figure out how to use Contains in a Cartesian-product-desired-result situation: http://stackoverflow.com/questions/1712105/linq-to-sql-exception-local-sequence-cannot-be-used-in-linq-to-sql-implementatio
    Let's say I have the following:

        var a = new [] { 1, 4, 7 };
        var b = new [] { 2, 5, 8 };

        var test = from i in a
                   from j in b
                   select new
                   {
                       A = i,
                       B = j,
                       AB = string.Format("{0:00}a{1:00}b", i, j),
                   };

        foreach (var t in test)
            Console.Write("{0}, ", t.AB);

    This works great and I get a dump like so (note, I want the Cartesian product):

        01a02b, 01a05b, 01a08b, 04a02b, 04a05b, 04a08b, 07a02b, 07a05b, 07a08b,

    Now what I really want is to take this and Cartesian-product it again against an ID from a database table I have. But as soon as I add one more from clause that references a SQL table instead of objects, I get an error. So, altering the above to something like this, where db is defined as a new DataContext (i.e., a class deriving from System.Data.Linq.DataContext):

        var a = new [] { 1, 4, 7 };
        var b = new [] { 2, 5, 8 };

        var test = from symbol in db.Symbols
                   from i in a
                   from j in b
                   select new
                   {
                       A = i,
                       B = j,
                       AB = string.Format("{0}{1:00}a{2:00}b", symbol.ID, i, j),
                   };

        foreach (var t in test)
            Console.Write("{0}, ", t.AB);

    The error I get is the following:

        Local sequence cannot be used in LINQ to SQL implementations of query operators except the Contains operator

    It's apparently related to not using Contains, but I'm unsure how Contains would be used when I don't really want to constrict the results - I want the Cartesian product for my situation. Any ideas how to use Contains above and still yield the Cartesian product when joining database and local sequences?
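
    One hedged workaround is to sidestep Contains entirely: run the SQL part first, pulling just the IDs into memory, and then form the Cartesian product with LINQ to Objects:

        // One round-trip to the database; everything below runs in memory.
        var ids = db.Symbols.Select(s => s.ID).ToList();

        var test = from id in ids
                   from i in a
                   from j in b
                   select new
                   {
                       A = i,
                       B = j,
                       AB = string.Format("{0}{1:00}a{2:00}b", id, i, j),
                   };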

  • PHP - warning - Undefined property: stdClass - fix?

    - by Phill Pafford
    I get this warning in my error logs and wanted to know how to correct this issue in my code.

        Warning: PHP Notice: Undefined property: stdClass::$records in script.php on line 440

    Some code:

        // Parse object to get account id's
        // The response doesn't have the records attribute sometimes.
        $role_arr = getRole($response->records); // Line 440

    Response if records exists:

        stdClass Object
        (
            [done] => 1
            [queryLocator] =>
            [records] => Array
                (
                    [0] => stdClass Object
                        (
                            [type] => User
                            [Id] =>
                            [any] => stdClass Object
                                (
                                    [type] => My Role
                                    [Id] =>
                                    [any] => <sf:Name>My Name</sf:Name>
                                )
                        )
                )
            [size] => 1
        )

    Response if records does not exist:

        stdClass Object
        (
            [done] => 1
            [queryLocator] =>
            [size] => 0
        )

    I was thinking of something like array_key_exists() functionality but for objects - anything? Or am I going about this the wrong way?
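
    For objects, property_exists() plays the role that array_key_exists() plays for arrays - a minimal guard around line 440 might look like this:

        if (property_exists($response, 'records')) {
            $role_arr = getRole($response->records);
        } else {
            $role_arr = array(); // sensible default when no records came back
        }

    isset($response->records) works too, with one caveat: it also returns false when the property exists but holds null.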

  • LINQ query code for complex merging of data.

    - by Stacey
    I've posted this before, but I worded it poorly, so I'm trying again with a more well-thought-out structure.
    I have the following code and I am trying to figure out the shorter LINQ expression to do it 'inline'. Please examine the Run() method near the bottom. I am attempting to understand how to join two dictionaries together based on a matching identifier in one of the objects, so that I can use the query in this sort of syntax:

        var selected = from a in items.List()
                       // etc. etc.
                       select a;

    This is my class structure. The Run() method is what I am trying to simplify. I basically need to do this conversion inline in a couple of places, and I wanted to simplify it a great deal so that I can define it more 'cleanly'.

        class TModel
        {
            public Guid Id { get; set; }
        }

        class TModels : List<TModel> { }

        class TValue { }

        class TStorage
        {
            public Dictionary<Guid, TValue> Items { get; set; }
        }

        class TArranged
        {
            public Dictionary<TModel, TValue> Items { get; set; }
        }

        static class Repository
        {
            static public TItem Single<TItem, TCollection>(Predicate<TItem> expression)
            {
                return default(TItem); // access logic.
            }
        }

        class Sample
        {
            public void Run()
            {
                TStorage tStorage = new TStorage();
                // access tStorage logic here.
                Dictionary<TModel, TValue> d = new Dictionary<TModel, TValue>();
                foreach (KeyValuePair<Guid, TValue> kv in tStorage.Items)
                {
                    d.Add(Repository.Single<TModel, TModels>(m => m.Id == kv.Key), kv.Value);
                }
            }
        }
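
    The whole Run() loop collapses into a single ToDictionary call - a sketch, assuming Repository.Single resolves each Guid to its TModel exactly as in the loop above:

        // Key selector looks up the model for each Guid; value selector
        // passes the TValue through unchanged.
        Dictionary<TModel, TValue> d = tStorage.Items.ToDictionary(
            kv => Repository.Single<TModel, TModels>(m => m.Id == kv.Key),
            kv => kv.Value);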

  • Getting started with massive data

    - by Max
    I'm a math guy and occasionally do some statistics/machine-learning analysis consulting projects on the side. The data I have access to is usually on the smaller side, at most a couple hundred megabytes (and almost always far less), but I want to learn more about handling and analyzing data on the gigabyte/terabyte scale. What do I need to know, and what are some good resources to learn from?

    - Hadoop/MapReduce is one obvious start. Is there a particular programming language I should pick up? (I primarily work now in Python, Ruby, R, and occasionally Java, but it seems like C and Clojure are often used for large-scale data analysis?)
    - I'm not really familiar with the whole NoSQL movement, except that it's associated with big data. What's a good place to learn about it, and is there a particular implementation (Cassandra, CouchDB, etc.) I should get familiar with?
    - Where can I learn about applying machine learning algorithms to huge amounts of data? My math background is mostly on the theory side, definitely not on the numerical or approximation side, and I'm guessing most of the standard ML algorithms don't really scale.

    Any other suggestions on things to learn would be great!

  • Python access an object byref / Need tagging

    - by Aaron C. de Bruyn
    I need to suck data from stdin and create an object. The incoming data is between 5 and 10 lines long. Each line has a process number and either an IP address or a hash. For example:

        pid=123 ip=192.168.0.1 - some data
        pid=123 hash=ABCDEF0123 - more data
        hash=ABCDEF123 - More data
        ip=192.168.0.1 - even more data

    I need to put this data into a class like:

        class MyData():
            pid = None
            hash = None
            ip = None
            lines = []

    I need to be able to look up the object by IP, HASH, or PID. The tough part is that there are multiple streams of data intermixed coming from stdin. (There could be hundreds or thousands of processes writing data at the same time.) I have regular expressions pulling out the PID, IP, and HASH that I need, but how can I access the object by any of those values? My thought was to do something like this:

        myarray = {}
        for line in sys.stdin.readlines():
            if pid and ip:  # If we can get a PID out of the line
                # Create a new MyData object, assign the PID, and stick it
                # in myarray accessible by PID.
                myarray[pid] = MyData().pid = pid
                myarray[pid].ip = ip             # Add the IP address to the new object
                myarray[pid].lines.append(data)  # Append the data
                # Take the object by PID and create a key from the IP.
                myarray[ip] = myarray[pid]
            # do something similar for pid and hash, hash and ip, etc...

    This gives me an array with two keys (a PID and an IP) and they both point to the same object. But on the next iteration of the loop, if I find (for example) an IP and HASH and do:

        myarray[hash] = myarray[ip]

    the following is False:

        myarray[hash] == myarray[ip]

    Hopefully that was clear. I hate to admit that waaay back in the VB days, I remember being able to handle objects byref instead of byval. Is there something similar in Python? Or am I just approaching this wrong?
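
    Python names and dict slots are already references - assigning the same object under several keys is exactly the aliasing wanted here. The catch in the snippet above is the chained assignment: myarray[pid] = MyData().pid = pid stores the integer pid in the dict, not the new MyData instance. A short sketch of the intended pattern (it also moves lines into __init__, since a class-level list would be shared by every instance):

        class MyData(object):
            def __init__(self):
                self.pid = None
                self.hash = None
                self.ip = None
                self.lines = []   # per-instance, not shared across objects

        objects = {}

        obj = MyData()            # one object...
        obj.pid = 123
        obj.ip = '192.168.0.1'
        objects[obj.pid] = obj    # ...reachable under several keys
        objects[obj.ip] = obj

        assert objects[123] is objects['192.168.0.1']   # same object, by reference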

  • LINQ Join on Dictionary<K,T> where only K is changed.

    - by Stacey
    Assume types TModel, TKey, and TValue. In a dictionary where KeyValuePair<TKey, TValue> is declared, I need to merge TKey into a separate model of KeyValuePair<TModel, TValue>, where TKey in the original dictionary refers to an identifier in a list of TModel that will replace the item in the dictionary.

        public class TModel
        {
            public Guid Id { get; set; }
            // ...
        }

    A Dictionary<Guid, TValue> contains the elements. TValue relates to the TModel. The serialized/stored object is like this:

        public class SerializedModel
        {
            public Dictionary<Guid, TValue> Items { get; set; }
        }

    So I need to construct a new model:

        public class KeyValueModel
        {
            public Dictionary<TModel, TValue> Items { get; set; }
        }

        KeyValueModel kvm =
            (from tModels in controller.ModelRepository.List<Models.Models>()
             join matchingModels in storedInformation.Items
                 on tModels.Id equals matchingModels
             select tModels).ToDictionary(c => c.Id, storedInformation.Items.Values);

    This LINQ query isn't doing what I want, but I think I'm at least headed in the right direction. Can anyone assist with the query? The original object is stored as a KeyValuePair<Guid, TValue>. I need to merge the Guid keys in the dictionary to their actual related objects in another object (a List<TModel>) so that the final result is KeyValuePair<TModel, TValue>. As for what the query is not doing for me: it isn't compiling or running. It just says that "Join is not valid".
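
    The join fails because the two sides have different types: tModels.Id is a Guid, while matchingModels is a whole KeyValuePair<Guid, TValue>, so equals has nothing to match. Joining on the pair's Key lines the types up - a hedged sketch, keeping the original repository call:

        var items = (from model in controller.ModelRepository.List<Models.Models>()
                     join entry in storedInformation.Items
                         on model.Id equals entry.Key   // Guid on both sides now
                     select new { model, entry.Value })
                    .ToDictionary(x => x.model, x => x.Value);

        KeyValueModel kvm = new KeyValueModel { Items = items };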

  • Is there a way to use Linq projections with extension methods

    - by Acoustic
    I'm trying to use AutoMapper and a repository pattern along with a fluent interface, and running into difficulty with the LINQ projection. For what it's worth, this code works fine when simply using in-memory objects. When using a database provider, however, it breaks when constructing the query graph. I've tried both SubSonic and LINQ to SQL with the same result. Thanks for your ideas.
    Here's an extension method used in all scenarios. It's the source of the problem, since everything works fine without using extension methods:

        public static IQueryable<MyUser> ByName(this IQueryable<MyUser> users, string firstName)
        {
            return from u in users
                   where u.FirstName == firstName
                   select u;
        }

    Here's the in-memory code that works fine:

        var userlist = new List<User> { new User { FirstName = "Test", LastName = "User" } };
        Mapper.CreateMap<User, MyUser>();

        var result = (from u in userlist
                      select Mapper.Map<User, MyUser>(u))
                     .AsQueryable()
                     .ByName("Test");

        foreach (var x in result)
        {
            Console.WriteLine(x.FirstName);
        }

    Here's the same thing using SubSonic (or LINQ to SQL or whatever) that fails. This is what I'd like to make work somehow with extension methods:

        Mapper.CreateMap<User, MyUser>();

        var result = from u in new DataClasses1DataContext().Users
                     select Mapper.Map<User, MyUser>(u);

        var final = result.ByName("Test");

        foreach (var x in final) // Fails here when the query graph is built.
        {
            Console.WriteLine(x.FirstName);
        }

    The goal here is to avoid having to manually map the generated "User" object to the "MyUser" domain object - in other words, I'm trying to find a way to use AutoMapper so I don't have this kind of mapping code everywhere a database read operation is needed:

        var result = from u in new DataClasses1DataContext().Users
                     select new MyUser // Can this be avoided with AutoMapper AND extension methods?
                     {
                         FirstName = u.FirstName,
                         LastName = u.LastName
                     };
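
    The failing query asks LINQ to SQL (or SubSonic) to translate Mapper.Map into SQL, which it cannot do, so any operator appended after that projection - like ByName - breaks the query graph. A hedged workaround: keep filters on the entity side, where they translate to SQL, then switch to LINQ to Objects before mapping:

        Mapper.CreateMap<User, MyUser>();

        var result = new DataClasses1DataContext().Users
            .Where(u => u.FirstName == "Test")          // translated and run in the database
            .AsEnumerable()                             // from here on it's LINQ to Objects
            .Select(u => Mapper.Map<User, MyUser>(u));  // mapping happens in memory

    The trade-off is that ByName has to be expressed against User (or duplicated for IEnumerable<MyUser>), since nothing placed after AsEnumerable() reaches the database.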

  • Call method immediately after object construction in LINQ query

    - by Steffen
    I've got some objects which implement this interface:

        public interface IRow
        {
            void Fill(DataRow dr);
        }

    Usually when I select something out of the db, I go:

        public IEnumerable<IRow> SelectSomeRows()
        {
            DataTable table = GetTableFromDatabase();
            foreach (DataRow dr in table.Rows)
            {
                IRow row = new MySQLRow(); // Disregard the MySQLRow type, it's not important
                row.Fill(dr);
                yield return row;
            }
        }

    Now with .NET 4, I'd like to use AsParallel, and thus LINQ. I've done some testing on it, and it speeds things up a lot (IRow.Fill uses reflection, so it's hard on the CPU). Anyway, my problem is: how do I go about creating a LINQ query which calls Fill as part of the query, so it's properly parallelized? For testing performance I created a constructor which took the DataRow as an argument; however, I'd really love to avoid this if somehow possible. With the constructor in place, it's obviously simple enough:

        public IEnumerable<IRow> SelectSomeRowsParallel()
        {
            DataTable table = GetTableFromDatabase();
            return from DataRow dr in table.Rows.AsParallel()
                   select new MySQLRow(dr);
        }

    However, like I said, I'd really love to be able to just stuff my Fill method into the LINQ query, and thus not need the constructor overload.
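
    A statement lambda does this without the extra constructor - a sketch of the same method in fluent syntax (query syntax has nowhere to put a statement, which is why the constructor felt necessary):

        public IEnumerable<IRow> SelectSomeRowsParallel()
        {
            DataTable table = GetTableFromDatabase();
            return table.Rows.Cast<DataRow>()
                .AsParallel()
                .Select(dr =>
                {
                    IRow row = new MySQLRow();
                    row.Fill(dr);   // runs on the worker threads, in parallel
                    return row;
                });
        }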

  • Copying a foreign Subversion repository to keep under dependencies

    - by Jonathan Sternberg
    I want to keep dependencies for my project in our own repository; that way we have consistent libraries for the entire team to work with. For example, I want our project to use the Boost libraries. I've seen this done in the past by putting dependencies under a "vendor" or "dependencies" folder. But I still want to be able to update these dependencies: if a new feature appears in a library and we need it, I want to just be able to update that repository within our own repository. I don't want to have to recopy it and put it under version control again. I'd also like for us to have the ability to change dependencies if a small change is needed, without stopping us from ever updating the library. I want the ability to do something like 'svn cp', then be able to 'svn merge' in the future. I just tried this with the Boost trunk, but I'm not able to get any history using 'svn log' on the copy I made. How do I do this? What is usually done for large projects with dependencies?
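
    One relevant detail: svn cp preserves history only within a single repository - copying from a foreign repository such as Boost's is effectively a plain import, which is why svn log shows nothing on the copy. The usual options are a vendor branch that you re-import on each upgrade (merging local changes forward), or svn:externals pinned to a fixed revision so every checkout pulls a known tree. A hedged sketch of the externals route (revision number and layout are illustrative):

        # Pin boost at a fixed revision under vendor/ so the whole team
        # gets a reproducible tree on every checkout:
        svn propset svn:externals 'boost -r41000 http://svn.boost.org/svn/boost/trunk' vendor
        svn commit -m "Pin boost external at r41000" vendor

    Externals don't allow local patches to the library, though; when you need local modifications plus future merges from upstream, the vendor-branch workflow is the standard answer.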

  • Database structure for ecommerce site

    - by imanc
    Hey guys, I have been tasked with designing an ecommerce solution. The aspect that is causing me the most problems is the database. Currently the site consists of 10+ country-based shops, each with its own database (all residing on the same MySQL instance). For the new site I'd rather all these shop databases be merged into one database, so that all tables (products, orders, customers, etc.) have a shop_id field. From a programming perspective this seems to make the most sense, as we won't have to manage data across multiple databases.
    Currently the entire site generates about 120k orders a year, but it is experiencing fairly heavy growth and we need to design a solution that will scale. In 5 years there may be more than a million orders per year and a database containing 5 years of order history (archiving may be a solution here). The question is: do we use a single database, or do we keep the database-per-shop structure? I am currently trying to find supporting evidence for either avenue. The company I am designing the solution for prefers the per-shop database structure because they believe it will allow the sites to scale. But my argument is that the shops' databases probably won't get so busy over the next few years that they exceed the capacity of a MySQL database on a "no expenses spared" hardware set-up.
    I am wondering if anyone has any advice either way? Does anyone have experience with websites / ecommerce sites that have tables containing millions of records? I know there is probably not a clear answer here, but at what stage do we have too many records or too-large table files to have a fast-loading site? Also, if anyone has any advice on sources of information - books, websites, etc. - where I can do further research, it would be highly appreciated!
    Cheers, imanc

  • Read/Write/Find/Replace huge csv file

    - by notapipe
    I have a huge (4.5 GB) csv file. I need to perform basic cut, paste, and replace operations on some columns. The data is pretty well organized; the only problem is I cannot play with it in Excel because of the size (2000 rows, 550000 columns). Here is some part of the data:

        ID,Affection,Sex,DRB1_1,DRB1_2,SENum,SEStatus,AntiCCP,RFUW,rs3094315,rs12562034,rs3934834,rs9442372,rs3737728
        D0024949,0,F,0101,0401,SS,yes,?,?,A_A,A_A,G_G,G_G
        D0024302,0,F,0101,7,SN,yes,?,?,A_A,G_G,A_G,?_?
        D0023151,0,F,0101,11,SN,yes,?,?,A_A,G_G,G_G,G_G

    What I need to do:
    - remove the 4th, 5th, 6th, 7th, 8th and 9th columns;
    - find every _ character from column 10 onwards and replace it with a space ( ) character;
    - replace every ? with zero (0);
    - replace every comma with a tab;
    - remove the first row (the one with the column names);
    - in the 2nd column, replace every 0 with 1, every 1 with 2 and every ? with 0;
    - in the 3rd column, replace F with 2, M with 1 and ? with 0;

    so that in the resulting file the output reads:

        D0024949 1 2 A A A A G G G G
        D0024302 1 2 A A G G A G 0 0
        D0023151 1 2 A A G G G G G G

    (Both input and output should read one line per row, with no extra blank rows.) Is there a memory-efficient way of doing this with Java (and I need code to do that), or a usable tool for playing with this large data so that I can easily apply Excel-like functionality?
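
    Since every rule is local to a single line, the file never needs to be in memory all at once - reading and writing line by line keeps usage near the size of one row (a few MB here). A sketch along those lines (file names are illustrative, and it assumes the recodings listed above are exhaustive):

        import java.io.*;

        public class CsvTransform {
            public static void main(String[] args) throws IOException {
                try (BufferedReader in = new BufferedReader(new FileReader("input.csv"));
                     PrintWriter out = new PrintWriter(new BufferedWriter(new FileWriter("output.txt")))) {
                    in.readLine(); // skip the header row
                    String line;
                    while ((line = in.readLine()) != null) {
                        String[] f = line.split(",", -1);            // -1 keeps trailing empties
                        StringBuilder sb = new StringBuilder(f[0]);  // column 1 (ID) kept as-is
                        // column 2: 0 -> 1, 1 -> 2, ? -> 0
                        sb.append('\t').append(f[1].equals("0") ? "1" : f[1].equals("1") ? "2" : "0");
                        // column 3: F -> 2, M -> 1, ? -> 0
                        sb.append('\t').append(f[2].equals("F") ? "2" : f[2].equals("M") ? "1" : "0");
                        // columns 4-9 are dropped; from column 10 on, _ -> space and ? -> 0
                        for (int i = 9; i < f.length; i++) {
                            sb.append('\t').append(f[i].replace('_', ' ').replace('?', '0'));
                        }
                        out.println(sb); // tab-separated, one output line per input line
                    }
                }
            }
        }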

  • Improving File Read Performance (single file, C++, Windows)

    - by david
    I have large (hundreds of MB or more) files that I need to read blocks from using C++ on Windows. Currently the relevant functions are:

        errorType LargeFile::read( void* data_out, __int64 start_position,
                                   __int64 size_bytes ) const
        {
            if( !m_open ) {
                // return error
            }
            else {
                seekPosition( start_position );

                DWORD bytes_read;
                BOOL result = ReadFile( m_file, data_out, DWORD( size_bytes ),
                                        &bytes_read, NULL );
                if( size_bytes != bytes_read || result != TRUE ) {
                    // return error
                }
            }
            // return no error
        }

        void LargeFile::seekPosition( __int64 position ) const
        {
            LARGE_INTEGER target;
            target.QuadPart = LONGLONG( position );
            SetFilePointerEx( m_file, target, NULL, FILE_BEGIN );
        }

    The performance of the above does not seem to be very good. Reads are on 4K blocks of the file. Some reads are coherent; most are not. A couple of questions: Is there a good way to profile the reads? What things might improve the performance? For example, would sector-aligning the data be useful? I'm relatively new to file I/O optimization, so suggestions or pointers to articles/tutorials would be helpful.
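
    One cheap experiment, assuming the class controls how m_file is opened: give CreateFile an access-pattern hint so the cache manager prefetches appropriately, and consider FILE_FLAG_NO_BUFFERING with sector-aligned 4K reads once profiling justifies it. A sketch (file name illustrative):

        // Mostly-sequential access: the cache manager will read ahead.
        // For scattered 4K reads, FILE_FLAG_RANDOM_ACCESS avoids useless readahead.
        HANDLE file = ::CreateFileW( L"data.bin", GENERIC_READ, FILE_SHARE_READ,
                                     NULL, OPEN_EXISTING,
                                     FILE_FLAG_SEQUENTIAL_SCAN, NULL );

    For profiling, bracketing each ReadFile call with QueryPerformanceCounter timestamps is a simple way to see whether the scattered (non-coherent) reads dominate the total time.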

  • Setting synthesized arrays causing memory leaks using nested arrays

    - by webtoad
    Hello: Why is the following code causing a memory leak in an iPhone app? All of the initted objects below leak, including the arrays, the strings and the numbers. So I'm thinking it has something to do with the synthesized array property not releasing the object when I set the property again on the second and subsequent times this piece of code is called. Here is the code ("controller" below is my custom view controller class, which I have a reference to, and am setting with this code snippet):

        sqlite3_stmt *statement;
        NSMutableArray *foo_IDs = [[NSMutableArray alloc] init];
        NSMutableArray *foo_Names = [[NSMutableArray alloc] init];
        NSMutableArray *foo_IDsBySection = [[NSMutableArray alloc] init];
        NSMutableArray *foo_NamesBySection = [[NSMutableArray alloc] init];

        // Get data:
        NSString *sql = @"select distinct p.foo_ID, p.foo_Name from foo as p ";
        if (sqlite3_prepare_v2(...) == SQLITE_OK) {
            while (sqlite3_step(statement) == SQLITE_ROW) {
                int p_id;
                NSString *foo_Name;
                p_id = sqlite3_column_int(statement, 0);
                char *str2 = (char *)sqlite3_column_text(statement, 1);
                foo_Name = [NSString stringWithCString:str2];
                [foo_IDs addObject:[NSNumber numberWithInt:p_id]];
                [foo_Names addObject:foo_Name];
            }
            sqlite3_finalize(statement);
        }

        // Pass the array itself into another array:
        // (normally there is more than one array in each array)
        [foo_IDsBySection addObject: foo_IDs];
        [foo_NamesBySection addObject: foo_Names];
        [foo_IDs release];
        [foo_Names release];

        // Set some synthesized properties (of type NSArray, nonatomic,
        // retain) in controller:
        controller.foo_IDsBySection = foo_IDsBySection;
        controller.foo_NamesBySection = foo_NamesBySection;
        [foo_IDsBySection release];
        [foo_NamesBySection release];

    Thanks for any help!

  • What is the difference between these two linq implementations?

    - by Mahesh Velaga
    I was going through Jon Skeet's "Reimplementing LINQ to Objects" series. In the article implementing Where, I found the following snippets, but I don't get what advantage we get by splitting the original method into two.
    Original method:

        // Naive validation - broken!
        public static IEnumerable<TSource> Where<TSource>(
            this IEnumerable<TSource> source,
            Func<TSource, bool> predicate)
        {
            if (source == null)
            {
                throw new ArgumentNullException("source");
            }
            if (predicate == null)
            {
                throw new ArgumentNullException("predicate");
            }
            foreach (TSource item in source)
            {
                if (predicate(item))
                {
                    yield return item;
                }
            }
        }

    Refactored method:

        public static IEnumerable<TSource> Where<TSource>(
            this IEnumerable<TSource> source,
            Func<TSource, bool> predicate)
        {
            if (source == null)
            {
                throw new ArgumentNullException("source");
            }
            if (predicate == null)
            {
                throw new ArgumentNullException("predicate");
            }
            return WhereImpl(source, predicate);
        }

        private static IEnumerable<TSource> WhereImpl<TSource>(
            this IEnumerable<TSource> source,
            Func<TSource, bool> predicate)
        {
            foreach (TSource item in source)
            {
                if (predicate(item))
                {
                    yield return item;
                }
            }
        }

    Jon says it's for eager validation, while deferring the rest. But I don't get it. Could someone please explain in a little more detail what the difference between these two functions is, and why the validations are performed eagerly in one and not the other?
    Conclusion/Solution: I got confused due to my lack of understanding of which functions are turned into iterators. I assumed it was based on a method's signature, like IEnumerable<T>. But based on the answers, now I get it: a method is an iterator if it uses yield statements.
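
    The compiler rewrites any method containing yield into a state machine whose body - argument checks included - does not run until the first MoveNext(). Splitting the method leaves the checks in an ordinary method that executes immediately, while only WhereImpl is deferred. A small demonstration of the observable difference:

        IEnumerable<int> source = null;

        // Naive version: no exception here, even though source is null...
        var query = source.Where(x => x > 0);

        // ...it only surfaces when enumeration starts, far from the buggy call:
        foreach (var x in query) { }   // ArgumentNullException thrown here

        // Refactored version: the same source.Where(...) line throws
        // immediately, which points straight at the real mistake.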

  • MySQL - What is wrong with this query or my database? Terrible performance.

    - by Moss
        SELECT *
        FROM `employees` a
        LEFT JOIN (SELECT phone1 p1, COUNT(*) c
                   FROM `employees`
                   GROUP BY phone1) b
            ON a.phone1 = b.p1;

    I'm not sure if it is this query in particular that has the problem; I have been getting terrible performance in general with this database. The table in question has 120,000 rows. I have tried this particular query remotely and locally, with the MyISAM and InnoDB engines, with different types of joins, and with and without an index on phone1. I can get this to complete in about 4 minutes on a 10,000-row table successfully, but performance drops exponentially with larger tables. Remotely it will lose the connection to the server, and locally it brings my system to its knees and seems to go on forever.
    This query is only a smaller step I was trying to take when a larger query couldn't complete. Maybe I should explain the whole scenario. I have one big flat ugly table that lists a bunch of people, their contact info, and the info of the companies they work for. I'm trying to normalize the database and intelligently determine which phone numbers apply to individual people and which apply to an office location. My reasoning is that if a phone number occurs multiple times, and the number of occurrences equals the number of times that the street address it is attached to occurs, then it must be an office number. So the first step is to count each phone number, grouping by phone number. Normally if you just use COUNT()...GROUP BY it will only list the first record it finds in each group, so I figured I have to join the full table to the count table where the phone number matches. This does work, but as I said, I can't successfully complete it on any table much larger than 10,000 rows. This seems pathetic; it doesn't seem like a crazy query to do. Is there a better way to achieve what I want, or do I have to break my large table into 12 pieces, or is there something wrong with the table or db?
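
    One plausible culprit for the blow-up: in the MySQL versions current at the time, a derived table like subquery b was materialized without any indexes, so the LEFT JOIN re-scanned it once per outer row - 120,000 scans of a 120,000-row result. Materializing the counts into an indexed temporary table turns each probe into a lookup. A hedged sketch:

        -- build the counts once, with an index on the join key
        CREATE TEMPORARY TABLE phone_counts (INDEX (phone1))
            AS SELECT phone1, COUNT(*) AS c
               FROM employees
               GROUP BY phone1;

        -- each outer row now probes the index instead of scanning the subquery
        SELECT a.*, b.c
        FROM employees a
        LEFT JOIN phone_counts b ON a.phone1 = b.phone1;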

  • PHP Object Creation and Memory Usage

    - by JohnO
    A basic dummy class:

        class foo {
            var $bar = 0;

            function foo() {}
            function boo() {}
        }

        echo memory_get_usage();
        echo "\n";

        $foo = new foo();
        echo memory_get_usage();
        echo "\n";

        unset($foo);
        echo memory_get_usage();
        echo "\n";

        $foo = null;
        echo memory_get_usage();
        echo "\n";

    Outputs:

        $ php test.php
        353672
        353792
        353792
        353792

    Now, I know that the PHP docs say that memory won't be freed until it is needed (hitting the ceiling). However, I wrote this up as a small test because I've got a much longer task, using a much bigger object, with many instances of that object. And the memory just climbs, eventually running out and stopping execution. Even though these large objects do take up memory, since I destroy them after I'm done with each one (serially), it should not run out of memory (unless a single object exhausts the entire space for memory, which is not the case). Thoughts?
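
    If the big objects refer to each other (parent/child or back-references), plain reference counting never frees them, which would produce exactly this climb; PHP 5.3's cycle collector can reclaim such garbage on demand. A small sketch of the pattern:

        gc_enable();                      // cycle collector, PHP 5.3+

        $a = new stdClass();
        $b = new stdClass();
        $a->other = $b;                   // a reference cycle: neither refcount
        $b->other = $a;                   // can ever reach zero on its own

        unset($a, $b);
        echo gc_collect_cycles(), "\n";   // forces collection, returns cycles freed
        echo memory_get_usage(), "\n";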

  • nil object in view when building objects on two different associations

    - by Shako
    Hello all. I'm relatively new to Ruby on Rails, so please don't mind my newbie level! I have the following models:

        class Paintingdescription < ActiveRecord::Base
          belongs_to :paintings
          belongs_to :languages
        end

        class Paintingtitle < ActiveRecord::Base
          belongs_to :paintings
          belongs_to :languages
        end

        class Painting < ActiveRecord::Base
          has_many :paintingtitles, :dependent => :destroy
          has_many :paintingdescriptions, :dependent => :destroy
          has_many :languages, :through => :paintingdescriptions
          has_many :languages, :through => :paintingtitles
        end

        class Language < ActiveRecord::Base
          has_many :paintingtitles, :dependent => :nullify
          has_many :paintingdescriptions, :dependent => :nullify
          has_many :paintings, :through => :paintingtitles
          has_many :paintings, :through => :paintingdescriptions
        end

    In my painting new/edit view, I would like to show the painting details together with its title and description in each of the languages, so I can store the translations of those fields. In order to build the paintingtitle and paintingdescription records for my painting and each of the languages, I wrote the following code in the new method of my paintings controller:

        @temp_languages = @languages
        @languages.size.times { @painting.paintingtitles.build }
        @painting.paintingtitles.each do |paintingtitle|
          paintingtitle.language_id = @temp_languages[0].id
          @temp_languages.slice!(0)
        end

        @temp_languages = @languages
        @languages.size.times { @painting.paintingdescriptions.build }
        @painting.paintingdescriptions.each do |paintingdescription|
          paintingdescription.language_id = @temp_languages[0].id
          @temp_languages.slice!(0)
        end

    In the form partial, which I call from the new/edit view, I have:

        <% form_for @painting, :html => { :multipart => true } do |f| %>
          ...
          <% languages.each do |language| %>
            <p>
              <%= label language, language.name %>
              <% paintingtitle = @painting.paintingtitles[counter] %>
              <% new_or_existing = paintingtitle.new_record? ? 'new' : 'new' %>
              <% prefix = "painting[#{new_or_existing}_title_attributes][]" %>
              <% fields_for prefix, paintingtitle do |paintingtitle_form| %>
                <%= paintingtitle_form.hidden_field :language_id %>
                <%= f.label :title %><br />
                <%= paintingtitle_form.text_field :title %>
              <% end %>

              <% paintingdescription = @painting.paintingdescriptions[counter] %>
              <% new_or_existing = paintingdescription.new_record? ? 'new' : 'new' %>
              <% prefix = "painting[#{new_or_existing}_title_attributes][]" %>
              <% fields_for prefix, paintingdescription do |paintingdescription_form| %>
                <%= paintingdescription_form.hidden_field :language_id %>
                <%= f.label :description %><br />
                <%= paintingdescription_form.text_field :description %>
              <% end %>
            </p>
            <% counter += 1 %>
          <% end %>
          ...
        <% end %>

    But when running the code, Ruby encounters a nil object when evaluating paintingdescription.new_record?:

        You have a nil object when you didn't expect it!
        You might have expected an instance of ActiveRecord::Base.
        The error occurred while evaluating nil.new_record?

    However, if I change the order in which I (a) build the paintingtitles and paintingdescriptions in the controller's new method and (b) show the paintingtitles and paintingdescriptions in the form partial, then I get the nil on the paintingtitle.new_record? call instead. I always get the nil for the objects I build in second place; the ones I build first aren't nil in my view. Is it possible that I cannot build objects for 2 different associations at the same time? Or am I missing something else? Thanks in advance!
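
    Building two associations in a row isn't the problem; the aliasing in the controller likely is. @temp_languages = @languages copies a reference, not the array, so each slice!(0) also empties @languages itself - by the time the second block runs, @languages.size is 0, nothing gets built, and the view finds nil at paintingdescriptions[counter]. A hedged rewrite that avoids mutating anything:

        @languages.each do |language|
          @painting.paintingtitles.build(:language_id => language.id)
          @painting.paintingdescriptions.build(:language_id => language.id)
        end

    (If the slice! style is kept, @temp_languages = @languages.dup would also fix it by mutating a copy.) Separately, the belongs_to declarations should use the singular form (:painting, :language), and the duplicated has_many :languages / has_many :paintings pairs silently override one another; giving them distinct names with :source would keep both routes usable.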

  • Grouping by property value and writing group members

    - by Will S
    I need to group the following list by the department value, but am having trouble with the LINQ syntax. Here's my list of objects:

        var people = new List<Person>
        {
            new Person { name = "John", department = new List<fields> { new fields { name = "department", value = "IT" } } },
            new Person { name = "Sally", department = new List<fields> { new fields { name = "department", value = "IT" } } },
            new Person { name = "Bob", department = new List<fields> { new fields { name = "department", value = "Finance" } } },
            new Person { name = "Wanda", department = new List<fields> { new fields { name = "department", value = "Finance" } } },
        };

    I've toyed around with grouping. This is as far as I've got:

        var query = from p in people
                    from field in p.department
                    where field.name == "department"
                    group p by field.value into departments
                    select new { Department = departments.Key, Name = departments };

    So I can iterate over the groups, but I'm not sure how to list the person names:

        foreach (var department in query)
        {
            Console.WriteLine("Department: {0}", department.Department);
            foreach (var foo in department.Department)
            {
                // ??
            }
        }

    Any ideas on what to do better, or how to list the names within each department?
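
    In the anonymous type, Name holds the group itself, and a group enumerates the elements that were grouped - here, Person objects. So the inner loop iterates Name rather than Department:

        foreach (var department in query)
        {
            Console.WriteLine("Department: {0}", department.Department);
            foreach (var person in department.Name)   // the group enumerates its Persons
            {
                Console.WriteLine("  {0}", person.name);
            }
        }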

  • Distinctly LINQ – Getting a Distinct List of Objects

    - by David Totzke
    Let's say that you have a list of objects that contains duplicate items and you want to extract a subset of distinct items. This is pretty straightforward in the trivial case where the duplicate objects are considered the same, such as in the following example:

        List<int> ages = new List<int> { 21, 46, 46, 55, 17, 21, 55, 55 };

        IEnumerable<int> distinctAges = ages.Distinct();

        Console.WriteLine("Distinct ages:");
        foreach (int age in distinctAges)
        {
            Console.WriteLine(age);
        }

        /*
         This code produces the following output:

         Distinct ages:
         21
         46
         55
         17
        */

    What if you are working with reference types instead? Imagine a list of search results where items in the results, while unique in and of themselves, also point to a parent. We'd like to be able to select a bunch of items in the list but then see only a distinct list of parents. Distinct isn't going to help us much on its own, as all of the items are distinct already. Perhaps we can create a class with just the information we are interested in, like the Id and Name of the parents:

        public class SelectedItem
        {
            public int ItemID { get; set; }
            public string DisplayName { get; set; }
        }

    We can then use LINQ to populate a list containing objects with just the information we are interested in and then get rid of the duplicates:

        IEnumerable<SelectedItem> list =
            (from item in ResultView.SelectedRows.OfType<Contract.ReceiptSelectResults>()
             select new SelectedItem { ItemID = item.ParentId, DisplayName = item.ParentName })
            .Distinct();

    Most of you will have guessed that this didn't work. Even though some of our objects are now duplicates, because we are working with reference types, it doesn't matter that their properties are the same; they're still considered unique. What we need is a way to define equality for the Distinct() extension method.

    IEqualityComparer<T>

    Looking at the Distinct method, we see that there is an overload that accepts an IEqualityComparer<T>. We can simply create a class that implements this interface, which allows us to define equality for our SelectedItem class:

        public class SelectedItemComparer : IEqualityComparer<SelectedItem>
        {
            public new bool Equals(SelectedItem abc, SelectedItem def)
            {
                return abc.ItemID == def.ItemID && abc.DisplayName == def.DisplayName;
            }

            public int GetHashCode(SelectedItem obj)
            {
                string code = obj.DisplayName + obj.ItemID.ToString();
                return code.GetHashCode();
            }
        }

    In the Equals method we simply do whatever comparisons are necessary to determine equality and then return true or false. Take note of the implementation of the GetHashCode method. GetHashCode must return the same value for two different objects if our Equals method says they are equal. Get this wrong and your comparer won't work: even though the Equals method returns true, mismatched hash codes will cause the comparison to fail. For our example, we simply build a string from the properties of the object and then call GetHashCode() on that.
    Now all we have to do is pass an instance of our IEqualityComparer<T> to Distinct and all will be well:

        IEnumerable<SelectedItem> list =
            (from item in ResultView.SelectedRows.OfType<Contract.ReceiptSelectResults>()
             select new SelectedItem { ItemID = item.dahfkp, DisplayName = item.document_code })
            .Distinct(new SelectedItemComparer());

    Enjoy.
    Dave
    Just because I can…
    Technorati Tags: LINQ, C#

  • Importing AutoCAD/Solidworks drawings/objects into winforms?

    - by Dinoo
    Has anybody done anything like this? I need to import 3D objects, created in either AutoCAD or SolidWorks, and draw them in a Windows Forms application. I only need the objects to be viewed in 3D and moved around - no manipulation required. I am assuming I will need at least two libraries: one for a very simple 3D engine, and one to actually extract what I need from the CAD/SolidWorks files. Autodesk has an SDK available for developing AutoCAD plugins using .NET, but I am not sure if you can use it the other way around - loading files into the .NET app. Any help, links, and ideas are appreciated.

  • How to properly clean up Excel interop objects in C#

    - by HAdes
    I'm using the Excel interop in C# (ApplicationClass) and have placed the following code in my finally clause:

        while (System.Runtime.InteropServices.Marshal.ReleaseComObject(excelSheet) != 0) { }
        excelSheet = null;
        GC.Collect();
        GC.WaitForPendingFinalizers();

    Although this kind of works, the Excel.exe process is still in the background even after I close Excel. It is only released once my application is manually closed. Does anyone see what I am doing wrong, or have an alternative to ensure interop objects are properly disposed of? Thanks.
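
    Two usual suspects: COM references created implicitly by chained member access ("two dots"), each of which holds its own wrapper that never gets released, and never calling Quit on the Application object. A hedged sketch of a fuller cleanup, assuming excelApp is the ApplicationClass and that every intermediate object (workbook, worksheet) was kept in its own variable:

        // using System.Runtime.InteropServices;
        excelApp.Quit();

        Marshal.ReleaseComObject(excelSheet);
        Marshal.ReleaseComObject(excelWorkbook); // hypothetical intermediate reference
        Marshal.ReleaseComObject(excelApp);

        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();                  // second pass reclaims RCWs whose
        GC.WaitForPendingFinalizers(); // finalizers ran in the first one

    The "never use two dots" rule of thumb matters because an expression like excelApp.Workbooks[1].Sheets[1] creates hidden COM wrappers you never get a chance to release.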
