Search Results

Search found 6053 results on 243 pages for 'solutionsfactory usage'.

Page 219/243 | < Previous Page | 215 216 217 218 219 220 221 222 223 224 225 226  | Next Page >

  • Many-to-many performance concerns with Fluent NHibernate

    - by Ciel
    I have a situation where I have several many-to-many associations, upwards of 12 to 15. Reading around, I've seen that it's generally believed that many-to-many associations are not 'typical', yet they are the only way I have been able to create the associations appropriate for my case, so I'm not sure how to optimize any further. Here is my basic scenario:

        class Page {
            IList<Tag> Tags { get; set; }
            IList<Modification> Modifications { get; set; }
            IList<Aspect> Aspects { get; set; }
        }

    This is one of my 'core' classes, and coincidentally one of my core tables. Virtually half of the objects in my code can have an IList<Page>, and some of them have an IList<T> where T has its own IList<Page>. As you can see, from an object-oriented standpoint this is not really a problem, but from a database standpoint it begins to introduce a lot of junction tables.

    So far it has worked fine for me, but I am wondering if anyone has any ideas on how I could improve on this structure. I've spent a long time thinking, and in order to achieve the appropriate level of association I cannot think of any way to improve it. The only thing I have come up with is to make intermediate classes for each object that has an IList<Page>, but that doesn't really do anything that HasManyToMany does not already do, except introduce another class. It does not extend the functionality and, from what I can tell, it does not improve performance. Any thoughts?

    I am also concerned about primary key limits in this scenario. Almost everything needs to be able to have these properties, but the Pages cannot be unique to each object, because they are going to be frequently shared and joined between multiple objects. All relationships are one-sided (that is, a Page has no knowledge of what owns it). Because of this, I also have no Inverse() mapped HasManyToMany collections.

    Also, I have read the similar question "Usage of ORMs like NHibernate when there are many associations - performance concerns", but it really did not answer my concerns.
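
    For reference, a minimal sketch of how one of these associations might look in Fluent NHibernate. The Id property, table and column names, and the batch size are illustrative assumptions, not taken from the post:

        public class PageExampleMap : ClassMap<Page>
        {
            public PageExampleMap()
            {
                Id(x => x.Id);

                // One junction table per association; both keys live in PageTags.
                HasManyToMany(x => x.Tags)
                    .Table("PageTags")
                    .ParentKeyColumn("PageId")
                    .ChildKeyColumn("TagId")
                    .Cascade.SaveUpdate()
                    .BatchSize(25); // batch-fetching collections cuts round-trips
                                    // when many Pages are loaded at once
            }
        }

    Since the relationships are one-sided, there is no Inverse() here; batch size (or lazy/extra-lazy collection loading) is usually the first lever to pull against per-collection query overhead.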

    Read the article

  • Cannot figure out how to take in generic parameters for an Enterprise Framework library SQL statement

    - by KallDrexx
    I have written a specialized class to wrap up the Enterprise Library database functionality for easier usage. The reasoning for using the Enterprise Library is that my applications commonly connect to both Oracle and SQL Server database systems. My wrapper handles creating connection strings on the fly, connecting, and executing queries, allowing my main code to only have to write a few lines of code to do database stuff and deal with error handling. As an example, my ExecuteNonQuery method has the following declaration:

        /// <summary>
        /// Executes a query that returns no results (e.g. insert or update statements)
        /// </summary>
        /// <param name="sqlQuery"></param>
        /// <param name="parameters">Hashtable containing all the parameters for the query</param>
        /// <returns>The total number of records modified, -1 if an error occurred</returns>
        public int ExecuteNonQuery(string sqlQuery, Hashtable parameters)
        {
            // Make sure we are connected to the database
            if (!IsConnected)
            {
                ErrorHandler("Attempted to run a query without being connected to a database.",
                             ErrorSeverity.Critical);
                return -1;
            }

            // Form the command
            DbCommand dbCommand = _database.GetSqlStringCommand(sqlQuery);

            // Add all the parameters
            foreach (string key in parameters.Keys)
            {
                if (parameters[key] == null)
                    _database.AddInParameter(dbCommand, key, DbType.Object, null);
                else
                    _database.AddInParameter(dbCommand, key, DbType.Object, parameters[key].ToString());
            }

            return _database.ExecuteNonQuery(dbCommand);
        }

    _database is defined as private Database _database;, and the parameters Hashtable is filled via code similar to p.Add("@param", value);.

    The issue I am having is that with the Enterprise Library database framework you must declare the DbType of each parameter. This isn't an issue when you are calling the database code directly and forming the parameters yourself, but it doesn't work for a generic abstraction class such as mine. To try and get around that I thought I could just use DbType.Object and figure the DB will work it out based on the columns the SQL is working with. Unfortunately, this is not the case, as I get the following error:

        Implicit conversion from data type sql_variant to varchar is not allowed.
        Use the CONVERT function to run this query

    Is there any way to use generic parameters in a wrapper class, or am I just going to have to move all my DB code into my main classes?
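
    One approach worth sketching: map the runtime CLR type of each value to a DbType before calling AddInParameter, so nothing is ever sent as sql_variant. The type map below is an assumption; extend it to whatever types your queries actually pass:

        private static DbType InferDbType(object value)
        {
            if (value is int)      return DbType.Int32;
            if (value is long)     return DbType.Int64;
            if (value is decimal)  return DbType.Decimal;
            if (value is double)   return DbType.Double;
            if (value is bool)     return DbType.Boolean;
            if (value is DateTime) return DbType.DateTime;
            if (value is byte[])   return DbType.Binary;
            return DbType.String;  // fall back to a string parameter, not Object
        }

    The parameter loop then becomes something like:

        _database.AddInParameter(dbCommand, key, InferDbType(parameters[key]), parameters[key]);

    which also avoids the .ToString() call that silently turns every non-null value into text.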

    Read the article

  • Idiomatic use of auto_ptr to transfer ownership to a container

    - by heycam
    I'm refreshing my C++ knowledge after not having used it in anger for a number of years. In writing some code to implement a data structure for practice, I wanted to make sure that my code was exception safe, so I've tried to use std::auto_ptr in what I think is an appropriate way. Simplifying somewhat, this is what I have:

        class Tree {
        public:
            ~Tree() { /* delete all Node*s in the tree */ }
            void insert(const string& to_insert);
            ...
        private:
            struct Node {
                ...
                vector<Node*> m_children;
            };
            Node* m_root;
        };

        template<typename T>
        void push_back(vector<T*>& v, auto_ptr<T> x) {
            v.push_back(x.get());
            x.release();
        }

        void Tree::insert(const string& to_insert) {
            Node* n = ...; // find where to insert the new node
            ...
            push_back(n->m_children, auto_ptr<Node>(new Node(to_insert)));
            ...
        }

    So I'm wrapping the function that puts the pointer into the container, vector::push_back, and relying on the by-value auto_ptr argument to ensure that the Node* is deleted if the vector resize fails. Is this an idiomatic use of auto_ptr to save a bit of boilerplate in my Tree::insert? Any improvements you can suggest? Otherwise I'd have to have something like:

        Node* n = ...; // find where to insert the new node
        auto_ptr<Node> new_node(new Node(to_insert));
        n->m_children.push_back(new_node.get());
        new_node.release();

    which kind of clutters up what would have been a single line of code if I wasn't worrying about exception safety and a memory leak.

    (Actually I was wondering if I could post my whole code sample (about 300 lines) and ask people to critique it for idiomatic C++ usage in general, but I'm not sure whether that kind of question is appropriate on Stack Overflow.)

    Read the article

  • pthread and recursively calling execvp in C

    - by eduke
    To begin, I'm sorry for my English :) I'm looking for a way to create a thread each time my program finds a directory, in order to call the program itself but with a new argv[2] argument (which is the current dir). I did it successfully with fork(), but with pthreads I'm having some difficulties. I don't know if I can do something like this:

        #include <unistd.h>
        #include <stdlib.h>
        #include <stdio.h>
        #include <string.h>
        #include <pthread.h>   /* needed for pthread_create/pthread_join */
        #include <sys/types.h>
        #include <sys/stat.h>
        #include <sys/wait.h>
        #include <dirent.h>

        int main(int argc, char **argv)
        {
            pthread_t threadID[10] = {0};
            DIR *dir;
            struct dirent *entry;
            struct stat status;
            pthread_attr_t attr;
            pthread_attr_init(&attr);
            int i = 0;
            char *res;
            char *tmp;
            char *file;

            if (argc != 3) {
                printf("Usage : %s <file> <dir>\n", argv[0]);
                exit(EXIT_FAILURE);
            }

            if (stat(argv[2], &status) == 0) {
                dir = opendir(argv[2]);
                file = argv[1];
            }
            else
                exit(EXIT_FAILURE);

            while ((entry = readdir(dir))) {
                if (strcmp(entry->d_name, ".") && strcmp(entry->d_name, "..")) {
                    tmp = malloc(strlen(argv[2]) + strlen(entry->d_name) + 2);
                    strcpy(tmp, argv[2]);
                    strcat(tmp, "/");
                    strcat(tmp, entry->d_name);
                    stat(tmp, &status);

                    if (S_ISDIR(status.st_mode)) {
                        argv[2] = tmp;
                        pthread_create(&threadID[i], &attr, execvp(argv[0], argv), NULL);
                        printf("New thread created : %d", i);
                        i++;
                    }
                    else if (!strcmp(entry->d_name, file)) {
                        printf(" %s was found - Thread number = %d\n", tmp, i);
                        break;
                    }
                    free(tmp);
                }
            }

            pthread_join(threadID[i], &res);
            exit(EXIT_SUCCESS);
        }

    Actually it doesn't work:

        pthread_create( &threadID[i], &attr, execvp(argv[0], argv), NULL);

    I get no runtime error, but when the file to find is in another directory, the thread is not created and so execvp(argv[0], argv) is not called... Thank you for your help, Simon
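
    That line can't work as written: pthread_create expects a pointer to a start routine, but execvp(argv[0], argv) is evaluated immediately, before any thread exists, and on success execvp replaces the entire process image and never returns, so it can't serve as a thread body at all. A sketch of the usual shape of a fix, with made-up names (search_args, search_dir), is to recurse inside the same process:

        /* Hypothetical start routine; the argument struct replaces the argv rewriting. */
        struct search_args {
            char *file;   /* name to look for    */
            char *dir;    /* directory to search */
        };

        static void *search_dir(void *p)
        {
            struct search_args *args = p;
            /* ...opendir(args->dir), spawn another search_dir thread per
               subdirectory, compare regular entries against args->file... */
            free(args);
            return NULL;
        }

        /* inside the readdir() loop, instead of calling execvp: */
        struct search_args *args = malloc(sizeof *args);
        args->file = file;
        args->dir  = tmp;   /* tmp must then not be freed by this iteration */
        pthread_create(&threadID[i], &attr, search_dir, args);

    If each directory really must be handled by a fresh process image, fork() plus execvp() in the child (as in the earlier version) is the mechanism; threads and execvp don't combine, since a successful execvp destroys every thread in the process.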

    Read the article

  • Recommendations for a C++ polymorphic, seekable, binary I/O interface

    - by Trevor Robinson
    I've been using std::istream and ostream as a polymorphic interface for random-access binary I/O in C++, but it seems suboptimal in numerous ways:

    - 64-bit seeks are non-portable and error-prone due to streampos/streamoff limitations; currently using boost/iostreams/positioning.hpp as a workaround, but it requires vigilance
    - Missing operations such as truncating or extending a file (a la POSIX ftruncate)
    - Inconsistency between concrete implementations; e.g. stringstream has independent get/put positions whereas filestream does not
    - Inconsistency between platform implementations; e.g. behavior of seeking past the end of a file, or usage of failbit/badbit on errors
    - Don't need all the formatting facilities of stream, or possibly even the buffering of streambuf
    - streambuf error reporting (i.e. exceptions vs. returning an error indicator) is supposedly implementation-dependent in practice

    I like the simplified interface provided by the Boost.Iostreams Device concept, but it's provided as function templates rather than a polymorphic class. (There is a device class, but it's not polymorphic and is just an implementation helper class not necessarily used by the supplied device implementations.) I'm primarily using large disk files, but I really want polymorphism so I can easily substitute alternate implementations (e.g. use stringstream instead of fstream for unit tests) without all the complexity and compile-time coupling of deep template instantiation.

    Does anyone have any recommendations for a standard approach to this? It seems like a common situation, so I don't want to invent my own interfaces unnecessarily. As an example, something like java.nio.FileChannel seems ideal. My best solution so far is to put a thin polymorphic layer on top of Boost.Iostreams devices. For example:

        class my_istream {
        public:
            virtual std::streampos seek(stream_offset off, std::ios_base::seekdir way) = 0;
            virtual std::streamsize read(char* s, std::streamsize n) = 0;
            virtual void close() = 0;
        };

        template <class T>
        class boost_istream : public my_istream {
        public:
            boost_istream(const T& device) : m_device(device) { }
            virtual std::streampos seek(stream_offset off, std::ios_base::seekdir way) {
                return boost::iostreams::seek(m_device, off, way);
            }
            virtual std::streamsize read(char* s, std::streamsize n) {
                return boost::iostreams::read(m_device, s, n);
            }
            virtual void close() {
                boost::iostreams::close(m_device);
            }
        private:
            T m_device;
        };

    Read the article

  • IP address numbers in MySQL subquery

    - by Iain Collins
    I have a problem with a subquery involving IPv4 addresses stored in MySQL (MySQL 5.0). The IP addresses are stored in two tables, both in network number format - e.g. the format output by MySQL's INET_ATON(). The first table ('events') contains lots of rows with IP addresses associated with them; the second table ('network_providers') contains a list of provider information for given netblocks.

        events table (~4,000,000 rows):
            event_id   (int)
            event_name (varchar)
            ip_address (unsigned 4 byte int)

        network_providers table (~60,000 rows):
            ip_start      (unsigned 4 byte int)
            ip_end        (unsigned 4 byte int)
            provider_name (varchar)

    Simplified for the purposes of the problem I'm having, the goal is to create an export along the lines of:

        event_id,event_name,ip_address,provider_name

    If I do a query along the lines of either of the following, I get the result I expect:

        SELECT provider_name FROM network_providers
        WHERE INET_ATON('192.168.0.1') >= network_providers.ip_start
        ORDER BY network_providers.ip_start DESC LIMIT 1

        SELECT provider_name FROM network_providers
        WHERE 3232235521 >= network_providers.ip_start
        ORDER BY network_providers.ip_start DESC LIMIT 1

    That is to say, it returns the correct provider_name for whatever IP I look up (of course I'm not really using 192.168.0.1 in my queries). However, when performing this same query as a subquery, in the following manner, it doesn't yield the result I would expect:

        SELECT event.id, event.event_name,
               (SELECT provider_name FROM network_providers
                WHERE event.ip_address >= network_providers.ip_start
                ORDER BY network_providers.ip_start DESC LIMIT 1) as provider
        FROM events

    Instead, a different (incorrect) value for network_provider is returned - over 90% (but curiously not all) of the values returned in the provider column contain the wrong provider information for that IP. Using event.ip_address in a subquery just to echo out the value confirms it contains the value I'd expect and that the subquery can parse it. Replacing event.ip_address with an actual network number also works; it's just using it dynamically in the subquery in this manner that doesn't work for me.

    I suspect there is something fundamental and important about subqueries in MySQL that I don't get. I've worked with IP addresses like this in MySQL quite a bit before, but haven't previously done lookups for them using a subquery.

    The question: I'd really appreciate an example of how I could get the output I want, and if someone here knows, some enlightenment as to why what I'm doing doesn't work so I can avoid making this mistake again.

    Notes: The actual real-world usage I'm trying to do is considerably more complicated (involving joining two or three tables). This is a simplified version, to avoid overly complicating the question. Additionally, I know I'm not using a BETWEEN on ip_start & ip_end - that's intentional (the DBs can be out of date, and in such cases the owner in the DB is almost always in the next specified range and a 'best guess' is fine in this context); however I'm grateful for any suggestions for improvement that relate to the question. Efficiency is always nice, but in this case absolutely not essential - any help appreciated.
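
    One way to express the lookup that sidesteps the correlated ORDER BY ... LIMIT 1 entirely is to match each event against the single largest ip_start at or below its address. This is a sketch, not tested against MySQL 5.0's optimizer:

        SELECT e.event_id, e.event_name, e.ip_address, np.provider_name
        FROM events e
        JOIN network_providers np
          ON np.ip_start = (SELECT MAX(np2.ip_start)
                            FROM network_providers np2
                            WHERE np2.ip_start <= e.ip_address);

    MAX() over an indexed ip_start gives the same "closest range start below the address" semantics as the DESC/LIMIT 1 trick, but as a plain scalar subquery it is less fragile when correlated.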

    Read the article

  • xerces-c: Xml parsing multiple files

    - by user459811
    I'm attempting to learn xerces-c and was following this tutorial online: http://www.yolinux.com/TUTORIALS/XML-Xerces-C.html

    I was able to get the tutorial to compile and run through a memory checker (valgrind) with no problems; however, when I altered the program slightly, the memory checker reported some potentially leaked bytes. I only added a few extra lines to main to allow the program to read two files instead of one:

        int main()
        {
            string configFile = "sample.xml"; // stat file. Get ambiguous segfault otherwise.

            GetConfig appConfig;
            appConfig.readConfigFile(configFile);
            cout << "Application option A=" << appConfig.getOptionA() << endl;
            cout << "Application option B=" << appConfig.getOptionB() << endl;

            // Added code
            configFile = "sample1.xml";
            appConfig.readConfigFile(configFile);
            cout << "Application option A=" << appConfig.getOptionA() << endl;
            cout << "Application option B=" << appConfig.getOptionB() << endl;

            return 0;
        }

    I was wondering why adding the extra lines of code to read in another XML file results in the following output?

        ==776== Using Valgrind-3.6.0 and LibVEX; rerun with -h for copyright info
        ==776== Command: ./a.out
        ==776==
        Application option A=10
        Application option B=24
        Application option A=30
        Application option B=40
        ==776==
        ==776== HEAP SUMMARY:
        ==776==     in use at exit: 6 bytes in 2 blocks
        ==776==   total heap usage: 4,031 allocs, 4,029 frees, 1,092,045 bytes allocated
        ==776==
        ==776== 3 bytes in 1 blocks are definitely lost in loss record 1 of 2
        ==776==    at 0x4C28B8C: operator new(unsigned long) (vg_replace_malloc.c:261)
        ==776==    by 0x5225E9B: xercesc_3_1::MemoryManagerImpl::allocate(unsigned long) (MemoryManagerImpl.cpp:40)
        ==776==    by 0x53006C8: xercesc_3_1::IconvGNULCPTranscoder::transcode(unsigned short const*, xercesc_3_1::MemoryManager*) (IconvGNUTransService.cpp:751)
        ==776==    by 0x4038E7: GetConfig::readConfigFile(std::string&) (in /home/bonniehan/workspace/test/a.out)
        ==776==    by 0x403B13: main (in /home/bonniehan/workspace/test/a.out)
        ==776==
        ==776== 3 bytes in 1 blocks are definitely lost in loss record 2 of 2
        ==776==    at 0x4C28B8C: operator new(unsigned long) (vg_replace_malloc.c:261)
        ==776==    by 0x5225E9B: xercesc_3_1::MemoryManagerImpl::allocate(unsigned long) (MemoryManagerImpl.cpp:40)
        ==776==    by 0x53006C8: xercesc_3_1::IconvGNULCPTranscoder::transcode(unsigned short const*, xercesc_3_1::MemoryManager*) (IconvGNUTransService.cpp:751)
        ==776==    by 0x40393F: GetConfig::readConfigFile(std::string&) (in /home/bonniehan/workspace/test/a.out)
        ==776==    by 0x403B13: main (in /home/bonniehan/workspace/test/a.out)
        ==776==
        ==776== LEAK SUMMARY:
        ==776==    definitely lost: 6 bytes in 2 blocks
        ==776==    indirectly lost: 0 bytes in 0 blocks
        ==776==      possibly lost: 0 bytes in 0 blocks
        ==776==    still reachable: 0 bytes in 0 blocks
        ==776==         suppressed: 0 bytes in 0 blocks
        ==776==
        ==776== For counts of detected and suppressed errors, rerun with: -v
        ==776== ERROR SUMMARY: 2 errors from 2 contexts (suppressed: 2 from 2)
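
    A guess based on the stack trace: both leaked blocks come from XMLString::transcode() calls inside readConfigFile(). transcode() allocates a fresh buffer each time, and the tutorial's GetConfig stores the results in raw char* members, so the second readConfigFile() call overwrites the pointers from the first without freeing them. Releasing the old strings before overwriting (member names here follow the tutorial; adjust to your actual code) should quiet valgrind:

        // inside GetConfig::readConfigFile, before storing new values
        if (m_OptionA) xercesc::XMLString::release(&m_OptionA);
        if (m_OptionB) xercesc::XMLString::release(&m_OptionB);
        m_OptionA = xercesc::XMLString::transcode(/* option A attribute text */);
        m_OptionB = xercesc::XMLString::transcode(/* option B attribute text */);

    XMLString::release() is the matching deallocator for anything transcode() hands back; the destructor presumably already does this once, which is why the single-file version ran clean.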

    Read the article

  • How to maximise the largest contiguous block of memory in the Large Object Heap

    - by Unsliced
    The situation is that I am making a WCF call to a remote server which returns an XML document as a string. Most of the time this return value is a few K, sometimes a few dozen K, very occasionally a few hundred K, but very rarely it could be several megabytes (the first problem is that there is no way for me to know). It's these rare occasions that are causing grief. I get a stack trace that starts:

        System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
           at System.Xml.BufferBuilder.AddBuffer()
           at System.Xml.BufferBuilder.AppendHelper(Char* pSource, Int32 count)
           at System.Xml.BufferBuilder.Append(Char[] value, Int32 start, Int32 count)
           at System.Xml.XmlTextReaderImpl.ParseText()
           at System.Xml.XmlTextReaderImpl.ParseElementContent()
           at System.Xml.XmlTextReaderImpl.Read()
           at System.Xml.XmlTextReader.Read()
           at System.Xml.XmlReader.ReadElementString()
           at Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationReaderMDRQuery.Read2_getMarketDataResponse()
           at Microsoft.Xml.Serialization.GeneratedAssembly.ArrayOfObjectSerializer2.Deserialize(XmlSerializationReader reader)
           at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle, XmlDeserializationEvents events)
           at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle)
           at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall)
           at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)

    I've read around, and it is because the Large Object Heap is just getting too fragmented, so even preceding the call with a quick check of StringBuilder.EnsureCapacity just causes the OutOfMemoryException to be thrown earlier (and because I'm guessing at what's needed, it might not actually need that much, so my check is causing more problems than it is solving). Some opinions are that there's not much I can do about it. Some of the questions I've asked myself:

    - Use less memory - have you checked for leaks? Yes. The memory usage goes up and down, but there's no fundamental growth that guarantees this will happen. Some of the times it fails, it had succeeded at that stage previously.
    - Transfer smaller amounts? Not an option; this is a third-party web service over which I have no control (or at least it would take a long time to resolve, and in the meantime I still have a problem).
    - Can you do something to the LOH to make it less likely to fail? ... now this is the most fruitful course. It's a 32-bit process (it has to be for various political, technical and boring reasons), but there's normally hundreds of meg free (multiples of the largest amount for which we've seen failures).
    - Can we monitor the LOH? Using perfmon I can track the size of the heaps, but I don't think there's a way to monitor the largest available contiguous block of memory.

    Question is: any advice or suggestions for things to try?
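
    Since fragmentation only bites when one allocation needs a huge contiguous block, one hedged direction is to never make that block: objects of roughly 85,000 bytes or more go on the LOH, so reading the response in smaller chunks keeps every buffer on the normal heap. The sketch below assumes the generated SOAP proxy can be bypassed with a hand-rolled HttpWebRequest ('request' is hypothetical); it illustrates the allocation pattern rather than drop-in code:

        const int ChunkSize = 64 * 1024;            // comfortably below the LOH threshold
        var chunks = new List<byte[]>();
        using (WebResponse response = request.GetResponse())
        using (Stream body = response.GetResponseStream())
        {
            var buf = new byte[ChunkSize];
            int read;
            while ((read = body.Read(buf, 0, buf.Length)) > 0)
            {
                var chunk = new byte[read];
                Buffer.BlockCopy(buf, 0, chunk, 0, read);
                chunks.Add(chunk);                   // many small blocks, no single big one
            }
        }
        // feed the XmlSerializer a Stream that walks 'chunks' so no contiguous
        // copy of the whole document is ever required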

    Read the article

  • What are the Options for Storing Hierarchical Data in a Relational Database?

    - by orangepips
    Good Overviews

    - One more Nested Intervals vs. Adjacency List comparison: the best comparison of Adjacency List, Materialized Path, Nested Set and Nested Interval I've found.
    - Models for hierarchical data: slides with good explanations of tradeoffs and example usage
    - Representing hierarchies in MySQL: very good overview of Nested Set in particular
    - Hierarchical data in RDBMSs: most comprehensive and well organized set of links I've seen, but not much in the way of explanation

    Options

    Ones I am aware of, and general features:

    Adjacency List:
    - Columns: ID, ParentID
    - Easy to implement.
    - Cheap node moves, inserts, and deletes.
    - Expensive to find level (can store as a computed column), ancestry & descendants (Bridge Hierarchy combined with level column can solve), path (Lineage Column can solve).
    - Use Common Table Expressions in those databases that support them to traverse (see the CTE sketch at the end of this item).

    Nested Set (a.k.a. Modified Preorder Tree Traversal):
    - First described by Joe Celko - covered in depth in his book Trees and Hierarchies in SQL for Smarties
    - Columns: Left, Right
    - Cheap level, ancestry, descendants
    - Compared to Adjacency List: moves, inserts, and deletes are more expensive.
    - Requires a specific sort order (e.g. created). So sorting all descendants in a different order requires additional work.

    Nested Intervals:
    - Combination of Nested Sets and Materialized Path where left/right columns are floating point decimals instead of integers and encode the path information.

    Bridge Table (a.k.a. Closure Table; some good ideas about how to use triggers for maintaining this approach):
    - Columns: ancestor, descendant
    - Stands apart from the table it describes.
    - Can include some nodes in more than one hierarchy.
    - Cheap ancestry and descendants (albeit not in what order)
    - For complete knowledge of a hierarchy, needs to be combined with another option.

    Flat Table:
    - A modification of the Adjacency List that adds a Level and Rank (e.g. ordering) column to each record.
    - Expensive move and delete
    - Cheap ancestry and descendants
    - Good use: threaded discussion - forums / blog comments

    Lineage Column (a.k.a. Materialized Path, Path Enumeration):
    - Column: lineage (e.g. /parent/child/grandchild/etc...)
    - Limit to how deep the hierarchy can be.
    - Descendants cheap (e.g. LEFT(lineage, #) = '/enumerated/path')
    - Ancestry tricky (database-specific queries)

    Database Specific Notes

    - MySQL: use session variables for Adjacency List
    - Oracle: use CONNECT BY to traverse Adjacency Lists
    - PostgreSQL: ltree datatype for Materialized Path
    - SQL Server: general summary; 2008 offers the HierarchyId data type, which appears to help with the Lineage Column approach and expands the depth that can be represented.
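
    A sketch of the CTE traversal mentioned under Adjacency List (ANSI SQL; the table and column names are illustrative):

        WITH RECURSIVE tree AS (
            SELECT id, parent_id, name, 1 AS depth
            FROM category
            WHERE id = 42                     -- root of the subtree
            UNION ALL
            SELECT c.id, c.parent_id, c.name, t.depth + 1
            FROM category c
            JOIN tree t ON c.parent_id = t.id
        )
        SELECT * FROM tree;

    SQL Server spells it WITH tree AS (...) without the RECURSIVE keyword; the depth column doubles as the computed level mentioned in the list above.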

    Read the article

  • Why does my Perl regular expression only find the last occurrence?

    - by scharan
    I have the following input to a Perl script, and I wish to get the first occurrence of the NAME="..." strings in each of the <table>...</table> structures. The entire file is read into a single string and the regex acts on that input. However, the regex always returns the last occurrence of the NAME="..." strings. Can anyone explain what is going on and how this can be fixed?

    Input file:

        ADSDF
        <TABLE>
        NAME="ORDERSAA"
        line1
        line2
        NAME="ORDERSA"
        line3
        NAME="ORDERSAB"
        </TABLE>
        <TABLE>
        line1
        line2
        NAME="ORDERSB"
        line3
        </TABLE>
        <TABLE>
        line1
        line2
        NAME="ORDERSC"
        line3
        </TABLE>
        <TABLE>
        line1
        line2
        NAME="ORDERSD"
        line3
        line3
        line3
        </TABLE>
        <TABLE>
        line1
        line2
        NAME="QUOTES2"
        line3
        NAME="QUOTES3"
        NAME="QUOTES4"
        line3
        NAME="QUOTES5"
        line3
        </TABLE>
        <TABLE>
        line1
        line2
        NAME="QUOTES6"
        NAME="QUOTES7"
        NAME="QUOTES8"
        NAME="QUOTES9"
        line3
        line3
        </TABLE>
        <TABLE>
        NAME="MyName IsKhan"
        </TABLE>

    Perl code starts here:

        use warnings;
        use strict;

        my $nameRegExp = '(<table>((NAME="(.+)")|(.*|\n))*</table>)';

        sub extractNames($$) {
            my ($ifh, $ofh) = @_;
            my $fullFile;
            read($ifh, $fullFile, 1024);    # Hardcoded to read just 1024 bytes.
            while ($fullFile =~ m#$nameRegExp#gi) {
                print "found: " . $4 . "\n";
            }
        }

        sub main() {
            if (($#ARGV + 1) != 1) {
                die("Usage: extractNames infile\n");
            }
            my $infileName  = $ARGV[0];
            my $outfileName = $ARGV[1];
            open my $inFile, "<$infileName" or die("Could not open log file $infileName");
            my $outFile;
            #open my $outFile, ">$outfileName" or die("Could not open log file $outfileName");
            extractNames($inFile, $outFile);
            close($inFile);
            #close( $outFile );
        }

        #call
        main();
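
    What is happening: in ((NAME="(.+)")|(.*|\n))*, the starred group matches the table body line by line, and each time the NAME alternative matches, capture group 4 is overwritten. After the whole <table>...</table> match completes, a quantified capture group holds only the value from its last iteration, which is why the final NAME in each table is reported. One way around it, as a sketch, is to grab each table non-greedily and then take the first NAME inside it:

        # grab each <TABLE> block, then the first NAME within that block
        while ($fullFile =~ m#<table>(.*?)</table>#gis) {
            my $body = $1;
            if ($body =~ m#NAME="([^"]*)"#) {
                print "found: $1\n";
            }
        }

    Note also that read($ifh, $fullFile, 1024) only pulls in the first kilobyte, so on larger inputs some tables will never be seen regardless of the regex.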

    Read the article

  • Parse lines of integers in C

    - by Jérôme
    This is a classical problem, but I can not find a simple solution. I have an input file like:

        1 3 9 13 23 25 34 36 38 40 52 54 59
        2 3 9 14 23 26 34 36 39 40 52 55 59 63 67 76 85 86 90 93 99 108 114
        2 4 9 15 23 27 34 36 63 67 76 85 86 90 93 99 108 115
        1 25 34 36 38 41 52 54 59 63 67 76 85 86 90 93 98 107 113
        2 3 9 16 24 28
        2 3 10 14 23 26 34 36 39 41 52 55 59 63 67 76

    Lines of varying numbers of integers separated by a space. I would like to parse them into an array, and separate each line with a marker, let's say -1. The difficulty is that I must handle both integers and line returns. Here is my existing code; it loops forever in the sscanf loop (because sscanf cannot begin at a given position):

        #include <stdio.h>
        #include <stdlib.h>

        int main(int argc, char **argv)
        {
            if (argc != 4) {
                fprintf(stderr, "Usage: %s <data file> <nb transactions> <nb items>\n", argv[0]);
                return 1;
            }

            FILE *file = fopen(argv[1], "r");
            if (file == NULL) {
                fprintf(stderr, "Error: can not open %s\n", argv[1]);
                fclose(file);
                return 1;
            }

            int nb_trans = atoi(argv[2]);
            int nb_items = atoi(argv[3]);
            int *bdd = malloc(sizeof(int) * (nb_trans + nb_items));
            char line[1024];
            int i = 0;

            while (fgets(line, 1024, file)) {
                int item;
                while (sscanf(line, "%d ", &item)) {
                    printf("%s %d %d\n", line, i, item);
                    bdd[i++] = item;
                }
                bdd[i++] = -1;
            }

            for (i = 0; i < nb_trans + nb_items; i++) {
                printf("%d ", bdd[i]);
            }
            printf("\n");
        }
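
    One way to fix the inner loop, as a sketch: strtol reports where parsing stopped, so the cursor can advance through the line, which sscanf alone cannot do. This replaces the body of the fgets loop above:

        /* strtol advances 'end' past each parsed integer; when it stops
         * moving, the line has no more numbers. */
        while (fgets(line, sizeof line, file)) {
            char *p = line;
            char *end;
            for (;;) {
                long v = strtol(p, &end, 10);
                if (end == p)          /* no more integers on this line */
                    break;
                bdd[i++] = (int)v;
                p = end;
            }
            bdd[i++] = -1;             /* end-of-line marker */
        }

    An equivalent sscanf-based trick is to scan with "%d%n" and advance the pointer by the count %n writes, but strtol keeps the bookkeeping simpler.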

    Read the article

  • How to Convert arrays or SimpleXML-Objects into an XML-String

    - by streetparade
    I want to create XML from given data. I have a function, but I didn't write it, and it seems a bit cryptic. Can someone please review it and give me some ideas on how it could be written more clearly for everybody?

        /**
         * Converts arrays or SimpleXML-Objects into an XML-String
         * @params mixed Accepts an array or xml string with data to Post
         * @params integer DO NOT PROVIDE. Internal Usage for recursion only
         */
        private function mixedDataToXML($data, $level = 1)
        {
            if (!$data) {
                return FALSE;
            }
            if (is_array($data)) {
                $xml = '';
                if ($level == 1) {
                    $xml .= '<?xml version="1.0" encoding="ISO-8859-1"?>' . "\n";
                }
                foreach ($data as $key => $value) {
                    $key = strtolower($key);
                    if (is_array($value)) {
                        $multi_tags = false;
                        foreach ($value as $key2 => $value2) {
                            if (is_array($value2)) {
                                $xml .= str_repeat("\t", $level) . "<$key>\n";
                                $xml .= $this->mixedDataToXML($value2, $level + 1);
                                $xml .= str_repeat("\t", $level) . "</$key>\n";
                                $multi_tags = true;
                            } else {
                                if (trim($value2) != '') {
                                    if (htmlspecialchars($value2) != $value2) {
                                        $xml .= str_repeat("\t", $level) .
                                                "<$key><![CDATA[$value2]]></$key>\n";
                                    } else {
                                        $xml .= str_repeat("\t", $level) .
                                                "<$key>$value2</$key>\n";
                                    }
                                }
                                $multi_tags = true;
                            }
                        }
                        if (!$multi_tags and count($value) > 0) {
                            $xml .= str_repeat("\t", $level) . "<$key>\n";
                            $xml .= $this->mixedDataToXML($value, $level + 1);
                            $xml .= str_repeat("\t", $level) . "</$key>\n";
                        }
                    } else {
                        if (trim($value) != '') {
                            if (htmlspecialchars($value) != $value) {
                                $xml .= str_repeat("\t", $level) .
                                        "<$key><![CDATA[$value]]></$key>\n";
                            } else {
                                $xml .= str_repeat("\t", $level) .
                                        "<$key>$value</$key>\n";
                            }
                        }
                    }
                }
                return $xml;
            } else {
                return (string)$data;
            }
        }

    Read the article

  • Normalizing Item Names & Synonyms

    - by RabidFire
    Consider an e-commerce application with multiple stores. Each store owner can edit the item catalog of his store. My current database schema is as follows:

        item_names:    id | name | description | picture | common (BOOL)
        items:         id | item_name_id | picture | price | description | picture
        item_synonyms: id | item_name_id | name | error (BOOL)

    Notes: error indicates a wrong spelling (e.g. "Ericson"). description and picture of the item_names table are "globals" that can optionally be overridden by "local" description and picture fields of the items table (in case the store owner wants to supply a different picture for an item). common helps separate unique item names ("Jimmy Joe's Cheese Pizza" from "Cheese Pizza").

    I think the bright side of this schema is:

    - Optimized searching & handling synonyms: I can query the item_names & item_synonyms tables using name LIKE %QUERY% and obtain the list of item_name_ids that need to be joined with the items table. (Examples of synonyms: "Sony Ericsson", "Sony Ericson", "X10", "X 10")
    - Autocompletion: again, a simple query to the item_names table. I can avoid the usage of DISTINCT, and it minimizes the number of variations ("Sony Ericsson Xperia™ X10", "Sony Ericsson - Xperia X10", "Xperia X10, Sony Ericsson").

    The downside would be:

    - Overhead: when inserting an item, I query item_names to see if this name already exists; if not, I create a new entry. When deleting an item, I count the number of entries with the same name; if this is the only item with that name, I delete the entry from the item_names table (just to keep things clean; this accounts for possible erroneous submissions). And updating is the combination of both.
    - Weird item names: store owners sometimes use sentences like "Harry Potter 1, 2 Books + CDs + Magic Hat". There's something off about having so much overhead to accommodate cases like this. This would perhaps be the prime reason I'm tempted to go for a schema like this:

        items: id | name | picture | price | description | picture
        (... with item_names and item_synonyms as utility tables that I could query)

    Is there a better schema you would suggest? Should item names be normalized for autocomplete? Is this probably what Facebook does for "School" and "City" entries? Is the first schema or the second better/optimal for search?

    Thanks in advance!

    References: (1) Is normalizing a person's name going too far?, (2) Avoiding DISTINCT
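
    For concreteness, a sketch of what the autocomplete lookup might look like under the first schema; the prefix matching, the LIMIT, and the exclusion of flagged misspellings (error = 0) are illustrative choices, not part of the original design:

        SELECT name FROM item_names    WHERE name LIKE CONCAT(?, '%')
        UNION
        SELECT name FROM item_synonyms WHERE name LIKE CONCAT(?, '%') AND error = 0
        ORDER BY name
        LIMIT 10;

    A prefix LIKE such as this can use an index on name, whereas the LIKE %QUERY% form used for full search cannot, which is one argument for keeping the two lookups separate.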

    Read the article

  • Help with C# program design implementation: multiple array of lists or a better way?

    - by Bob
    I'm creating a 2D tile-based RPG in XNA and am in the initial design phase. I was thinking of how I want my tile engine to work and came up with a rough sketch. Basically I want a grid of tiles, but at each tile location I want to be able to add more than one tile and have an offset. I'd like this so that I could do something like add individual trees on the world map to give more flair, or set bottles on a bar in some town without having to draw a bunch of different bar tiles with varying bottles. But maybe my reach is greater than my grasp.

    I went to implement the idea and had something like this in my Map object:

        List<Tile>[,] Grid;

    But then I thought about it. Let's say I had a world map of 200x200, which would actually be pretty small as far as RPGs go. That would amount to 40,000 Lists. To my mind there has to be a better way. Now this IS premature optimization. I don't know if the way I happen to design my maps and game will be able to handle this, but it seems needlessly inefficient and something that could creep up if my game gets more complex.

    One idea I have is to make the offset and the multiple tiles optional so that I'm only paying for them when needed. But I'm not sure how I'd do this. A multiple array of objects?

        object[,] Grid;

    So here's my criteria:

    - A 2D grid of tile locations
    - Each tile location has a minimum of 1 tile, but can optionally have more
    - Each extra tile can optionally have an x and y offset for pinpoint placement

    Can anyone help with some ideas for implementing such a design (don't need it done for me, just ideas) while keeping memory usage to a minimum? If you need more background, here's roughly what my Map and Tile objects amount to:

        public struct Map
        {
            public Texture2D Texture;
            public List<Rectangle> Sources; // Source rectangles for where in Texture to get the sprite
            public List<Tile>[,] Grid;
        }

        public struct Tile
        {
            public int Index; // Where in Sources to find the source rectangle
            public int X, Y;  // Optional offsets
        }
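
    One sketch of the "pay only when needed" idea: keep the grid as a dense Tile[,] for the guaranteed base layer, and put the rare stacked tiles in a dictionary keyed by cell coordinates, so empty cells cost nothing beyond one struct. The names below are illustrative, not from the post:

        public struct Map
        {
            public Texture2D Texture;
            public List<Rectangle> Sources;
            public Tile[,] BaseLayer;                    // exactly one tile per cell
            public Dictionary<Point, List<Tile>> Extras; // only stacked cells allocate a List
        }

        // Hypothetical usage: bottles on the bar tile at (12, 7)
        // map.Extras[new Point(12, 7)] = new List<Tile> { bottleTile };

    A 200x200 Tile[,] of three ints is under half a megabyte, and the dictionary grows only with the number of decorated cells, which keeps the common case allocation-free.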

    Read the article

  • Neo4j 1.9.4 (REST Server,CYPHER) performance issue

    - by user2968943
    I have Neo4j 1.9.4 installed on a 24-core, 24GB RAM (CentOS) machine, and for most queries CPU usage spikes to 200% with only a few concurrent requests.

    Domain: some sort of social application with a few types of nodes (profiles) carrying 3-30 text/array properties, and 36 relationship types with at least 3 properties each. Most nodes currently have ~300-500 relationships.

    Current data set footprint (from the console):

        LogicalLogSize=4294907 (32MB)
        ArrayStoreSize=1675520 (12MB)
        NodeStoreSize=1342170 (10MB)
        PropertyStoreSize=1739548 (13MB)
        RelationshipStoreSize=6395202 (48MB)
        StringStoreSize=1478400 (11MB)

    which is IMHO really small. Most queries look like this one (with more or fewer WITH .. MATCH .. statements, and a few queries with variable-length relations, but those are often fast):

        START targetUser=node({id}), currentUser=node({current})
        MATCH targetUser-[contact:InContactsRelation]->n,
              n-[:InLocationRelation]->l,
              n-[:InCategoryRelation]->c
        WITH currentUser, targetUser, n, l, c, contact.fav is not null as inFavorites
        MATCH n<-[followers?:InContactsRelation]-()
        WITH currentUser, targetUser, n, l, c, inFavorites, COUNT(followers) as numFollowers
        RETURN id(n) as id, n.name? as name, n.title? as title, n._class as _class,
               n.avatar? as avatar, n.avatar_type? as avatar_type,
               l.name as location__name, c.name as category__name,
               true as isInContacts, inFavorites as isInFavorites, numFollowers

    It runs in ~1s-3s for the first run and ~1s-70ms for consecutive runs (it depends on the query), and about 5-10 queries run for each impression. Another interesting behavior: when I run a query from the (neo4j) console on my local machine many consecutive times (just pressing ctrl+enter for a few seconds) it has almost constant execution time, but when I do it on the server it gets exponentially slower, and I guess that is somehow related to my problem.

    Problem: my problem is that Neo4j is very CPU greedy (for a 24-core machine that may not be an issue, but it's obviously overkill for a small project). At first I used an AWS EC2 m1.large instance, but overall performance was bad; during testing, CPU was always over 100%.

    Some relevant parts of the configuration:

        neostore.nodestore.db.mapped_memory=1280M
        wrapper.java.maxmemory=8192

    Note: I already tried a configuration where all memory-related parameters were HIGH and it didn't work (no change at all).

    Question: where should I dig? Configuration? Schema? Queries? What am I doing wrong? If you need more info (logs, configs), just ask ;)

    Read the article

  • What would you suggest as a high school first language?

    - by ldigas
    Edit by OA: After reading some answers I'll just update the question a little. At first I put it a little bluntly, but some of those answers gave me good arguments which have to be taken into consideration while making a stand on this one (these are mostly picked up from comments and answers below).

    A few things to take into account:

    - To many pupils this is a first programming language - at this stage most of them have trouble grasping the difference between data types, variable passing, and whatnot, let alone pointers and similar 'low level stuff' :)
    - They will all have to pass this to get into the next grade (well, the big majority of them anyway)
    - Not all of them have computers at home, and not all of them are willing to learn this, let alone interested in it - so the concepts have to be taught on a finite time scale in school hours (as well as practiced on computers)
    - Free literature is a bonus - the teacher will make some scripts and handouts, but still... I wouldn't like to burden the parents with buying expensive literature (also, English is not a native language here... and although they are all learning it, their ability to read it fluently is somewhat questionable)
    - Somebody gave the argument "a language which does not get in the way of ideas" - a good one
    - Accessibility on different platforms is not especially important at this point - although most of the suggested ones are available on Windows as well as Linux - there are not many Macs in this part of Europe (their prices are sky high for anything but specialised usage)
    - I will check what the licensing issues are on the MS Express editions regarding using them massively in high schools for purposes like this - if someone has any info about this, please don't be shy with it :)

    A friend of mine, an informatics teacher - in the EU this is something like a junior CS teacher - at a local high school asked me what I thought about what the first language pupils are taught should be. It is a technical school (a little more oriented towards mathematics than a gymnasium, but not totally computer oriented). So I'm asking you - what do you think should be the first language pupils are exposed to in high school?

    They have been teaching Pascal so far, but she's not sure that's a good course. She thought about switching to C (which I resented; considering that not all pupils have an interest in programming to begin with, they should be taught something higher level while they are just gripping the idea of a loop and such... for a start). I suggested Python or Ruby (preferably Python, since it handles all paradigms).

    What is your opinion on this one? I looked, but didn't find a similar question on SO, so if there is one, please just point me towards it.

    Edit: The assumption is that none of the pupils have been exposed to any programming in junior school.

    See also:
    - What is the best way to teach young kids some basic programming concepts?
    - Best ways to teach a beginner to program
    - How and when do you teach a kid to code
    - What is the easiest language to start with?
    - High School Programming

    Read the article

  • ReadFile doesn't work asynchronously on Win7 and Win2k8

    - by f0b0s
    According to MSDN, ReadFile can read data in two different ways: synchronously and asynchronously. I need the second one. The following code demonstrates usage with an OVERLAPPED struct:

        #include <windows.h>
        #include <stdio.h>
        #include <time.h>

        void Read()
        {
            HANDLE hFile = CreateFileA("c:\\1.avi", GENERIC_READ, 0, NULL,
                                       OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
            if (hFile == INVALID_HANDLE_VALUE) {
                printf("Failed to open the file\n");
                return;
            }

            int dataSize = 256 * 1024 * 1024;
            char* data = (char*)malloc(dataSize);
            memset(data, 0xFF, dataSize);

            OVERLAPPED overlapped;
            memset(&overlapped, 0, sizeof(overlapped));

            printf("reading: %d\n", time(NULL));
            BOOL result = ReadFile(hFile, data, dataSize, NULL, &overlapped);
            printf("sent: %d\n", time(NULL));

            DWORD bytesRead;
            result = GetOverlappedResult(hFile, &overlapped, &bytesRead, TRUE); // wait until completion - returns immediately
            printf("done: %d\n", time(NULL));

            CloseHandle(hFile);
        }

        int main()
        {
            Read();
        }

    On Windows XP the output is:

        reading: 1296651896
        sent: 1296651896
        done: 1296651899

    It means that ReadFile didn't block and returned immediately, within the same second, whereas the reading process continued for 3 seconds. That is normal async reading. But on Windows 7 and Windows 2008 I get the following results:

        reading: 1296661205
        sent: 1296661209
        done: 1296661209

    That is the behavior of a sync read. MSDN says that an async ReadFile can sometimes behave as sync (when the file is compressed or encrypted, for example), but in that situation the return value should be TRUE and GetLastError() == NO_ERROR. On Windows 7 I get FALSE and GetLastError() == ERROR_IO_PENDING. So WinAPI tells me that it is an async call, but the test shows that it is not!

    I'm not the only one who found this "bug": read the comments on the ReadFile MSDN page. So what's the solution? Does anybody know? It has been 14 months since Denis found this strange behavior.

    Read the article

  • segmented reduction with scattered segments

    - by Christian Rau
    I have to solve a pretty standard problem on the GPU, but I'm quite new to practical GPGPU, so I'm looking for ideas to approach this problem.

    I have many points in 3-space which are assigned to a very small number of groups (each point belongs to one group), specifically 15 in this case (and that doesn't ever change). Now I want to compute the mean and covariance matrix of all the groups. So on the CPU it's roughly the same as:

        for each point p {
            mean[p.group]       += p.pos;
            covariance[p.group] += p.pos * p.pos;
            ++count[p.group];
        }

        for each group g {
            mean[g]       /= count[g];
            covariance[g]  = covariance[g]/count[g] - mean[g]*mean[g];
        }

    Since the number of groups is extremely small, the last step can be done on the CPU (I need those values on the CPU anyway). The first step is actually just a segmented reduction, but with the segments scattered around.

    So the first idea I came up with was to first sort the points by their groups. I thought about a simple bucket sort using atomic_inc to compute bucket sizes and per-point relocation indices (got a better idea for sorting? atomics may not be the best idea). After that they're sorted by groups, and I could possibly come up with an adaptation of the segmented scan algorithms presented here.

    But in this special case I have a very large amount of data per point (9-10 floats, maybe even doubles if the need arises), so the standard algorithms using one shared memory element per thread and one thread per point might cause problems with per-multiprocessor resources such as shared memory or registers (OK, much more so on compute capability 1.x than 2.x, but still).

    Due to the very small and constant number of groups I thought there might be better approaches. Maybe there are already existing ideas suited to these specific properties of such a standard problem. Or maybe my general approach isn't that bad, and you have ideas for improving the individual steps, like a good sorting algorithm suited to a very small number of keys, or some segmented reduction algorithm minimizing shared memory/register usage.

    I'm looking for general approaches and don't want to use external libraries. FWIW I'm using OpenCL, but it shouldn't really matter, as the general concepts of GPU computing don't really differ across the major frameworks.
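
    Since the group count is tiny and fixed, one approach that skips sorting entirely is to give every work-item private accumulators for all 15 groups, walk the points in a grid-stride loop, and write per-work-item partials to global memory for a final (cheap) reduction on the host. A sketch in OpenCL, means only; the covariance accumulators would follow the same pattern, and the buffer layout is an illustrative choice:

        #define NUM_GROUPS 15

        __kernel void partial_sums(__global const float4 *pos,    /* point positions */
                                   __global const int   *group,   /* group per point */
                                   const uint n,
                                   __global float4 *sums,    /* [global_size][NUM_GROUPS] */
                                   __global uint   *counts)  /* same layout               */
        {
            float4 acc[NUM_GROUPS];
            uint   cnt[NUM_GROUPS];
            for (int g = 0; g < NUM_GROUPS; ++g) {
                acc[g] = (float4)(0.0f);
                cnt[g] = 0;
            }

            /* grid-stride loop: no atomics, no sorting, coalesced reads */
            for (uint i = get_global_id(0); i < n; i += get_global_size(0)) {
                int g = group[i];
                acc[g] += pos[i];
                cnt[g] += 1;
            }

            uint base = get_global_id(0) * NUM_GROUPS;
            for (int g = 0; g < NUM_GROUPS; ++g) {
                sums[base + g]   = acc[g];
                counts[base + g] = cnt[g];
            }
        }

    The host (or a second tiny kernel) then sums global_size * 15 partials per quantity, which is negligible. The catch is register pressure: 15 private float4 accumulators is already noticeable, and the full 9-10 value covariance case may spill on compute capability 1.x hardware, so the per-work-item group state is exactly the tradeoff to measure first.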

    Read the article

  • Persistent (purely functional) Red-Black trees on disk performance

    - by Waneck
    I'm studying the best data structures with which to implement a simple open-source object temporal database, and currently I'm very fond of using persistent red-black trees to do it.

    My main reasons for using persistent data structures are first of all to minimize the use of locks, so the database can be as parallel as possible. Also it will be easier to implement ACID transactions, and even to abstract the database to work in parallel on a cluster of some kind. The great thing about this approach is that it makes implementing temporal databases possible almost for free, and that is something quite nice to have, especially for the web and for data analysis (e.g. trends).

    All of this is very cool, but I'm a little suspicious about the overall performance of using a persistent data structure on disk. Even though there are some very fast disks available today, and all writes can be done asynchronously so a response is always immediate, I don't want to build the whole application on a false premise, only to realize it isn't really a good way to do it.

    Here's my line of thought:

    - Since all writes are done asynchronously, and using a persistent data structure makes it possible not to invalidate the previous - and currently valid - structure, the write time isn't really a bottleneck.
    - There is some literature on structures like this designed exactly for disk usage. But it seems to me that these techniques add more read overhead to achieve faster writes, when I think exactly the opposite is preferable. Also, many of these techniques really do end up with multi-versioned trees, but they aren't strictly immutable, which is something very crucial to justify the persistence overhead.
    - I know there will still have to be some kind of locking when appending values to the database, and I also know there should be good garbage-collecting logic if not all versions are to be maintained (otherwise the file size will surely rise dramatically). Also, a delta compression system could be considered.
    - Of all the search tree structures, I really think red-black trees are the closest to what I need, since they offer the least number of rotations.

    But there are some possible pitfalls along the way:

    - Asynchronous writes could affect applications that need the data in real time. But I don't think that is the case with web applications, most of the time. Also, when real-time data is needed, other solutions could be devised, like a check-in/check-out system for specific data that needs to be worked on in a more real-time manner.
    - They could lead to some commit conflicts, though I fail to think of a good example of when that could happen. Also, commit conflicts can occur in a normal RDBMS if two threads are working with the same data, right?
    - The overhead of having an immutable interface like this will grow exponentially, and everything is doomed to fail soon, so this is all a bad idea.

    Any thoughts? Thanks!

    edit: There seems to be a misunderstanding of what a persistent data structure is: http://en.wikipedia.org/wiki/Persistent_data_structure

    Read the article

  • IPhone Development Profile Expired

    - by theiphoneguy
    I really combed this site and others. I read and re-read the related links here and the Apple docs. I'm sorry, but either I am obviously missing something right under my nose, or this Apple profile/certificate stuff is a bit convoluted. Here it is:

    I have a product in the App Store. I have updated it several times and users like it. My development profile recently expired, just when I was improving the app for its next release. I can run the app in the simulator. I can compile and put the distribution build on my iPhone just fine.

    I went to the Apple portal and renewed the development profile. I downloaded it and installed it in Xcode. I see it in the Organizer window. I see it on my iPhone. I CANNOT put the debug build on my iPhone to debug or run with Instruments. The message is that either there is not a valid signed profile or it is untrusted.

    I subsequently tried to download and install the certificate in my Mac's keychain. Still no success. I checked the code signing section of the project settings, and also for the target and the root. All appears to indicate that it is using the expected development profile for debug. Yes, I had deleted the old profile from my iPhone and from the Organizer. I cleaned the Xcode cache and all targets. I have done all of this several times and in varying sequences to try to cover every possibility.

    I am ready to do anything to be able to debug with Instruments in order to check for leaks or high memory usage. Even though the distribution compile runs fine on my iPhone and plays well with other running processes, I will not release anything without a leaks/memory test.

    Any ideas will be appreciated. If I missed something obvious, please forgive me - it was not due to just posting a question without searching for similar postings. Thanks!

    Read the article

  • Memory issues - Living vs. overall -> app is killed

    - by D33
    I'm trying to check my application's memory issues in Instruments. When I load the application I play some sounds and show some animations in UIImageViews. To save some memory I load each sound only when I need it, and when I stop playing it I free it from memory.

    Problem 1: My application uses about 5.5MB of Living memory, BUT the Overall section grows to 20MB after start and then keeps growing slowly (about 100kB/sec). The responsible libraries are OpenAL (OAL::Buffer), dyld (_dyld_start) - I am not sure what this really is - and some other stuff like ft_mem_qrealloc, CGFontStrikeSetValue, ...

    Problem 2: When the Overall section exceeds about 30MB, the application crashes (is killed). According to what I have read about Overall memory, it means that all my allocations and deallocations together come to about 30MB. But I don't really see the problem. When I need a sound, for example, I load it into memory, and when I don't need it anymore I release it. But that means that loading a 1MB sound increases the Overall memory usage by 2MB, am I right? And when I load 10 sounds my app crashes just because the Overall figure is too high, even though Living is still low??? I am very confused about it. Could someone please help me clear it up? (I am on iOS 5 and using ARC.)

    SOME CODE: creating the sound with OpenAL:

        MyOpenALSound *sound = [[MyOpenALSound alloc] initWithSoundFile:filename willRepeat:NO];
        if (!sound)
            return;
        [soundDictionary addObject:sound];

    playing:

        [sound play];
        dispatch_after(dispatch_time(DISPATCH_TIME_NOW,
                       ((sound.duration * sound.pitch) + 0.1) * NSEC_PER_SEC),
                       dispatch_get_current_queue(), ^{
            [soundDictionary removeObjectForKey:[NSNumber numberWithInt:soundID]];
        });

    creating the sound with AVAudioPlayer:

        [musics replaceObjectAtIndex:ID_MUSIC_MAP
                          withObject:[[Music alloc] initWithFilename:@"mapMusic.mp3"
                                                       andWillRepeat:YES]];
        pom = [musics objectAtIndex:musicID];
        [pom playMusic];

    and stopping and freeing it:

        [musics replaceObjectAtIndex:ID_MUSIC_MAP withObject:[NSNull null]];

    AND IMAGE ANIMATIONS: I load images from a big PNG file (this is related to my other topic: Memory warning - UIImageView and its animations). I have a few UIImageViews, and over time I set animation arrays on them to play animations...

        UIImage *source = [[UIImage alloc] initWithCGImage:
                              [[UIImage imageNamed:@"imageSource.png"] CGImage]];
        cutRect = CGRectMake(0*dimForImg.width, 1*dimForImg.height,
                             dimForImg.width, dimForImg.height);
        image1 = [[UIImage alloc] initWithCGImage:
                     CGImageCreateWithImageInRect([source CGImage], cutRect)];
        cutRect = CGRectMake(1*dimForImg.width, 1*dimForImg.height,
                             dimForImg.width, dimForImg.height);
        ...
        image12 = [[UIImage alloc] initWithCGImage:
                      CGImageCreateWithImageInRect([source CGImage], cutRect)];

        NSArray *images = [[NSArray alloc] initWithObjects:image1, image2, image3,
                           image4, image5, image6, image7, image8, image9, image10,
                           image11, image12, image12, image12, nil];

    and I use this array simply like: myUIImageView.animationImages = images; ... set the duration -> startAnimating.
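
    One concrete leak in the animation code above: ARC manages Objective-C objects only, not CoreFoundation/CoreGraphics ones, so every CGImageRef returned by CGImageCreateWithImageInRect() must be released explicitly, or each frame's backing image is lost. A sketch of the corrected pattern:

        CGImageRef cut = CGImageCreateWithImageInRect([source CGImage], cutRect);
        UIImage *image1 = [[UIImage alloc] initWithCGImage:cut];
        CGImageRelease(cut);   // ARC will not do this for you

    The same "Create means you release" rule applies to any CF/CG function with Create or Copy in its name, which is a likely contributor to the ever-growing Overall figure.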

    Read the article

  • What (tf) are the secrets behind PDF memory allocation (CGPDFDocumentRef)

    - by Kai
    For a PDF reader I want to prepare a document by taking 'screenshots' of each page and saving them to disk. My first approach is:

        CGPDFDocumentRef document = CGPDFDocumentCreateWithURL((CFURLRef) someURL);
        for (int i = 1; i <= pageCount; i++) {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            CGPDFPageRef page = CGPDFDocumentGetPage(document, i);
            ... // getting + manipulating graphics context etc.
            CGContextDrawPDFPage(context, page);
            ...
            UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
            ... // saving the image to disc
            [pool drain];
        }
        CGPDFDocumentRelease(document);

    This results in a lot of memory which seems not to be released after the first run of the loop (preparing the 1st document), but no more unreleased memory in additional runs:

        MEMORY BEFORE:         6 MB
        MEMORY DURING 1ST DOC: 40 MB
        MEMORY AFTER 1ST DOC:  25 MB
        MEMORY DURING 2ND DOC: 40 MB
        MEMORY AFTER 2ND DOC:  25 MB
        ...

    Changing the code to

        for (int i = 1; i <= pageCount; i++) {
            CGPDFDocumentRef document = CGPDFDocumentCreateWithURL((CFURLRef) someURL);
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            CGPDFPageRef page = CGPDFDocumentGetPage(document, i);
            ... // getting + manipulating graphics context etc.
            CGContextDrawPDFPage(context, page);
            ...
            UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
            ... // saving the image to disc
            CGPDFDocumentRelease(document);
            [pool drain];
        }

    changes the memory usage to

        MEMORY BEFORE:         6 MB
        MEMORY DURING 1ST DOC: 9 MB
        MEMORY AFTER 1ST DOC:  7 MB
        MEMORY DURING 2ND DOC: 9 MB
        MEMORY AFTER 2ND DOC:  7 MB
        ...

    but is obviously a step backwards in performance.

    When I start reading a PDF (later in time, on a different thread), in the first case no more memory is allocated (staying at 25 MB), while in the second case memory goes up to 20 MB (from 7). In both cases, when I remove the CGContextDrawPDFPage(context, page); line, memory is (nearly) constant at 6 MB during and after all document preparations.

    Can anybody explain what's going on there?

    Read the article

  • What to Expect in Rails 4

    - by mikhailov
    Rails 4 is nearly there; we should be ready before it is released. Most developers are trying hard to keep their applications on the edge.

    Must-see resources:

    1) @sikachu talk: What to Expect in Rails 4.0 - YouTube
    2) Rails Guides release notes: http://edgeguides.rubyonrails.org/4_0_release_notes.html

    Here is a mix of all the major changes:

    ActionMailer changes excerpt:
    - Asynchronously send messages via the Rails queue
    - Raise an ActionView::MissingTemplate exception when no implicit template could be found

    ActionPack changes excerpt:
    - Added controller-level etag additions that will be part of the action etag computation
    - Add automatic template digests to all CacheHelper#cache calls (originally spiked in the cache_digests plugin)
    - Add Routing Concerns to declare common routes that can be reused inside other resources and routes (see the sketch below)
    - Added ActionController::Live. Mix it in to your controller and you can stream data to the client live
    - truncate now always returns an escaped HTML-safe string. The option :escape can be set to false to not escape the result
    - Added ActionDispatch::SSL middleware that, when included, forces all requests to use the HTTPS protocol

    ActiveModel changes excerpt:
    - AM::Validation#validates: ability to pass a custom exception to the :strict option
    - Changed the AM::Serializers::JSON.include_root_in_json default value to false. Now AM Serializers and AR objects have the same default behaviour
    - Added ActiveModel::Model, a mixin to make Ruby objects work with AP out of the box
    - Trim down the Active Model API by removing valid? and errors.full_messages

    ActiveRecord changes excerpt:
    - Use the native mysqldump command instead of the structure_dump method when dumping the database structure to a sql file
    - Attribute predicate methods, such as article.title?, will now raise ActiveModel::MissingAttributeError if the attribute being queried for truthiness was not read from the database, instead of just returning false
    - ActiveRecord::SessionStore has been extracted from Active Record as the activerecord-session_store gem. Please read the README.md file on the gem for usage
    - Fix reset_counters when there are multiple belongs_to associations with the same foreign key and one of them has a counter cache
    - Raise ArgumentError if the list of attributes to change is empty in update_all
    - Add Relation#load. This method explicitly loads the records and then returns self
    - Deprecated most of the 'dynamic finder' methods. All dynamic methods except for find_by_... and find_by_...! are deprecated
    - Added the ability for ActiveRecord::Relation#from to accept other ActiveRecord::Relation objects
    - Remove IdentityMap

    ActiveSupport changes excerpt:
    - ERB::Util.html_escape now escapes single quotes
    - ActiveSupport::Callbacks: deprecate monkey patching of object callbacks
    - Replace the deprecated memcache-client gem with dalli in ActiveSupport::Cache::MemCacheStore
    - Object#try will now return nil instead of raising a NoMethodError if the receiving object does not implement the method, but you can still get the old behavior by using the new Object#try!
    - Object#try can't call private methods
    - Add ActiveSupport::Deprecations.behavior = :silence to completely ignore Rails runtime deprecations

    What are the most important changes for you?
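
    For reference, a short sketch of the Routing Concerns mentioned under ActionPack, in a Rails 4 config/routes.rb (the commentable/imageable names are illustrative, not from the post):

        concern :commentable do
          resources :comments
        end

        concern :imageable do
          resources :images, only: :index
        end

        # both route sets get nested comments; posts also get images
        resources :posts,    concerns: [:commentable, :imageable]
        resources :articles, concerns: :commentable

    Concerns replace the common pattern of a shared lambda or module method called inside several resources blocks, keeping repeated nested routes declared in one place.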

    Read the article

  • CSS selectors : should I minimise my use of the class attribute in the HTML or optimise the speed

    - by Laurent Bourgault-Roy
    As I was working on a small website, I decided to use the PageSpeed extension to check if there were any improvements I could make to help the site load faster. However, I was quite surprised when it told me that my use of CSS selectors was "inefficient". I was always told that you should keep the usage of the class attribute in the HTML to a minimum, but if I understand correctly what PageSpeed tells me, it's much more efficient for the browser to match directly against a class name. That makes sense to me, but it also means that I need to put more CSS classes in my HTML, and it makes my .css file a little harder to read. I usually tend to mark up my CSS like this:

        #mainContent p.productDescription em.priceTag { ... }

    which makes it easy to read: I know this will affect the main content, and that it affects something in a paragraph tag (so I won't start to put all sorts of layout code in it) that describes a product, and it's something that needs emphasis. However, it seems I should rewrite it as

        .priceTag { ... }

    which removes all context information about the style. And if I want to use differently formatted price tags (for example, one in a list on the sidebar and one in a paragraph), I need to use something like

        .paragraphPriceTag { ... }
        .listPriceTag { ... }

    which really annoys me, since I seem to duplicate the semantics of the HTML in my classes. It also means I can't put common styles in an unqualified .priceTag { ... }, and thus I need to replicate the style in both CSS rules, making it harder to make changes. (Although for that I could use multiple class selectors, but IE6 doesn't support them.)

    I believe making code harder to read for the sake of speed has never really been considered a good practice, except where it is critical, of course. This is why people use PHP/Ruby/C# etc. instead of C/assembly to code their sites: it's easier to write and debug. So I was wondering: should I stick with few CSS classes and complex selectors, or should I go the optimization route and remove my fancy CSS selectors for the sake of speed? Does PageSpeed make over-the-top recommendations? On most modern computers, will it even make a difference?

    Read the article
