Search Results

Search found 43200 results on 1728 pages for 'large object pattern'.

Page 16/1728 | < Previous Page | 12 13 14 15 16 17 18 19 20 21 22 23  | Next Page >

  • Incorporating libs into module pattern

    - by webnesto
    I have recently started using require.js (along with Backbone.js, jQuery, and a handful of other JavaScript libs) and I love the module pattern (here's a nice synopsis if you're unfamiliar: http://www.adequatelygood.com/2010/3/JavaScript-Module-Pattern-In-Depth). Something I'm running up against is best practice for incorporating libs that don't support the module pattern out of the box.

    For example, jQuery without modification loads into a global jQuery variable, and that's that. Require.js recognizes this and provides an example project for download with a (slightly) modified version of jQuery to incorporate into a require.js project. This goes against everything I've ever learned about using external libs - never modify the source. I can list a ton of reasons. Regardless, this is not an approach I'm comfortable with.

    I have been using a mixed approach, wherein I build/load the "traditional" JS libraries in a "traditional" way (available in the global namespace) and then use the module pattern for all of my application code. This seems okay to me, but it bugs me because one of the real beauties of the module pattern (no globals) is getting perverted. Anyone else got a better solution to this problem?
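
    One common approach (and, if I recall, what later require.js versions formalized as "shim" config) is a tiny wrapper module that hands the existing global to the loader, so the library's source stays untouched. A minimal sketch, assuming jQuery was already loaded via a plain script tag - the module name "jquery" and the file layout are illustrative:

        // shim-jquery.js - exposes the pre-loaded global as a module
        define("jquery", [], function () {
            // jQuery's own source is untouched; we just re-export the global
            return window.jQuery;
        });

        // application code can then depend on it like any other module
        require(["jquery"], function ($) {
            $(function () {
                console.log("jQuery", $.fn.jquery, "loaded as a module");
            });
        });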

    Read the article

  • The repository pattern explained and implemented

    The pattern documented and named Repository is one of the most misunderstood and misused. In this post we'll implement the pattern in C# to achieve this simple line of code: var customers = customers.Matching(new PremiumCustomersFilter()) - as well as discuss the origins of the pattern and the original definitions, to clear up some of the misrepresentations. [...]
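
    The article's code is C#, but the shape of the idea - a repository whose query method accepts a specification object - fits in a few lines of any typed language. A sketch in TypeScript, with all names (Specification, Repository, PremiumCustomersFilter, the 10000 threshold) invented for illustration rather than taken from the article:

        interface Customer { name: string; totalPurchases: number; }

        // a specification encapsulates one query criterion as an object
        interface Specification<T> {
            isSatisfiedBy(item: T): boolean;
        }

        class PremiumCustomersFilter implements Specification<Customer> {
            isSatisfiedBy(c: Customer): boolean {
                return c.totalPurchases > 10000;
            }
        }

        // the repository exposes one generic query method instead of one
        // method per conceivable filter
        class Repository<T> {
            constructor(private items: T[]) {}
            matching(spec: Specification<T>): T[] {
                return this.items.filter(i => spec.isSatisfiedBy(i));
            }
        }

        const customers = new Repository<Customer>([
            { name: "Ada", totalPurchases: 25000 },
            { name: "Bob", totalPurchases: 500 },
        ]);
        console.log(customers.matching(new PremiumCustomersFilter()));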

    Read the article

  • What's the point of the Prototype design pattern?

    - by user1905391
    So I'm learning about design patterns in school. Many of them are silly little ideas, but they nevertheless solve some recurring problems (singleton, adapters, asynchronous polling, etc.). But today I was told about the so-called 'Prototype' design pattern, and I must be missing something, because I don't see any benefit from it. I've seen people online say it's faster than using "new", but this doesn't make any sense, since at some point, regardless of how the new object is created, memory needs to be allocated for it, etc. Furthermore, doesn't this pattern run into the same 'chicken or egg' problem? By this I mean: since the prototype pattern essentially is just cloning objects, at some point the original object must be created itself (i.e., not cloned). So wouldn't I need to have an existing copy of every object I would ever want to clone, already ready to clone? Seems stupid to me. Can anyone explain what the use of this pattern is? Original post: http://stackoverflow.com/questions/13887704/whats-the-point-of-the-prototype-design-pattern
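
    The usual resolution of the chicken-and-egg worry is that you pay the expensive construction exactly once per prototype, then stamp out cheap copies. A sketch in TypeScript - the Enemy class, the loading cost, and the numbers are all invented for illustration:

        class Enemy {
            private constructor(public sprite: string, public hitPoints: number) {}

            // the expensive path: imagine disk I/O, parsing, texture decoding
            static load(name: string): Enemy {
                return new Enemy(name + ".png", 100);
            }

            // the cheap path: duplicate the fields of an already-built
            // instance, skipping the expensive setup entirely
            clone(): Enemy {
                return Object.assign(Object.create(Enemy.prototype), this) as Enemy;
            }
        }

        const prototypeOrc = Enemy.load("orc");   // pay the full cost once
        const horde = Array.from({ length: 50 }, () => prototypeOrc.clone());

    So "faster than new" doesn't mean allocation is avoided; it means the work that happens around the allocation (parsing, lookups, configuration) is done once on the prototype instead of once per instance.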

    Read the article

  • Read large file into sqlite table in objective-C on iPhone

    - by James Testa
    I have a 2 MB file, not too large, that I'd like to put into an sqlite database so that I can search it. There are about 30K entries in CSV format, with six fields per line. My understanding is that sqlite on the iPhone can handle a database of this size. I have taken a few approaches, but they have all been slow (over 30 s). I've tried:

    1) Using C code to read the file and parse the fields into arrays.

    2) Using the following Objective-C code to parse the file and put it directly into the sqlite database:

        NSString *file_text = [NSString stringWithContentsOfFile:filePath usedEncoding:NULL error:NULL];
        NSArray *lineArray = [file_text componentsSeparatedByString:@"\n"];
        for (int k = 0; k < [lineArray count]; k++) {
            NSArray *parts = [[lineArray objectAtIndex:k] componentsSeparatedByString:@","];
            NSString *field0 = [parts objectAtIndex:0];
            NSString *field2 = [parts objectAtIndex:2];
            NSString *field3 = [parts objectAtIndex:3];
            NSString *loadSQLi = [[NSString alloc] initWithFormat:
                @"INSERT INTO TABLE (FIELD0, FIELD2, FIELD3) VALUES ('%@', '%@', '%@');",
                field0, field2, field3];
            if (sqlite3_exec(db_table, [loadSQLi UTF8String], NULL, NULL, &errorMsg) != SQLITE_OK) {
                sqlite3_close(db_table);
                NSAssert1(0, @"Error loading table: %s", errorMsg);
            }
        }

    Am I missing something? Does anyone know of a fast way to get the file into a database? Or is it possible to translate the file into an sqlite format that can be read directly into sqlite? Or should I turn the file into a plist and load it into a Dictionary? Unfortunately I need to search on two of the fields, and I think a Dictionary can only have one key? Jim
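
    The classic cause of this kind of slowness is that each sqlite3_exec runs in its own implicit transaction, so every row costs a disk sync. Batching all inserts into one transaction with a prepared statement typically cuts 30 s to well under a second. A sketch of the idea in TypeScript using the better-sqlite3 package (table and column names invented; on the iPhone the equivalent is sqlite3_prepare_v2 / sqlite3_bind_text inside an explicit BEGIN ... COMMIT):

        import { readFileSync } from "node:fs";
        import Database from "better-sqlite3";

        const db = new Database("entries.db");
        db.exec("CREATE TABLE IF NOT EXISTS entries (field0 TEXT, field2 TEXT, field3 TEXT)");

        // one prepared statement, compiled once, bound per row (this also
        // avoids the quoting bugs of building SQL via string formatting)
        const insert = db.prepare("INSERT INTO entries (field0, field2, field3) VALUES (?, ?, ?)");

        // db.transaction wraps the loop in BEGIN/COMMIT: one sync for 30K
        // rows instead of one sync per row
        const insertAll = db.transaction((lines: string[]) => {
            for (const line of lines) {
                const f = line.split(",");
                if (f.length >= 4) insert.run(f[0], f[2], f[3]);
            }
        });

        insertAll(readFileSync("entries.csv", "utf8").split("\n"));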

    Read the article

  • Doing large updates against an indexed view

    - by user217136
    We have an indexed view that runs across three large tables. Two of these tables (A & B) are constantly getting updated with user transactions, and the other table (C) contains product data that needs to be updated once a week. This product table contains over 6 million records. We need this view across these three tables for our core business process, and unfortunately we cannot change this aspect. We even had a SQL Server MVP come in to help test under load to make sure we have the most efficient configuration. There is one column in the product table that gets used in the view and has to be updated each week. The problem we are now encountering is that as volume increases on our transactions against tables A & B, the weekly update to table C is causing deadlocks.

    I have tried several different methods, to no avail:

    1) I was hoping that we could change the view so that table C could be a dirty read "WITH (NOLOCK)", but apparently that functionality is not available with indexed views.

    2) I thought about updating a new column in table C and then just renaming it when the process is done, but you cannot do that due to the dependency in the view.

    3) I also entertained the idea of writing this value to a temporary product table and then running an ALTER statement against the view to have it point to my new table. However, when I did that, the indexes on my view were dropped and it took quite a bit of time to recreate them.

    4) We tried to do the weekly update in small chunks (as small as 100 records at a time), but we still run into deadlocks.

    Questions:

    a) We are using SQL Server 2005. Does SQL Server 2008 have new functionality in its indexed views that would help us? Is there now a way to do dirty reads with an indexed view?

    b) Is there a better approach to altering an existing view to point to a new table?

    Thanks!

    Read the article

  • Managing large binary files with git

    - by pi
    Hi there. I am looking for opinions on how to handle large binary files on which my source code (web application) depends. We are currently discussing several alternatives:

    1. Copy the binary files by hand. Pro: not sure. Contra: I am strongly against this, as it increases the likelihood of errors when setting up a new site/migrating the old one, and builds up another hurdle to take.

    2. Manage them all with git. Pro: removes the possibility of 'forgetting' to copy an important file. Contra: bloats the repository, decreases flexibility to manage the code base, and checkouts/clones/etc. will take quite a while.

    3. Separate repositories. Pro: checking out/cloning the source code is as fast as ever, and the images are properly archived in their own repository. Contra: removes the simplicity of having the one and only git repository on the project, and surely introduces some other things I haven't thought about.

    What are your experiences/thoughts regarding this? Also: does anybody have experience with multiple git repositories and managing them in one project?

    Update: The files are images for a program which generates PDFs with those files in it. The files will not change very often (as in years), but they are very relevant to the program. The program will not work without the files.

    Update 2: I found a really nice screencast on using git-submodule at GitCasts.

    Read the article

  • How can I quickly parse large (>10GB) files?

    - by Andrew
    Hi - I have to process text files 10-20 GB in size of the format:

        field1 field2 field3 field4 field5

    I would like to parse the data from field2 of each line into one of several files; the file it gets pushed into is determined line-by-line by the value in field4. There are 25 different possible values in field4, and hence 25 different files the data can get parsed into. I have tried using Perl (slow) and awk (faster but still slow) - does anyone have any suggestions or pointers toward alternative approaches?

    FYI, here is the awk code I was trying to use; note I had to revert to going through the large file 25 times because I wasn't able to keep 25 output files open at once in awk:

        chromosomes=(1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25)
        for chr in ${chromosomes[@]}
        do
            awk < my_in_file_here -v pat="$chr" '{ if ($4 == pat) for (i = $2; i <= $2+52; i++) print i }' >> my_out_file_"$chr".query
        done
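
    Since the target file depends only on field4, a single pass that keeps one output handle per observed value avoids re-reading the input 25 times. A sketch in TypeScript/Node (file names and field positions follow the question's awk snippet; this illustrates the one-pass idea, not a tuned implementation):

        import { createReadStream, createWriteStream, WriteStream } from "node:fs";
        import { createInterface } from "node:readline";

        const outputs = new Map<string, WriteStream>();

        const rl = createInterface({ input: createReadStream("my_in_file_here") });

        rl.on("line", (line) => {
            const f = line.split(/\s+/);
            const key = f[3];                      // field4 (0-based index 3)
            let out = outputs.get(key);
            if (!out) {                            // open each file once, lazily
                out = createWriteStream(`my_out_file_${key}.query`, { flags: "a" });
                outputs.set(key, out);
            }
            const start = Number(f[1]);            // field2
            for (let i = start; i <= start + 52; i++) out.write(i + "\n");
        });

        rl.on("close", () => outputs.forEach((o) => o.end()));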

    Read the article

  • xslt broken: pattern does not match

    - by krisvandenbergh
    I'm trying to query an XML file using the following XSLT:

        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
            xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
            xmlns:bpmn="http://dkm.fbk.eu/index.php/BPMN_Ontology">

            <!-- Participants -->
            <xsl:template match="/">
                <html>
                <body>
                    <table>
                        <xsl:for-each select="Package/Participants/Participant">
                            <tr>
                                <td><xsl:value-of select="ParticipantType" /></td>
                                <td><xsl:value-of select="Description" /></td>
                            </tr>
                        </xsl:for-each>
                    </table>
                </body>
                </html>
            </xsl:template>
        </xsl:stylesheet>

    Here's the contents of the XML file:

        <?xml version="1.0" encoding="utf-8"?>
        <?xml-stylesheet type="text/xsl" href="xpdl2bpmn.xsl"?>
        <Package xmlns="http://www.wfmc.org/2008/XPDL2.1"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                 Id="25ffcb89-a9bf-40bc-8f50-e5afe58abda0"
                 Name="1 price setting" OnlyOneProcess="false">
            <PackageHeader>
                <XPDLVersion>2.1</XPDLVersion>
                <Vendor>BizAgi Process Modeler.</Vendor>
                <Created>2010-04-24T10:49:45.3442528+02:00</Created>
                <Description>1 price setting</Description>
                <Documentation />
            </PackageHeader>
            <RedefinableHeader>
                <Author />
                <Version />
                <Countrykey>CO</Countrykey>
            </RedefinableHeader>
            <ExternalPackages />
            <Participants>
                <Participant Id="008af9a6-fdc0-45e6-af3f-984c3e220e03" Name="customer">
                    <ParticipantType Type="RESOURCE" />
                    <Description />
                </Participant>
                <Participant Id="1d2fd8b4-eb88-479b-9c1d-7fe6c45b910e" Name="clerk">
                    <ParticipantType Type="ROLE" />
                    <Description />
                </Participant>
            </Participants>
        </Package>

    Despite the simple pattern, the for-each doesn't work. What is wrong with Package/Participants/Participant? What am I missing here? Thanks a lot!

    Read the article

  • Matching unmatched strings based on an unknown pattern

    - by Polity
    Alright guys, I really hurt my brain over this one, and I'm curious if you can give me any pointers toward the right direction I should be taking.

    The situation is this: let's say I have a collection of strings whose pattern is unknown. For a fact, I can say that the strings contain only characters from the ASCII table, so I don't have to worry about exotic characters. For this example, take the following collection (note that the strings don't have to make any human sense, so don't try figuring them out):

        "[001].[FOO].[TEST] - 'foofoo.test'",
        "[002].[FOO].[TEST] - 'foofoo.test'",
        "[003].[FOO].[TEST] - 'foofoo.test'",
        "[001].[FOO].[TEST] - 'foofoo.test.sample'",
        "[002].[FOO].[TEST] - 'foofoo.test.sample'",
        "-001- BAR.[TEST] - 'bartest.xx1",
        "-002- BAR.[TEST] - 'bartest.xx1"

    Now, what I need is a way of finding logical groups (and subgroups) in this set of strings. In the above example, just by rational thinking, you can combine the first three, the two after that, and the last two. Also, the resulting groups from the first five can be combined into one main group with two subgroups. This should give you something like:

        {
            {
                "[001].[FOO].[TEST] - 'foofoo.test'",
                "[002].[FOO].[TEST] - 'foofoo.test'",
                "[003].[FOO].[TEST] - 'foofoo.test'",
            }
            {
                "[001].[FOO].[TEST] - 'foofoo.test.sample'",
                "[002].[FOO].[TEST] - 'foofoo.test.sample'",
            }
            {
                "-001- BAR.[TEST] - 'bartest.xx1",
                "-002- BAR.[TEST] - 'bartest.xx1"
            }
        }

    Anyway, I'm not sure how to approach this problem (how to get the result indicated above). First off, I thought of creating a huge set of regexes which would parse most known patterns, but the number of different patterns is just too huge for that to be realistic. Another thing I thought of was parsing each individual word within a string (so strip all non-alphanumeric characters and split by those), and if X% of the words match, I can assume the strings belong to the same group (where X will probably be around 80/90). However, I find the area of speculation kinda big. For example, when matching strings of 20 words each, the chance of hitting above 80% is kinda big (that means 4 words can differ), whereas when matching only 8 words, at most 2 words can differ.

    My question to you is: what would be a logical approach in the above situation? Thanks in advance!
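
    The word-overlap idea from the post is only a few lines of code, which makes it cheap to experiment with thresholds. A sketch in TypeScript - the tokenization, the Jaccard-style overlap measure, and the 0.8 default are all illustrative choices, not a claim about the best approach:

        function tokens(s: string): Set<string> {
            // strip to alphanumeric words, as the post suggests
            return new Set(s.toLowerCase().split(/[^a-z0-9]+/).filter(Boolean));
        }

        function similarity(a: Set<string>, b: Set<string>): number {
            let shared = 0;
            for (const t of a) if (b.has(t)) shared++;
            return shared / Math.max(a.size, b.size);  // 1.0 = identical token sets
        }

        function groupStrings(strings: string[], threshold = 0.8): string[][] {
            const groups: { toks: Set<string>; members: string[] }[] = [];
            for (const s of strings) {
                const t = tokens(s);
                const hit = groups.find(g => similarity(g.toks, t) >= threshold);
                if (hit) hit.members.push(s);
                else groups.push({ toks: t, members: [s] });
            }
            return groups.map(g => g.members);
        }

    Note that the speculation concern in the post maps directly to the threshold: a fixed percentage behaves differently for 8-word strings than for 20-word ones, so a length-aware cutoff (e.g., allow at most k differing words) may behave more predictably.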

    Read the article

  • Large file upload into WSS v3

    - by Rubens Farias
    I built a WSSv3 application which uploads files in small chunks; as each data piece arrives, I temporarily store it in a SQL 2005 image data type field, for performance reasons.** The problem comes when the upload ends: I need to move the data from SQL Server to the SharePoint Document Library through the WSSv3 object model. Right now, I can think of two approaches:

        SPFileCollection.Add(string, (byte[])reader[0]); // OutOfMemoryException

    and:

        SPFile file = folder.Files.Add("filename", new byte[]{ });
        using (Stream stream = file.OpenBinaryStream()) {
            // ... init vars and stuff ...
            while ((bytes = reader.GetBytes(0, offset, buffer, 0, BUFFER_SIZE)) > 0) {
                stream.Write(buffer, 0, (int)bytes); // Timeout issues
            }
            file.SaveBinary(stream);
        }

    Is there any other way to complete this task successfully?

    ** Performance reasons: if you try to write every chunk directly to SharePoint, you'll notice performance degradation as the file grows (toward 100 MB).

    Read the article

  • Game Object Factory: Fixing Memory Leaks

    - by Bunkai.Satori
    Dear all, this is going to be tough: I have created a game object factory that generates objects of my wish. However, I get memory leaks which I cannot fix. The leaks are caused by the return new Object(); in the bottom part of the code sample:

        static BaseObject * CreateObjectFunc()
        {
            return new Object();
        }

    How and where should I delete the pointers? I wrote ReleaseClassTypes(), and although the factory works well, ReleaseClassTypes() does not fix the leaks:

        bool ReleaseClassTypes()
        {
            unsigned int nRecordCount = vFactories.size();
            for (unsigned int nLoop = 0; nLoop < nRecordCount; nLoop++)
            {
                if (vFactories[nLoop] != NULL)
                    delete vFactories[nLoop]();
            }
            return true;
        }

    Before looking at the code below, let me explain: CGameObjectFactory stores pointers to functions that create a particular object type. The pointers are stored in the vFactories vector. I chose this design because I parse an object map file containing object type IDs (integer values) which I need to translate into real objects. Because I have over 100 different object types, I wanted to avoid traversing a very long switch statement. So, to create an object, I call vFactories[nEnumObjectTypeID]() via CGameObjectFactory::create(), which calls the stored function that generates the desired object. The position of the appropriate function in vFactories is identical to nObjectTypeID, so I can use indexing to access it.

    The question remains: how do I proceed with garbage collection and avoid the reported memory leaks?

        #ifndef GAMEOBJECTFACTORY_H_UNIPIXELS
        #define GAMEOBJECTFACTORY_H_UNIPIXELS

        //#include "MemoryManager.h"
        #include <vector>

        template <typename BaseObject>
        class CGameObjectFactory
        {
        public:
            // cleanup and release registered object data types
            bool ReleaseClassTypes()
            {
                unsigned int nRecordCount = vFactories.size();
                for (unsigned int nLoop = 0; nLoop < nRecordCount; nLoop++)
                {
                    if (vFactories[nLoop] != NULL)
                        delete vFactories[nLoop]();
                }
                return true;
            }

            // register new object data type
            template <typename Object>
            bool RegisterClassType(unsigned int nObjectIDParam)
            {
                if (vFactories.size() < nObjectIDParam)
                    vFactories.resize(nObjectIDParam);
                vFactories[nObjectIDParam] = &CreateObjectFunc<Object>;
                return true;
            }

            // create new object by calling the pointer to the appropriate factory function
            BaseObject* create(unsigned int nObjectIDParam) const
            {
                return vFactories[nObjectIDParam]();
            }

            // resize the vector containing the pointers to the factory functions
            bool resize(unsigned int nSizeParam)
            {
                vFactories.resize(nSizeParam);
                return true;
            }

        private:
            //DECLARE_HEAP;

            template <typename Object>
            static BaseObject * CreateObjectFunc()
            {
                return new Object();
            }

            typedef BaseObject* (*factory)();
            std::vector<factory> vFactories;
        };

        //DEFINE_HEAP_T(CGameObjectFactory, "Game Object Factory");

        #endif // GAMEOBJECTFACTORY_H_UNIPIXELS

    Read the article

  • Query on object id in VQL

    - by Banang
    I'm currently working with the Versant object database (using JVI), and have a case where I need to query the database based on an object id. What I'm trying to achieve is something along the lines of:

        Employee employee = new Employee("Mr. Pickles");
        session.commit();

        FundVQLQuery q = new FundVQLQuery(session,
            "select * from Employee employee where employee = $1");
        q.bind(employee);
        q.execute();

    However, I'm finding out the hard way (I get an EVJ_NOT_A_VALID_KEY_TYPE error thrown at me) that this is in fact not the way to do it. Anyone got any experience working with VQL? Your help is much appreciated!

    A small clarification: The problem is I'm running some performance tests on the database using the PolePosition framework, and one of the tests in that framework requires me to fetch an object from the database using either an object reference or a low-level object id. Thus, I'm not allowed to reference specific fields of the employee object, but must perform the query on the object in its entirety. So I'm not allowed to go "select * from Employee e where e.id = 4"; I need it to use the entire object.

    Accepted answer: Since there is some sort of lunacy magic built into SO that prevents me from marking an accepted answer after a bounty has run out, readers should know that the answer posted by Chris Holmes solves this issue. Readers are encouraged to up-vote his post to signal the correctness of his answer to any future readers of this thread.

    Read the article

  • Passing a JavaScript object to a webservice via jQuery ajax

    - by kralco626
    I have a webservice method that returns an object:

        [WebMethod]
        public List<User> ContractorApprovals()

    I also have a webservice method that accepts an object:

        [WebMethod]
        public bool SaveContractor(Object u)

    I make my webservice calls via jQuery:

        function ServiceCall(method, parameters, onSucess, onFailure) {
            var parms = "{" + (($.isArray(parameters)) ? parameters.join(',') : parameters) + "}"; // to json
            $.ajax({
                type: "POST",
                url: "services/" + method,
                data: parms,
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                success: function(msg) {
                    if (typeof onSucess == 'function' || typeof onSucess == 'object')
                        onSucess(msg.d);
                },
                error: function(msg, err) { $("#dialog-error").dialog('open'); }
            });
        }

    I can call the first one just fine. My onSucess function gets passed a JavaScript object structured exactly like my User object on the service. However, I am now having trouble getting the object back to the server. I'm accepting Object as a parameter on the server side, so I can't imagine there is an issue there. So I'm thinking something is wrong with the parms on the client side, but I'm not sure what. I am doing something to the effect of:

        ServiceCall("AuthorizationManagerWorkManagement.asmx/ContractorApprovals", "",
            function(data, args) { $("#div").data('user', data[0]); }, null);

    then:

        ServiceCall("AuthorizationManagerWorkManagement.asmx/SaveContractor",
            $("#div").data('user'),
            // These also do not work:
            //   "{'u': ' + $("#div").data("user") + '}"
            //   JSON.stringify({u: userObject})
            function(data, args) { alert(data); }, null);

    I know the first service call works; I can get the data. The second one is causing the "onFailure" method to execute rather than "onSuccess". Any ideas?
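
    For what it's worth, a common culprit in this shape of code is the serialization step: gluing an object into a string yields "[object Object]", and the ServiceCall wrapper above adds its own outer braces, so the parameter has to arrive as the inner name/value text. A sketch of that fix, keeping the question's own wrapper (names follow the post; hypothetical, since the User shape isn't shown):

        var user = $("#div").data("user");

        // JSON.stringify produces a real JSON value for the object;
        // ServiceCall then wraps it, so the body becomes {"u": {...}}
        ServiceCall(
            "AuthorizationManagerWorkManagement.asmx/SaveContractor",
            '"u": ' + JSON.stringify(user),
            function (data) { alert(data); },
            null
        );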

    Read the article

  • Strange behavior with large Object Types

    - by Peter Lang
    I noticed that calling a method on an Oracle object type takes longer when the instance gets bigger. The code below just adds rows to a collection stored in the object type and calls the empty dummy procedure in the loop. Calls take longer when more rows are in the collection. When I just remove the call to dummy, performance is much better (the collection still contains the same number of records):

        Calling dummy    Not calling dummy
                   11                    0
                   81                    0
                  158                    0

    Code to reproduce:

        Create Type t_tab Is Table Of VARCHAR2(10000);

        Create Type test_type As Object(
            tab t_tab,
            Member Procedure dummy
        );

        Create Type Body test_type As
            Member Procedure dummy As
            Begin
                Null; --# Do nothing
            End dummy;
        End;

        Declare
            v_test_type test_type := New test_type( New t_tab() );

            Procedure run_test As
                start_time NUMBER := dbms_utility.get_time;
            Begin
                For i In 1 .. 200 Loop
                    v_test_type.tab.Extend;
                    v_test_type.tab(v_test_type.tab.Last) := Lpad(' ', 10000);
                    v_test_type.dummy(); --# Removed this line in second test
                End Loop;
                dbms_output.put_line( dbms_utility.get_time - start_time );
            End run_test;
        Begin
            run_test;
            run_test;
            run_test;
        End;

    I tried with both 10g and 11g. Can anyone explain/reproduce this behavior?

    Read the article

  • Design in "mixed" languages: object oriented design or functional programming?

    - by dema80
    In the past few years, the languages I like to use have become more and more "functional". I now use languages that are a sort of "hybrid": C#, F#, Scala. I like to design my application using classes that correspond to the domain objects, and use functional features where this makes coding easier, more concise and safer (especially when operating on collections or when passing functions). However, the two worlds "clash" when it comes to design patterns. The specific example I faced recently is the Observer pattern. I want a producer to notify some other code (the "consumers/observers" - say, a DB storage, a logger, and so on) when an item is created or changed. I initially did it "functionally" like this:

        producer.foo(item => { updateItemInDb(item); insertLog(item) })
        // calls the function passed as argument as an item is processed

    But I'm now wondering if I should use a more "OO" approach:

        interface IItemObserver { onNotify(Item) }

        class DBObserver : IItemObserver ...
        class LogObserver : IItemObserver ...

        producer.addObserver(new DBObserver)
        producer.addObserver(new LogObserver)
        producer.foo() // calls observers in a loop

    What are the pros and cons of the two approaches? I once heard an FP guru say that design patterns exist only because of the limitations of the language, and that's why there are so few in functional languages. Maybe this could be an example of it?

    EDIT: In my particular scenario I don't need it, but... how would you implement removal and addition of "observers" in the functional way? (I.e., how would you implement all the functionality of the pattern?) Just passing a new function, for example?
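
    One common functional answer to the EDIT: observers are plain functions, and subscribing returns an unsubscribe closure, which gives you both addition and removal without the interface machinery. A sketch in TypeScript (the Producer name and string payload are illustrative only):

        type Observer<T> = (item: T) => void;

        class Producer<T> {
            private observers: Observer<T>[] = [];

            // addition: push the function; removal: call the returned closure
            subscribe(obs: Observer<T>): () => void {
                this.observers.push(obs);
                return () => {
                    this.observers = this.observers.filter(o => o !== obs);
                };
            }

            notify(item: T): void {
                for (const obs of this.observers) obs(item);
            }
        }

        const producer = new Producer<string>();
        const stopLogging = producer.subscribe(item => console.log("log:", item));
        producer.subscribe(item => console.log("db:", item));

        producer.notify("item-1"); // both observers fire
        stopLogging();             // removal, functional style
        producer.notify("item-2"); // only the db observer fires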

    Read the article

  • What OO design to use (is there a design pattern)?

    - by Blundell
    I have two objects that represent a 'Bar/Club' (a place where you drink/socialise).

    In one scenario I need the bar name, address, distance, slogan. In another scenario I need the bar name, address, website URL, logo.

    So I've got two objects representing the same thing, but with different fields. I like to use immutable objects, so all the fields are set from the constructor. One option is to have two constructors and null the other fields, i.e.:

        class Bar {
            private final String name;
            private final Distance distance;
            private final Url url;

            public Bar(String name, Distance distance) {
                this.name = name;
                this.distance = distance;
                this.url = null;
            }

            public Bar(String name, Url url) {
                this.name = name;
                this.distance = null;
                this.url = url;
            }

            // getters
        }

    I don't like this, as you would have to null-check when you use the getters. In my real example the first scenario has 3 fields and the second scenario has about 10, so it would be a real pain having two constructors with that many fields to declare null. And then, when the objects are in use, you wouldn't know which Bar you were using, and so which fields would be null and which wouldn't.

    What other options do I have? Two classes called BarPreview and Bar? Some type of inheritance/interface? Something else that is awesome?
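
    For contrast, here is the two-type option from the post's own suggestion, sketched in TypeScript (field names follow the post; BarCore and the exact split are illustrative): each scenario gets its own small immutable class, shared fields live in a base, and no getter can ever return null.

        class BarCore {
            constructor(readonly name: string, readonly address: string) {}
        }

        // scenario 1: list/preview data
        class BarPreview extends BarCore {
            constructor(name: string, address: string,
                        readonly distance: number, readonly slogan: string) {
                super(name, address);
            }
        }

        // scenario 2: detail data
        class BarDetail extends BarCore {
            constructor(name: string, address: string,
                        readonly websiteUrl: string, readonly logoUrl: string) {
                super(name, address);
            }
        }

        const preview = new BarPreview("The Anchor", "1 High St", 0.4, "Cold beer");
        console.log(preview.name, preview.distance); // every field is guaranteed set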

    Read the article

  • "Protected object" error: this object on the RomPager server is protected

    - by Sami-L
    I have a Windows Home Server; when I connect to any web site on it, I get an authentication window with the message: "http://mydomain.com site requires user name and password, the site says: "SmartAX"". Then, when I close the window, I get an error page saying: "Protected object - this object on the RomPager server is protected". Do you have any idea about this? Could it be related to the ADSL router?

    Read the article

  • C# Reflection - Casting private Object field

    - by alhazen
    I have the following classes:

        public class MyEventArgs : EventArgs
        {
            public object State;

            public MyEventArgs(object state)
            {
                this.State = state;
            }
        }

        public class MyClass
        {
            // ...
            public List<string> ErrorMessages
            {
                get { return errorMessages; }
            }
        }

    When I raise my event, I set the State of the MyEventArgs object to an object of type MyClass. I'm trying to retrieve ErrorMessages by reflection in my event handler:

        public static void OnEventEnded(object sender, EventArgs args)
        {
            Type type = args.GetType();
            FieldInfo stateInfo = type.GetField("State");
            PropertyInfo errorMessagesInfo = stateInfo.FieldType.GetProperty("ErrorMessages");
            object errorMessages = errorMessagesInfo.GetValue(null, null);
        }

    But this returns errorMessagesInfo as null (even though stateInfo is not null). Is it possible to retrieve ErrorMessages? Thank you

    Read the article

  • object representation and value representation

    - by FredOverflow
    The C++ standard, 3.9 §4, says:

        The object representation of an object of type T is the sequence of N unsigned char objects taken up by the object of type T, where N equals sizeof(T). The value representation of an object is the set of bits that hold the value of type T. For trivially copyable types, the value representation is a set of bits in the object representation that determines a value, which is one discrete element of an implementation-defined set of values.

    Does "the value representation of an object" imply that values are always stored in objects? What is the value representation of non-trivially copyable types?

    Read the article
