Search Results

Search found 18092 results on 724 pages for 'matt long'.


  • Are document-oriented databases any more suitable than relational ones for persisting objects?

    - by Owen Fraser-Green
    In terms of database usage, the last decade was the age of the ORM, with hundreds competing to persist our object graphs in plain old-fashioned RDBMSs. Now we seem to be witnessing the coming of age of document-oriented databases. These databases are highly optimized for schema-free documents but are also very attractive for their ability to scale out and query a cluster in parallel.

    Document-oriented databases also hold a couple of advantages over RDBMSs for persisting data models in object-oriented designs. As the tables are schema-free, one can store objects belonging to different classes in an inheritance hierarchy side by side. Also, as the domain model changes, so long as the code can cope with getting back objects from an old version of the domain classes, one can avoid having to migrate the whole database at every change.

    On the other hand, the performance benefits of document-oriented databases mainly appear when storing deeper documents: in object-oriented terms, classes which are composed of other classes, for example a blog post and its comments. In most of the examples of this I can come up with, though, such as the blog one, the gain in read access would appear to be offset by the penalty of having to write the whole blog post "document" every time a new comment is added.

    It looks to me as though document-oriented databases can bring significant benefits to object-oriented systems if one takes extreme care to organize the objects in deep graphs optimized for the way the data will be read and written, but this means knowing the use cases up front. In the real world, we often don't know until we actually have a live implementation we can profile. So is the case of relational vs. document-oriented databases one of swings and roundabouts? I'm interested in people's opinions and advice, in particular if anyone has built any significant applications on a document-oriented database.
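
    To make the read/write trade-off concrete, here is a minimal sketch (in Python, with a toy in-memory store and made-up field names) of the blog example as a single deep document: reading the page is one fetch, but appending a comment rewrites the whole document in any store that updates at document granularity.

        class DocStore:
            """Toy in-memory document store that reads and writes whole documents."""
            def __init__(self):
                self.docs = {}

            def get(self, doc_id):
                return self.docs[doc_id]

            def put(self, doc_id, doc):
                self.docs[doc_id] = doc   # the entire document is rewritten

        store = DocStore()
        store.put("post-42", {
            "title": "Schema-free persistence",
            "body": "...",
            "comments": [{"author": "alice", "text": "Nice post"}],
        })

        def add_comment(store, post_id, comment):
            doc = store.get(post_id)           # read the whole post
            doc["comments"].append(comment)
            store.put(post_id, doc)            # write the whole post back

        add_comment(store, "post-42", {"author": "bob", "text": "But the writes..."})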

    Read the article

  • Transitioning from desktop app written in C++ to a web-based app

    - by Karim
    We have a mature Windows desktop application written in C++. The application's GUI sits on top of a Windows DLL that does most of the work for the GUI (it's kind of the engine); it, too, is written in C++. We are considering transitioning the Windows app to a web-based app for various reasons. What I would like to avoid is having to write the CGI for this web-based app in C++. That is, I would rather have the power of a higher-level language like Python or a .NET language for creating the web-based version of this app.

    So, the question is: given that I need to use a C++ DLL on the backend to do the work of the app, what technology stack would you recommend for sitting between the user's browser and our C++ DLL? We can assume that the web server will be Windows. Some options:

    - Write a COM layer on top of the Windows DLL, which can then be accessed via .NET, and use ASP.NET for the UI
    - Access the exported DLL interface directly from .NET and use ASP.NET for the UI
    - Write a custom Python library that wraps the Windows DLL, so that the rest of the code can be written in Python (see the sketch after this list)
    - Write the CGI using C++ and a C++-based MVC framework like Wt

    Concerns:

    - I would rather not use C++ for the web framework if it can be avoided; I think languages like Python and C# are simply more powerful and efficient in terms of development time.
    - I'm concerned that by mixing managed and unmanaged code with one of the .NET solutions I'm asking for lots of little problems that are hard to debug (purely anecdotal evidence for that). The same is true for using a Python layer.
    - Anything that's slightly off the beaten path like that worries me, in that I don't have much evidence one way or the other on whether it is a viable long-term solution.
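
    For the Python option, a minimal sketch of wrapping the engine DLL with ctypes; the DLL name and the exported function are hypothetical, and the real argument types would come from the C++ header.

        import ctypes

        # Load the engine DLL (hypothetical name) using the stdcall convention.
        engine = ctypes.WinDLL("engine.dll")

        # Declare a hypothetical export: int ProcessOrder(const char *orderId, int flags)
        engine.ProcessOrder.argtypes = [ctypes.c_char_p, ctypes.c_int]
        engine.ProcessOrder.restype = ctypes.c_int

        def process_order(order_id, flags=0):
            """Thin wrapper that the web layer (whatever framework it is) can call."""
            return engine.ProcessOrder(order_id.encode("utf-8"), flags)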

    Read the article

  • PHP: MVC and Model

    - by Pirkka
    Hello, I've been wondering about one thing when creating models. Say I make a Page model: is it both of these at once? It can retrieve one row from the table, or all the rows. Somehow I'm mixing up the objects and the database. I have thought of it like this: I would make a Page class that represents one row in the table, and it would have all the basic CRUD methods. Then I would make a Pages class (some kind of collection) that retrieves rows from the table and instantiates a Page object from each row. Is this kind of weird? If someone could explain the idea of the model layer thoroughly... I'm confused again. Maybe I'm making the whole OOP thing too difficult. And by the way, this forum is great; hopefully people will understand my problems. Heh. I was a procedural-style programmer for a long time, and now in three months I have dived into OOP, MVC and PHP frameworks, and I just get more excited day by day as I explore this stuff!
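
    For what it's worth, the split described above is a common pattern: a row object (Page) plus a finder/collection (Pages) that builds row objects from the table. A minimal sketch, in Python with sqlite3 for brevity (table and column names are made up):

        import sqlite3

        class Page:
            """Represents one row of the pages table, with basic CRUD."""
            def __init__(self, conn, id=None, title=""):
                self.conn, self.id, self.title = conn, id, title

            def save(self):
                if self.id is None:
                    cur = self.conn.execute("INSERT INTO pages (title) VALUES (?)",
                                            (self.title,))
                    self.id = cur.lastrowid
                else:
                    self.conn.execute("UPDATE pages SET title = ? WHERE id = ?",
                                      (self.title, self.id))

        class Pages:
            """Collection/finder: fetches rows and hands back Page objects."""
            def __init__(self, conn):
                self.conn = conn

            def all(self):
                rows = self.conn.execute("SELECT id, title FROM pages")
                return [Page(self.conn, *row) for row in rows]

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE pages (id INTEGER PRIMARY KEY, title TEXT)")
        Page(conn, title="Home").save()
        print([p.title for p in Pages(conn).all()])   # ['Home']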

    Read the article

  • returning an autoreleased NSString still causes memory leaks

    - by hookjd
    I have a simple function that returns an NSString after decoding it. I use it a lot throughout my application, and it appears to create a memory leak (according to the "leaks" tool) every time I use it. Leaks tells me the problem is on the line where I alloc the NSString that I am going to return, even though I autorelease it. Here is the function:

        -(NSString *) decodeValue {
            NSString *newString;
            newString = [self stringByReplacingOccurrencesOfString:@"#" withString:@"$"];
            NSData *stateData = [NSData dataWithBase64EncodedString:newString];
            NSString *convertState = [[[NSString alloc] initWithData:stateData
                                                            encoding:NSUTF8StringEncoding] autorelease];
            return convertState;
        }

    My understanding of autorelease is that it should be used in exactly this way: I want to hold onto the object just long enough to return it from my function and then let the object be autoreleased later. So I believe I can use this function through code like this without manually releasing anything:

        NSString *myDecodedString = [myString decodeValue];

    But this process is reporting leaks and I don't understand how to change it to avoid the leaks. What am I doing wrong?

    Read the article

  • What information should an SVN/versioned file commit comment contain?

    - by RenderIn
    I'm curious what kind of content should be in a versioned-file commit comment. Should it describe generally what changed (e.g. "The widget screen was changed to display only active widgets"), or should it be more specific (e.g. "A new condition was added to the where clause of the fetchWidget query to retrieve only active widgets by default")?

    How atomic should a single commit be? Just the file containing the updated query in a single commit (e.g. "Updated the widget screen to display only active widgets by default"), or should that and several other changes plus interface changes to a screen share the same commit, with a more general description like "Updated the widget screen: A) display only active widgets by default B) added button to toggle showing inactive widgets"?

    I see subversion commit comments being used very differently and was wondering what others have had success with. Some comments are as brief as "updated files", while others are many paragraphs long, and others are formatted in a way that they can be queried and associated with some external system such as JIRA. I used to be extremely descriptive about the reason for the change as well as the specific technical changes. Lately I've been scaling back and just giving a general "this is what I changed on this page" kind of comment.

    Read the article

  • tipfy for Google App Engine: Is it stable? Can auth/session components of tipfy be used with webapp?

    - by cv12
    I am building a web application on Google App Engine that requires users to register with the application and subsequently authenticate with it and maintain sessions. I don't want to force users to have Google accounts. Also, the target audience for the application is the average non-geek, so I'm not very keen on using OpenID or OAuth. I need something simple: a user registers with an e-mail and password, and then can log back in with those credentials. I understand that this approach does not provide the security benefits of Google or OpenID authentication, but I am prepared to trade foolproof security for end-user convenience and a hassle-free experience.

    I explored Django, but decided that consecutive deprecations from appengine-helper to app-engine-patch to django-nonrel may signal that that path is a bit risky in the long term. I'd like to use a code base that is likely to be maintained consistently. I also explored standalone session/auth packages like gaeutilities and suas. GAEUtilities looked a bit immature (e.g., the code wasn't Pythonic in places, in my opinion) and SUAS did not give me a lot of comfort with its cookie-only sessions. I could be wrong in my assessment of these two, so I would appreciate input on those (or others that may serve my objective).

    Finally, I recently came across tipfy. It appears to be based on Werkzeug, and Alex Martelli spoke highly of it here on Stack Overflow. I have two primary questions related to tipfy:

    1. As a framework, is it as mature as webapp? Is it stable and likely to be maintained for some time?
    2. Since my primary interest is the auth/session components, can those components of the tipfy framework be used with webapp, independent of the broader tipfy framework? If yes, I would appreciate a few pointers on how I could go about doing that.

    Read the article

  • Combinatorial optimisation of a distance metric

    - by Jose
    I have a set of trajectories, made up of points along the trajectory, and with the coordinates associated with each point. I store these in a 3d array (trajectory, point, param). I want to find the set of r trajectories that have the maximum accumulated distance between the possible pairwise combinations of these trajectories. My first attempt, which I think is working, looks like this:

        max_dist = 0
        for h in itertools.combinations(xrange(num_traj), r):
            for (m, l) in itertools.combinations(h, 2):
                accum = 0.
                for (i, j) in itertools.izip(range(k), range(k)):
                    A = [(my_mat[m, i, z] - my_mat[l, j, z])**2 \
                         for z in xrange(k)]
                    A = numpy.array(numpy.sqrt(A)).sum()
                    accum += A
                if max_dist < accum:
                    selected_trajectories = h

    This takes forever, as num_traj can be around 500-1000, and r can be around 5-20. k is arbitrary, but can typically be up to 50. Trying to be super-clever, I have put everything into two nested list comprehensions, making heavy use of itertools:

        chunk = [[numpy.sqrt((my_mat[m, i, :] - my_mat[l, j, :])**2).sum() \
                  for ((m, l), i, j) in \
                  itertools.product(itertools.combinations(h, 2), range(k), range(k))] \
                 for h in itertools.combinations(range(num_traj), r)]

    Apart from being quite unreadable (!!!), it is also taking a long time. Can anyone suggest any ways to improve on this?
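
    One structural improvement is to precompute the distance between every pair of trajectories once, then score each r-combination by table lookup. A sketch of that idea, assuming the intended per-pair metric is the Euclidean distance between corresponding points summed along the trajectory (the exhaustive search over combinations is still huge at the quoted sizes, and the broadcast below would need chunking for very large inputs):

        import itertools
        import numpy as np

        def best_combination(traj, r):
            # traj: array of shape (num_traj, k, params)
            n = traj.shape[0]
            # Pairwise table, computed once: D[m, l] is the accumulated
            # point-by-point Euclidean distance between trajectories m and l.
            diff = traj[:, None, :, :] - traj[None, :, :, :]    # (n, n, k, params)
            D = np.sqrt((diff ** 2).sum(axis=-1)).sum(axis=-1)  # (n, n)

            best_score, best_h = -1.0, None
            for h in itertools.combinations(range(n), r):
                score = sum(D[m, l] for m, l in itertools.combinations(h, 2))
                if score > best_score:
                    best_score, best_h = score, h
            return best_h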

    Read the article

  • Using an SVN post-commit hook to update only files that have been committed

    - by fondie
    I am using an SVN repository for my web development work. I have a development site set up which holds a checkout of the repository. I have set up an SVN post-commit hook so that whenever a commit is made to the repository the development site is updated:

        cd /home/www/dev_ssl
        /usr/bin/svn up

    This works fine, but due to the size of the repository the updates take a long time (approx. 3 minutes), which is rather frustrating when making regular commits. What I'd like is to change the post-commit hook to only update those files/directories that have been committed, but I don't know how to go about doing this. Updating the "lowest common directory" would probably be the best solution. E.g., if committing the following files:

        /branches/feature_x/images/logo.jpg
        /branches/feature_x/css/screen.css

    it would update the directory:

        /branches/feature_x/

    Can anyone help me create a solution that achieves this please? Thanks!

    Update: The repository and development site are located on the same server, so network issues shouldn't be involved. CPU usage is very low, and I/O should be OK (it's running on a high-spec dedicated server). The development site is approx. 7.5GB in size and contains approx. 600,000 items; this is mainly due to having multiple branches/tags.
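
    Coming back to the lowest-common-directory idea: a sketch of it in Python (working-copy path is the one above, error handling omitted). It asks svnlook for the paths changed in the committed revision, reduces them to their common directory, and updates only that part of the checkout.

        import os
        import subprocess
        import sys

        REPO, REV = sys.argv[1], sys.argv[2]   # post-commit hooks receive these
        WC_ROOT = "/home/www/dev_ssl"          # the development checkout

        # Lines look like "U   branches/feature_x/css/screen.css"; the status
        # columns occupy the first four characters.
        out = subprocess.check_output(["svnlook", "changed", "-r", REV, REPO])
        paths = [line[4:].strip() for line in out.decode().splitlines() if line.strip()]

        # Lowest common directory of everything that changed.
        target = (os.path.commonpath(paths) if len(paths) > 1
                  else os.path.dirname(paths[0]))

        subprocess.check_call(["/usr/bin/svn", "up", os.path.join(WC_ROOT, target)])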

    Read the article

  • Display image background fully for last repeat in a div

    - by Stiggler
    I have a 700x300 background repeating seamlessly under the main content div. Now I'd like to attach a div at the bottom of the content div, containing a continuation-to-end of the background image, connecting seamlessly with the background above it. Due to the nature of the pattern, unless the full 300px height of the background image is visible in the last repeat of the content div's background, the background in the div below won't connect seamlessly. Basically, I need the content div's height to be a multiple of 300px under all circumstances. What's a good approach to this sort of problem? I've tried resizing the content div on loading the page, but this only works as long as the content div doesn't contain any resizing, dynamic content, which is not my case:

        function adjustContentHeight() {
            // Setting content div's height to nearest upper multiple of column
            // background's height, forcing it not to be cut off when repeated.
            var contentBgHeight = 300;
            var contentHeight = $("#content").height();
            var adjustedHeight = Math.ceil(contentHeight / contentBgHeight);
            $("#content").height(adjustedHeight * contentBgHeight);
        }

        $(document).ready(adjustContentHeight);

    What I'm looking for is a way to respond to a div resize event, but there doesn't seem to be such a thing. Also, please assume I have no access to the JS controlling the resizing of content in the content div, though this is potentially a way of solving the problem. Another potential solution I was thinking of was to offset the background image in the bottom div by a certain amount depending on the height of the content div. Again, the missing piece seems to be the ability to respond to a resize event.

    Read the article

  • How to increase the performance of a loop which runs every 'n' minutes

    - by GustlyWind
    Hi. A little background on my requirement and what I have accomplished so far: there are 18 scheduler tasks run at regular intervals (the shortest being every 30 mins). Each takes an input of nearly 5,000 eligible employees, iterates over them in a static method, generates mail content for each employee, and mails it. An average task takes about 9 minutes; multiplied by 18, that is roughly 162 minutes, during which the next tasks will be waiting in a queue (I assume). So my plan is something like the loop below:

        try {
            // Handle ArrayList of alert-eligible employees
            Iterator employee = empIDs.iterator();
            while (employee.hasNext()) {
                ScheduledImplementation.getInstance().sendAlertInfoToEmpForGivenAlertType(
                        (Long) employee.next(), configType, schedType);
            }
        } catch (Exception vEx) {
            _log.error("Exception Caught During sending " + configType + " messages:" + configType, vEx);
        }

    Since I know how many employees will come into my method, I could divide the while loop into two and perform simultaneous operations on two or three employees at a time. Is this possible (a sketch of the idea follows below)? Or is there any other way I can improve the performance? Some of the things I have implemented so far:

    1. Wherever possible, made methods and variables static.
    2. Didn't bother to catch exceptions and send them back, because these are background tasks. (And I assume this improves performance.)
    3. Got the DB values in one query instead of multiple hits.

    If I am successful in optimizing the while loop I think I can save a couple of minutes. Thanks
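
    The fan-out itself can be as simple as a fixed-size worker pool over the employee list. Sketched in Python for brevity rather than the Java of the original (send_alert stands in for sendAlertInfoToEmpForGivenAlertType; in Java the analogue would be an ExecutorService):

        from concurrent.futures import ThreadPoolExecutor

        emp_ids = range(5000)   # stand-in for the eligible-employee list

        def send_alert(emp_id):
            # Stand-in: build the mail content for one employee and send it.
            pass

        # Work on several employees at a time instead of strictly one by one.
        with ThreadPoolExecutor(max_workers=4) as pool:
            for _ in pool.map(send_alert, emp_ids):
                pass   # draining the iterator re-raises any worker exception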

    Read the article

  • SQL Server search filter and order by performance issues

    - by John Leidegren
    We have a table-valued function that returns a list of people you may access, and we have a relation between a search and a person, called search result. What we want to do is select all the people from the search and present them. The query looks like this:

        SELECT qm.PersonID, p.FullName
        FROM QueryMembership qm
        INNER JOIN dbo.GetPersonAccess(1) ON GetPersonAccess.PersonID = qm.PersonID
        INNER JOIN Person p ON p.PersonID = qm.PersonID
        WHERE qm.QueryID = 1234

    There are only 25 rows with QueryID = 1234, but there are almost 5 million rows total in the QueryMembership table. The Person table has about 40K people in it. QueryID is not a PK, but it is an index. The query plan tells me 97% of the total cost is spent doing "Key Lookup" with the seek predicate:

        QueryMembershipID = Scalar Operator (QueryMembership.QueryMembershipID as QM.QueryMembershipID)

    Why is the PK in there when it's not used in the query at all? And why is it taking so long? The people number only 25 in total; with the index, this should just be a scan over the QueryMembership rows that have QueryID = 1234 and then a JOIN on the 25 people that exist in the table-valued function, which by the way only has to be evaluated once and completes in less than 1 second.

    Read the article

  • JSON Twitter List in C#.net

    - by James
    Hi, my code is below. I am not able to extract the 'name' and 'query' lists from the JSON via a DataContracted class (below). I have spent a long time trying to work this one out, and could really do with some help...

    My JSON string:

        {"as_of":1266853488,"trends":{"2010-02-22 15:44:48":[{"name":"#nowplaying","query":"#nowplaying"},{"name":"#musicmonday","query":"#musicmonday"},{"name":"#WeGoTogetherLike","query":"#WeGoTogetherLike"},{"name":"#imcurious","query":"#imcurious"},{"name":"#mm","query":"#mm"},{"name":"#HumanoidCityTour","query":"#HumanoidCityTour"},{"name":"#awesomeindianthings","query":"#awesomeindianthings"},{"name":"#officeformac","query":"#officeformac"},{"name":"Justin Bieber","query":"\"Justin Bieber\""},{"name":"National Margarita","query":"\"National Margarita\""}]}}

    My code:

        WebClient wc = new WebClient();
        wc.Credentials = new NetworkCredential(this.Auth.UserName, this.Auth.Password);
        string res = wc.DownloadString(new Uri(link));
        // the download string gives me the above JSON string - no problems

        Trends trends = new Trends();
        Trends obj = Deserialise<Trends>(res);

        private T Deserialise<T>(string json)
        {
            T obj = Activator.CreateInstance<T>();
            using (MemoryStream ms = new MemoryStream(Encoding.Unicode.GetBytes(json)))
            {
                DataContractJsonSerializer serialiser = new DataContractJsonSerializer(obj.GetType());
                obj = (T)serialiser.ReadObject(ms);
                ms.Close();
                return obj;
            }
        }

        [DataContract]
        public class Trends
        {
            [DataMember(Name = "as_of")]
            public string AsOf { get; set; }

            // The as_of value is returned - but how do I get the
            // multidimensional array of names and queries from the JSON here?
        }
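
    For reference, the awkward part is the shape of the JSON: "trends" is an object keyed by a timestamp string, whose value is a list of name/query pairs, i.e. a dictionary of lists rather than a fixed-name property. That is what the contract has to model; in Python (shown only to make the shape obvious, with the sample trimmed to one entry) the traversal is:

        import json

        raw = ('{"as_of":1266853488,"trends":{"2010-02-22 15:44:48":'
               '[{"name":"#nowplaying","query":"#nowplaying"}]}}')

        doc = json.loads(raw)
        print(doc["as_of"])                          # 1266853488
        for stamp, trends in doc["trends"].items():  # key is a timestamp string
            for t in trends:
                print(t["name"], t["query"])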

    Read the article

  • Hadoop/MapReduce: Reading and writing classes generated from DDL

    - by Dave
    Hi, can someone walk me through the basic workflow of reading and writing data with classes generated from DDL?

    I have defined some struct-like records using DDL. For example:

        class Customer {
            ustring FirstName;
            ustring LastName;
            ustring CardNo;
            long LastPurchase;
        }

    I've compiled this to get a Customer class and included it in my project. I can easily see how to use this as input and output for mappers and reducers (the generated class implements Writable), but not how to read and write it to file. The JavaDoc for the org.apache.hadoop.record package talks about serializing these records in binary, CSV or XML format. How do I actually do that? Say my reducer produces IntWritable keys and Customer values. What OutputFormat do I use to write the result in CSV format? What InputFormat would I use to read the resulting files in later, if I wanted to perform analysis over them?

    Read the article

  • Volatile fields in C#

    - by Danny Chen
    From the specification, 10.5.3 "Volatile fields":

        The type of a volatile field must be one of the following:
        - A reference-type.
        - The type byte, sbyte, short, ushort, int, uint, char, float, bool, System.IntPtr, or System.UIntPtr.
        - An enum-type having an enum base type of byte, sbyte, short, ushort, int, or uint.

    First I want to confirm my understanding is correct: I guess the above types can be volatile because they are stored as a 4-byte unit in memory (for reference types, because of the address), which guarantees that read/write operations are atomic. A type like double or long can't be volatile because reads and writes of it are not atomic, since it occupies more than 4 bytes in memory. Is my understanding correct?

    And second, if the first guess is correct, why can't a user-defined struct with only one int field in it (or something similar, 4 bytes is OK) be volatile? Theoretically it's atomic, right? Or is it not allowed simply because all user-defined structs (which could well be more than 4 bytes) are excluded from volatile by design?

    Read the article

  • Downloading attachments to a directory with IMAP in PHP randomly works

    - by Nick
    I found PHP code online to download attachments to a directory using IMAP, from here: http://www.nerdydork.com/download-pop3imap-email-attachments-with-php.html

    I modified it slightly, changing

        $structure = imap_fetchstructure($mbox, $jk);
        $parts = ($structure->parts);

    to

        $structure = imap_fetchstructure($mbox, $jk);
        $parts = ($structure);

    to get it to run properly, as otherwise I got an error about how stdClass doesn't define a property called $parts. Doing that, I was able to download all the attachments. I tested it again recently, though, and it didn't work. Well, it didn't work six times, worked the seventh, and then hasn't worked since. I'm thinking it has something to do with me screwing up the parts handling, since count($parts) keeps returning 1 for each message, so I think it's not finding any attachments. Since it downloaded the attachments at one point with no issues, I feel confident that this is the area where things are getting screwed up. Before this block of code is a for loop that goes through each message in the box, and after it is a loop that just goes through $parts for each IMAP structure. Thanks for any help you can provide. I looked at the imap_fetchstructure page on php.net and can't figure out what I'm doing wrong.

    Edit: I just double-checked the folder after typing up my question, and it all popped up. I feel like I'm going nuts. I hadn't run the code since a few minutes before I started typing this, and it doesn't make sense to me that it would take this long to trigger. I have some 800 messages in the mailbox, but I figured that since it printed my statement at the very end of the PHP, all of the file-creation work was done.

    Read the article

  • Qt and variadic functions

    - by Noah Roberts
    OK, before lecturing me on the use of C-style variadic functions in C++... everything else has turned out to require nothing short of rewriting the Qt MOC. What I'd like to know is whether or not you can have a "slot" in a Qt object that takes an arbitrary number and type of arguments. The thing is that I really want to be able to generate Qt objects that have slots of an arbitrary signature. Since the MOC is incompatible with standard preprocessing and with templates, it's not possible to do so with either direct approach. I just came up with another idea:

        struct funky_base : QObject {
            Q_OBJECT
            funky_base(QObject * o = 0);
        public slots:
            virtual void the_slot(...) = 0;
        };

    If this is possible then, because you can make a template that is a subclass of a QObject-derived object so long as you don't declare new Qt stuff in it, I should be able to implement a derived templated type that takes the ... stuff and turns it into the appropriate, expected types. If it is, how would I connect to it? Would this work?

        connect(x, SIGNAL(someSignal(int)), y, SLOT(the_slot(...)));

    If nobody's tried anything this insane and doesn't know offhand, yes, I'll eventually try it myself... but I'm hoping someone already has existing knowledge I can tap before possibly wasting my time on it.

    Read the article

  • Automatic music rating based on listening habits

    - by marco92w
    I've created a Winamp-like music player in Delphi. Not so complex, of course; just a simple one. But now I would like to add a more complex feature: songs in the library should be automatically rated based on the user's listening habits. This means the application should "understand" whether the user likes a song or not, and not only whether he/she likes it but also how much. My approach so far (data which could be used; a sketch of the idea follows the list):

    - Simply measure how often a song was played per unit of time. Start counting time when the song was added to the library, so that recent songs don't have any disadvantage.
    - Measure how long a song was played on average (minutes). Starting a song but directly changing to another one should have a bad influence on the ranking, since the user didn't seem to like the song.
    - ...

    Could you please help me with this problem? I would just like to have some ideas; I don't need the implementation in Delphi.
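
    To make the idea concrete, a minimal sketch of such a score in Python (the field names, weights and the skip-penalty threshold are all made-up tuning knobs, not a recommendation):

        import time

        def rating(song, now=None):
            """Heuristic 0-5 score from listening habits; all weights are arbitrary."""
            now = now or time.time()
            if song["play_count"] == 0:
                return 0.0
            days_in_library = max((now - song["added_at"]) / 86400.0, 1.0)
            plays_per_day = song["play_count"] / days_in_library
            # Average fraction of the song actually listened to; early skips hurt.
            completion = song["seconds_played"] / (song["play_count"] * song["length"])
            skip_penalty = 0.5 if completion < 0.25 else 1.0
            return min(5.0, 10.0 * plays_per_day * completion * skip_penalty)

        song = {"added_at": time.time() - 30 * 86400,   # added 30 days ago
                "play_count": 45, "seconds_played": 9000, "length": 240}
        print(rating(song))   # 5.0 for this heavily played song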

    Read the article

  • Using nohup mysqldump from a PHP script inserts a '!' and breaks to a new line

    - by Aglystas
    I'm trying to run a mysqldump from PHP using the nohup command to prevent the script from hanging. Here's the command (the database is mc6_erik_test; everything else is just a table list until you get to the end):

        exec("mysqldump -u root -pPassword -h vfmy1-dev.mountainmedia.com mc6_erik_test access_log admin affiliate affiliate_2_product authorized_ip category category_2_product claim_code claim_code_log country_exclude customer customer_2_subscription customer_account_log customer_address customer_bill customer_discount customer_ip customer_key email_bulk_log email_draft email_queue email_queue_log email_template endicia_log gift_wrap image_bulk_upload log mailing_list manufacturer merchant merchant_checkout merchant_ip merchant_ship merchant_ship_conf new_account_temp order_dest order_item order_item_2_dest order_item_2_package order_item_log order_item_registrant order_note order_package order_package_label orders package package_2_product pref product product_2_supplier product_also product_event_date product_image product_option product_related product_review product_review_helpful product_ship_disable report search_log subscription supplier temp_product transaction_account transactions wish_list wish_list_fill wish_list_item --opt --where='merchant_id=\'6\'' > /tmp/sync_db_card_20100519105358.sql");

    As you can see, it's really long, because I have to specifically include only the tables I want to dump. The command works great from the command line. However, when I run it through a web script, towards the end the following is used as the command:

        supplier temp_product transaction_account transactio!
        ns wish_list wish_list_fill wish_list_item --opt --where='merchant_id="6"' > /tmp/sync_db_card_20100519105358.sql

    So the table 'transactions' is being split by an exclamation point and a newline. The rest of the command is exactly the same. This doesn't happen if I run it through the php-cli interface, only when I try running it via the web server using nohup. I'm wondering if there is some inherent string-length limit to using the exec command within a PHP script, or really if anyone has any general idea of what is going on here.

    Read the article

  • Why use Entity Framework over Linq2SQL ...

    - by Refracted Paladin
    To be clear, I am not asking for a side-by-side comparison, which has already been asked ad nauseam here on SO. I am also not asking if Linq2Sql is dead, as I don't care. What I am asking is this: I am building internal apps only, for a non-profit organization. I am the only developer on staff. We ALWAYS use SQL Server as our database backend. I design and build the databases as well. I have used L2S successfully a couple of times already.

    Taking all this into consideration, can someone offer me a compelling reason to use EF instead of L2S? I was at Code Camp this weekend, and after an hour-long demonstration on EF, all of which I could have done in L2S, I asked this same question. The speaker's answer was, "L2S is dead..." Very well then! NOT! (see here)

    I understand EF is what MS WANTS us to use in the future (see here) and that it offers many more customization options. What I can't figure out is whether any of that should, or does, matter for me in this environment. One particular issue we have here is that I inherited the core app, which was built on four different SQL databases. L2S has great difficulty with this, but when I asked the aforementioned speaker if EF would help me in this regard he said "No!"

    Read the article

  • NHibernate slow mapping

    - by Rob A
    My question is: what can I do to determine the cause of the slowness, or what can I do to speed it up without knowing the exact cause? I am running a simple query, and it appears that the mapping back to the entities is taking forever. The result set is 350 rows, which is not much data in my opinion.

        IRepository repo = ObjectFactory.GetInstance<IRepository>();
        var q = repo.Query<Order>(item => item.Ordereddate > DateTime.Now.AddDays(-40));
        foreach (var order in q)
        {
            Console.WriteLine(order.TransactionNumber);
        }

    The profiler is telling me it is executing the query in 7ms / 35257ms; I am assuming that the former is the actual response from the db and the latter is the time it takes NH to do its magic. 35 seconds is too long. This is a simple mapping: one table, nested components, using the fluent interface to do the mappings. I just start up a simple console app and run the one query. The slowness is measured after the SessionFactory is initialized, there should only be one session, and I am not using a transaction. Thanks

    Read the article

  • Improving the performance of XSL

    - by Rachel
    I am using the below XSL 2.0 code to find the ids of the text nodes that contain the list of indices I give as input. The code works perfectly, but in terms of performance it takes a long time for huge files. Even for huge files, if the index values are small then the result comes quickly, in a few ms. I am using the saxon9he Java processor to execute the XSL.

        <xsl:variable name="insert-data" as="element(data)*">
            <xsl:for-each-group select="doc($insert-file)/insert-data/data"
                                group-by="xsd:integer(@index)">
                <xsl:sort select="current-grouping-key()"/>
                <data index="{current-grouping-key()}"
                      text-id="{generate-id(
                          $main-root/descendant::text()[
                              sum((preceding::text(), .)/string-length(.)) ge current-grouping-key()
                          ][1]
                      )}">
                    <xsl:copy-of select="current-group()/node()"/>
                </data>
            </xsl:for-each-group>
        </xsl:variable>

    In the above solution, if the index value is large, say 270962, then the time taken for the XSL to execute is 83427ms. In huge files, if the index values are large, say 4605415 or 4605431, it takes several minutes to execute. It seems the computation of the variable "insert-data" takes the time, though it is a global variable and computed only once. Should the XSL be addressed, or the processor? How can I improve the performance of the XSL?
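
    The likely culprit is that sum((preceding::text(), .)/string-length(.)) re-walks all preceding text nodes for every candidate node, which is quadratic overall. The usual fix is to accumulate the lengths once and then search the running totals; the idea, sketched in Python with made-up lengths (in XSLT the analogue is computing the cumulative sums into a sequence once, instead of re-summing inside the predicate):

        import bisect
        from itertools import accumulate

        lengths = [120, 45, 300, 80]             # string-length of each text node, in document order
        cumulative = list(accumulate(lengths))   # built once: O(n)

        def node_for_index(index):
            """Position of the first text node whose running total reaches `index`."""
            return bisect.bisect_left(cumulative, index)

        print(node_for_index(130))   # 1, i.e. the second text node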

    Read the article

  • ByteFlow installation Error on Windows

    - by Patrick
    Hi Folks, when I try to install ByteFlow on my Windows development machine, I get the following MySQL error, and I don't know what to do. Please give me some suggestions. Thank you so much!!!

        E:\byteflow-5b6d964917b5>manage.py syncdb
        !!! Read about DEBUG in settings_local.py and then remove me !!!
        !!! Read about DEBUG in settings_local.py and then remove me !!!
        J:\Program Files\Python26\lib\site-packages\MySQLdb\converters.py:37: DeprecationWarning: the sets module is deprecated
          from sets import BaseSet, Set
        Creating table auth_permission
        Creating table auth_group
        Creating table auth_user
        Creating table auth_message
        Creating table django_content_type
        Creating table django_session
        Creating table django_site
        Creating table django_admin_log
        Creating table django_flatpage
        Creating table actionrecord
        Creating table blog_post
        Traceback (most recent call last):
          File "E:\byteflow-5b6d964917b5\manage.py", line 11, in <module>
            execute_manager(settings)
          File "J:\Program Files\Python26\lib\site-packages\django\core\management\__init__.py", line 362, in execute_manager
            utility.execute()
          File "J:\Program Files\Python26\lib\site-packages\django\core\management\__init__.py", line 303, in execute
            self.fetch_command(subcommand).run_from_argv(self.argv)
          File "J:\Program Files\Python26\lib\site-packages\django\core\management\base.py", line 195, in run_from_argv
            self.execute(*args, **options.__dict__)
          File "J:\Program Files\Python26\lib\site-packages\django\core\management\base.py", line 222, in execute
            output = self.handle(*args, **options)
          File "J:\Program Files\Python26\lib\site-packages\django\core\management\base.py", line 351, in handle
            return self.handle_noargs(**options)
          File "J:\Program Files\Python26\lib\site-packages\django\core\management\commands\syncdb.py", line 78, in handle_noargs
            cursor.execute(statement)
          File "J:\Program Files\Python26\lib\site-packages\django\db\backends\util.py", line 19, in execute
            return self.cursor.execute(sql, params)
          File "J:\Program Files\Python26\lib\site-packages\django\db\backends\mysql\base.py", line 84, in execute
            return self.cursor.execute(query, args)
          File "J:\Program Files\Python26\lib\site-packages\MySQLdb\cursors.py", line 166, in execute
            self.errorhandler(self, exc, value)
          File "J:\Program Files\Python26\lib\site-packages\MySQLdb\connections.py", line 35, in defaulterrorhandler
            raise errorclass, errorvalue
        _mysql_exceptions.OperationalError: (1071, 'Specified key was too long; max key length is 767 bytes')

    Read the article

  • questions about name mangling in C++

    - by Tim
    I am trying to learn and understand name mangling in C++. Here are some questions:

    (1) From DevX:

        When a global function is overloaded, the generated mangled name for each overloaded version is unique. Name mangling is also applied to variables. Thus, a local variable and a global variable with the same user-given name still get distinct mangled names.

    Are there other cases that use name mangling, besides overloaded functions and same-name global and local variables?

    (2) From Wikipedia:

        The need arises where the language allows different entities to be named with the same identifier as long as they occupy a different namespace (where a namespace is typically defined by a module, class, or explicit namespace directive).

    I don't quite understand why name mangling is said to apply only when the identifiers belong to different namespaces, since overloaded functions can be in the same namespace, and same-name global and local variables can also be in the same namespace. How should I understand this? Do variables with the same name but in different scopes also use name mangling?

    (3) Does C have name mangling? If it does not, how can it deal with the case where some global and local variables have the same name? C does not have overloaded functions, right?

    Thanks and regards!

    Read the article

  • Opaque tenant identification with SQL Server & NHibernate

    - by Anton Gogolev
    Howdy! We're developing a nowadays-fashionable multi-tenanted SaaS app (shared database, shared schema), and there's one thing I don't like about it:

        public class Domain : BusinessObject
        {
            public virtual long TenantID { get; set; }
            public virtual string Name { get; set; }
        }

    The TenantID is driving me nuts, as it has to be accounted for almost everywhere, and it's a hassle from a security standpoint: what happens if a malicious API user changes TenantID to some other value and mixes things up?

    What I want to do is get rid of this TenantID in our domain objects altogether, and have either NHibernate or SQL Server deal with it. From what I've already read on the Internets, this can be done with CONTEXT_INFO (here's an NHibernate-based implementation), NHibernate filters, SQL views, and combinations thereof. Now, my requirements are as follows:

    - Remove any mention of TenantID from domain objects
    - ...but have SQL Server insert it where appropriate (I guess this is achieved with default constraints)
    - ...and obviously provide support for filtering based on this criterion, so that customers will never see each other's data
    - If possible, avoid SQL Server views
    - Have a solution which plays nicely with NHibernate, SQL Server's MARS, and the generally highly concurrent nature of SaaS apps

    What are your thoughts on that?

    Read the article

  • Fill in a Word form field with more than 255 characters

    - by user1308743
    I am trying to programmatically fill in a Microsoft Word form. I am successfully able to do so if the string is under 255 characters, with the code below; however, it says the string is too long if I try to use a string over 255 characters... How do I get past this limitation? If I open the document in Word I can type in more than 255 characters without a problem. Does anyone know how to input more characters via C# code?

        object fileName = strFileName;
        object readOnly = false;
        object isVisible = true;
        object missing = System.Reflection.Missing.Value;

        // open doc
        _oDoc = _oWordApplic.Documents.Open(ref fileName, ref missing, ref readOnly,
            ref missing, ref missing, ref missing, ref missing, ref missing, ref missing,
            ref missing, ref missing, ref isVisible, ref missing, ref missing, ref missing,
            ref missing);
        _oDoc.Activate();

        // write string
        _oDoc.FormFields[oBookMark].Result = value;

        // save and close
        oDoc.SaveAs(ref fileName, ref missing, ref missing, ref missing, ref missing,
            ref missing, ref missing, ref missing, ref missing, ref missing, ref missing,
            ref missing, ref missing, ref missing, ref missing, ref missing);
        _oWordApplic.Application.Quit(ref missing, ref missing, ref missing);

    Read the article
