Search Results

Search found 78653 results on 3147 pages for 'performance object name s'.


  • Doctrine: Relating a model to itself using a link table, like "This event is related to the following other events"

    - by mattalexx
    So in English, the relationship would sound like "This event is related to the following other events". My first instinct is to create an EventEvent model, with a first_event_id field and a second_event_id field. Then I would define the following two relationships in the Event model: $this->hasMany('Event as FirstRelatedEvents', array('local' => 'first_event_id', 'foreign' => 'second_event_id', 'refClass' => 'EventEvent')); $this->hasMany('Event as SecondRelatedEvents', array('local' => 'second_event_id', 'foreign' => 'first_event_id', 'refClass' => 'EventEvent')); But I would rather not have to use two relationships on the Event model. Is there a better way to do this?
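
    If memory serves, Doctrine 1.x has "equal nest relations" for exactly this symmetric case - a single hasMany with an 'equal' => true option - but treat this as an assumption to check against the Doctrine 1.x documentation rather than a confirmed API. A rough sketch reusing the names from the question:

        class Event extends Doctrine_Record
        {
            public function setUp()
            {
                // Assumed API: 'equal' => true asks Doctrine to match the EventEvent
                // link row in either direction, so one relation covers both sides.
                $this->hasMany('Event as RelatedEvents', array(
                    'local'    => 'first_event_id',
                    'foreign'  => 'second_event_id',
                    'refClass' => 'EventEvent',
                    'equal'    => true,
                ));
            }
        }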

    Read the article

  • Proxy object references in MVC code

    - by krystan honour
    Hi there, I am just figuring out best practice with MVC now that I have a project where we have chosen to use it in anger. My question is: if I create a list view which is bound to an IEnumerable, is this bad practice? Would it be better to separate the code generated by the WCF service reference into a data structure which essentially holds the same data but abstracts further from the service, meaning that the UI is totally unaware of the service implementation beneath? Or do people just bind to the proxy object types and have done with it? My personal feeling is to create an abstraction, but this seems to violate the DRY principle.
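
    One common middle ground is a thin view model owned by the UI plus a small mapping step, so the views never see the generated proxy types; a minimal sketch in which ProductDto stands in for a hypothetical generated proxy class and ProductViewModel is invented for illustration:

        // Stand-in for a type generated by the WCF service reference (hypothetical).
        public class ProductDto
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        // View model owned by the UI layer; it exposes only what the view needs.
        public class ProductViewModel
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public static class ProductMapper
        {
            // Keeping the translation in one place means the service contract
            // can change without touching the views.
            public static ProductViewModel ToViewModel(ProductDto dto)
            {
                return new ProductViewModel { Id = dto.Id, Name = dto.Name };
            }
        }

    The duplication this adds is usually tolerated because the two types change for different reasons; a mapping library such as AutoMapper can remove most of the boilerplate if it grows.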

    Read the article

  • Django paging object has issues with Postgresql QuerySets

    - by pivotal
    I have some django code that runs fine on a SQLite database or on a MySQL database, but it runs into problems with Postgres, and it's making me crazy that no one has had this issue before. I think it may also be related to the way querysets are evaluated by the pager. In a view I have: def index(request, page=1): latest_posts = Post.objects.all().order_by('-pub_date') paginator = Paginator(latest_posts, 5) try: posts = paginator.page(page) except (EmptyPage, InvalidPage): posts = paginator.page(paginator.num_pages) return render_to_response('blog/index.html', {'posts' : posts}) And inside the template: {% for post in posts.object_list %} {# some rendering jazz #} {% endfor %} This works fine with SQLite, but Postgres gives me: Caught TypeError while rendering: 'NoneType' object is not callable To further complicate things, when I switch the Queryset call to: latest_posts = Post.objects.all() Everything works great. I've tried re-reading the documentation, but found nothing, although I admit I'm a bit clouded by frustration at this point. What am I missing? Thanks in advance.

    Read the article

  • Preventing a heavy process from sinking in the swap file

    - by eran
    Our service tends to fall asleep during the nights on our client's server, and then has a hard time waking up. What seems to happen is that the process heap, which is sometimes several hundreds of MB, is moved to the swap file. This happens at night, when our service is not used, and others are scheduled to run (DB backups, AV scans etc). When this happens, after a few hours of inactivity the first call to the service takes up to a few minutes (subsequent calls take seconds). I'm quite certain it's an issue of virtual memory management, and I really hate the idea of forcing the OS to keep our service in the physical memory. I know doing that will hurt other processes on the server, and decrease the overall server throughput. That said, our clients just want our app to be responsive. They don't care if nightly jobs take longer. I vaguely remember there's a way to force Windows to keep pages in physical memory, but I really hate that idea. I'm leaning more towards some internal or external watchdog that will initiate higher-level functionalities (there is already some internal scheduler that does very little, and makes no difference). If there were a 3rd-party tool that provided that kind of service, it would be just as good. I'd love to hear any comments, recommendations and common solutions to this kind of problem. The service is written in VC2005 and runs on Windows servers.
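
    If the watchdog route wins, the simplest in-process form is just a background thread that periodically exercises the expensive code path so its pages stay resident; a rough VC2005-era sketch, where WarmUpHeavyPath is a hypothetical stand-in for whatever high-level call touches the large heap:

        #include <windows.h>
        #include <process.h>

        // Hypothetical stand-in: calls the same high-level code a real request would hit.
        void WarmUpHeavyPath();

        unsigned __stdcall WatchdogThread(void *)
        {
            for (;;)
            {
                ::Sleep(15 * 60 * 1000);  // wake every 15 minutes during idle periods
                WarmUpHeavyPath();        // touching the data keeps its pages warm
            }
            return 0;                     // never reached
        }

        void StartWatchdog()
        {
            _beginthreadex(NULL, 0, WatchdogThread, NULL, 0, NULL);
        }

    The Windows-level alternative alluded to in the question is VirtualLock (with SetProcessWorkingSetSize to raise the working-set quota), but as noted that shifts the pain onto the rest of the box.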

    Read the article

  • How do I make this nested for loop, testing sums of cubes, more efficient?

    - by Brian J. Fink
    I'm trying to iterate through all the combinations of pairs of positive long integers in Java and testing the sum of their cubes to discover if it's a Fibonacci number. I'm currently doing this by using the value of the outer loop variable as the inner loop's upper limit, with the effect being that the outer loop runs a little slower each time. Initially it appeared to run very quickly--I was up to 10 digits within minutes. But now after 2 full days of continuous execution, I'm only somewhere in the middle range of 15 digits. At this rate it may end up taking a whole year just to finish running this program. The code for the program is below: import java.lang.*; import java.math.*; public class FindFib { public static void main(String args[]) { long uLimit=9223372036854775807L; //long maximum value BigDecimal PHI=new BigDecimal(1D+Math.sqrt(5D)/2D); //Golden Ratio for(long a=1;a<=uLimit;a++) //Outer Loop, 1 to maximum for(long b=1;b<=a;b++) //Inner Loop, 1 to current outer { //Cube the numbers and add BigDecimal c=BigDecimal.valueOf(a).pow(3).add(BigDecimal.valueOf(b).pow(3)); System.out.print(c+" "); //Output result //Upper and lower limits of interval for Mobius test: [c*PHI-1/c,c*PHI+1/c] BigDecimal d=c.multiply(PHI).subtract(BigDecimal.ONE.divide(c,BigDecimal.ROUND_HALF_UP)), e=c.multiply(PHI).add(BigDecimal.ONE.divide(c,BigDecimal.ROUND_HALF_UP)); //Mobius test: if integer in interval (floor values unequal) Fibonacci number! if (d.toBigInteger().compareTo(e.toBigInteger())!=0) System.out.println(); //Line feed else System.out.print("\r"); //Carriage return instead } //Display final message System.out.println("\rDone. "); } } Now the use of BigDecimal and BigInteger was deliberate; I need them to get the necessary precision. Is there anything other than my variable types that I could change to gain better efficiency?
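
    Two things stand out before micro-optimizing. First, the golden-ratio constant as written computes 1 + sqrt(5)/2 rather than (1 + sqrt(5))/2. Second, the interval test can be replaced with the standard exact check that n is a Fibonacci number if and only if 5n^2 + 4 or 5n^2 - 4 is a perfect square, which keeps everything in integer arithmetic and drops BigDecimal entirely. A sketch of that check (assumes Java 9+ for BigInteger.sqrt):

        import java.math.BigInteger;

        public class FibCheck {
            // n is a Fibonacci number exactly when 5n^2 + 4 or 5n^2 - 4 is a perfect square.
            static boolean isFibonacci(BigInteger n) {
                BigInteger fiveNSquared = n.multiply(n).multiply(BigInteger.valueOf(5));
                return isPerfectSquare(fiveNSquared.add(BigInteger.valueOf(4)))
                    || isPerfectSquare(fiveNSquared.subtract(BigInteger.valueOf(4)));
            }

            static boolean isPerfectSquare(BigInteger x) {
                if (x.signum() < 0) return false;
                BigInteger r = x.sqrt();          // Java 9+ integer square root
                return r.multiply(r).equals(x);
            }

            public static void main(String[] args) {
                // quick sanity check: 8 and 13 are Fibonacci numbers, 10 is not
                System.out.println(isFibonacci(BigInteger.valueOf(8)));   // true
                System.out.println(isFibonacci(BigInteger.valueOf(13)));  // true
                System.out.println(isFibonacci(BigInteger.valueOf(10)));  // false
            }
        }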

    Read the article

  • yet another confusion with multiprocessing error, 'module' object has no attribute 'f'

    - by gatoatigrado
    I know this has been answered before, but it seems that executing the script directly "python filename.py" does not work. I have Python 2.6.2 on SuSE Linux. Code: #!/usr/bin/python # -*- coding: utf-8 -*- from multiprocessing import Pool p = Pool(1) def f(x): return x*x p.map(f, [1, 2, 3]) Command line: > python example.py Process PoolWorker-1: Traceback (most recent call last): File "/usr/lib/python2.6/multiprocessing/process.py", line 231, in _bootstrap self.run() File "/usr/lib/python2.6/multiprocessing/process.py", line 88, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python2.6/multiprocessing/pool.py", line 57, in worker task = get() File "/usr/lib/python2.6/multiprocessing/queues.py", line 339, in get return recv() AttributeError: 'module' object has no attribute 'f'
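
    For comparison, a sketch of the layout that usually works: the function handed to Pool.map must be importable by the worker processes, so it is defined at module level before the pool exists, and the pool is created under the __main__ guard:

        #!/usr/bin/python
        # -*- coding: utf-8 -*-
        from multiprocessing import Pool

        def f(x):
            # defined at module level, so worker processes can import it by name
            return x * x

        if __name__ == '__main__':
            p = Pool(1)
            print(p.map(f, [1, 2, 3]))   # [1, 4, 9]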

    Read the article

  • What limits scaling in this simple OpenMP program?

    - by Douglas B. Staple
    I'm trying to understand limits to parallelization on a 48-core system (4xAMD Opteron 6348, 2.8 Ghz, 12 cores per CPU). I wrote this tiny OpenMP code to test the speedup in what I thought would be the best possible situation (the task is embarrassingly parallel): // Compile with: gcc scaling.c -std=c99 -fopenmp -O3 #include <stdio.h> #include <stdint.h> int main(){ const uint64_t umin=1; const uint64_t umax=10000000000LL; double sum=0.; #pragma omp parallel for reduction(+:sum) for(uint64_t u=umin; u<umax; u++) sum+=1./u/u; printf("%e\n", sum); } I was surprised to find that the scaling is highly nonlinear. It takes about 2.9s for the code to run with 48 threads, 3.1s with 36 threads, 3.7s with 24 threads, 4.9s with 12 threads, and 57s for the code to run with 1 thread. Unfortunately I have to say that there is one process running on the computer using 100% of one core, so that might be affecting it. It's not my process, so I can't end it to test the difference, but somehow I doubt that's making the difference between a 19~20x speedup and the ideal 48x speedup. To make sure it wasn't an OpenMP issue, I ran two copies of the program at the same time with 24 threads each (one with umin=1, umax=5000000000, and the other with umin=5000000000, umax=10000000000). In that case both copies of the program finish after 2.9s, so it's exactly the same as running 48 threads with a single instance of the program. What's preventing linear scaling with this simple program?
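
    One cheap diagnostic, offered only as a sketch, is to time each thread's share of the loop separately: a wide spread between threads points at the interfering process or load imbalance, while a uniform slowdown as more cores join points more toward shared resources such as per-core turbo frequency or memory bandwidth:

        // Compile with: gcc scaling_diag.c -std=c99 -fopenmp -O3
        #include <stdio.h>
        #include <stdint.h>
        #include <omp.h>

        int main(void) {
            const uint64_t umax = 10000000000ULL;
            double sum = 0.;
            #pragma omp parallel
            {
                double t0 = omp_get_wtime();
                // nowait: measure this thread's own work time, not the barrier wait
                #pragma omp for reduction(+:sum) schedule(static) nowait
                for (uint64_t u = 1; u < umax; u++)
                    sum += 1. / u / u;
                double t1 = omp_get_wtime();
                printf("thread %d: %.2f s\n", omp_get_thread_num(), t1 - t0);
            }
            printf("%e\n", sum);   // sum is final here: the parallel region ends with a barrier
            return 0;
        }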

    Read the article

  • How should I declare default values for instance variables in Python?

    - by int3
    Should I give my class members default values like this: class Foo: num = 1 or like this? class Foo: def __init__(self): self.num = 1 In this question I discovered that in both cases, bar = Foo() bar.num += 1 is a well-defined operation. I understand that the first method will give me a class variable while the second one will not. However, if I do not require a class variable, but only need to set a default value for my instance variables, are both methods equally good? Or is one of them more 'pythonic' than the other? One thing I've noticed is that in the Django tutorial, they use the second method to declare Models. Personally I think the second method is more elegant, but I'd like to know what the 'standard' way is.
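
    For immutable values like numbers the two behave almost identically, because bar.num += 1 rebinds an instance attribute either way; the difference really bites with mutable defaults, where the class-level value is shared by every instance. A small illustration:

        class WithClassDefault:
            items = []            # one list shared by the whole class

        class WithInstanceDefault:
            def __init__(self):
                self.items = []   # a fresh list per instance

        a, b = WithClassDefault(), WithClassDefault()
        a.items.append(1)
        print(b.items)            # [1]  -- b sees a's change

        c, d = WithInstanceDefault(), WithInstanceDefault()
        c.items.append(1)
        print(d.items)            # []   -- independent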

    Read the article

  • One bidirectional TCP socket or two unidirectional? (Linux, high volume, low latency)

    - by osgx
    Hello, I need to send (interchange) a high volume of data periodically with the lowest possible latency between 2 machines. The network is rather fast (e.g. 1Gbit or even 2G+). The OS is Linux. Is it faster to use one TCP socket (for both send and recv) or two unidirectional TCP sockets? The test for this task is much like the NetPIPE network benchmark - measure latency and bandwidth for sizes from 2^1 up to 2^13 bytes, each size sent and received 3 times at least (in the real task the number of sends is greater; both processes will be sending and receiving, like ping-pong maybe). The benefit of 2 unidirectional connections comes from the Linux source: http://lxr.linux.no/linux+v2.6.18/net/ipv4/tcp_input.c#L3847 3847/* 3848 * TCP receive function for the ESTABLISHED state. 3849 * 3850 * It is split into a fast path and a slow path. The fast path is 3851 * disabled when: ... 3859 * - Data is sent in both directions. Fast path only supports pure senders 3860 * or pure receivers (this means either the sequence number or the ack 3861 * value must stay constant) ... 3863 * 3864 * When these conditions are not satisfied it drops into a standard 3865 * receive procedure patterned after RFC793 to handle all cases. 3866 * The first three cases are guaranteed by proper pred_flags setting, 3867 * the rest is checked inline. Fast processing is turned on in 3868 * tcp_data_queue when everything is OK. All the other conditions for disabling the fast path are false; only a socket that carries data in both directions keeps the kernel off the fast path on receive.

    Read the article

  • How to get the size of a binary tree?

    - by Andrei Ciobanu
    I have a very simple binary tree structure, something like: struct nmbintree_s { unsigned int size; int (*cmp)(const void *e1, const void *e2); void (*destructor)(void *data); nmbintree_node *root; }; struct nmbintree_node_s { void *data; struct nmbintree_node_s *right; struct nmbintree_node_s *left; }; Sometimes I need to extract a 'tree' from another and I need to get the size of the 'extracted tree' in order to update the size of the initial 'tree'. I was thinking of two approaches: 1) Using a recursive function, something like: unsigned int nmbintree_size(struct nmbintree_node* node) { if (node==NULL) { return(0); } return( nmbintree_size(node->left) + nmbintree_size(node->right) + 1 ); } 2) A preorder / inorder / postorder traversal done in an iterative way (using stack / queue) + counting the nodes. Which approach do you think is more 'memory failure proof' / performant? Any other suggestions / tips? NOTE: I am probably going to use this implementation in the future for small projects of mine. So I don't want to unexpectedly fail :).
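
    For comparison, here is what the iterative version looks like with an explicit stack, sketched against the structs above; the fixed-size stack is an illustrative simplification (a real version would grow it, or bound it by the tree height):

        #include <stddef.h>

        /* STACK_MAX is an illustrative bound, not something from the question. */
        #define STACK_MAX 4096

        /* Iterative node count using an explicit stack of pending subtrees. */
        unsigned int nmbintree_size_iter(struct nmbintree_node_s *root)
        {
            struct nmbintree_node_s *stack[STACK_MAX];
            int top = 0;
            unsigned int count = 0;

            if (root != NULL)
                stack[top++] = root;

            while (top > 0) {
                struct nmbintree_node_s *node = stack[--top];
                count++;
                if (node->left != NULL)
                    stack[top++] = node->left;
                if (node->right != NULL)
                    stack[top++] = node->right;
            }
            return count;
        }

    Whether this beats the recursive version depends mostly on tree depth: the recursive one can blow the call stack on a degenerate (list-like) tree, while the iterative one trades that for explicit memory you control.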

    Read the article

  • Creating relationship between two model instances

    - by Lowgain
    This is probably pretty simple, but here: Say I've got two models, Thing and Tag class Thing < ActiveRecord::Base has_and_belongs_to_many :tags end class Tag < ActiveRecord::Base has_and_belongs_to_many :things end And I have an instance of each. I want to link them. Can I do something like: @thing = Thing.find(1) @tag = Tag.find(1) @thing.tags.add(@tag) If not, what is the best way to do this? Thanks!
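
    With has_and_belongs_to_many the association collection itself accepts new members, so the usual idiom is the shovel operator rather than an add method; a quick sketch:

        @thing = Thing.find(1)
        @tag   = Tag.find(1)

        # << writes the join-table row and, for an already-saved record, persists immediately
        @thing.tags << @tag

        @thing.tags.include?(@tag)   # => true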

    Read the article

  • JMeter HTTP Cache Manager: unable to cache everything that is cached by the browser

    - by chinmay brahma
    I used the HTTP Cache Manager to cache files which are being cached in the browser. I have been successful for some of the pages: the number of files cached in JMeter is equal to the number of files cached by the browser. But in some cases the number of files being cached is less than the number cached by the browser. Using JMeter I found only 5 files being cached, but in the real browser 12 files get cached. Thanks in advance

    Read the article

  • Database structure for ecommerce site

    - by imanc
    Hey guys, I have been tasked with designing an ecommerce solution. The aspect that is causing me the most problems is the database. Currently the site consists of 10+ country-based shops, each with their own database (all residing on the same MySQL instance). For the new site I'd rather all these shop databases be merged into one database so that all tables (products, orders, customers etc.) have a shop_id field. From a programming perspective this seems to make the most sense, as we won't have to manage data across multiple databases. Currently the entire site generates about 120k orders a year, but it is experiencing fairly heavy growth and we need to design a solution that will scale. In 5 years there may be more than a million orders per year and a database that contains 5 years' order history (archiving may be a solution here). The question is - do we use a single database, or do we keep the database-per-shop structure? I am currently trying to find supporting evidence for either avenue. The company I am designing the solution for prefers the per-shop database structure because they believe it will allow the sites to scale. But my argument is that the shops' databases probably won't get so busy over the next few years that they exceed the capacity of a MySQL database and a "no expenses spared" hardware set-up. I am wondering if anyone has any advice either way. Does anyone have experience with websites / ecommerce sites that have tables containing millions of records? I know there is probably not a clear answer here, but at what stage do we have too many records or too large table files to have a fast-loading site? Also, if anyone has any advice on sources of information - books, websites, etc. where I can do further research, it would be highly appreciated! Cheers, imanc
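
    For what it's worth, the single-database shape being described is just a shop_id column on every shop-scoped table with composite indexes that lead on it; a hedged MySQL sketch (table and column names invented for illustration):

        CREATE TABLE orders (
            id          BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
            shop_id     INT UNSIGNED    NOT NULL,
            customer_id BIGINT UNSIGNED NOT NULL,
            created_at  DATETIME        NOT NULL,
            PRIMARY KEY (id),
            -- almost every query is per shop, so lead the index with shop_id
            KEY idx_orders_shop_created (shop_id, created_at)
        ) ENGINE=InnoDB;

    At roughly a million orders a year this stays well within what a single well-indexed InnoDB table is normally expected to handle; archiving or partitioning old orders is the usual next step long before a per-shop split becomes necessary.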

    Read the article

  • How to not over-use jQuery?

    - by Fedyashev Nikita
    Typical jQuery over-use: $('button').click(function() { alert('Button clicked: ' + $(this).attr('id')); }); Which can be simplified to: $('button').click(function() { alert('Button clicked: ' + this.id); }); Which is way faster. Can you give me any more examples of similar jQuery over-use?
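
    Another commonly cited example in the same spirit is re-querying the DOM on every loop iteration instead of caching the wrapped set once:

        // Over-use: the selector runs on every iteration
        for (var i = 0; i < 100; i++) {
            $('#log').append('<li>' + i + '</li>');
        }

        // Better: look the element up once and reuse the wrapped set
        var $log = $('#log');
        for (var j = 0; j < 100; j++) {
            $log.append('<li>' + j + '</li>');
        }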

    Read the article

  • Efficient data structure design

    - by Sway
    Hi there, I need to match a series of user-entered words against a large dictionary of words (to ensure the entered value exists). So if the user entered "orange" it should match an entry "orange" in the dictionary. Now the catch is that the user can also enter a wildcard or series of wildcard characters, say "or__ge", which would also match "orange". The key requirements are: * this should be as fast as possible; * use the smallest amount of memory to achieve it. If the size of the word list was small I could use a string containing all the words and use regular expressions. However, given that the word list could contain potentially hundreds of thousands of entries, I'm assuming this wouldn't work. So is some sort of 'tree' the way to go for this? Any thoughts or suggestions on this would be totally appreciated! Thanks in advance, Matt
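
    A trie (prefix tree) is the usual fit: lookup cost is bounded by the word length, a wildcard simply fans out across all children at that depth, and shared prefixes keep memory down compared with one big string. A rough Python sketch of the idea, not tuned for the memory requirement:

        class TrieNode:
            __slots__ = ('children', 'is_word')
            def __init__(self):
                self.children = {}
                self.is_word = False

        class Trie:
            def __init__(self):
                self.root = TrieNode()

            def add(self, word):
                node = self.root
                for ch in word:
                    node = node.children.setdefault(ch, TrieNode())
                node.is_word = True

            def match(self, pattern, wildcard='_'):
                """True if any stored word matches; the wildcard matches exactly one letter."""
                def walk(node, i):
                    if i == len(pattern):
                        return node.is_word
                    ch = pattern[i]
                    if ch == wildcard:
                        return any(walk(child, i + 1) for child in node.children.values())
                    child = node.children.get(ch)
                    return child is not None and walk(child, i + 1)
                return walk(self.root, 0)

        t = Trie()
        t.add('orange')
        print(t.match('or__ge'))   # True
        print(t.match('or__go'))   # False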

    Read the article

  • Are closures in JavaScript recompiled?

    - by Discodancer
    Let's say we have this code (forget about prototypes for a moment): function A(){ var foo = 1; this.method = function(){ return foo; } } var a = new A(); Is the inner function recompiled each time the function A is run? Or is it better (and why) to do it like this: var method = function(){ return this.foo; } function A(){ this.foo = 1; this.method = method; } var a = new A(); Or are the JavaScript engines smart enough not to create a new 'method' function every time? Specifically Google's V8 and node.js. Also, any general recommendations on when to use which technique are welcome. In my specific example, it really suits me to use the first example, but I know that the outer function will be instantiated many times.
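
    Regardless of what a particular engine optimises, the usual way to avoid creating a function per instance is to put the method on the prototype, at the cost of not being able to close over locals like foo:

        function A() {
            this.foo = 1;
        }

        // One function object shared by every instance; `this` is bound per call
        A.prototype.method = function () {
            return this.foo;
        };

        var a = new A();
        var b = new A();
        console.log(a.method());            // 1
        console.log(a.method === b.method); // true - the same function object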

    Read the article

  • integer division in php

    - by oezi
    Hi guys, I'm looking for the fastest way to do an integer division in PHP. For example, 5 / 2 should be 2 | 6 / 2 should be 3, and so on. If I simply do this, PHP will return 2.5 in the first case; the only solution I could find was using intval($my_number/2) - which isn't as fast as I want it to be (but gives the expected results). Can anyone help me out with this?
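
    For reference, a few equivalent ways to get the truncated quotient, sketched below; the plain (int) cast avoids the function-call overhead of intval(), and intdiv() only exists on PHP 7 or later:

        <?php
        $n = 5;

        echo intval($n / 2);   // 2 - what the question already uses
        echo (int) ($n / 2);   // 2 - plain cast, no function-call overhead
        echo $n >> 1;          // 2 - bit shift, only for non-negative values divided by powers of two
        echo intdiv($n, 2);    // 2 - built-in, but PHP 7+ only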

    Read the article

  • same flash file (.swf) downloaded multiple times on a page

    - by Gunjan
    I have a page that has a table with each row corresponding to an audio file. The last cell in each row embeds a simple Flash audio player. The problem is that the Flash file for the player is being downloaded for each row separately, and as soon as the rows go beyond 40-50 it crashes the browser. I tried using different players (1pixelout, flash-mp3-player) and the problem is still there, so it's not a player-specific issue. Is there any way to cache the player so that it is only downloaded once?
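
    One thing worth checking - offered purely as a guess from the symptoms - is whether the .swf is served without caching headers, so every embed triggers a fresh download; with Apache and mod_expires that is a small config change (hypothetical snippet, adjust the lifetime to taste):

        # Hypothetical Apache snippet - assumes mod_expires is enabled
        <IfModule mod_expires.c>
            ExpiresActive On
            ExpiresByType application/x-shockwave-flash "access plus 1 month"
        </IfModule>

    If the headers are already correct, the browser may simply be struggling with one player instance per row; in that case reducing the number of embedded players (for example, a single shared player pointed at the selected row's file) is the more direct fix.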

    Read the article

  • oprofile unable to produce call graph

    - by aaa
    Hello, I am trying to use oprofile to generate a call graph. The compiler is g++, the platform is Linux x86-64, the linker is gfortran. The C++ code is compiled with -fno-omit-frame-pointer. oprofile is started with --callgraph=25, and the report is run with --callgraph. The call graph is produced, but it only includes self time, which is not much use. What am I missing?

    Read the article

  • To Interface or Not?: Creating a polymorphic model relationship in Ruby on Rails dynamically..

    - by Globalkeith
    Please bear with me for a moment as I try to explain exactly what I would like to achieve. In my Ruby on Rails application I have a model called Page. It represents a web page. I would like to enable the user to arbitrarily attach components to the page. Some examples of "components" would be Picture, PictureCollection, Video, VideoCollection, Background, Audio, Form, Comments. Currently I have a direct relationship between Page and Picture like this: class Page < ActiveRecord::Base has_many :pictures, :as => :imageable, :dependent => :destroy end class Picture < ActiveRecord::Base belongs_to :imageable, :polymorphic => true end This relationship enables the user to associate an arbitrary number of Pictures to the page. Now if I want to provide multiple collections i would need an additional model: class PictureCollection < ActiveRecord::Base belongs_to :collectionable, :polymorphic => true has_many :pictures, :as => :imageable, :dependent => :destroy end And alter Page to reference the new model: class Page < ActiveRecord::Base has_many :picture_collections, :as => :collectionable, :dependent => :destroy end Now it would be possible for the user to add any number of image collections to the page. However this is still very static in term of the :picture_collections reference in the Page model. If I add another "component", for example :video_collections, I would need to declare another reference in page for that component type. So my question is this: Do I need to add a new reference for each component type, or is there some other way? In Actionscript/Java I would declare an interface Component and make all components implement that interface, then I could just have a single attribute :components which contains all of the dynamically associated model objects. This is Rails, and I'm sure there is a great way to achieve this, but its a tricky one to Google. Perhaps you good people have some wise suggestions. Thanks in advance for taking the time to read and answer this.
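
    The Rails analogue of that interface is usually to invert things into a single polymorphic join model, so Page holds one components association and each concrete content type plugs into it without new references on Page; a sketch with invented model and column names:

        class Page < ActiveRecord::Base
          has_many :components, :dependent => :destroy
        end

        # One row per component attached to a page; add a position column if ordering matters.
        class Component < ActiveRecord::Base
          belongs_to :page
          belongs_to :content, :polymorphic => true   # needs content_id and content_type columns
        end

        class Picture < ActiveRecord::Base
          has_many :components, :as => :content
        end

        class Video < ActiveRecord::Base
          has_many :components, :as => :content
        end

        # page.components.map(&:content)  => a mixed list of Picture, Video, ... instances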

    Read the article

  • C#/WPF FileSystemWatcher on every extension on every path

    - by BlueMan
    I need a FileSystemWatcher that can observe some specific paths and specific extensions. But there could be dozens, hundreds or maybe thousands of paths (hope not :P), and the same goes for extensions. The paths and extensions are added by the user. Creating hundreds of FileSystemWatchers isn't a good idea, is it? So - how to do it? Is it possible to watch/observe every device (HDDs, SD flash, pendrives, etc.)? Would it be efficient? I don't think so... Every change to a Windows log file, every file scanned by an antivirus program - it could really slow down my program's watchers :(
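
    A common compromise, sketched below with hypothetical path and extension lists, is one watcher per drive root (or per top-level folder) with IncludeSubdirectories enabled, doing the path and extension filtering in the event handler:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;

        class WatcherDemo
        {
            // Hypothetical user-supplied lists of paths and extensions to monitor.
            static readonly List<string> WatchedPaths =
                new List<string> { @"C:\Data", @"C:\Logs", @"D:\Projects" };
            static readonly HashSet<string> WatchedExtensions =
                new HashSet<string>(StringComparer.OrdinalIgnoreCase) { ".txt", ".log" };

            static readonly List<FileSystemWatcher> Watchers = new List<FileSystemWatcher>();

            static void Main()
            {
                // One watcher per drive root instead of one per user-supplied path.
                foreach (string root in WatchedPaths.Select(p => Path.GetPathRoot(p)).Distinct())
                {
                    var watcher = new FileSystemWatcher(root) { IncludeSubdirectories = true };
                    watcher.Created += OnFileEvent;
                    watcher.Changed += OnFileEvent;
                    watcher.EnableRaisingEvents = true;
                    Watchers.Add(watcher);   // keep a reference so the watcher stays alive
                }
                Console.ReadLine();
            }

            static void OnFileEvent(object sender, FileSystemEventArgs e)
            {
                // Filter cheaply here so events outside the watched sets are dropped early.
                if (!WatchedExtensions.Contains(Path.GetExtension(e.FullPath))) return;
                if (!WatchedPaths.Any(p => e.FullPath.StartsWith(p, StringComparison.OrdinalIgnoreCase))) return;
                Console.WriteLine("{0}: {1}", e.ChangeType, e.FullPath);
            }
        }

    Watching literally every device (enumerable via DriveInfo.GetDrives()) is possible with the same pattern, but the event volume from routine system activity is exactly the concern raised at the end of the question, so cheap filtering matters more than the number of watcher objects.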

    Read the article
