Search Results

Search found 6634 results on 266 pages for 'fast fashion'.

Page 176/266 | < Previous Page | 172 173 174 175 176 177 178 179 180 181 182 183  | Next Page >

  • django url tag performance

    - by zxygentoo
    I was trying to integrate django-voting into my project following the RedditStyleVoting instructions. In my urls.py, I did something like this:

        url(r'^sections/(?P<object_id>\d+)/(?P<direction>up|down|clear)vote/?$',
            vote_on_object,
            dict(
                model=Section,
                template_object_name='section',
                template_name='script/section_confirm_vote.html',
                allow_xmlhttprequest=True
            ),
            name="section_vote"),

    then, in my template:

        {% vote_by_user user on section as vote %}
        {% score_for_object section as score %}
        {% vote_by_user user on section as vote %}
        {% score_for_object section as score %}
        {{ score.score|default:0 }}

    It takes over 1.3s to load the page, but by hard coding it like this:

        {% vote_by_user user on section as vote %}
        {% score_for_object section as score %}
        {{ score.score|default:0 }}

    I got 50ms. Just by avoiding the url tag resolution I got a 20x+ performance improvement. Is there something I did wrong? If not, then what's the best practice here: should we do things the right way or the fast way?
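    A quick way to confirm that URL resolution is really the bottleneck is to time reverse() directly, since that is what the {% url %} tag calls under the hood. A minimal sketch, assuming the named pattern "section_vote" from the question and run inside a configured Django shell (on old Django the import is django.core.urlresolvers; on newer versions it is django.urls):

        import timeit
        from django.core.urlresolvers import reverse  # django.urls.reverse on newer Django

        def resolve_once():
            # the same lookup a single {% url %} call would perform for one section
            return reverse("section_vote", kwargs={"object_id": 1, "direction": "up"})

        # roughly what a page that renders the tag 1000 times pays for resolution alone
        print(timeit.timeit(resolve_once, number=1000))

    If that number accounts for most of the 1.3s, caching the resolved URLs (or simplifying the URLconf) is usually a better trade-off than hard-coding paths in the template.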

    Read the article

  • which layout engine for finding coordinates of html elements on the web page?

    - by Mexx
    I am doing a web data classification task and was wondering if I could get the coordinates of HTML elements as they would appear in a web browser, without taking into account any CSS or JavaScript referenced by the page. My programming language is C++ and I need results for a couple million pages, so it has to be fast. I know there is a Microsoft COM component which renders the page in a web browser control and can then be queried for the positions of different HTML tags. But this is not suitable in my case, as it first renders the whole page, which takes up a lot of time. As far as I found out, there are open-source layout engines (WebKit, Gecko) that could probably be used for this. But each is a huge piece of code, and I need someone to direct me to the right classes or modules to look into, or to any similar work someone has done previously. Also, please let me know what you think is a good choice if I want to customize the existing code to run with multiple threads to make it faster. Thanks

    Read the article

  • how to optimize an oracle query that has to_char in where clause for date

    - by panorama12
    I have a table that contains about 49403459 records. I want to query the table on a date range, say 04/10/2010 to 04/10/2010. However, the dates are stored in the table in the format 10-APR-10 10.15.06.000000 AM (a timestamp). As a result, when I do:

        SELECT bunch, of, stuff, create_date
        FROM myTable
        WHERE TO_CHAR(create_date, 'MM/DD/YYYY') >= '04/10/2010'
          AND TO_CHAR(create_date, 'MM/DD/YYYY') <= '04/10/2010'

    I get 529 rows, but in 255.59 seconds, which I guess is because TO_CHAR is applied to EACH record. However, when I do:

        SELECT bunch, of, stuff, create_date
        FROM myTable
        WHERE create_date >= to_date('04/10/2010', 'MM/DD/YYYY')
          AND create_date <= to_date('04/10/2010', 'MM/DD/YYYY')

    then I get 0 results in 0.14 seconds. How can I make this query fast and still get the valid (529) results? At this point I cannot change the indexes. Right now I think an index exists on the create_date column.
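    The second query returns nothing because to_date('04/10/2010','MM/DD/YYYY') is midnight, so ">= midnight AND <= midnight" only matches rows stamped exactly at midnight. The usual fix is a half-open range over the raw column (midnight up to, but not including, the next day's midnight), which avoids the per-row TO_CHAR and keeps an index on create_date usable. A sketch of that idea from Python, assuming the cx_Oracle driver and made-up connection details (any DB-API driver, or the same WHERE clause in plain SQL, works the same way):

        import datetime
        import cx_Oracle  # assumption: the Oracle DB-API driver is available

        day = datetime.datetime(2010, 4, 10)

        sql = """
            SELECT bunch, of, stuff, create_date
            FROM myTable
            WHERE create_date >= :day_start
              AND create_date <  :day_end
        """

        conn = cx_Oracle.connect("user", "password", "host/service")  # hypothetical credentials
        cur = conn.cursor()
        # bind the whole day as [day_start, day_end): the column is compared directly,
        # so no per-row conversion and the create_date index can be used
        cur.execute(sql, {"day_start": day, "day_end": day + datetime.timedelta(days=1)})
        rows = cur.fetchall()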

    Read the article

  • Is it possible to generate plain-old XML using Haml?

    - by lsdr
    I've been working on a piece of software where I need to generate a custom XML file to send back to a client application. The current solutions in the Ruby/Rails world for generating XML files are slow, at best. Builder or even Nokogiri, while they have a nice syntax and are maintainable solutions, consume too much time and processing. I definitely could go with ERB, which provides good speed at the expense of building the whole XML by hand. Haml is a great tool, has a nice and straightforward syntax, and is fairly fast. But I'm struggling to build pure XML files with it, which makes me wonder: is it possible at all? Does anyone have pointers to code or docs showing how to do this and build a full, valid XML document from Haml?

    Read the article

  • Can this loop be sped up in pure Python?

    - by Noctis Skytower
    I was trying out an experiment with Python, trying to find out how many times it could add one to an integer in one minute's time. Assuming two computers are the same except for the speed of the CPUs, this should give an estimate of how fast some CPU operations may be for the computer in question. The code below is an example of a test designed to fulfill the requirements given above. This version is about 20% faster than the first attempt and 150% faster than the third attempt. Can anyone make any suggestions as to how to get the most additions in a minute's time span? Higher numbers are desirable. EDIT: This experiment is being written in Python 3.1 and is 15% faster than the fourth speed-up attempt.

        def start(seconds):
            import time, _thread

            def stop(seconds, signal):
                time.sleep(seconds)
                signal.pop()

            total, signal = 0, [None]
            _thread.start_new_thread(stop, (seconds, signal))
            while signal:
                total += 1
            return total

        if __name__ == '__main__':
            print('Testing the CPU speed ...')
            print('Relative speed:', start(60))
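    One change that usually helps in this kind of benchmark is plain loop unrolling: the while test (and the truthiness check of signal) is overhead paid once per pass, so doing several add-one statements per pass amortizes it; the count can overshoot by at most four on the final pass, which is negligible over a minute. A sketch keeping the question's structure:

        def start_unrolled(seconds):
            import time, _thread

            def stop(seconds, signal):
                time.sleep(seconds)
                signal.pop()

            total, signal = 0, [None]
            _thread.start_new_thread(stop, (seconds, signal))
            while signal:
                total += 1
                total += 1
                total += 1
                total += 1
                total += 1   # five additions per while-test instead of one
            return total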

    Read the article

  • Advanced queries in HBase

    - by Teflon Ted
    Given the following HBase schema scenario (from the official FAQ)...

        How would you design an HBase table for a many-to-many association between two entities, for example Student and Course?

        I would define two tables:

        Student:
          student id
          student data (name, address, ...)
          courses (use course ids as column qualifiers here)

        Course:
          course id
          course data (name, syllabus, ...)
          students (use student ids as column qualifiers here)

        This schema gives you fast access to the queries "show all classes for a student" (student table, courses family) and "show all students for a class" (course table, students family).

    How would you satisfy the request: "Give me all the students that share at least two courses in common"? Can you build a "query" in HBase that will return that set, or do you have to retrieve all the pertinent data and crunch it yourself in code?
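    As far as I know there is no server-side join or set operation in HBase that answers this directly, so the usual answer is the second option: scan the Course table (its students family) and crunch it client-side. A rough sketch of that crunching in Python, assuming you can iterate courses and read each course's student-id qualifiers with whatever HBase client you use:

        from collections import Counter
        from itertools import combinations

        def students_sharing_two_courses(courses):
            """courses: iterable of (course_id, [student_id, ...]) pairs."""
            pair_counts = Counter()
            for _course_id, students in courses:
                # count every unordered pair of students that co-occurs in this course
                for a, b in combinations(sorted(set(students)), 2):
                    pair_counts[(a, b)] += 1
            shared = set()
            for (a, b), n in pair_counts.items():
                if n >= 2:          # the pair appears together in two or more courses
                    shared.update((a, b))
            return shared

    This is quadratic in each course's roster size, which is fine for classroom-sized courses; for very large rosters you would want to push the pair counting into a MapReduce job over the same table instead.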

    Read the article

  • Scalable (half-million files) version control system

    - by hashable
    We use SVN for our source-code revision control and are experimenting with using it for non-source-code files. We are working with a large set (300-500k) of short (1-4kB) text files that will be updated on a regular basis, and we need to version control them. We tried using SVN in flat-file mode, and it struggled to handle the first commit (500k files checked in), taking about 36 hours. On a daily basis, we need the system to be able to handle 10k modified files per commit transaction in a short time (<5 min). My questions: Is SVN the right solution for my purpose? The initial speed seems too slow for practical use. If yes, is there a particular SVN server implementation that is fast? (We are currently using the GNU/Linux default SVN server and command-line client.) If no, what are the best F/OSS or commercial alternatives? Thanks

    Read the article

  • Best Application for storing code snippets

    - by Konrad
    Hi all, just wondering if you can point me in the direction of a simple, fast program which stores code snippets. I have been using a local wiki up to now, but I find it a little annoying at times. Ideally I would like this application to be portable, i.e. it could run off a USB stick on multiple machines with no installation. What do you use? EDIT: I would prefer a solution that is decoupled from the IDE and stored locally, not in the cloud. EDIT 2: Thanks for all the replies thus far, but I am still awaiting a non-cloud, non-web-based portable solution. Anyone else care to weigh in? :)

    Read the article

  • problem with HttpWebRequest.GetResponse performance in a multithreaded application

    - by Nikita
    I get very bad performance from the HttpWebRequest.GetResponse method when it is called from different threads for different URLs. For example, if we have one thread and execute url1, it takes 3 seconds. If we execute url1 and url2 in parallel, it takes 10 seconds: the first request finishes after 8 seconds and the second after 10 seconds. If we execute 10 URLs (url1, url2, ..., url10), it takes 1 minute 4 seconds, and the first request only finishes after 50 seconds! I use the GetResponse method. I tried to set DefaultConnectionLimit, but it doesn't help. If I use the BeginGetResponse/EndGetResponse methods it works very fast, but only if these methods are called from one thread; if they are called from different threads it is also very slow. I need to execute HTTP requests from many threads at one time. Has anyone ever encountered such a problem? Thank you for any suggestions.

    Read the article

  • Trying to find a good javascript function for hmac-sha1

    - by Darxval
    So I have been searching the web for a JavaScript source for an HMAC-SHA1 algorithm. I saw Crypto-JS but I can't seem to get it to work, mainly because the page has no idea what Crypto means (I copied the .js script functions into my script file): http://code.google.com/p/crypto-js/ I have my Base64 encoding function already, which I got from here: http://nerds-central.blogspot.com/2007/01/fast-scalable-javascript-and-vbscript.html By the way, this is for a Twitter application using the new OAuth system. Any help or links to where I can find anything on this would be helpful. If you need me to elaborate, let me know. Thank you!

    Read the article

  • Make Trac use a Drupal user database for authentication

    - by denisw
    I am trying to set up a Trac instance as a complement to a Drupal site and would like to give users the possibility to use their Drupal account in Trac too, ideally in a single sign-on fashion (if the user is already logged into Drupal, he is automatically logged into Trac). The question now is how to accomplish this. I have found a plugin named DrupalIntegration which seems to implement that functionality; however, it is poorly documented, in fact not documented at all. I managed to install it, but don't know how to configure it. Here is what I came up with from looking at the source code and the documentation of the AccountManager plugin (on which DrupalIntegration depends):

        [components]
        trac.web.auth.loginmodule = disabled
        acct_mgr.api = enabled
        acct_mgr.web_ui.LoginModule = enabled
        acct_mgr.web_ui.RegistrationModule = disabled
        TracDrupalIntegration.DrupalIntegration = enabled

        [account-manager]
        drupal_database = mysql://<username>:<password>@localhost/<db>
        password_store = DrupalIntegration

    (<username>, <password> and <db> are naturally substituted with the correct data.) While the Trac log says:

        2010-12-18 10:54:09,570 Trac[loader] DEBUG: Loading TracDrupalIntegration from /usr/lib/python2.5/site-packages/TracDrupalIntegration-0.1-py2.5.egg

    this doesn't seem to work: trying to log in with a Drupal username/password results in an "Invalid username or password" error. Has anyone used DrupalIntegration and can point out to me what I did wrong? Or is there any other approach you know of (or have even used in the past) to integrate Drupal and Trac that way?

    Read the article

  • Operations on bytes in C#

    - by Hooch
    Hello. I'm writing an application to control LEDs over LPT. I have everything working except this one small function. I want to build a function that will take two arguments and return one number. In the actual code those binary numbers will be in hex; I wrote them in binary here so that it's easier to visualize.

        Example 1:
        arg1 = 1100 1100
        arg2 = 1001 0001
        retu = 0100 1100

        Example 2:
        arg1 = 1111 1111
        arg2 = 0001 0010
        retu = 1110 1101

        Example 3:
        arg1 = 1111 0000
        arg2 = 0010 0010
        retu = 1101 0000

    I have no idea what this function should look like. I want it to be as fast as possible. I'll call this function 200 times per second.
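    The three examples are all consistent with "keep the bits of arg1 that are not set in arg2", i.e. arg1 AND (NOT arg2); in C# that is (byte)(arg1 & ~arg2), the cast being needed because the & and ~ operators promote bytes to int. A quick check of the pattern, written in Python only to verify the three examples:

        def combine(arg1, arg2):
            # clear in arg1 every bit that is set in arg2
            return arg1 & ~arg2 & 0xFF

        assert combine(0b11001100, 0b10010001) == 0b01001100  # example 1
        assert combine(0b11111111, 0b00010010) == 0b11101101  # example 2
        assert combine(0b11110000, 0b00100010) == 0b11010000  # example 3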

    Read the article

  • Is Sphinx better than LaTeX for writing manuals/books?

    - by Masi
    Only a few people recommended using Sphinx at the beginning of the year. Sphinx has developed rather fast recently. I noticed today that Sage has switched from editing directly in LaTeX to Sphinx. This is evident in William Stein's answer on 2nd April about Sage's tutorial: "The tutorial is not a LaTeX document anymore. It's an entirely different Sphinx document that can output PDF." This suggests to me that Sphinx may have reached a level where it is suitable for me. Is Sphinx better than LaTeX for writing manuals/books?

    Read the article

  • What's the cross-browser way to capture all single clicks on a button?

    - by sany
    What's the best way to execute a function exactly once every time a button is clicked, regardless of click speed and browser? Simply binding a "click" handler works perfectly in all browsers except IE. In IE, when the user clicks too fast, only "dblclick" fires, so the "click" handler is never executed. Other browsers trigger both events, so it's not a problem for them. The obvious solution/hack (to me at least) is to attach a dblclick handler in IE that triggers my click handler twice. Another idea is to track clicks myself with mousedown/mouseup, which seems pretty primitive and probably belongs in a framework rather than in my application. So, what's the best/usual/right way of handling this? (Pure JavaScript or jQuery preferred.)

    Read the article

  • Need help on nested loop of queries in php and mysql?

    - by mysqllearner
    Hi, I am trying to do this:

        <?php
        $good_customer = 0;

        $q = mysql_query("SELECT user FROM users WHERE activated = '1'"); // this gives me about 40k users

        while($r = mysql_fetch_assoc($q)){
            $money_spent = 0;
            $user = $r['user'];

            // Do queries on another 20 tables
            for($i = 1; $i <= 20; $i++){
                $tbl_name = 'data' . $i;
                $q2 = mysql_query("SELECT money_spent FROM $tbl_name WHERE user = '{$user}'");
                while($r2 = mysql_fetch_assoc($q2)){
                    $money_spend += $r2['money_spent'];
                }
                if($money_spend > 1000000){
                    $good_customer += 1;
                }
            }
        }

    This is just an example. I am testing on localhost; for a single user it returns very fast, but when I try 1000 it takes forever, not to mention 40k users. Any way to optimise/improve this code? EDIT: By the way, each of the other 20 tables has ~20-40k records.

    Read the article

  • imported vm gives "failed to open/create network" error

    - by Colleen
    Steps:

    1. Created a VM in Windows.
    2. Partitioned the drive and installed Ubuntu.
    3. Exported the VM I created.
    4. Mounted the Windows drive in Ubuntu.
    5. Imported the VM from the export, on the mounted drive.
    6. Tried to start the VM and got the following error:

        "Failed to open a session for the virtual machine XXXX.
        Failed to open/create the internal network 'HostInterfaceNetworking-Intel(R) 82579LM Gigabit Network Connection' (you might need to modprobe vboxnetflt to make it accessible) (VERR_INTNET_FLT_IF_NOT_FOUND).
        Result Code: NS_ERROR_FAILURE (0x80004005)
        Component: Console
        Interface: IConsole {1968b7d3-e3bf-4ceb-99e0-cb7c913317bb}"

    Network settings:
    Adapter 1: PCnet-FAST III (Bridged adapter, Intel(R) 82579LM Gigabit Network Connection)

    Read the article

  • does a ruby on rails rack class get access to the entire rails environment?

    - by Andrew Arrow
    That is, when the def call(env) method is invoked by hitting any URL, can I, inside that method, make ActiveRecord queries, use classes defined in lib, etc.? Or is it more like an irb console without the Rails env loaded? Another way to put it, with a rake task example:

        task :foo => :environment do
          # with env
        end

        task :foo2 do
          # without env
        end

    I would think Rack classes would NOT get the environment, so they are super fast and don't take on all the overhead of a normal Rails request. But that doesn't seem to be the case: I CAN make ActiveRecord queries inside my Rack class. So what is the advantage of Rack then?

    Read the article

  • How to filter node list based on the contents of another node list

    - by ~otakuj462
    Hi, I'd like to use XSLT to filter a node list based on the contents of another node list. Specifically, I'd like to filter a node list such that elements with identical id attributes are eliminated from the resulting node list. Priority should be given to one of the two node lists. The way I originally imagined implementing this was to do something like this:

        <xsl:variable name="filteredList1"
                      select="$list1[not($list2[@id_from_list1 = @id_from_list2])]"/>

    The problem is that the context node changes in the predicate for $list2, so I don't have access to attribute @id_from_list1. Due to these scoping constraints, it's not clear to me how I would be able to refer to an attribute from the outer node list using nested predicates in this fashion. To get around the issue of the context node, I've tried to create a solution involving a for-each loop, like the following:

        <xsl:variable name="filteredList1">
          <xsl:for-each select="$list1">
            <xsl:variable name="id_from_list1" select="@id_from_list1"/>
            <xsl:if test="not($list2[@id_from_list2 = $id_from_list1])">
              <xsl:copy-of select="."/>
            </xsl:if>
          </xsl:for-each>
        </xsl:variable>

    But this doesn't work correctly. It's also not clear to me how it fails... Using the above technique, filteredList1 has a length of 1, but appears to be empty. It's strange behaviour, and anyhow, I feel there must be a more elegant approach. I'd appreciate any guidance anyone can offer. Thanks.

    Read the article

  • Quick way to do data lookup in PHP

    - by Ghostrider
    I have a data table with 600,000 records that is around 25 megabytes in size. It is indexed by a 4-byte key. Is there a way to find a row in such a dataset quickly with PHP without resorting to MySQL? The website in question is mostly static, with minor PHP code and no database dependencies, and is therefore fast. I would like to add this data without having to use MySQL if possible. In C++ I would memory-map the file and do a binary search in it. Is there a way to do something similar in PHP?
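    If the rows can be written out as fixed-width records sorted by the 4-byte key, the C++ approach ports almost directly: binary-search the file with seeks, which PHP can do with fopen/fseek/fread/unpack (about 20 reads for 600k records, no MySQL needed). A sketch of the idea in Python, with a made-up record layout standing in for the real one:

        import os
        import struct

        RECORD_SIZE = 44           # hypothetical layout: 4-byte key + 40 bytes of payload
        KEY = struct.Struct(">I")  # 4-byte big-endian key at the start of each record

        def lookup(path, wanted):
            """Binary search over a file of fixed-width records sorted by key."""
            lo, hi = 0, os.path.getsize(path) // RECORD_SIZE
            with open(path, "rb") as f:
                while lo < hi:
                    mid = (lo + hi) // 2
                    f.seek(mid * RECORD_SIZE)
                    rec = f.read(RECORD_SIZE)
                    key = KEY.unpack_from(rec)[0]
                    if key == wanted:
                        return rec          # the matching row
                    if key < wanted:
                        lo = mid + 1
                    else:
                        hi = mid
            return None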

    Read the article

  • Efficient implementation of natural logarithm (ln) and exponentiation

    - by Donotalo
    Basically, I'm looking for an implementation of the log() and exp() functions provided in the C library <math.h>. I'm working with 8-bit microcontrollers (OKI 411 and 431). I need to calculate the Mean Kinetic Temperature. The requirement is that we should be able to calculate MKT as fast as possible and with as little code memory as possible. The compiler comes with log() and exp() functions in <math.h>, but calling either function and linking with the library causes the code size to increase by 5 kilobytes, which will not fit in one of the micros we work with (OKI 411), because our code has already consumed ~12K of the available ~15K of code memory. The implementation I'm looking for should not use any other C library functions (like pow(), sqrt() etc.). This is because all library functions are packed in one library, and even if only one function is called, the linker will bring the whole 5K library into code memory.
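    For what it's worth, log() can be built from nothing but add/subtract/multiply/divide once the argument is range-reduced, and the series is short. A rough sketch of the math in Python floats (the structure is the point, not the code: on the OKI parts you would redo it in fixed point, and exp() reduces the same way via e^x = 2^k * e^r with |r| <= ln(2)/2):

        LN2 = 0.6931471805599453

        def ln(x):
            # assumes x > 0; range-reduce so that x = m * 2**e with 1 <= m < 2
            e = 0
            while x >= 2.0:
                x *= 0.5
                e += 1
            while x < 1.0:
                x *= 2.0
                e -= 1
            # ln(m) = 2 * artanh(s) with s = (m-1)/(m+1); here |s| < 1/3 so it converges fast
            s = (x - 1.0) / (x + 1.0)
            s2 = s * s
            term, total, k = s, 0.0, 1
            for _ in range(6):      # six odd terms give roughly 6-7 significant digits
                total += term / k
                term *= s2
                k += 2
            return e * LN2 + 2.0 * total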

    Read the article

  • Python IPC, popen too slow

    - by UnableToLoad
    I need to run a subprocess (./myProgram) from a Python script and get its output. Currently I do this:

        import subprocess

        proc = subprocess.Popen('./generate_out',
                                shell=False,
                                stdout=subprocess.PIPE,
                                )

        while proc.poll() is None:
            out = proc.stdout.readline()
            data = doStuff(out)
            print(data)

    but it is slow: sometimes a lot of time passes between the output produced by ./generate_out and the print(data). Knowing that my doStuff() function is very fast, I think there is some buffer slowing down my pipe... Notes: ./generate_out generates a potentially unlimited number of lines, each of finite length. It seems that when too few characters are put into the pipe between the two processes nothing happens; then, when enough is produced, I get a huge print (not the expected behaviour!). Sometimes I wait many seconds (10-20 and more) between the generate_out output and the Python print. What can I do? Maybe communicate() is faster? Anything else? Thank you a lot!
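    The "nothing for a while, then a huge print" pattern is the classic sign of block buffering: when generate_out's stdout is a pipe rather than a terminal, it is typically buffered in chunks of a few kilobytes, so the real fix is on the child's side (have it flush after each line, e.g. run it with python -u if it is Python, call fflush, or wrap it with stdbuf -oL or unbuffer). On the Python side you can at least read line by line in text mode as soon as lines arrive; a sketch:

        import subprocess

        def doStuff(line):
            return line.strip()        # stand-in for the question's (fast) doStuff()

        proc = subprocess.Popen(
            ['./generate_out'],
            stdout=subprocess.PIPE,
            bufsize=1,                 # line-buffered on our side (text mode only)
            universal_newlines=True,   # text mode; on Python 3.7+ you can use text=True
        )

        for out in proc.stdout:        # yields each line as soon as the child flushes it
            print(doStuff(out), flush=True)

        proc.wait()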

    Read the article

  • How to store and compare time-zone sensitive times

    - by Chad Moran
    I have a data structure where an entity has times stored as an int (minutes into the day) for fast comparison. The entity also has a foreign key reference back to a TimeZone table, which contains the .NET CLR ID name and its Standard Time/Daylight Time acronyms. Since this information is stored as time-zone insensitive, I was wondering how, in LINQ to SQL, I could convert it into a UTC DateTime for comparison against other times that will be in UTC. Just to be clear, this conversion has to be done server-side so that I can do the filtering on SQL Server and not on the client. I am using .NET 3.5 SP1 and SQL Server 2008.

    Read the article

  • What is the difference between Zend Framework and WordPress as a framework?

    - by justjoe
    I only know WordPress and have started looking for an alternative framework, Zend. I have heard hearsay that Zend is better than other frameworks: if you're "a serious coder", or try to act like one, you need to use it to build your web app. Some said Zend is better, but that's subjective. It's said to be fast and secure, but nobody tells me the reason or at least compares it with WordPress. Ultimate question: does Zend have themes or plugins just like WordPress? Any hint will be helpful.

    Read the article

  • How do you make Python wait so that you can read the output?

    - by anonnoir
    I've always been a heavy user of Notepad2, as it is fast, feature-rich, and supports syntax highlighting. Recently I've been using it for Python. My problem: when I finish editing a Python source file and try to launch it, the window disappears before I can see the output. Is there any way for me to make the results wait so that I can read them, short of using an input() or time-delay function? Otherwise I'd have to use IDLE, where the output stays on screen for you to read. (My apologies if this question is a silly one, but I'm very new to Python and programming in general.)

    Read the article

  • Multi-threaded downloader in C# question

    - by blez
    Currently I have a multi-threaded downloader class that uses HttpWebRequest/Response. Everything works fine and it's super fast, BUT... the problem is that the data needs to be streamed to another app while it's downloading. That means it must be streamed in the right order: the first chunk first, and then the next in the queue. Currently my downloader class is synchronous and Download() returns byte[]. In my async multi-threaded class I make, for example, a list with 4 empty elements (slots) and I pass each slot's index to a thread that calls the Download() function. That simulates synchronization, but it's not what I need. How should I do the queue thing to make sure the data is streamed as soon as the first chunk starts downloading?
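    One way to think about it: keep downloading chunks in parallel, but have the consumer wait only for the next chunk it needs, in index order, so streaming starts as soon as chunk 0 completes. A sketch of that pattern in Python (the question is C#, where a thread pool plus per-chunk tasks consumed in submission order plays the same role):

        from concurrent.futures import ThreadPoolExecutor
        import urllib.request

        def fetch(url, start, end):
            # download one byte range of the file (assumes the server supports Range requests)
            req = urllib.request.Request(url, headers={"Range": f"bytes={start}-{end}"})
            with urllib.request.urlopen(req) as resp:
                return resp.read()

        def download_in_order(url, ranges, workers=4):
            with ThreadPoolExecutor(max_workers=workers) as pool:
                # submit all chunks up front; iterate the futures in chunk order, so we
                # only ever block on the next chunk the consumer needs, not on all of them
                futures = [pool.submit(fetch, url, s, e) for s, e in ranges]
                for fut in futures:
                    yield fut.result()

        # usage sketch: the consumer sees chunk 0, then 1, 2, ... as soon as each is ready
        # for chunk in download_in_order("http://example.com/file", [(0, 999999), (1000000, 1999999)]):
        #     stream_to_other_app(chunk)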

    Read the article

< Previous Page | 172 173 174 175 176 177 178 179 180 181 182 183  | Next Page >