Search Results

Search found 45804 results on 1833 pages for 'large files'.


  • Why do you program? Why do you do what you do? [closed]

    - by Pirate for Profit
    To me, writing a new program is like a puzzle. Before you write any code for a large system, you have to carefully craft each piece in your mind and imagine how all the pieces will fit together. If you don't, your solution may end up being undefined. What I mean is, I often don't know what I'm doing, so I'll come to this site and beg for a code snippet, and then somehow try to hack it into my projects. I started writing GW-Basic when I was around 8 years old. It progressed from there; I went to a California university and did some Python and C++, but really didn't learn anything (college = highsk00l++). I've mostly been self-taught, and it took a while to break bad habits; I'd say only in recent years would I consider myself to have an understanding of design patterns and all that stuff (no but honestly procedural dudes, I would not want to design and maintain a large system procedurally, yous crazy). And despite my username, money has NOT been a big motivator. I've gone from job to job, and I can usually get the work done perfectly and very quickly; any delays on my part are understandable (well, about as understandable as it gets in the industry). But I ain't gonna work for peanuts because I got mouths to feed. Why do you program? Why do you do what you do?

    Read the article

  • What are provenly scalable data persistence solutions for consumer profiles?

    - by Hubbard
    Consumer profiles with analytical scores [ConsumerID, 1..n demographic variables, 1..n analytical scores, e.g. "likely to churn", "likely to buy an item worth $100", etc.] must be fast to query if they are to be used for customizing web sites, consumer communications, etc. If you have a large number of consumers and large profiles with a huge set of variables (as profiles describing human behaviour are likely to be), you are in trouble. If you really have a physical relational database at which you aim a query, and a physical disk then starts to rotate someplace to give you an individual profile or a set of profiles, the profile consumer (a web site customizing a page, a recommendation engine making a recommendation) has died of boredom before getting any observable results. There is the possibility of keeping the profiles in memory, which would of course increase performance hugely. What are the most proven solutions for fast-response, scalable consumer profile storage? Is there a shootout of these someplace?
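
    One direction for the in-memory option mentioned above is a key-value store such as Redis, keeping each profile as a hash keyed by ConsumerID. The sketch below uses the redis-py client; the key layout, field names and score values are illustrative assumptions, not part of the question.

        # rough sketch only: one profile per Redis hash, keyed by ConsumerID;
        # field names and values below are hypothetical
        import redis

        r = redis.Redis(host='localhost', port=6379, db=0)

        # write a profile once, e.g. from a batch scoring job
        r.hset('profile:12345', mapping={
            'age_band': '25-34',
            'likely_to_churn': 0.82,
            'likely_to_buy_100': 0.35,
        })

        # a page-customization or recommendation call is then a single
        # in-memory lookup rather than a disk-bound relational query
        profile = r.hgetall('profile:12345')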

    Read the article

  • How to Store and Retrieve Images Using SQL Server (Server Management Studio)

    - by Joe Majewski
    I am having difficulties when trying to insert files into a SQL Server database. I'll try to break this down as best I can: What data type should I be using to store image files (jpeg/png/gif/etc.)? Right now my table is using the image data type, but I am curious whether varbinary would be a better option. How would I go about inserting the image into the database? Does Microsoft SQL Server Management Studio have any built-in functions that allow insertions of files into tables? If so, how is that done? Also, how could this be done through the use of an HTML form, with PHP handling the input data and placing it into the table? How would I fetch the image from the table and display it on the page? I understand how to SELECT the cell's contents, but how would I go about translating that into a picture? Would I have to send a header("Content-Type: image/jpeg")? I have no problem doing any of these things with MySQL, but the SQL Server environment is still new to me, and I am working on a project for my job that requires the use of stored procedures to grab various data. Any and all help is appreciated.
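
    As a rough illustration of the storage side only (not the SSMS or PHP specifics asked about), the sketch below writes a file's bytes into a varbinary(max)-style column and reads them back, using Python and pyodbc; the table name, column names and connection string are hypothetical.

        # illustrative sketch only -- table, columns and connection details
        # are made up, and bytes are passed straight to a varbinary column
        import pyodbc

        conn = pyodbc.connect(
            'DRIVER={ODBC Driver 17 for SQL Server};'
            'SERVER=localhost;DATABASE=Demo;UID=user;PWD=secret')
        cur = conn.cursor()

        with open('photo.jpg', 'rb') as f:
            data = f.read()
        cur.execute('INSERT INTO Images (FileName, Content) VALUES (?, ?)',
                    'photo.jpg', data)
        conn.commit()

        # read it back; a web page would emit these bytes after sending an
        # image/jpeg content-type header
        row = cur.execute('SELECT Content FROM Images WHERE FileName = ?',
                          'photo.jpg').fetchone()
        with open('copy.jpg', 'wb') as f:
            f.write(row[0])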

    Read the article

  • Seeking good practice advice: multisite in Drupal

    - by deanloh
    I'm using multisite to host my client sites. During the development stage, I use a subdomain to host the staging site, e.g. client1.mydomain.com, and here's how it looks under the SITES folder: /sites/client1.mydomain.com. When the site is completed and ready to go live, I create another folder for the actual domain, e.g. client1.com, hence: /sites/client1.com. Next, I create symlinks under client1.com for FILES and SETTINGS.PHP that point to the subdomain, i.e. /sites/client1.com/settings.php --> /sites/client1.mydomain.com/settings.php and /sites/client1.com/files --> /sites/client1.mydomain.com/files. Finally, to prevent Google from indexing both the subdomain and the actual domain, I created a rule in .htaccess to rewrite client1.mydomain.com to client1.com, so should anyone try to access the subdomain, he will be redirected to the actual domain. The above arrangement works perfectly fine, but I somehow feel there is a better way to achieve it in a simpler manner. Please feel free to share your views; all advice is much appreciated.

    Read the article

  • Google Doclist API : bug in revision history?

    - by Jerbome
    I'm trying to manage revisions of files (not documents) stored in Google Docs programmatically, using the gdata document list Java library v3. I can create files and revisions using this tool: I can see them in the web UI. The thing is, the content of my revisions seems wrong. Here is my test protocol: I create a plain text file with "Hello World" in it. I upload it to gdocs without converting it. I create a revision of this file, its content changing to "Content of the second version". I create another revision, its content now being "Content of the third version". At each step, I check the content of each revision, using my app AND using the web UI. Here is what I get: First step: no problem, I see one version containing the "Hello World" text. Second step: no problem either, I see 2 versions, containing "Hello World" for the first one and "Content of the second version" for the second. Third step: here the problem comes. I see my 3 versions, but only the third and last seems to be correct. When I download the second version, the content is "Content of the second versio" (not a typo, it is missing the 'n'). And I cannot even download the initial version; it seems to time out. Important thing: I did not have this problem three weeks ago, my revision management worked well. I have no idea what is happening there, except that it seems to be server-related, as the problem is seen both with my app and with the Google native web app. Last thing: I tried using the Google Drive API, as gdocs has been merged with Drive. When I request revisions of my file, the API returns an error saying that revisions are not supported for files, even though I can see them in the UI. I tried on converted documents, and it worked. I'm looking for a workaround for this problem. Has anyone ever encountered such a problem? Thanks in advance, Jérôme

    Read the article

  • Can Visual Studio (should it be able to) compute a diff between any two changesets associated with a work item?

    - by Hamish Grubijan
    Here is my use case: I start on a project XYZ, for which I create a work item, and I make frequent check-ins, easily 10-20 in total. ALL of the code changes will be code-read and code-reviewed. The changesets are not consecutive - other people check in in-between my changes, although they are very unlikely to touch the exact same files. So ... at the end of the project I am interested in a "total diff" - as if there was a single check-in by me to complete the entire project. In theory this is computable. From the list of changesets associated with the work item, you get the list of all files that were affected. Then, the algorithm can aggregate individual diffs over each file and combine them into one. It is possible that a pure total diff is uncomputable due to the fact that someone else renamed files, or changed stuff around very closely, or in the same functions as me. In that case ... I suppose a total diff can include those changes by non-me as well, and warn me about the fact. I would find this very useful, but I do not know how to do it in practice. Can Visual Studio 2008/2010 (and/or TFS server) do it? Are there other source control systems capable of doing this? Thanks.
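
    The aggregation described above - one combined diff per touched file, between its content before the first check-in and after the last one - can be sketched independently of TFS. The snippet below shows only that final step with Python's difflib, assuming the "before" and "after" snapshots of each file have already been exported from source control (that part is not shown, and the file names are hypothetical).

        # sketch of the aggregation step only; exporting the two snapshots
        # from TFS/Visual Studio is assumed to have happened already
        import difflib, os

        def total_diff(before_dir, after_dir, filenames):
            for name in filenames:
                with open(os.path.join(before_dir, name)) as f:
                    old = f.readlines()
                with open(os.path.join(after_dir, name)) as f:
                    new = f.readlines()
                for line in difflib.unified_diff(old, new,
                                                 fromfile='before/' + name,
                                                 tofile='after/' + name):
                    print(line, end='')

        total_diff('before', 'after', ['Foo.cs', 'Bar.cs'])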

    Read the article

  • Automated generation of the "What's new in this version" reading

    - by lunohodov
    We all know it - this is the reading that lists the changes brought by each new version of our favorite software. Whether it comes bundled as a file (Changes.txt, CHANGES, WhatsNew.txt, etc.) or is presented within an installer, this is usually the first thing we read before installing/updating. On a current project we have a Changelog.txt file that gets updated by hand every time a notable change occurs. However, this often leads to "I forgot to update the changelog". So I am looking for a way to automate this. I am thinking of a script that extracts the changes from our commit messages (using a convention) and generates the file. For example, a commit message like

        Updated json-glib to 0.7.6
        [changes]
        - Fix crash on Windows
        - Fix issues with facebook contacts with very large UIDs.

    would produce the following Changes.txt:

        Version 1.9.18 (03/10/2010)
        Fix crash on Windows
        Fix issues with facebook contacts with very large UIDs.

    Does anyone know a better solution/tool for this, or am I going to write my own? Thanks!
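
    A rough sketch of the convention-based extraction described above is given below in Python; it assumes the project is in git, that the "[changes]" marker sits on its own line in the commit message (as in the example), and that the version number comes from elsewhere - all assumptions, not details from the question.

        # rough sketch: collect the bullet lines that follow a "[changes]"
        # marker in each commit message and write them to Changes.txt
        import datetime
        import subprocess

        log = subprocess.run(
            ['git', 'log', '--pretty=format:%B%x00'],
            capture_output=True, text=True, check=True).stdout

        entries = []
        for message in log.split('\x00'):
            lines = [line.strip() for line in message.splitlines()]
            if '[changes]' in lines:
                for line in lines[lines.index('[changes]') + 1:]:
                    if line.startswith('- '):
                        entries.append(line[2:])

        with open('Changes.txt', 'w') as out:
            today = datetime.date.today().strftime('%d/%m/%Y')
            out.write('Version 1.9.18 (%s)\n' % today)  # version is assumed
            for entry in entries:
                out.write('%s\n' % entry)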

    Read the article

  • How to catch this low level MySQL (?) error in PHP/Magento

    - by andnil
    When I'm executing the following statement in Magento with a really large $sku, the execution terminates without any errors thrown whatsoever. There are no errors in either Magento's, Apache's or PHP's error logs.

        Mage::getModel('catalog/product')->loadByAttribute('sku', $sku);

    Question: How do I catch the error? I've tried to set custom error handlers, and for testing purposes I've also managed to trigger error situations where each of the error handler functions is invoked. But when running the previously mentioned Magento code with a large $sku, none of the error handling functions are executed.

        error_reporting( -1 );
        set_error_handler( array( 'Error', 'captureNormal' ) );
        set_exception_handler( array( 'Error', 'captureException' ) );
        register_shutdown_function( array( 'Error', 'captureShutdown' ) );

    For completeness, this is the $sku I'm passing to loadByAttribute(). (The sku is invalid, but that is not the issue.)

        1- 9685 0102046|1- 9685 1212100|1- 9685 1212092|1- 9685 1212096|1- 9685 1102100|1- 9685 1102108|1- 9685 1102112|1- 9685 1102092|1- 9685 0102048|1- 9685 0102054|1- 9685 0102056|1- 9685 0102058|1- 9685 1212104|1- 9685 1212108|1- 9685 0212058|1- 9685 0104050|1- 9685 0212050|1- 9685 0212056|1- 9685 0212044|1- 9685 0212048|1- 9685 0212052|1- 9685 0212054|1- 9685 1102104|1- 9685 1102124

    Any insight into this matter is much appreciated! Update: Upon further investigation, this is the exact point in the code where execution terminates, when the foreach is executed. I guess Magento goes into MySQL world and starts loading up data from the database. \Mage\Catalog\Model\Abstract.php:

        public function loadByAttribute($attribute, $value, $additionalAttributes = '*')
        {
            $collection = $this->getResourceCollection()
                ->addAttributeToSelect($additionalAttributes)
                ->addAttributeToFilter($attribute, $value)
                ->setPage(1,1);

            foreach ($collection as $object) { // <--------------- HERE
                return $object;
            }

            return false;
        }

    Note, I'm ONLY interested in finding out how to properly CATCH these kinds of errors, not "fix" the logic. This is so that I can present a proper error message to the user. The example above with the malformed sku is contrived, and I have no desire to make my Magento app work with those erroneous skus.

    Read the article

  • Top n items in a List ( including duplicates )

    - by Krishnan
    Trying to find an efficient way to obtain the top N items in a very large list, possibly containing duplicates. I first tried sorting & slicing, which works, but this seems unnecessary. You shouldn't need to sort a very large list if you just want the top 20 members. So I wrote a recursive routine which builds the top-n list. This also works, but is very much slower than the non-recursive one! Question: Why is my second routine (elite2) so much slower than elite, and how do I make it faster? My code is attached below. Thanks.

        import scala.collection.SeqView
        import scala.math.min

        object X {
          def elite(s: SeqView[Int, List[Int]], k: Int): List[Int] = {
            s.sorted.reverse.force.slice(0, min(k, s.size))
          }

          def elite2(s: SeqView[Int, List[Int]], k: Int, s2: List[Int] = Nil): List[Int] = {
            if (k == 0 || s.size == 0)
              s2.reverse
            else {
              val m = s.max
              val parts = s.force.partition(_ == m)
              val whole = if (parts._1.size > 1) parts._1.tail ::: parts._2 else parts._2
              elite2(whole.view, k - 1, m :: s2)
            }
          }

          def main(args: Array[String]) = {
            val N = 1000000 / 3
            val x = List(N to 1 by -1).flatten.map(x => List(x, x, x)).flatten.view
            println(elite2(x, 20))
            println(elite(x, 20))
          }
        }
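
    The question's code is Scala, but the underlying idea - keep only k candidates instead of sorting or rescanning the whole list - can be illustrated with a bounded heap; the Python sketch below is just that illustration, not a fix for elite2.

        # illustration of top-N selection without a full sort: heapq.nlargest
        # scans the data once and keeps only k candidates, roughly O(n log k)
        import heapq
        import random

        data = [random.randint(0, 1000000) for _ in range(1000000)]
        print(heapq.nlargest(20, data))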

    Read the article

  • Basic Application Organization + Publishing (.NET 4.0)

    - by keynesiancross
    Hi all, I'm trying to figure out the best way to keep my program organized. Currently I have many class files in one project file, but some of these classes do things that are very different, and some I would like to expose to other applications in the future. One thought I had for organizing my application was to create multiple project files, with one "Main" project which would interact with all the other projects and their relevant classes as needed. Does this make sense? I was wondering if anyone had any suggestions in regards to using multiple project files in one solution (and how do you create something like this?), and whether it makes sense to have multiple namespaces in one solution... Cheers ----Edit Below---- Sorry, my fault. Currently my program is all in one console project. Within this project I have several classes, some of which basically launch a BackgroundWorker and run an endless loop pulling data. The BackgroundWorker then passes this data back to the main business logic as needed. I'm hoping to separate this data-pull material (including the BackgroundWorker material) into one project file, and the rest of the business logic into another project file. The projects will have to pass objects between each other though (the data to the main business logic, and the business logic will pass startup parameters to the dataPull project)... Hopefully this adds a bit more detail.

    Read the article

  • Use `require()` with `node --eval`

    - by rentzsch
    When utilizing node.js's newish support for --eval, I get an error (ReferenceError: require is not defined) when I attempt to use require(). Here's an example of the failure:

        $ node --eval 'require("http");'
        undefined:1
        ^
        ReferenceError: require is not defined
            at eval at <anonymous> (node.js:762:36)
            at eval (native)
            at node.js:762:36
        $

    Here's a working example of using require() typed into the REPL:

        $ node
        > require("http");
        { STATUS_CODES: { '100': 'Continue', '101': 'Switching Protocols', '102': 'Processing', '200': 'OK', '201': 'Created', '202': 'Accepted', '203': 'Non-Authoritative Information', '204': 'No Content', '205': 'Reset Content', '206': 'Partial Content', '207': 'Multi-Status', '300': 'Multiple Choices', '301': 'Moved Permanently', '302': 'Moved Temporarily', '303': 'See Other', '304': 'Not Modified', '305': 'Use Proxy', '307': 'Temporary Redirect', '400': 'Bad Request', '401': 'Unauthorized', '402': 'Payment Required', '403': 'Forbidden', '404': 'Not Found', '405': 'Method Not Allowed', '406': 'Not Acceptable', '407': 'Proxy Authentication Required', '408': 'Request Time-out', '409': 'Conflict', '410': 'Gone', '411': 'Length Required', '412': 'Precondition Failed', '413': 'Request Entity Too Large', '414': 'Request-URI Too Large', '415': 'Unsupported Media Type', '416': 'Requested Range Not Satisfiable', '417': 'Expectation Failed', '418': 'I\'m a teapot', '422': 'Unprocessable Entity', '423': 'Locked', '424': 'Failed Dependency', '425': 'Unordered Collection', '426': 'Upgrade Required', '500': 'Internal Server Error', '501': 'Not Implemented', '502': 'Bad Gateway', '503': 'Service Unavailable', '504': 'Gateway Time-out', '505': 'HTTP Version not supported', '506': 'Variant Also Negotiates', '507': 'Insufficient Storage', '509': 'Bandwidth Limit Exceeded', '510': 'Not Extended' }
        , IncomingMessage: { [Function: IncomingMessage] super_: [Function: EventEmitter] }
        , OutgoingMessage: { [Function: OutgoingMessage] super_: [Function: EventEmitter] }
        , ServerResponse: { [Function: ServerResponse] super_: [Circular] }
        , ClientRequest: { [Function: ClientRequest] super_: [Circular] }
        , Server: { [Function: Server] super_: { [Function: Server] super_: [Function: EventEmitter] } }
        , createServer: [Function]
        , Client: { [Function: Client] super_: { [Function: Stream] super_: [Function: EventEmitter] } }
        , createClient: [Function]
        , cat: [Function] }
        >

    Is there a way to use require() with node's --eval? I'm on node 0.2.6 on Mac OS X 10.6.5.

    Read the article

  • Boost shared_ptr use_count function

    - by photo_tom
    My application problem is the following: I have a large structure foo. Because these are large, and for memory management reasons, we do not wish to delete them when processing on the data is complete. We are storing them in std::vector<boost::shared_ptr<foo>>. My question is related to knowing when all processing is complete. The first decision is that we do not want any of the other application code to mark a complete flag in the structure, because there are multiple execution paths in the program and we cannot predict which one is the last. So in our implementation, once processing is complete, we delete all copies of the boost::shared_ptr<foo> except for the one in the vector. This will drop the reference counter in the shared_ptr to 1. Is it practical to use shared_ptr.use_count() to see if it is equal to 1, to know when all other parts of my app are done with the data? One additional reason I'm asking the question is that the Boost documentation on shared_ptr recommends not using use_count() for production code.

    Read the article

  • Python: Removing particular character (u"\u2610") from string

    - by duhaime
    I have been wrestling with decoding and encoding in Python, and I can't quite figure out how to resolve my problem. I am looping over xml text files (sample) that are apparently coded in utf-8, using Beautiful Soup to parse each file, then looking to see if any sentence in the file contains one or more words from two different lists of words. Because the xml files are from the eighteenth century, I need to retain the em dashes that are in the xml. The code below does this just fine, but it also retains a pesky box character that I wish to remove. I believe the box character is this character. (You can find an example of the character I wish to remove in line 3682 of the sample file above. On this webpage, the character looks like an 'or' pipe, but when I read the xml file in Komodo, it looks like a box. When I try to copy and paste the box into a search engine, it looks like an 'or' pipe. When I print to the console, though, the character looks like an empty box.) To sum up, the code below runs without errors, but it prints the empty box character that I would like to remove.

        for work in glob.glob(pathtofiles):
            openfile = open(work)
            readfile = openfile.read()
            stringfile = str(readfile)
            decodefile = stringfile.decode('utf-8', 'strict') #is this the dodgy line?
            soup = BeautifulSoup(decodefile)
            textwithtags = soup.findAll('text')
            textwithtagsasstring = str(textwithtags)
            #this method strips everything between anglebrackets as it should
            textwithouttags = stripTags(textwithtagsasstring)
            #clean text
            nonewlines = textwithouttags.replace("\n", " ")
            noextrawhitespace = re.sub(' +',' ', nonewlines)
            print noextrawhitespace #the boxes appear

    I tried to remove the boxes by using

        noboxes = noextrawhitespace.replace(u"\u2610", "")

    But Python threw an error:

        UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 280: ordinal not in range(128)

    Does anyone know how I can remove the boxes from the xml files? I would be grateful for any help others can offer.
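
    The UnicodeDecodeError most likely arises because str(textwithtags) turns the soup output back into a UTF-8 byte string, so the later .replace() with a unicode argument forces an implicit ASCII decode. A small sketch of the fix, reusing the post's variable names (Python 2, as in the question):

        # do the replace on a unicode object rather than a byte string,
        # so Python never attempts an implicit ASCII decode
        nounicode = noextrawhitespace.decode('utf-8').replace(u"\u2610", u"")
        print nounicode.encode('utf-8')

        # alternatively, avoid re-encoding in the first place by keeping the
        # soup output as unicode instead of calling str() on it
        textwithtagsasstring = u"".join(unicode(tag) for tag in textwithtags)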

    Read the article

  • HTML templating in C++ and translations

    - by Karim
    I'm using HTML_Template for templating in my C++-based web app (don't ask). I chose that because it was very simple, and it turns out to be a good solution. The only problem right now is that I would like to be able to include translatable strings in the HTML templates (HTML_Template does not really support that). Ultimately, what I would like is to have a single file that contains all the strings to be translated. It can then be given to a translator and plugged back into the app, and used depending on which language the user chose in settings. I've been going back and forth on some options and was wondering what others felt was the best choice (or if there's a better choice that isn't listed):
    1. Extend HTML_Template to include a tag for holding the literal string to translate. So, for example, in the HTML I would put something like <TMPL_TRANS "this is the text to translate"/>
    2. Use a completely separate scheme for translation and preprocess the HTML files to generate the final template files (without the special translation lingo). For example, in the pre-processed file, translatable text would look like this: {{this is the text to translate}} and the final would look like: this is the text to translate
    3. Don't do anything and let the translators find the strings to translate in the html and js files themselves.
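
    A sketch of option 2 above - a preprocessing pass that swaps the {{...}} markers for translated strings before HTML_Template ever sees the file - is shown below in Python; the translation table and file names are assumptions for illustration.

        # sketch of the preprocessing idea in option 2; the lookup table and
        # file names are made up for illustration
        import re

        translations = {
            'this is the text to translate': 'ceci est le texte a traduire',
        }

        def translate_template(src_path, dst_path, table):
            with open(src_path, encoding='utf-8') as f:
                text = f.read()
            # replace each {{...}} marker, falling back to the original string
            text = re.sub(r'\{\{(.*?)\}\}',
                          lambda m: table.get(m.group(1), m.group(1)),
                          text)
            with open(dst_path, 'w', encoding='utf-8') as f:
                f.write(text)

        translate_template('page.tmpl.in', 'page.tmpl', translations)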

    Read the article

  • Optimize Duplicate Detection

    - by Dave Jarvis
    Background: This is an optimization problem. Oracle Forms XML files have elements such as:

        <Trigger TriggerName="name" TriggerText="SELECT * FROM DUAL" ... />

    where the TriggerText is arbitrary SQL code. Each SQL statement has been extracted into uniquely named files such as:

        sql/module=DIAL_ACCESS+trigger=KEY-LISTVAL+filename=d_access.fmb.sql
        sql/module=REP_PAT_SEEN+trigger=KEY-LISTVAL+filename=rep_pat_seen.fmb.sql

    I wrote a script to generate a list of exact duplicates using a brute force approach. Problem: There are 37,497 files to compare against each other; it takes 8 minutes to compare one file against all the others. Logically, if A = B and A = C, then there is no need to check if B = C. So the problem is: how do you eliminate the redundant comparisons? The script will complete in approximately 208 days. Script source code: the comparison script is as follows:

        #!/bin/bash
        echo Loading directory ...
        for i in $(find sql/ -type f -name \*.sql); do
            echo Comparing $i ...
            for j in $(find sql/ -type f -name \*.sql); do
                if [ "$i" = "$j" ]; then continue; fi
                # Case insensitive compare, ignore spaces
                diff -IEbwBaq $i $j > /dev/null
                # 0 = no difference (i.e., duplicate code)
                if [ $? = 0 ]; then
                    echo $i :: $j >> clones.txt
                fi
            done
        done

    Question: How would you optimize the script so that checking for cloned code is a few orders of magnitude faster? System constraints: Using a quad-core CPU with an SSD; trying to avoid using cloud services if possible. The system is a Windows-based machine with Cygwin installed -- algorithms or solutions in other languages are welcome. Thank you!
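
    One way to drop the pairwise comparisons entirely is to read each file once, normalize it, and group files by a digest of the normalized content - files sharing a digest are duplicates. The Python sketch below illustrates that idea; note that the normalization (lowercasing, collapsing whitespace) only approximates the diff flags used in the original script.

        # rough sketch: O(n) hashing instead of O(n^2) diffs; the whitespace
        # and case normalization only approximates the original diff options
        import glob
        import hashlib
        import re
        from collections import defaultdict

        groups = defaultdict(list)
        for path in glob.glob('sql/**/*.sql', recursive=True):
            with open(path, 'rb') as f:
                text = f.read().decode('utf-8', 'replace')
            normalized = re.sub(r'\s+', ' ', text).strip().lower()
            digest = hashlib.sha1(normalized.encode('utf-8')).hexdigest()
            groups[digest].append(path)

        with open('clones.txt', 'w') as out:
            for paths in groups.values():
                if len(paths) > 1:
                    out.write(' :: '.join(paths) + '\n')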

    Read the article

  • .htaccess redirect noticeably increasing load time

    - by GTCrais
    I set up a SEF link via an .htaccess RewriteRule to one of the articles on my website, just to see how that works, and it does work, but it considerably increases the load time of that particular page. On average the articles (including the one I'm talking about, when not using the rewrite rule) load in about 1.3 seconds. With the rewrite rule, the load time is 3.3 seconds on average until the page displays, and the loader thingy in the Firefox tab keeps spinning for another 2 seconds. I have WAMP set up on my computer, and the website is being accessed through no-ip.com. Here is the .htaccess config (very simple, as you can see):

        Options +FollowSymLinks
        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^o-sw-liji /NewSWL/o-nama.php?body=o-sw-liji

    In httpd.conf I have this (somewhere I read this might affect the load time for some reason - searching for files through all the directories or something, I don't remember exactly what I read):

        <Directory />
            Options None
            AllowOverride None
            Order deny,allow
            Deny from all
        </Directory>
        DocumentRoot "Z:/Program Files (x86)/wamp/www/"
        <Directory "Z:/Program Files (x86)/wamp/www/">
            Options None
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

    Any ideas why the .htaccess redirect increases the load time by so much? UPDATE: so I put a session-based counter in the "o-nama.php" script. Apparently when I access the page via the 'normal' link, i.e. 'o-nama.php?body=o-sw-liji', the counter increases by one, as it should - it's one page load. But when the page is accessed through the redirected link, i.e. 'o-nama/o-sharewood-liji', the counter increases by 6-8, which naturally makes the load time a lot longer, since it's loading the same page 6-8 times. I have no idea why this is happening. Any help is appreciated.

    Read the article

  • creating tables in ruby-on-rails 3 through migrations?

    - by fayer
    I'm trying to understand the process of creating tables in Ruby on Rails 3. I have read about migrations, so I am supposed to create tables by editing the files in:

        Database Migrations/migrate/20100611214419_create_posts
        Database Migrations/migrate/20100611214419_create_categories

    but they were generated by:

        rails generate model Post name:string description:text
        rails generate model Category name:string description:text

    Does this mean I have to use the "rails generate model" command every time I want to create a table? What if I create a migration file but want to add columns - do I create another migration file for those, or do I edit the existing migration file directly? The guide told me to add a new one, but here is the part I don't understand: why would I add a new one? Because then the new state will be dependent on 2 migration files. In Symfony I just edit a schema.yml file directly; there are no migration files with versioning and so on. I'm new to RoR and want to get the picture of creating tables. Thanks.

    Read the article

  • Quick introduction to Linux needed [closed]

    - by 0xDEAD BEEF
    Hi guys! I have to get into Linux ASAP, and I really mean ASAP. I have installed Cygwin, but as always - things don't go as easily as one would like. The first problem I encountered was: I chose the KDE package, but there is no sign of KDE files anywhere in the cygwin folder. How do I run KDE? Currently startx fires, but it all looks ugly! My desire is to download and run Qt Creator. It seems that there is no cygwin package, but downloading the source and compiling is good to go. Only I have forgotten every Linux command I ever knew! :D Please - what are the default commands you use on Linux? What does exec do? What does ./ stand for? What is the directory structure, and why is there such a mess in the bin folder? Thank god I have Windows over Cygwin, so downloading files is not a problem, but again - how do I unpack them Linux-style, and how do I build? Simply issue the "make" command from the folder where I extracted the files? Please help!

    Read the article

  • In sync query calls, one query causing another query to run slower. Why?

    - by Irchi
    Sorry for the long question, but I think this is an interesting situation and I couldn't find any explanations for it: I was involved in optimization of an application that performed a large number of sequential SELECT and INSERT statements on a single dedicated SQL Server database. The process needs to INSERT a large number of records into a table, but for each of them there are some value mappings, which are performed using SELECT statements on another table in the same database. For a specific execution, it took 90 minutes to run. I used a profiler (JProfiler - the application is Java-based) to determine how much time each part of the application takes. It showed that 60% of the time was spent on INSERT method calls, and almost 20% on SELECT calls (the rest was distributed across other parts). After some trials, I came to this situation: I commented out the INSERT query that took 60% of the time. I was expecting the total run time to be around 35 minutes, as I had removed 60% of the 90 minutes. But the whole process took the same 90 minutes (doing only SELECTs and nothing else), and each SELECT took longer this time! Everything was running sync, there were no async calls. And there was only one single thread of execution. The SELECT and INSERT queries are very simple and don't have anything special, and they are on different tables, but in the same DB. I tested with the DB both on the application machine and on a remote network machine. I can't think of any explanation for this, as the profiler (the application profiler, not the SQL Profiler) reported the changes in the method call times, and by removing INSERT statements the SELECT statements took longer to run. Can anyone give me some kind of explanation of what could have happened? (There can't be cache/query optimization stuff, because the queries were run in sync, and in a single thread, and it was far from affecting the cache this much.) I should note that the bottleneck of the speed was in SQL Server, using most of the CPU time.

    Read the article

  • Skip subdirectory in python import

    - by jstaab
    Ok, so I'm trying to change this:

        app/
        - lib.py
        - models.py
        - blah.py

    into this:

        app/
        - __init__.py
        - lib.py
        - models/
            - __init__.py
            - user.py
            - account.py
            - banana.py
        - blah.py

    and still be able to import my models using from app.models import User rather than having to change it to from app.models.user import User all over the place. Basically, I want everything to treat the package as a single module, but be able to navigate the code in separate files for development ease. The reason I can't do something like adding

        for file in __all__: from file import *

    into __init__.py is that I have circular references between the model files. A fix I don't want is to import those models from within the functions that use them. But that's super ugly. Let me give you an example:

        user.py
        ...
        from app.models import Banana
        ...

        banana.py
        ...
        from app.models import User
        ...

    I wrote a quick pre-processing script that grabs all the files, re-writes them to put imports at the top, and puts it all into models.py, but that's hardly an improvement, since now my stack traces don't show the line number I actually need to change. Any ideas? I always thought __init__ was probably magical, but now that I dig into it, I can't find anything that lets me provide myself this really simple convenience.
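
    For the re-export part, a sketch of what app/models/__init__.py could look like is below; it keeps from app.models import User working after the split, though on its own it does not untangle the user/banana cycle (that still needs the cross-imports deferred or restructured).

        # app/models/__init__.py -- sketch: re-export the classes so callers
        # can keep writing "from app.models import User"
        from app.models.user import User
        from app.models.account import Account
        from app.models.banana import Banana

        # note: this does not by itself resolve the user <-> banana circular
        # import; that usually still requires deferring or restructuring the
        # cross-references between the submodules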

    Read the article

  • Can git avoid storing the history of specific folders when working with git-svn?

    - by Timofey Basanov
    In short: Is there a way to disable storing the full history of specific folders in a git-svn repo? We have a pretty large SVN repo with a big checkout. I would like to migrate it to Git for my local development, because Git speeds up the update and status commands by orders of magnitude. When I simply do git svn clone, it creates a very big repo - big enough to be bigger than my whole HDD. The problem lies in binary directories for which the history is too large. The latest binaries are required for a proper local build, but their history is not required at all for my development process. I will never change them myself. I would like to store only the latest versions for specific folders, or maybe a history of no more than a week. I could only find a filter for git svn fetch which excludes specific folders entirely; that is not exactly what I need. It's OK with me to have a cron task which deletes history from specific folders, but I do not know how to make one. Also, cron does not solve the problem of the first git svn clone. P.S. The SVN repository structure cannot be changed by any means.

    Read the article

  • I don't know how to solve this error

    - by wide
    It works locally, but when I deploy it to the server, I get this error:

        Using themed css files requires a header control on the page. (e.g. <head runat="server" />).

        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

        Exception Details: System.InvalidOperationException: Using themed css files requires a header control on the page. (e.g. <head runat="server" />).

        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

        Stack Trace:
        [InvalidOperationException: Using themed css files requires a header control on the page. (e.g. <head runat="server" />).]
           System.Web.UI.PageTheme.SetStyleSheet() +2458406
           System.Web.UI.Page.OnInit(EventArgs e) +8699420
           System.Web.UI.Control.InitRecursive(Control namingContainer) +333
           System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +378

    Read the article

  • Is it possible to reference remote content from chrome.manifest? (XULRunner)

    - by siemaa
    Hi, I have a XULRunner application, and I've been trying to reference remote content from the chrome.manifest file. It's an application for the company I work in; it's run on a number of computers (most of them are used by other employees as well) as a kind of internet monitoring service. The problem I'd like to solve is this: updating the code of such an application usually requires me to manually copy the modified files to every computer that the application is running on (I've had no luck trying to make automatic updates via the XULRunner platform). This process has become very tedious. What I'd like to have is a web server where all of the xul and js files would be accessible, so that every application could reference them from there. This would require me only to update the code on that server, and the applications (when restarted) would automatically get the latest code. What I managed to do: I can reference js scripts from a xul file using http-based URLs and everything works fine (I can use local, binary components etc.), although the xul file has to be local - that is what I'd like to change. But when I write in chrome.manifest a line like

        content my_app http://path/to/app/files/

    and then use this line in default/preferences/pref.js

        pref("toolkit.defaultChromeURI", "chrome://my_app/content/my_app.xul");

    it just opens a console window (to test, I manually run the application with the -console option) and no code gets executed. The file can be downloaded remotely using wget, so I guess this isn't a web server issue. The applications run on Windows machines. Is there some kind of security issue causing such behavior, or am I doing something wrong? Is it even possible to register remote, http-based content as chrome?

    Read the article

  • PHP cache header override

    - by Soyo
    I've been through over 100 answers here, lots to try, NOTHING working?? I have a PHP-based site. I need caching OFF for all .php files EXCEPT A SELECT FEW. So, in .htaccess, I have the following:

        ExpiresActive On
        # Eliminate caching for certain dynamic files
        <FilesMatch "\.(php|cgi|pl)$">
            ExpiresDefault A0
            Header set Cache-Control "no-cache, no-store, must-revalidate, max-age=0, proxy-revalidate, no-transform"
            Header set Pragma "no-cache"
        </FilesMatch>

    Using Firebug, I see the following:

        Cache-Control no-cache, no-store, must-revalidate, max-age=0, proxy-revalidate, no-transform
        Connection Keep-Alive
        Content-Type text/html
        Date Sun, 02 Sep 2012 19:22:27 GMT
        Expires Sun, 02 Sep 2012 19:22:27 GMT
        Keep-Alive timeout=3, max=100
        Pragma no-cache
        Server Apache
        Transfer-Encoding chunked
        X-Powered-By PHP/5.2.17

    Hey, looks great! BUT, I have a couple of .php pages I need some very short caching on. I thought the simple answer was adding this to the very top of each php page on which I want caching enabled:

        <?php header("Cache-Control: max-age=360"); ?>

    Nope. Then I tried various versions of the above. Nope. Then I tried meta http-equiv variations. Nope. Then I tried variations of the .htaccess code along with the above variations, such as limiting it to:

        # Eliminate caching for certain dynamic files
        <FilesMatch "\.(php|cgi|pl)$">
            Header set Cache-Control "no-cache, max-age=0"
        </FilesMatch>

    Nope. It seems nothing I do will allow a single .php page to be cache-enabled with the .htaccess code in place, short of removing the statements from the .htaccess file altogether. Where am I going wrong? What do I have to do to get individual php pages to be cacheable while the rest remain off?? Thank you for any thoughts.

    Read the article

  • dm_exec_query_stats returning stale data?

    - by VoiceOfUnreason
    I've been testing my app on a SQL Server 2005 database, and am trying to establish a preliminary picture of the query performance using sys.dm_exec_query_stats. Problem: there's a particular query that I'm interested in, because total_elapsed_time and last_elapsed_time are both large numbers. When I tickle my app to invoke that query (this runs successfully), then refresh my view of the stats, I find that 1) execution_count has incremented (expected) 2) last_execution_time has updated to now (expected) 3) last_elapsed_time is still a large value (not expected - I anticipated a new value) 4) total_elapsed_time is unchanged (contradiction?) If last_elapsed_time refers to the execution that happened @ last_execution_time, then the total_elapsed_time should have increased? This documentation: http://msdn.microsoft.com/en-us/library/ms189741(SQL.90).aspx tells me that last_execution_time is the last time the plan was executed, and last_elapsed_time comes from the "most recently executed plan", but doesn't tell me why those might be different. The query itself is uncomplicated (SELECT/WHERE/ORDER BY - parameters appearing in the where clause, but no clever operations), the table has maybe 25 rows in it right now. Questions: 1) What's the real relationship between execution_count, last_execution_time, and last_elapsed_time? 2) Where is the documentation of this relationship (manual, third party book, blog, bug ticket, stone tablets...) ?

    Read the article
