Search Results

Search found 77950 results on 3118 pages for 'large file upload'.

Page 149 of 3118

  • Resizing uploaded files in django using PIL

    - by Nikunj
    I am using PIL to resize an uploaded file using this method:

        def resize_uploaded_image(buf):
            imagefile = StringIO.StringIO(buf.read())
            imageImage = Image.open(imagefile)
            (width, height) = imageImage.size
            (width, height) = scale_dimensions(width, height, longest_side=240)
            resizedImage = imageImage.resize((width, height))
            return resizedImage

    I then use this method to get the resizedImage in my main view method:

        image = request.FILES['avatar']
        resizedImage = resize_uploaded_image(image)
        content = django.core.files.File(resizedImage)
        acc = Account.objects.get(account=request.user)
        acc.avatar.save(image.name, content)

    However, this gives me the 'read' error. Trace:

        Exception Type: AttributeError at /myapp/editAvatar
        Exception Value: read

    Any idea how to fix this? I have been at it for hours! Thanks! Nikunj
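
    A likely cause, for what it's worth: django.core.files.File expects a file-like object with a read() method, but resize_uploaded_image returns a PIL Image, which has none; hence the AttributeError on read. A sketch of one fix (the helper name and the PNG format are assumptions): serialize the resized image into an in-memory buffer and wrap that in a ContentFile:

        import StringIO
        from django.core.files.base import ContentFile

        def resized_avatar_content(upload):
            # resize as before, then write the PIL Image back out to bytes,
            # since avatar.save() needs readable content, not an Image object
            resized = resize_uploaded_image(upload)
            buf = StringIO.StringIO()
            resized.save(buf, 'PNG')  # assumed format; match the upload's if needed
            return ContentFile(buf.getvalue())

        # usage: acc.avatar.save(image.name, resized_avatar_content(image))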

    Read the article

  • PHP upload file using PUT instead of POST

    - by Marco Demaio
    I read something about this on the PHP docs, but it's not clear to me: do the most widely used browsers (IE, FF, Chrome, Safari, Opera, ...) support this PUT method to upload files? What HTML should I write to make the browser call the server via a PUT request? I mean, do I need to write a FORM with an INPUT file field and just replace the attribute method="POST" with method="PUT"? On the PHP docs (link above) they say a PUT request is much simpler than a POST request when uploading a file; along with this advantage, what other advantages/disadvantages does PUT have compared to POST? Thanks!
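
    On the HTML side there is nothing to write: forms only submit via GET or POST, so method="PUT" on a FORM will not work in the browsers listed; PUT uploads normally come from JavaScript (XMLHttpRequest) or non-browser clients. On the PHP side, the body arrives via php://input rather than $_FILES. A minimal server-side sketch along the lines of the pattern in the PHP docs (the target filename is a placeholder; a real handler would derive it from the request URI):

        <?php
        /* stream the PUT request body to disk in small chunks */
        $in  = fopen('php://input', 'rb');
        $out = fopen('uploads/target.bin', 'wb');
        while (!feof($in)) {
            fwrite($out, fread($in, 8192));
        }
        fclose($in);
        fclose($out);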

    Read the article

  • Large number of Google Map Markers and IE6?

    - by Sivakanesh
    I'm working on an application that generates a large number of Google Map markers (2,000 - 7,000) via JSON. I'm also using MarkerCluster. It works quickly on Chrome and FF, but IE6 takes a few minutes and just crashes the first time I try to zoom in. I'm not doing any more than adding the markers to a map using jQuery & the GMap API. So I looked at the following URL of the regular Google Map: http://maps.google.co.uk/maps?f=q&source=s_q&hl=en&q=hotel&sll=53.182996,-2.581787&sspn=1.494529,4.927368&ie=UTF8&split=1&rq=1&ev=p&hq=hotel&hnear=&ll=53.123702,-2.730103&spn=1.496594,4.927368&t=h&z=8 It shows a lot of tiny markers (~1000) and works fine in IE6. Do you have any ideas why that works while the markers added via the API struggle? Thanks

    Read the article

  • IE Problem: Jagged Scrolling and Dragging Inside Large Viewport

    - by br4inwash3r
    My site is a single-page website with a very large "canvas" size, and to navigate around the site I'm using the jQuery scrollTo and Dragscrollable plugins. In IE 7 & 8 the scrolling/dragging movement is very jagged. At first I thought it was my script or some other plugin causing this, but after I stripped everything down it's still the same. I've tried a few tips I've found around here, but none is really working for me. I know I should be asking the plugin developers this question, and I did; I just thought maybe you guys have some other solution for this issue. Here's the URL to the demo site: http://satuhati.com/bare/template/ Really appreciate any help you can give :) thx..

    Read the article

  • C#: Efficiently search a large string for occurrences of other strings

    - by Jon
    Hi, I'm using C# to continuously search for multiple string "keywords" within large strings, which are >= 4 KB. This code is constantly looping, and sleeps aren't cutting down CPU usage enough while maintaining a reasonable speed. The bottleneck is the keyword-matching method. I've found a few possibilities, and all of them give similar efficiency:
    1) http://tomasp.net/articles/ahocorasick.aspx - I do not have enough keywords for this to be the most efficient algorithm.
    2) Regex, using an instance-level, compiled regex - provides more functionality than I require, and not quite enough efficiency.
    3) String.IndexOf - I would need to do a "smart" version of this for it to provide enough efficiency; looping through each keyword and calling IndexOf doesn't cut it.
    Does anyone know of any algorithms or methods that I can use to attain my goal?
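
    Not a new algorithm, but a hedged refinement of option 2 that sometimes closes the gap: compile a single alternation of all the keywords, so each 4 KB string is scanned once per check rather than once per keyword as in the naive IndexOf loop:

        using System.Linq;
        using System.Text.RegularExpressions;

        class KeywordScanner
        {
            private readonly Regex _pattern;

            public KeywordScanner(string[] keywords)
            {
                // escape each keyword and OR them together; Compiled trades
                // construction cost for faster repeated matching
                string alternation = string.Join("|",
                    keywords.Select(Regex.Escape).ToArray());
                _pattern = new Regex(alternation, RegexOptions.Compiled);
            }

            public bool ContainsAny(string text)
            {
                return _pattern.IsMatch(text);
            }
        }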

    Read the article

  • Speeding up inner joins between a large table and a small table

    - by Zaid
    This may be a silly question, but it may shed some light on how joins work internally. Let's say I have a large table L and a small table S (100K rows vs. 100 rows). Would there be any difference in terms of speed between the following two options?

        OPTION 1:
        SELECT * FROM L INNER JOIN S ON L.id = S.id;

        OPTION 2:
        SELECT * FROM S INNER JOIN L ON L.id = S.id;

    Notice that the only difference is the order in which the tables are joined. I realize performance may vary between different SQL implementations. If so, how would MySQL compare to Access?
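
    One way to answer this empirically in MySQL, sketched below: EXPLAIN prints the chosen plan, and for a plain inner join the optimizer is free to reorder the tables, so both statements normally produce the identical plan (typically driving from the smaller table). Access has no comparable EXPLAIN statement, so timing both queries is the practical check there.

        -- if these two plans match, the join order written in the SQL
        -- made no difference to the optimizer
        EXPLAIN SELECT * FROM L INNER JOIN S ON L.id = S.id;
        EXPLAIN SELECT * FROM S INNER JOIN L ON L.id = S.id;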

    Read the article

  • Faster forking of large processes on Linux?

    - by timday
    What's the fastest, best way on modern Linux of achieving the same effect as a fork-execve combo from a large process? My problem is that the forking process is ~500 MByte big, and a simple benchmarking test achieves only about 50 forks/s from that process (cf. ~1600 forks/s from a minimally sized process), which is too slow for the intended application. Some googling turns up vfork as having been invented as the solution to this problem... but also warnings not to use it. Modern Linux seems to have acquired the related clone and posix_spawn calls; are these likely to help? What's the modern replacement for vfork? I'm using 64-bit Debian Lenny on an i7 (the project could move to Squeeze if posix_spawn would help).
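
    To make the posix_spawn option concrete, a minimal sketch: even with copy-on-write, a plain fork() must duplicate page tables for the whole ~500 MB address space, which is what slows it down; posix_spawn in glibc can avoid that by using vfork-style cloning internally.

        /* spawn /bin/true without copying the parent's address space */
        #include <spawn.h>
        #include <stdio.h>
        #include <sys/wait.h>

        extern char **environ;

        int main(void)
        {
            pid_t pid;
            char *argv[] = { "/bin/true", NULL };
            int rc = posix_spawn(&pid, "/bin/true", NULL, NULL, argv, environ);
            if (rc != 0) {
                fprintf(stderr, "posix_spawn: %d\n", rc);
                return 1;
            }
            waitpid(pid, NULL, 0);
            return 0;
        }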

    Read the article

  • Write large PDFs with Java sequentially

    - by Benjamin Muschko
    I am looking for a Java library that lets you write large PDFs sequentially with a minimum amount of memory. Most of the libraries I have looked at have to build up the document in memory first before you can actually write it. The problem I have to deal with is OutOfMemoryErrors. It would be great if I could flush the writer programmatically whenever needed, e.g. for each page. Does anyone have any recommendations? I need something with a license along the lines of the LGPL (so not the GPL or the Affero GPL that iText uses).
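
    Not a license fit here, but for illustrating the API shape being asked for: iText's writer is sequential; pages added between open() and close() are flushed to the OutputStream as they are completed rather than held in memory. A hedged sketch of that pattern (iText 2.x API, shown only as the behavior to look for in an LGPL-compatible alternative):

        import com.lowagie.text.Document;
        import com.lowagie.text.Paragraph;
        import com.lowagie.text.pdf.PdfWriter;
        import java.io.FileOutputStream;

        public class SequentialPdf {
            public static void main(String[] args) throws Exception {
                Document document = new Document();
                PdfWriter.getInstance(document, new FileOutputStream("big.pdf"));
                document.open();
                for (int page = 0; page < 100000; page++) {
                    document.add(new Paragraph("Page " + page));
                    document.newPage();  // finished pages go to the stream
                }
                document.close();
            }
        }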

    Read the article

  • Problem with large Canvas in Silverlight

    - by Fury
    Hi, I am developing (using Silverlight 3) an application that creates a kind of timeline and places objects on it. For this purpose I need a really large Canvas (up to 2,000,000 pixels wide) with long lines on it, but whenever I create a Canvas even 40,000 pixels wide it behaves very strangely, randomly disappearing. I have found a post describing exactly the same problem on the Silverlight forums, and another one here on Stack Overflow. It seems to be a known problem since Silverlight 2, but I can't find any good workaround. Does anybody know such a workaround, or can anyone check whether it is still an issue in Silverlight 4? Thanks in advance.

    Read the article

  • How do you pronounce large hex numbers?

    - by warrenm
    This question might be subjective, but I'm hoping there's some consensus that I just don't know about. Short hex numbers are relatively easy to spell out (e.g., 0xC4A might be "cee-four-ay"). Hex numbers ending with a multiple of three zeros are likewise pretty easy (e.g., 0xC000 might be "cee-thousand"). But is there a concise way to pronounce 0xFFFF0000 or 0xCA000000? Magic numbers like 0xDEADBEEF are popular for their pronounceability, but I'm mostly asking about large-ish, round numbers that seem like they should have a more concise pronunciation.

    Read the article

  • Technical reasons for not having large background images in websites

    - by kees-kist
    Most websites tend to have either a solid color as background, or a small image that is repeated. Why aren't more websites using a large image (such as a photo) as background? I can think of the following reasons:
    1) Problems with different screen resolutions. Too small and gaps start to appear on the left and/or right side for higher resolutions; too big and lower resolutions only show part of the image.
    2) Bandwidth, although this is unlikely to be a problem for most websites.
    Are there any other reasons why such backgrounds are not being used more often?

    Read the article

  • C# Importing Large Volume of Data from CSV to Database

    - by guazz
    What's the most efficient method to load large volumes of data from CSV (3 million+ rows) into a database? The data needs to be formatted (e.g. the name column needs to be split into first name and last name, etc.), and I need to do this as efficiently as possible, i.e. under time constraints. I am siding with the option of reading, transforming and loading the data using a C# application row-by-row. Is this ideal? If not, what are my options? Should I use multithreading?
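
    A common shape for this, sketched with hypothetical table and column names: stream the CSV one row at a time, do the transformation in C#, and push fixed-size batches through SqlBulkCopy (assuming SQL Server), so memory use stays flat and the server does the heavy lifting:

        using System.Data;
        using System.Data.SqlClient;
        using System.IO;

        class CsvLoader
        {
            static void Load(string csvPath, string connectionString)
            {
                DataTable batch = new DataTable();
                batch.Columns.Add("FirstName", typeof(string));
                batch.Columns.Add("LastName", typeof(string));

                using (StreamReader reader = new StreamReader(csvPath))
                using (SqlBulkCopy bulk = new SqlBulkCopy(connectionString))
                {
                    bulk.DestinationTableName = "People";
                    string line;
                    while ((line = reader.ReadLine()) != null)
                    {
                        string[] fields = line.Split(',');
                        // transform step: split "name" into first and last
                        string[] names = fields[0].Split(' ');
                        batch.Rows.Add(names[0], names.Length > 1 ? names[1] : "");

                        if (batch.Rows.Count == 10000)  // flush a batch
                        {
                            bulk.WriteToServer(batch);
                            batch.Clear();
                        }
                    }
                    if (batch.Rows.Count > 0) bulk.WriteToServer(batch);
                }
            }
        }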

    Read the article

  • How do I include extremely long literals in C++ source?

    - by BillyONeal
    Hello everyone :) I've got a bit of a problem. Essentially, I need to store a large list of whitelisted entries inside my program, and I'd like to include such a list directly -- I don't want to have to distribute other libraries and such, and I don't want to embed the strings into a Win32 resource, for a bunch of reasons I don't want to go into right now. I simply included my big whitelist in my .cpp file, and was presented with this error:

        1>ServicesWhitelist.cpp(2807): fatal error C1091: compiler limit: string exceeds 65535 bytes in length

    The string itself is about twice the limit VC++ allows. What's the best way to include such a large literal in a program?
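
    One common workaround, sketched under the assumption the whitelist can be split at arbitrary boundaries: the 65,535-byte limit applies to a single (post-concatenation) literal, so storing the data as an array of smaller literals and joining them at startup sidesteps C1091:

        #include <string>

        // each element stays well under the per-literal limit; the names
        // and chunk contents are placeholders for the real entries
        static const char* const kWhitelistParts[] = {
            "entry1;entry2;entry3;",
            "entry4;entry5;entry6;",
            // ... as many chunks as needed
        };

        std::string BuildWhitelist()
        {
            std::string whitelist;
            for (const char* const part : kWhitelistParts)
                whitelist += part;
            return whitelist;
        }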

    Read the article

  • Rails: large amount of data in a single insert, ActiveRecord gave out

    - by Nik
    So I have, I think, around 36,000 records (just to be safe), a number I wouldn't think was too large for a modern SQL database like MySQL. Each record has just two attributes, so I collected them into one single insert statement:

        sql = "INSERT INTO tasks (attrib_a, attrib_b) VALUES (c1,d1),(c2,d2),(c3,d3)...(c36000,d36000);"
        ActiveRecord::Base.connection.execute sql

        from C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract_adapter.rb:219:in `log'
        from C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:323:in `execute_without_analyzer'
        from c:/r/projects/vendor/plugins/rails-footnotes/lib/rails-footnotes/notes/queries_note.rb:130:in `execute'
        from C:/Ruby/lib/ruby/1.8/benchmark.rb:308:in `realtime'
        from c:/r/projects/vendor/plugins/rails-footnotes/lib/rails-footnotes/notes/queries_note.rb:130:in `execute'
        from (irb):53
        from C:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/vendor/tzinfo-0.3.12/tzinfo/time_or_datetime.rb:242

    I don't know if the above info is enough; please do ask for anything that I didn't provide here. So, any idea what this is about? THANK YOU!!!!
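
    A guess worth checking, since single giant statements are a common failure here: the concatenated SQL can exceed MySQL's max_allowed_packet. A sketch that chunks the insert (rows is a hypothetical array of [attrib_a, attrib_b] pairs):

        require 'enumerator'  # for each_slice on older Ruby 1.8

        rows.each_slice(1000) do |slice|
          values = slice.map { |a, b| "(#{a}, #{b})" }.join(", ")
          ActiveRecord::Base.connection.execute(
            "INSERT INTO tasks (attrib_a, attrib_b) VALUES #{values}"
          )
        end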

    Read the article

  • Ruby: Read large data from stdout and stderr of an external process on Windows

    - by BinaryMuse
    Greetings, all, I need to run a potentially long-running process from Ruby on Windows and subsequently capture and parse the data from the external process's standard output and error. A large amount of data can be sent to each, but I am only necessarily interested in one line at a time (not capturing and storing the whole of the output). After a bit of research, I found that the Open3 class would take care of executing the process and giving me IO objects connected to the process's standard output and error (via popen3):

        Open3.popen3("external-program.bat") do |stdin, out, err, thread|
          # Step3.profit() ?
        end

    However, I'm not sure how to continually read from both streams without blocking the program. Since calling IO#readlines on out or err when a lot of data has been sent results in a memory allocation error, I'm trying to continuously check both streams for available input, but not having much luck with any of my implementations. Thanks in advance for any advice!
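
    A sketch of the usual pattern: give each stream its own reader thread and consume line-by-line, so neither pipe's buffer fills and stalls the child while the other stream is being read (handle_line is a hypothetical per-line parser):

        require 'open3'

        Open3.popen3("external-program.bat") do |stdin, out, err, thread|
          stdin.close
          readers = [out, err].map do |io|
            Thread.new do
              io.each_line do |line|
                handle_line(line)  # parse one line at a time
              end
            end
          end
          readers.each { |t| t.join }
        end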

    Read the article

  • How to execute a large PHP script?

    - by atif089
    Well, basically I want to execute a script that may take as much as an hour or more. What I really want to do is send SMS to my users using a third-party API, so it's basically like I supply my script with an array of phone numbers and fire the method to send the SMS. However, assuming it takes 5 seconds to send 1 SMS and I want to send 1000 SMS, that is roughly 1 - 2 hours. I can't use set_time_limit() because I am on a shared host. One way to do this is to store the numbers in a session, execute each SMS, and use JavaScript to refresh the page until the end. That way I need to keep my browser open, and the execution will stop if my Internet connection is disconnected. So, is there any better way to do this? I hope I am clear enough about what I want: to execute a large script that may take hours, without it timing out.
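
    The usual escape hatch on shared hosting, sketched with hypothetical table, credential and function names: queue the numbers in a database and let a cron job send a small batch per run, so no single PHP request ever approaches the time limit:

        <?php
        // cron runs this every minute; each run drains one small batch
        $db = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
        $batch = $db->query("SELECT id, phone FROM sms_queue WHERE sent = 0 LIMIT 20");
        foreach ($batch as $row) {
            send_sms($row['phone']);  // send_sms wraps the third-party API
            $db->exec("UPDATE sms_queue SET sent = 1 WHERE id = " . (int)$row['id']);
        }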

    Read the article

  • Enable export to XML via HTTP on a large number of models with child relations

    - by Vasil
    I've got a large number of models (120+) and I would like to let users of my application export all of the data from them in XML format. I looked at django-piston, but I would like to do this with minimal code. Basically I'd like to have something like this:

        GET /export/applabel/ModelName/

    This would stream all instances of ModelName in applabel, together with their trees of related objects. I'd like to do this without writing code for each model. What would be the best way to do this?
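
    A minimal sketch using Django's built-in serializers, which already emit XML for any model without per-model code; note they do not follow the tree of related objects by default, so nested export would still need extra handling (e.g. serializing the related querysets as well):

        from django.core import serializers
        from django.db.models.loading import get_model  # Django 1.x-era location
        from django.http import HttpResponse

        # urls.py entry (hypothetical):
        # (r'^export/(?P<applabel>\w+)/(?P<model_name>\w+)/$', export)
        def export(request, applabel, model_name):
            model = get_model(applabel, model_name)
            data = serializers.serialize("xml", model.objects.all())
            return HttpResponse(data, mimetype="application/xml")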

    Read the article

  • Calculating very large exponents in python

    - by miraclesoul
    Dear all, currently I am simulating my cryptographic scheme to test it. I have developed the code but I am stuck at one point. I am trying to compute g**x, where g is a 256-bit number and x is a 256-bit number. Python hangs at this point. I have read a lot of forums, threads, etc., but have only come to the conclusion that Python hangs, as it is hard for it to process such large numbers. Any idea how this can be done? Any two-line piece of code, any library, anything that can be done. (Also, please note I am a new Python user and this is the first time I have programmed in it, so no complex methods... hope you understand :s)
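
    It is probably not a hang so much as an impossible computation: with a 256-bit exponent, the plain integer g**x would have far more digits than any machine could store. Cryptographic schemes compute the power modulo the group prime, and Python's built-in three-argument pow does exactly that efficiently. A sketch with stand-in values:

        # substitute the scheme's real generator g, secret exponent x
        # and 256-bit group prime p for these stand-ins
        g = 2 ** 255 + 1      # stand-in 256-bit base
        x = 2 ** 255 + 3      # stand-in 256-bit exponent
        p = 2 ** 256 - 189    # stand-in prime modulus
        result = pow(g, x, p) # modular exponentiation; returns instantly
        print(result)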

    Read the article

  • slow record deletion with large ntext values

    - by asking
    I'm having trouble deleting some records via a stored procedure from a table in SQL Server 2008 R2 that has ntext columns. The stored proc is timing out, and running the query directly takes a very long time. The initial query was a straight "DELETE FROM y WHERE x = z", and I've also tried running it in batches of 1000 within transactions, but it is still slow and timing out in the stored proc. The majority of the records in the table will not be deleted each time (it's not a once-off query, but one that will be run repeatedly). The ntext columns are not used in the WHERE clause, and I can't change the column types. Any suggestions on the quickest way to delete records with large ntext values? Thanks

    Read the article

  • How to transfer large files from desktop to server (.NET)

    - by rahulchandran
    I am writing a .NET 2.0-based desktop client that will send large files (well, largish: under 2 GB) to a server. I need to develop the server as well; it can be on any technology. It should be secure, so an underlying SSL stream is needed. What are my options, and are there any obvious caveats I should be aware of? To my mind the simplest solution is to open a TCP/IP connection over SSL to the server, send n packets each of size M bytes, have the server append the chunks to the file, and finally send an EOF packet as well. Is this horrible? Will the perf suck on the server with all these disk writes? What other, cleverer options are there? I am limited to .NET 2.0 on the client; if I did move to a WCF client, would it buy me something magical and cool for this scenario? Thanks
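
    A sketch of the simple option's client side, using only .NET 2.0 APIs (host, port and the chunk size are placeholders): SslStream handles the SSL handshake over the TcpClient's stream, and the chunked loop keeps memory flat regardless of file size. The server side would AuthenticateAsServer with its certificate and append chunks as they arrive.

        using System.IO;
        using System.Net.Security;
        using System.Net.Sockets;

        class Uploader
        {
            static void Send(string host, int port, string path)
            {
                using (TcpClient client = new TcpClient(host, port))
                using (SslStream ssl = new SslStream(client.GetStream()))
                {
                    ssl.AuthenticateAsClient(host);  // validates the server cert
                    using (FileStream file = File.OpenRead(path))
                    {
                        byte[] buffer = new byte[64 * 1024];
                        int read;
                        while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
                            ssl.Write(buffer, 0, read);
                    }
                }
            }
        }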

    Read the article

  • Redirecting a large number of URLs with htaccess or php header

    - by Peter
    I have undergone a major website overhaul and now have 5,000+ incoming links from search engines and external sites, bookmark services, etc. that lead to dead pages or 404 errors. A lot of the pages have corresponding "permalinks" or a known replacement hierarchy/URL structure. I've started to list the main redirects in .htaccess, or as physical files containing simply a header location redirect, which is clearly not sustainable! What would be the best method to map all of the old link addresses to their corresponding new addresses: .htaccess, PHP headers, MySQL, a sitemap file? Or is it better to leave the links broken and wait for search engines etc. to re-index my site? Are there any implications of having a large number of redirecting files for this temporary period until links are reset?
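
    One maintainable sketch, with hypothetical names: route all misses to a single handler via one .htaccess line (ErrorDocument 404 /redirect.php), and have that handler look the old URL up in a MySQL mapping table and issue a 301, so .htaccess holds one rule instead of 5,000 and search engines are told the move is permanent:

        <?php
        // redirect.php: lookup_new_url would query a table mapping
        // old paths to new permalinks
        $old = $_SERVER['REQUEST_URI'];
        $new = lookup_new_url($old);
        if ($new !== null) {
            header('HTTP/1.1 301 Moved Permanently');
            header('Location: ' . $new);
            exit;
        }
        header('HTTP/1.1 404 Not Found');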

    Read the article

  • Errors with large data sources

    - by The Sheek Geek
    I'm doing some benchmarking on large data sources, binding/exporting data for reporting. I started by using a DataSet, filling it with 100,000 rows and then attempting to open a Crystal Report with the retrieved data. I noticed that the DataSet filled just fine (it took about 779 milliseconds); however, when attempting to export the data to the report, or even bind it to a GridView, the application would fail with an OutOfMemoryException. Has anyone experienced this before, or does anyone have an idea of how to get around it? It is very possible that clients will run reports for years' worth of data, and 100,000 rows are not inconceivable. The application and the benchmark code are written in C# using Oracle and SQL Server databases. I still have some data sources to test, but I would like to know how to get around this just in case I don't find a better solution.

    Read the article

  • Paging a UIScrollView with a large PDF

    - by Fousa
    I'm trying to create a simple UIScrollView with paging, and I want to be able to scroll through a large PDF document, but this gives me some problems... I tried the following options:
    - Convert all the PDF pages to UIImages at startup; this works, but is very slow at startup.
    - Manually draw the PDF page in drawRect:, but yet again this was slow...
    And I would prefer not to load everything at startup but to do it during usage. Has anyone done this recently? I can't seem to find a nice example project. Thnx! Jelle
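
    One direction that avoids both the startup conversion and slow main-thread drawing, sketched inside a page view with assumed properties (document, pageNumber): render pages lazily with Core Graphics, and back the page views with a CATiledLayer so tiles are drawn on demand as they scroll into view:

        // assumes self.document (CGPDFDocumentRef) and self.pageNumber are set
        - (void)drawRect:(CGRect)rect {
            CGContextRef ctx = UIGraphicsGetCurrentContext();
            CGContextSetFillColorWithColor(ctx, [UIColor whiteColor].CGColor);
            CGContextFillRect(ctx, self.bounds);
            // PDF coordinates are flipped relative to UIKit
            CGContextTranslateCTM(ctx, 0.0, self.bounds.size.height);
            CGContextScaleCTM(ctx, 1.0, -1.0);
            CGPDFPageRef page = CGPDFDocumentGetPage(self.document, self.pageNumber); // 1-based
            CGContextConcatCTM(ctx, CGPDFPageGetDrawingTransform(page, kCGPDFMediaBox,
                                                                 self.bounds, 0, true));
            CGContextDrawPDFPage(ctx, page);
        }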

    Read the article

  • Unable to return large result set ORA-22814

    - by rvenugopal
    Hello all, I am encountering an issue when I am trying to load a large result set using a range query in Oracle 10g. When I try a smaller range (1 to 100) it works, but when I try a larger range (1 to 1000) I get the following error: "ORA-22814: attribute or element value is larger than specified in type". I have a basic UDT (PostComments_Type), and I have tried using both a VARRAY and a table type of PostComments_Type, but that hasn't made a difference. Your help is appreciated. Thanks, Venu

        PROCEDURE RangeLoad (
            floorId   IN NUMBER,
            ceilingId IN NUMBER,
            o_PostComments_LARGE_COLL_TYPE OUT PostComments_LARGE_COLL_TYPE -- tried as both VARRAY and table type
        ) IS
        BEGIN
            SELECT PostComments_TYPE (
                       PostComments_ID,
                       ...
                   )
            BULK COLLECT INTO o_PostComments_LARGE_COLL_TYPE -- bulk operation for the VARRAY/table type
            FROM PostComments
            WHERE PostComments_ID BETWEEN floorId AND ceilingId;
        END RangeLoad;

    Read the article

  • Namespacing large JavaScript like jQuery

    - by frenchie
    I have a very large JavaScript file: over 9,000 lines. The code looks like this:

        var GlobalVar1 = "";
        var GlobalVar2 = null;
        function A() {...}
        function B(SomeParameter) {...}

    I'm using the Google compiler, and the global variables and functions get renamed a, b, c..., so there's a good chance of a collision later with some outside code. What I want to do is have my code organized like the jQuery library, where everything is accessible with $. Is there a way to namespace my code so that everything is behind a # character, for example? I'd like to call my code like this:

        #.GlobalVar
        #.functionA(SomeParameter)

    How can I do this? Thanks.
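
    A hedged sketch of the usual answer: # is not a legal JavaScript identifier, so pick a short global such as $$ (the name here is arbitrary, analogous to jQuery's $) and use the module pattern; only one name is exported, and everything else stays private to the closure, which also lets the Closure Compiler rename the internals freely:

        var $$ = (function () {
            // formerly file-level globals; now private to the closure
            var globalVar1 = "";
            var globalVar2 = null;

            function functionA(someParameter) {
                /* ... */
            }

            // only what is listed here leaks into the global namespace
            return {
                GlobalVar1: globalVar1,
                functionA: functionA
            };
        }());

        // usage: $$.functionA(someParameter);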

    Read the article
