Search Results

Search found 21702 results on 869 pages for 'large objects'.

Page 75 of 869

  • PHP: imagepng is creating inordinately large files

    - by Rafael
    I'm using a simple thumbnailing script I wrote, and it's pretty standard:

        $imgbuffer = imagecreatetruecolor($thumbwidth, $thumbheight);
        switch($type) {
            case 1:  $image = imagecreatefromgif($img); break;
            case 2:  $image = imagecreatefromjpeg($img); break;
            case 3:  $image = imagecreatefrompng($img); break;
            case 6:  $image = imagecreatefrombmp($img); break;
            case 15: $image = imagecreatefromwbmp($img); break;
            default: return log_error("Tried to create thumbnail from $img: not a valid image");
        }
        imagecopyresampled($imgbuffer, $image, 0, 0, 0, 0, $thumbwidth, $thumbheight, $width, $height);
        $output = imagepng($imgbuffer, "$album/thumbs/$imgname.png", 9);

    9 is the lowest quality setting, yet from a 400 x 600 JPEG image (at 56 kB) I'm getting a 140 x 140 thumbnail that is 27 kB in size. Using imagejpeg (quality of 80) instead of imagepng, it's about 4 kB. How can this be, especially at the lowest quality setting for imagepng? I tried using imagecopy instead of imagecopyresampled, and imagecreate instead of the true color version, but the images come out mangled somehow. Is there any way to get PNG thumbnails of a reasonably small file size (about 4 kB at 140 x 140)? Or do I have to use JPEG?
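
    Worth noting: the third argument to imagepng is a zlib compression level, not a lossy quality setting, so PNG output of a photograph stays much larger than a JPEG whatever value is passed. A minimal sketch of the same comparison, written with Python's Pillow library rather than PHP GD (the file name is hypothetical):

        from io import BytesIO
        from PIL import Image   # Pillow; used only to illustrate the point, not the PHP GD API

        img = Image.open("photo.jpg")            # hypothetical 400 x 600 source photo
        img.thumbnail((140, 140))                # resample down to thumbnail size in place

        png_buf, jpg_buf = BytesIO(), BytesIO()
        img.save(png_buf, format="PNG", compress_level=9)             # 9 = max zlib compression, still lossless
        img.convert("RGB").save(jpg_buf, format="JPEG", quality=80)   # lossy, so far smaller for photographs

        print("PNG bytes: ", png_buf.tell())
        print("JPEG bytes:", jpg_buf.tell())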

  • Stack Overflow Accessing Large Vector

    - by cam
    I'm getting a stack overflow on the first iteration of this for loop:

        for (int q = 0; q < SIZEN; q++) {
            cout << nList[q] << " ";
        }

    nList is a vector of type int with 376 items. The size of nList depends on a constant defined in the program. The program works for every value up to 376; after 376 it stops working. Any thoughts?

  • How do large sites accomplish row-level permissions?

    - by JayD3e
    So I am making a small site using CakePHP, and my ACL is set up so that every time a piece of content is created, an ACL rule is created linking the owner to that piece of content. This allows each owner to edit/delete their own content. This method just seems so inefficient, because there are as many ACL rules as there are pieces of content in the database. I was curious: how do big sites, with millions of pieces of content, solve this problem?
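
    A common alternative to per-row ACL entries is to keep the owner on the content row itself and compare it at request time. A minimal sketch of that check (the field names and the admin shortcut are assumptions, not CakePHP API):

        from dataclasses import dataclass

        @dataclass
        class User:
            id: int
            is_admin: bool = False

        @dataclass
        class Post:
            id: int
            owner_id: int

        def can_edit(user: User, post: Post) -> bool:
            # admins/moderators bypass the ownership check
            if user.is_admin:
                return True
            # "row-level" permission reduces to comparing the owner column on the content row
            return post.owner_id == user.id

        assert can_edit(User(id=1), Post(id=7, owner_id=1))
        assert not can_edit(User(id=2), Post(id=7, owner_id=1))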

  • Redirect Large Number of URLs (HTML Files) to Wordpress

    - by Chetan
    I have over 2000 HTML files whose content has now been moved into a WordPress blog, and I have a URL map from each old_file.html to its new WordPress URL. I want to 301 redirect the old URLs, but I don't want to add 2000 lines to .htaccess. Can you suggest how to accomplish this using PHP, so that when there is a request for an old URL, the script looks it up in the database and issues a 301 redirect to the new URL? Thanks.
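
    The asker wants PHP, but the shape of the lookup-and-redirect logic is the same in any language; a minimal sketch in Python/Flask (the route, map and URLs are placeholders, with a dict standing in for the database table):

        from flask import Flask, abort, redirect

        app = Flask(__name__)

        # Stand-in for the database table mapping old paths to new WordPress URLs.
        URL_MAP = {
            "old_file.html": "https://example.com/blog/new-post/",
        }

        @app.route("/<path:old_path>")
        def legacy_redirect(old_path):
            new_url = URL_MAP.get(old_path)      # in practice, one indexed DB lookup
            if new_url is None:
                abort(404)
            return redirect(new_url, code=301)   # permanent redirect for the old URL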

  • Blackberry (Java) - Setting scrolling to not be focused on objects on the screen

    - by paullb
    I have a MainScreen which currently scrolls (and I have the arrows on the right), but the scrolling seems to be focused on the ButtonField objects that I have on the page. Is there any way to set the scrolling to be non-focused scrolling, moving a few pixels each time? Other ideas I have had (which sound hacky, so I want to avoid them):

    - Placing NullFields around to scroll to
    - Manually listening to the trackwheelRoll event and moving appropriately

  • Serving large generated files using Google App Engine?

    - by John Carter
    Presently I have a GAE app that does some offline processing (backing up a user's data) and generates a file that's somewhere in the neighbourhood of 10 - 100 MB. I'm not sure of the best way to serve this file to the user. The two options I'm considering are:

    - Adding some code to the offline processing that 'spoofs' the file as a form upload to the blobstore, then going through the normal blobstore process to serve it.
    - Having the offline processing code store the file somewhere off of GAE and serving it from there.

    Is there a much better approach I'm overlooking? I'm guessing this is functionality that isn't well suited to GAE. I had thought of storing the data in the datastore as db.Text or db.Blob, but there I run into the 1 MB limit. Any input would be appreciated.

  • Binomial test in Python for very large numbers

    - by Morlock
    I need to do a binomial test in Python that allows calculation for values of n on the order of 10000. I have implemented a quick binomial_test function using scipy.misc.comb; however, it is pretty much limited around n = 1000, I guess because it reaches the biggest representable number while computing factorials or the combinatorial itself. Here is my function:

        from scipy.misc import comb

        def binomial_test(n, k):
            """Calculate binomial probability"""
            p = comb(n, k) * 0.5**k * 0.5**(n-k)
            return p

    How could I use a native Python (or numpy, scipy...) function in order to calculate that binomial probability? If possible, I need scipy 0.7.2 compatible code. Many thanks!
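
    One standard way to stay within floating-point range for large n is to work in log space with the standard library's log-gamma function; a small sketch (pure Python, no scipy needed):

        from math import exp, lgamma, log

        def log_comb(n, k):
            # log(n choose k) via log-gamma, avoiding the huge intermediate factorials
            return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

        def binomial_pmf(n, k, p=0.5):
            # P(X = k) for Binomial(n, p), computed entirely in log space
            return exp(log_comb(n, k) + k * log(p) + (n - k) * log(1 - p))

        print(binomial_pmf(10000, 5000))   # fine at n = 10000, where comb(10000, 5000) overflows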

  • Strange behavior with large Object Types

    - by Peter Lang
    I noticed that calling a method on an Oracle object type takes longer as the instance gets bigger. The code below just adds rows to a collection stored in the object type and calls the empty dummy procedure in the loop. Calls take longer when more rows are in the collection; when I remove the call to dummy, performance is much better even though the collection still contains the same number of records:

        Calling dummy    Not calling dummy
                   11                    0
                   81                    0
                  158                    0

    Code to reproduce:

        Create Type t_tab Is Table Of VARCHAR2(10000);

        Create Type test_type As Object(
            tab t_tab,
            Member Procedure dummy
        );

        Create Type Body test_type As
            Member Procedure dummy As
            Begin
                Null; --# Do nothing
            End dummy;
        End;

        Declare
            v_test_type test_type := New test_type( New t_tab() );

            Procedure run_test As
                start_time NUMBER := dbms_utility.get_time;
            Begin
                For i In 1 .. 200 Loop
                    v_test_type.tab.Extend;
                    v_test_type.tab(v_test_type.tab.Last) := Lpad(' ', 10000);
                    v_test_type.dummy(); --# Removed this line in second test
                End Loop;
                dbms_output.put_line( dbms_utility.get_time - start_time );
            End run_test;
        Begin
            run_test;
            run_test;
            run_test;
        End;

    I tried with both 10g and 11g. Can anyone explain/reproduce this behavior?

  • Advice for keeping large C++ project modular?

    - by Jay
    Our team is moving into much larger projects, many of which use several open source projects within them. Any advice or best practices for keeping libraries and dependencies relatively modular and easily upgradable when new releases come out? To put it another way: let's say you make a program that is a fork of an open source project. As both projects grow, what is the easiest way to maintain and share updates to the core? Advice on what I'm asking only, please; I don't need "well, you should do this instead" or "why are you...". Thanks.

  • Value was either too large or too small for an Int16 error

    - by Barlow Tucker
    I am working on fixing a bug in VB that is giving this error. I am new to VB, so there is some syntax that I am not fully understanding. The code that is throwing the error is:

        .Row(itemIndex).Item("parentIndex") = CLng(oID) + 1000000

    I understand that adding 1000000 is too much for an Int16. I can't change that value (not right now, anyway). What I don't understand, and can't seem to find, is what .Row is referring to. Any ideas?

  • Import small number of records from a very large CSV file in Biztalk 2006

    - by rwmnau
    I have a BizTalk project that imports an incoming CSV file and dumps it to a database table. The import works fine, but I only need to keep about 200-300 records from a file with upwards of a million rows. My orchestration discards these rows, but the problem is that the flat file I'm importing is still 250 MB, and when converted to XML using a regular flat file pipeline, it takes hours to process and sometimes causes the server to run out of memory. Is there something I can do to have the custom pipeline itself discard rows I don't care about? The very first item in each CSV row is one of a few strings, and I only want to keep rows that start with a certain string. Thanks for any help you're able to provide.
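
    However the pipeline component is wired up, the filtering itself is just a streaming pass over the file that keeps rows whose first field starts with the wanted prefix. A rough sketch of that logic (shown in Python for brevity; prefixes and paths are made up, and a real BizTalk pipeline component would do the same thing over the message stream in .NET):

        KEEP_PREFIXES = ("HDR", "KEEP")   # hypothetical values of the first CSV field worth keeping

        def filter_rows(src_path, dst_path):
            # Stream the 250 MB file line by line and keep only the interesting rows,
            # so the flat-file-to-XML stage only ever sees a few hundred lines.
            with open(src_path, "r", encoding="utf-8") as src, \
                 open(dst_path, "w", encoding="utf-8") as dst:
                for line in src:
                    first_field = line.split(",", 1)[0]
                    if first_field.startswith(KEEP_PREFIXES):
                        dst.write(line)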

  • "Thread was being aborted" 0n large dataset

    - by Donaldinio
    I am trying to process 114,000 rows in a dataset (populated from an Oracle database). I am hitting an error at around the 600 mark: "Thread was being aborted". All I am doing is reading the dataset, and I still hit the issue. Is this too much data for a dataset? It seems to load into the dataset OK, though. I welcome any better ways to process this amount of data.

        rootTermsTable = entKw.GetRootKeywordsByCategory(catID);
        for (int k = 0; k < rootTermsTable.Rows.Count; k++)
        {
            string keywordID = rootTermsTable.Rows[k]["IK_DBKEY"].ToString();
            ...
        }

        public DataTable GetKeywordsByCategory(string categoryID)
        {
            DbProviderFactory provider = DbProviderFactories.GetFactory(connectionProvider);
            DbConnection con = provider.CreateConnection();
            con.ConnectionString = connectionString;
            DbCommand com = provider.CreateCommand();
            com.Connection = con;
            com.CommandText = string.Format("Select * From icm_keyword WHERE (IK_IC_DBKEY = {0})", categoryID);
            com.CommandType = CommandType.Text;
            DataSet ds = new DataSet();
            DbDataAdapter ad = provider.CreateDataAdapter();
            ad.SelectCommand = com;
            con.Open();
            ad.Fill(ds);
            con.Close();
            DataTable dt = new DataTable();
            dt = ds.Tables[0];
            return dt;
            //return ds.Tables[0].DefaultView;
        }

  • WCF returning custom types

    - by Gena Verdel
    I'm a newbie to WCF, trying to perform a relatively simple task: returning a list of objects read from the database. However, I cannot get past some really annoying exceptions. The question is simple: what's wrong with this picture?

        [ServiceContract]
        public interface IDBService
        {
            [OperationContract]
            string Ping(string name);

            [OperationContract]
            InitBDResult InitBD();
        }

        public InitBDResult InitBD()
        {
            _dc = new CentralDC();
            InitBDResult result = new InitBDResult();
            result.ord = _dc.Orders.First();
            return result;
        }

        [DataContract]
        public class InitBDResult
        {
            //[DataMember]
            //public List<Order> Orders { get; set; }

            [DataMember]
            public Order ord { get; set; }
        }

  • SQL Server missing tables and stored procedures

    - by Robo
    I have an application on a client's site that processes data each night. Last night SQL Server 2005 gave the error "Could not find stored procedure 'xxxx'". The stored procedure does exist in the database and has the right permissions as far as I can tell, and the application runs fine on other nights. On previous occasions, SQL Server has also given an error saying 'database object not found', referring to a table that does exist. So, on rare occasions, the server thinks certain stored procedures and tables do not exist in the database. The objects it refers to are often ones that are frequently used. Is the database somehow corrupted, and is there some sort of repair/health check I can do?

  • Loading large amounts of data to an Oracle SQL Database

    - by James
    I was wondering if anyone has experience with what I am about to embark on. I have several CSV files which are all around a GB or so in size, and I need to load them into an Oracle database. While most of my work after loading will be read-only, I will need to load updates from time to time. Basically I just need a good tool for loading several rows of data at a time into my db. Here is what I have found so far:

    - I could use SQL Loader to do a lot of the work
    - I could use bulk-insert commands
    - Some sort of batch insert; using prepared statements somehow might be a good idea

    I guess I was wondering what everyone thinks is the fastest way to get this insert done. Any tips?
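
    For loads driven from code rather than SQL Loader, the usual trick is to bind many rows per round trip. A rough sketch using the cx_Oracle driver's executemany (the connection string, table and column names are all placeholders):

        import csv
        import cx_Oracle   # assumes the cx_Oracle driver is available

        INSERT_SQL = "INSERT INTO my_table (col1, col2, col3) VALUES (:1, :2, :3)"

        conn = cx_Oracle.connect("user/password@host/service")
        cur = conn.cursor()

        with open("data.csv", newline="") as f:
            batch = []
            for row in csv.reader(f):
                batch.append(row)
                if len(batch) == 10000:          # bind and ship 10,000 rows per round trip
                    cur.executemany(INSERT_SQL, batch)
                    conn.commit()
                    batch = []
            if batch:                            # flush the remainder
                cur.executemany(INSERT_SQL, batch)
                conn.commit()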

  • Large static arrays are slowing down class load, need a better/faster lookup method

    - by Visualize
    I have a class with a couple of static arrays: an int[] with 17,720 elements and a string[] with 17,720 elements. I noticed that when I first access this class it takes almost 2 seconds to initialize, which causes a pause in the GUI that's accessing it. Specifically, it's a lookup for Unicode character names; the first array is an index into the second array.

        static readonly int[] NAME_INDEX = {
            0x0000, 0x0001, 0x0005, 0x002C, 0x003B, ...

        static readonly string[] NAMES = {
            "Exclamation Mark", "Digit Three", "Semicolon", "Question Mark", ...

    The following code is how the arrays are used, given a character code. (Note: this code isn't the performance problem.)

        int nameIndex = Array.BinarySearch<int>(NAME_INDEX, code);
        if (nameIndex > 0)
        {
            return NAMES[nameIndex];
        }

    I guess I'm looking for other options on how to structure the data so that 1) the class loads quickly, and 2) I can quickly get the name for a given character code. Should I not be storing all these thousands of elements in static arrays?
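
    One direction the question hints at is moving the data out of the compiled initializers into a resource that is only parsed on first use, while keeping the same binary-search lookup. A small sketch of that shape (in Python rather than C#; the file name and format are assumptions):

        import bisect

        _codes, _names = None, None   # populated lazily on first lookup

        def _load():
            # Hypothetical data file: one "hexcode<TAB>name" pair per line, sorted by code.
            global _codes, _names
            _codes, _names = [], []
            with open("unicode_names.tsv", encoding="utf-8") as f:
                for line in f:
                    code, name = line.rstrip("\n").split("\t")
                    _codes.append(int(code, 16))
                    _names.append(name)

        def name_for(code):
            if _codes is None:        # pay the parse cost on first use, not at class load
                _load()
            i = bisect.bisect_left(_codes, code)
            if i < len(_codes) and _codes[i] == code:
                return _names[i]
            return None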

  • Using Mercurial in a Large Organization

    - by Kristopher Johnson
    I've been using Mercurial for my own personal projects for a while, and I love it. My employer is considering a switch from CVS to SVN, but I'm wondering whether I should push for Mercurial (or some other DVCS) instead. One wrinkle with Mercurial is that it seems to be designed around the idea of having a single repository per "project". In this organization, there are dozens of different executables, DLLs, and other components in the current CVS repository, hierarchically organized. There are a lot of generic reusable components, but also some customer-specific components, and customer-specific configurations. The current build procedures generally get some set of subtrees out of the CVS repository. If we move from CVS to Mercurial, what is the best way to organize the repository/repositories? Should we have one huge Mercurial repository containing everything? If not, how fine-grained should the smaller repositories be? I think people will find it very annoying if they have to pull and push updates from a lot of different places, but they will also find it annoying if they have to pull/push the entire company codebase. Anybody have experience with this, or advice?

  • Optimal setup for Doxygen in a large multi-application COM project

    - by John
    A system has up to 100 VC++ projects, each spitting out a DLL or EXE. In addition there are many COM components with IDL and generated .h/.c files. What's 'the right way' or at least a good way to organise this with Doxygen? One overall doxy project or one per project/solution? And what's the right way to handle COM, which has generated code and a lot of 'fluff' that will bloat generated HTML files.
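
    One common middle ground is a Doxygen configuration per solution that exports a tag file, with dependent configurations linking against it, plus exclude patterns to keep generated COM stubs out of the output. A hedged sketch of the relevant Doxyfile options (all paths and patterns are placeholders):

        # Component that others depend on: publish a tag file alongside its HTML output.
        GENERATE_TAGFILE = core.tag

        # Dependent component: link against the core docs instead of re-scanning its sources.
        TAGFILES         = ../core/core.tag=../../core/html

        # Keep MIDL-generated proxy/stub fluff out of the documentation.
        EXCLUDE_PATTERNS = */*_i.c */*_p.c */dlldata.c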

  • Find a specific couple of lines of code from large git repo

    - by mustISignUp
    So I remember that I once did something in another project (later removed) that could be useful now. Thanks to some other SO post I managed to search the history for a half-remembered string:

        git grep halfRemeberedNameOfFunction $(git log -g --pretty=format:%h)

    and yay! got some results:

        2d0bcde:path/to/project/file.c: result = halfRemeberedNameOfFunction( data );
        65fc672:path/to/project/file.c: result = halfRemeberedNameOfFunction( data );
        24f2858:path/to/project/file.c: result = halfRemeberedNameOfFunction( data );
        252e3a5:path/to/project/file.c: result = halfRemeberedNameOfFunction( data, args );
        b58bc0b:path/to/project/file.c: result = _halfRemeberedNameOfFunction( data, options );
        dce8d9d:path/to/project/file.c: result = halfRemeberedNameOfFunction( data, moreData );

    But how do I get that file at one of those revisions? Many thanks
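
    Given a hash and a path from that output, git show can print the file as it existed in that commit; for example, using the first hash and path above:

        git show 2d0bcde:path/to/project/file.c                        # print the file as it was in that commit
        git show 2d0bcde:path/to/project/file.c > file_at_2d0bcde.c    # or save a copy to diff against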

  • Align objects to curve with canvas

    - by mitjak
    Is it possible? I'm learning canvas at the moment, and while it's fun to position objects programmatically, it would be most interesting to come up with a way to align them to a curve. E.g. align a series of squares back to back along a wavy line or a circle.
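
    The placement itself is just parametric geometry: sample points along the curve and rotate each object to the tangent at that point. A minimal sketch of the math for a circle (shown in Python; on a canvas you would then translate/rotate the context and draw each square):

        import math

        def positions_on_circle(cx, cy, radius, count):
            # Evenly space `count` objects around a circle; each entry is (x, y, rotation),
            # where rotation keeps the object tangent to the curve at that point.
            out = []
            for i in range(count):
                t = 2 * math.pi * i / count
                x = cx + radius * math.cos(t)
                y = cy + radius * math.sin(t)
                out.append((x, y, t + math.pi / 2))   # tangent direction at parameter t
            return out

        # On a canvas: translate(x, y), rotate(rotation), then draw each square.
        for x, y, rotation in positions_on_circle(200, 200, 100, 12):
            print(round(x), round(y), round(rotation, 2))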

  • Firefox does not load large images

    - by Pradeep
    I am stuck with what looks like a bug in Firefox, where it's unable to load large images (I have an 8 MB image) from the server. Loading the image works fine in IE. I am still looking for ways to get rid of this problem. I changed the server (IIS) settings to allow bigger file sizes. Also, I used the "load" event on the image with jQuery and tried all the options listed at http://api.jquery.com/load-event/, but nothing has worked so far. If anyone has come across a similar problem and a way to resolve it, it would be nice to hear from you. Please note: high resolution images are part of the requirement. Code:

        <style>
            img {
                background-color: #FFFFFF;
                background-image: url(http://eremurus.hyd:8080/QMS/plugin/imagepanner/loader.gif);
                background-repeat: no-repeat;
                background-position: center center;
            }
        </style>
        <script src="../plugin/jquery-ui-1.8.7.custom/js/jquery-1.4.4.min.js" type="text/javascript"></script>
        <script>
        jQuery(document).ready(function($){
            ///var _url = "http://eremurus.hyd:8080/QMS/plugin/imagepanner/floorPlan.jpg";
            // set up the node / element
            _im = $("#main");
            //_im.bind("load",function(){ $(this).fadeIn(); });
            // set the src attribute now, after insertion to the DOM
            //_im.attr('src',_url);
            $("#main").one("load", function(){
                alert('loaded');
            })
            .each(function(){
                if(this.complete){
                    $(this).trigger("load");
                }
            });
        });
        </script>
        </head>
        <body>
            <div id="target"><img id='main' src="http://eremurus.hyd:8080/QMS/plugin/imagepanner/floorPlan.jpg"></img></div>
        </body>
        </html>

  • Problems opening large csv file

    - by John Tyler
    I have a CSV file that is 100 MB in size, and I need to parse some data out of it into a new format. I tried PHP, but keep running into memory issues: after around the first 150 "rows" or so, the script dies. This is even on localhost, with everything I can do to tune the PHP settings, including max_memory and script_execution_time. Before I continue, I'd like to know whether Python will give out on me too, or whether I will have to use C++. Can someone name good CSV libraries for these programming languages? The file is quoted CSV. I mean, I can't even open this text file in OpenOffice without it dying on me. (Then again, Java sux as bad as PHP.)
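
    In Python the standard csv module reads quoted CSV one row at a time, so memory use stays flat regardless of file size. A minimal sketch of a streaming conversion (the transform step is a placeholder for whatever the new format needs):

        import csv

        def transform(row):
            # placeholder for whatever the "new format" actually needs per row
            return row

        def convert(src_path, dst_path):
            # Stream one row at a time so memory use stays flat, however large the file is.
            with open(src_path, newline="") as src, open(dst_path, "w", newline="") as dst:
                reader = csv.reader(src)     # handles quoted CSV
                writer = csv.writer(dst)
                for row in reader:
                    writer.writerow(transform(row))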
