Search Results

Search found 11409 results on 457 pages for 'large teams'.

Page 43/457

  • Value was either too large or too small for an Int16 error

    - by Barlow Tucker
    I am working on fixing a bug in VB that is giving this error. I am new to VB, so there is some syntax that I am not fully understanding. The code that is throwing the error says: .Row(itemIndex).Item("parentIndex") = CLng(oID) + 1000000 I understand that adding 1000000 is too much for an Int16. I can't change that value (not right now anyway). What I don't understand, and can't seem to find, is what .Row is referring to. Any ideas?
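
    A leading dot in VB resolves against whatever object the innermost enclosing With block names, so .Row is a member of that object (the singular .Row, rather than .Rows, suggests a typed DataSet or a wrapper class). A minimal hypothetical sketch of the pattern, with a plain DataTable standing in for whatever the real code uses:

        ' Hypothetical reconstruction -- the original snippet does not show the enclosing With block.
        With myDataSet.Tables("Items")
            ' Throws "Value was either too large or too small for an Int16" when the
            ' "parentIndex" column is typed as Int16 but is handed a larger Long value.
            .Rows(itemIndex).Item("parentIndex") = CLng(oID) + 1000000
        End With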

    Read the article

  • PHP: imagepng is creating inordinately large files

    - by Rafael
    I'm using a simple thumbnailing script I wrote and it's pretty standard: $imgbuffer = imagecreatetruecolor($thumbwidth, $thumbheight); switch($type) { case 1: $image = imagecreatefromgif($img); break; case 2: $image = imagecreatefromjpeg($img); break; case 3: $image = imagecreatefrompng($img); break; case 6: $image = imagecreatefrombmp($img); break; case 15: $image = imagecreatefromwbmp($img); break; default: return log_error("Tried to create thumbnail from $img: not a valid image"); } imagecopyresampled($imgbuffer, $image, 0, 0, 0, 0, $thumbwidth, $thumbheight, $width, $height); $output = imagepng($imgbuffer, "$album/thumbs/$imgname.png", 9); 9 is the lowest quality setting, yet from a 400 x 600 JPEG image (at 56kB) I'm getting a thumbnail 27 kB in size (140 x 140). Using imagejpeg (quality of 80) instead of imagepng it's about 4kB. How can this be, especially at the lowest quality setting for imagepng? I tried using imagecopy instead of imagecopyresampled, and imagecreate instead of the true color version. Unfortunately the images come out mangled somehow. Is there any way to get PNG thumbnails of a reasonably small file size (about 4 kB at 140 x 140)? Or do I have to use JPEG?
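
    Worth noting for context: the third argument to imagepng() is a zlib compression level (0-9, where 9 means most compression), not a visual quality setting, since PNG is lossless; a photographic 140 x 140 thumbnail simply compresses poorly as a true-colour PNG. A hedged sketch of two alternatives, reusing $image, $width and $height from the snippet above (output paths are placeholders):

        $thumb = imagecreatetruecolor(140, 140);
        imagecopyresampled($thumb, $image, 0, 0, 0, 0, 140, 140, $width, $height);
        // Either reduce to an 8-bit palette before writing the PNG (smaller file, slight quality loss)...
        imagetruecolortopalette($thumb, true, 255);
        imagepng($thumb, 'thumbs/small.png', 9);
        // ...or keep the true-colour buffer and write a JPEG instead, which suits photos far better:
        // imagejpeg($thumb, 'thumbs/small.jpg', 80);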

    Read the article

  • Advice for keeping large C++ project modular?

    - by Jay
    Our team is moving into much larger projects, many of which use several open source projects within them. Any advice or best practices for keeping libraries and dependencies relatively modular and easily upgradable when new releases for them are out? To put it another way, let's say you make a program that is a fork of an open source project. As both projects grow, what is the easiest way to maintain and share updates to the core? Advice regarding what I'm asking only, please... I don't need "well you should do this instead" or "why are you"... thanks.
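
    A hedged sketch of one way to keep third-party code out of the main tree so each dependency can be bumped independently, assuming a CMake 3.14+ build (which the question does not specify); the repository URL, tag, and target names are only examples:

        include(FetchContent)
        # Pin each external project by tag; upgrading to a new release is a one-line change.
        FetchContent_Declare(
          zlib
          GIT_REPOSITORY https://github.com/madler/zlib.git
          GIT_TAG        v1.2.13
        )
        FetchContent_MakeAvailable(zlib)
        add_executable(my_app main.cpp)
        target_link_libraries(my_app PRIVATE zlibstatic)  # target name comes from that project's own CMakeLists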

    Read the article

  • How to handle a large table in MySQL?

    - by Frantz Miccoli
    I have a database used to store items and properties about these items. The number of properties is extensible, so there is a join table to store each property value associated with an item. CREATE TABLE `item_property` ( `property_id` int(11) NOT NULL, `item_id` int(11) NOT NULL, `value` double NOT NULL, PRIMARY KEY (`property_id`,`item_id`), KEY `item_id` (`item_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci; This database has two goals: storing (which has first priority and has to be very quick, I would like to perform many inserts (hundreds) in a few seconds), and retrieving data (selects using item_id and property_id) (this is a second priority, it can be slower but not too much because this would ruin my usage of the DB). Currently this table holds 1.6 billion entries and a simple count can take up to 2 minutes... Inserting isn't fast enough to be usable. I'm using Zend_Db to access my data and would really prefer suggestions that don't involve developing anything on the PHP side. Thanks for your advice!
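
    A hedged sketch of the usual first step for InnoDB insert throughput: group many rows into each INSERT and wrap batches in a single transaction (the values shown are placeholders):

        START TRANSACTION;
        INSERT INTO item_property (property_id, item_id, value) VALUES
          (1, 100, 0.5),
          (1, 101, 0.7),
          (2, 100, 1.2);
          -- ... a few hundred rows per statement, several statements per transaction ...
        COMMIT;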

    Read the article

  • Click to make body text larger | JavaScript

    - by Wayne
    Please note this is just an example: <img src="img/normal-font.png" onclick="javascript:document.body.style.fontSize = '13px';" /> &nbsp; <img src="img/medium-font.png" onclick="javascript:document.body.style.fontSize = '14px';" /> &nbsp; <img src="img/large-font.png" onclick="javascript:document.body.style.fontSize = '15px';" /> The body text does indeed enlarge if I choose one of them, but what I'd like to include is remembering which option you've chosen by reading cookies. In fact, I have no experience in creating cookies in JS, only in PHP. Could someone come up with the simplest example of how to use a cookie to remember my option, such that whenever someone clicks another one, the cookie that was last set is replaced? E.g. if the cookie value is 15px, it should be updated or replaced with a new value of 13px, and so on. Thanks :)
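
    A hedged sketch (the cookie name fontSize and the image paths are placeholders): setting a cookie with the same name simply overwrites the previous value, so no explicit removal is needed:

        function setFontSize(px) {
            document.body.style.fontSize = px;
            // Re-setting a cookie with the same name replaces its old value.
            document.cookie = "fontSize=" + px + "; path=/; max-age=31536000"; // keep for ~1 year
        }
        function restoreFontSize() {
            var match = document.cookie.match(/(?:^|; )fontSize=([^;]+)/);
            if (match) document.body.style.fontSize = match[1];
        }
        window.onload = restoreFontSize;
        // Usage: <img src="img/large-font.png" onclick="setFontSize('15px')" />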

    Read the article

  • Strange behavior with large Object Types

    - by Peter Lang
    I noticed that calling a method on an Oracle object type takes longer when the instance gets bigger. The code below just adds rows to a collection stored in the object type and calls the empty dummy procedure in the loop. Calls take longer when more rows are in the collection. When I just remove the call to dummy, performance is much better (the collection still contains the same number of records). Timings for three runs: calling dummy gives 11, 81, 158; not calling dummy gives 0, 0, 0. Code to reproduce:

        Create Type t_tab Is Table Of VARCHAR2(10000);

        Create Type test_type As Object(
          tab t_tab,
          Member Procedure dummy
        );

        Create Type Body test_type As
          Member Procedure dummy As
          Begin
            Null; --# Do nothing
          End dummy;
        End;

        Declare
          v_test_type test_type := New test_type( New t_tab() );

          Procedure run_test As
            start_time NUMBER := dbms_utility.get_time;
          Begin
            For i In 1 .. 200 Loop
              v_test_Type.tab.Extend;
              v_test_Type.tab(v_test_Type.tab.Last) := Lpad(' ', 10000);
              v_test_Type.dummy(); --# Removed this line in second test
            End Loop;
            dbms_output.put_line( dbms_utility.get_time - start_time );
          End run_test;
        Begin
          run_test;
          run_test;
          run_test;
        End;

    I tried with both 10g and 11g. Can anyone explain/reproduce this behavior?
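
    One thing worth experimenting with (an assumption on my part, not something from the original post): member procedures receive the object instance through an implicit SELF parameter, and redeclaring it with NOCOPY hints that the ever-growing instance should not be copied on every call:

        Create Type test_type As Object(
          tab t_tab,
          Member Procedure dummy( SELF In Out NoCopy test_type )
        );

        Create Type Body test_type As
          Member Procedure dummy( SELF In Out NoCopy test_type ) As
          Begin
            Null;
          End dummy;
        End;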

    Read the article

  • Database for managing large volumes of (system) metrics

    - by symcbean
    Hi, I'm looking at building a system for managing and reporting stats on web page performance. I'll be collecting a lot more stats than are available in the standard log formats (approx. 20 metrics), but compared to most types of database applications, the base data structure will be very simple. My problem is that I'll be accumulating a lot of data - in the region of 100,000 records (i.e. sets of metrics) per hour. Of course, resources are very limited! So that it's possible to sensibly interact with the data, I'd need to consolidate each metric into one-minute bins, broken down by URL, then for anything more than 1 day old, consolidate into 10-minute bins, then at 1 week, hourly bins. At the front end, I want to provide a view (preferably as plots) of the last hour of data, with the facility for users to drill up/down through defined hierarchies of URLs (which do not always map directly to the hierarchy expressed in the path of the URL) and to view different time frames. Rather than coding all this myself and using a relational database, I was wondering if there were tools available which would facilitate both the management of the data and the reporting. I had a look at Mondrian, however I can't see from the documentation whether it's possible to drop the more granular information while maintaining the consolidated views of the data. RRDTool looks promising in terms of managing the data consolidation, but seems rather limited in terms of querying the dataset as a multi-dimensional/relational database. What else should I be looking at?
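
    For reference, the consolidation step itself is a plain roll-up query; a hedged sketch in MySQL-flavoured SQL with hypothetical table and column names (metrics_raw, metrics_1min, recorded_at):

        -- Roll raw samples older than a day into one-minute bins per URL and metric.
        INSERT INTO metrics_1min (url, minute_start, metric, avg_value, sample_count)
        SELECT url,
               DATE_FORMAT(recorded_at, '%Y-%m-%d %H:%i:00'),
               metric,
               AVG(value),
               COUNT(*)
        FROM   metrics_raw
        WHERE  recorded_at < NOW() - INTERVAL 1 DAY
        GROUP  BY url, DATE_FORMAT(recorded_at, '%Y-%m-%d %H:%i:00'), metric;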

    Read the article

  • Redirect Large Number of URLs (HTML Files) to Wordpress

    - by Chetan
    Hi, I have over 2000 HTML files that are now in a WordPress blog. I have a URL map from each Old_file.html to its new WordPress URL. I want a 301 redirect but don't want to add 2000 lines to .htaccess. Can you please suggest how to accomplish this using PHP, so that when there is a request for an old URL, the PHP script looks it up in the database and redirects (301) to the new URL? Thanks.
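
    A hedged sketch, assuming the old-to-new map is loaded into a hypothetical url_map table and that a single mod_rewrite rule such as RewriteRule \.html$ /redirect.php [L] routes every .html request to this script:

        <?php
        // Look up the requested old URL and issue a single 301 to the new permalink.
        $old  = $_SERVER['REQUEST_URI'];
        $pdo  = new PDO('mysql:host=localhost;dbname=blog', 'user', 'pass');
        $stmt = $pdo->prepare('SELECT new_url FROM url_map WHERE old_url = ?');
        $stmt->execute(array($old));
        $new = $stmt->fetchColumn();
        if ($new) {
            header('Location: ' . $new, true, 301);
        } else {
            header('HTTP/1.0 404 Not Found');
        }
        exit;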

    Read the article

  • Trace large C++ code base?

    - by anon
    Problem: I have just inherited this 200K LOC source code base. There's only a small part of it I need (and I want to rip all else out). What I would like to do is to be able to: 1) run the program a few times, 2) have something (here's where you come in) record which lines of code get executed, 3) then rip out all the irrelevant lines of code. I realize this has "problems" in the form of "different args will take different paths"; but for my needs, it's very specific, and I just want something to get me started on ripping stuff out (I'll fine-tune those special cases later). Thanks!
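
    A hedged sketch of one common way to get per-line execution counts, assuming a GCC toolchain (file names and arguments are placeholders; clang accepts the same --coverage flag):

        # Build with coverage instrumentation.
        g++ --coverage -O0 -g -c foo.cpp
        g++ --coverage -O0 -g -c bar.cpp
        g++ --coverage foo.o bar.o -o myprog
        # Run the program a few times with representative arguments.
        ./myprog typical-args
        ./myprog other-args
        # Per-line execution counts end up in foo.cpp.gcov / bar.cpp.gcov.
        gcov -b foo.cpp bar.cpp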

    Read the article

  • Using Mercurial in a Large Organization

    - by Kristopher Johnson
    I've been using Mercurial for my own personal projects for a while, and I love it. My employer is considering a switch from CVS to SVN, but I'm wondering whether I should push for Mercurial (or some other DVCS) instead. One wrinkle with Mercurial is that it seems to be designed around the idea of having a single repository per "project". In this organization, there are dozens of different executables, DLLs, and other components in the current CVS repository, hierarchically organized. There are a lot of generic reusable components, but also some customer-specific components, and customer-specific configurations. The current build procedures generally get some set of subtrees out of the CVS repository. If we move from CVS to Mercurial, what is the best way to organize the repository/repositories? Should we have one huge Mercurial repository containing everything? If not, how fine-grained should the smaller repositories be? I think people will find it very annoying if they have to pull and push updates from a lot of different places, but they will also find it annoying if they have to pull/push the entire company codebase. Anybody have experience with this, or advice?
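
    One middle ground worth knowing about is Mercurial's subrepository feature: a thin top-level repository lists the component repositories in an .hgsub file, so people can clone only the components they need or pull the whole tree in one go. A hedged sketch with hypothetical paths and URLs:

        # .hgsub in the root of the top-level repository
        libs/common      = https://hg.example.com/libs/common
        apps/billing     = https://hg.example.com/apps/billing
        customers/acme   = https://hg.example.com/customers/acme-config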

    Read the article

  • Import small number of records from a very large CSV file in Biztalk 2006

    - by rwmnau
    I have a Biztalk project that imports an incoming CSV file and dumps it to a database table. The import works fine, but I only need to keep about 200-300 records from a file with upwards of a million rows. My orchestration discards these rows, but the problem is that the flat file I'm importing is still 250MB, and when converted to XML using a regular flat file pipeline, it takes hours to process and sometimes causes the server to run out of memory. Is there something I can do to have the custom pipeline itself discard rows I don't care about? The very first item in each CSV row is one of a few strings, and I only want to keep rows that start with a certain string. Thanks for any help you're able to provide.
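
    A hedged sketch of just the filtering idea (the BizTalk pipeline-component plumbing around it is omitted, and the names are placeholders): stream the incoming file and keep only rows whose first field starts with the wanted prefix, so the flat-file disassembler never sees the other million-odd rows:

        using System.IO;

        static Stream FilterCsv(Stream input, string keepPrefix)
        {
            var filtered = new MemoryStream();
            var writer = new StreamWriter(filtered);
            using (var reader = new StreamReader(input))
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                {
                    // Keep only the 200-300 rows that actually matter downstream.
                    if (line.StartsWith(keepPrefix))
                        writer.WriteLine(line);
                }
            }
            writer.Flush();
            filtered.Position = 0;
            return filtered;
        }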

    Read the article

  • Stack Overflow Accessing Large Vector

    - by cam
    I'm getting a stack overflow on the first iteration of this for loop for (int q = 0; q < SIZEN; q++) { cout<<nList[q]<<" "; } nList is a vector of type int with 376 items. The size of nList depends on a constant defined in the program. The program works for every value up to 376, then after 376 it stops working. Any thoughts?
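
    Nothing in the loop itself allocates on the stack, so the overflow usually comes from whatever fills nList (deep recursion, or a large stack-allocated array elsewhere). A hedged sketch of a bounds-safe version of the same loop, useful for ruling out SIZEN running past the vector's real size:

        #include <iostream>
        #include <vector>

        void printList(const std::vector<int>& nList) {
            // Iterate by the container's own size; .at() throws instead of reading past the end.
            for (std::size_t q = 0; q < nList.size(); ++q)
                std::cout << nList.at(q) << " ";
            std::cout << "\n";
        }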

    Read the article

  • Loading large amounts of data to an Oracle SQL Database

    - by James
    Hey all, I was wondering if anyone had any experience with what I am about to embark on. I have several CSV files which are all around a GB or so in size, and I need to load them into an Oracle database. While most of my work after loading will be read-only, I will need to load updates from time to time. Basically I just need a good tool for loading several rows of data at a time into my DB. Here is what I have found so far: I could use SQL*Loader to do a lot of the work; I could use bulk-insert commands; or some sort of batch insert using prepared statements might be a good idea. I guess I was wondering what everyone thinks is the fastest way to get this insert done. Any tips?
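
    A hedged sketch of a SQL*Loader control file (the table, columns, and file name are placeholders), since direct-path SQL*Loader is typically the fastest route for CSV files of this size; it would be run with something like sqlldr user/pass control=load.ctl direct=true:

        -- load.ctl
        LOAD DATA
        INFILE 'data1.csv'
        APPEND
        INTO TABLE my_table
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        (col_a, col_b, col_c)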

    Read the article

  • Large static arrays are slowing down class load, need a better/faster lookup method

    - by Visualize
    I have a class with a couple static arrays: an int[] with 17,720 elements a string[] with 17,720 elements I noticed when I first access this class it takes almost 2 seconds to initialize, which causes a pause in the GUI that's accessing it. Specifically, it's a lookup for Unicode character names. The first array is an index into the second array. static readonly int[] NAME_INDEX = { 0x0000, 0x0001, 0x0005, 0x002C, 0x003B, ... static readonly string[] NAMES = { "Exclamation Mark", "Digit Three", "Semicolon", "Question Mark", ... The following code is how the arrays are used (given a character code). [Note: This code isn't a performance problem] int nameIndex = Array.BinarySearch<int>(NAME_INDEX, code); if (nameIndex > 0) { return NAMES[nameIndex]; } I guess I'm looking at other options on how to structure the data so that 1) The class is quickly loaded, and 2) I can quickly get the "name" for a given character code. Should I not be storing all these thousands of elements in static arrays?
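
    A hedged sketch of one alternative (all names here are hypothetical): ship the table as an embedded binary resource and read it on first use, so the class no longer carries two 17,720-element compiled initializers:

        using System.IO;

        static int[] nameIndex;
        static string[] names;

        static void EnsureLoaded()
        {
            if (names != null) return;
            // "CharNames.bin" is a hypothetical embedded resource: an int32 count,
            // followed by (int32 code, string name) pairs written with BinaryWriter.
            using (Stream s = typeof(UnicodeNames).Assembly.GetManifestResourceStream("CharNames.bin"))
            using (var reader = new BinaryReader(s))
            {
                int count = reader.ReadInt32();
                nameIndex = new int[count];
                names = new string[count];
                for (int i = 0; i < count; i++)
                {
                    nameIndex[i] = reader.ReadInt32();
                    names[i] = reader.ReadString();
                }
            }
        }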

    Read the article

  • How do large sites accomplish row-level permissions?

    - by JayD3e
    So I am making a small site using CakePHP, and my ACL is set up so that every time a piece of content is created, an ACL rule is created to link the owner of the piece of content to the actual content. This allows each owner to edit/delete their own content. This method just seems so inefficient, because there are as many ACL rules as there are pieces of content in the database. I was curious: how do big sites, with millions of pieces of content, solve this problem?
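
    For contrast, a hedged sketch of the ownership-column approach most large sites rely on (table and parameter names are hypothetical): the content row records its owner, so an edit is authorised at query time and no per-item ACL row exists at all:

        -- The UPDATE touches 0 rows unless the current user owns the post.
        UPDATE posts
        SET    body = :new_body
        WHERE  id = :post_id
          AND  owner_id = :current_user_id;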

    Read the article

  • Binomial test in Python for very large numbers

    - by Morlock
    I need to do a binomial test in Python that allows calculation for values of n on the order of 10000. I have implemented a quick binomial_test function using scipy.misc.comb; however, it is pretty much limited to around n = 1000, I guess because it exceeds the largest representable number while computing the factorials or the combinatorial itself. Here is my function: from scipy.misc import comb def binomial_test(n, k): """Calculate binomial probability """ p = comb(n, k) * 0.5**k * 0.5**(n-k) return p How could I use a native Python (or numpy, scipy...) function in order to calculate that binomial probability? If possible, I need scipy 0.7.2-compatible code. Many thanks!
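
    A hedged sketch of the usual workaround: compute the probability in log space with the log-gamma function from scipy.special (available well before 0.7.2), so n in the tens of thousands never overflows:

        import numpy as np
        from scipy.special import gammaln

        def binomial_test(n, k, p=0.5):
            """P(X = k) for X ~ Binomial(n, p), via log-gamma to avoid overflow."""
            log_pmf = (gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
                       + k * np.log(p) + (n - k) * np.log(1 - p))
            return np.exp(log_pmf)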

    Read the article

  • Optimal setup for Doxygen in a large multi-application COM project

    - by John
    A system has up to 100 VC++ projects, each spitting out a DLL or EXE. In addition there are many COM components with IDL and generated .h/.c files. What's 'the right way', or at least a good way, to organise this with Doxygen? One overall Doxygen project, or one per project/solution? And what's the right way to handle COM, which has generated code and a lot of 'fluff' that will bloat the generated HTML files?
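
    A hedged Doxyfile sketch of the per-project-plus-tag-files arrangement (paths and file names are placeholders): each project gets its own Doxygen run, cross-references are stitched together with tag files, and MIDL-generated COM sources are excluded so they don't bloat the HTML:

        # Excerpt from one project's Doxyfile
        EXCLUDE_PATTERNS  = *_i.c *_i.h *_p.c dlldata.c */GeneratedFiles/*
        GENERATE_TAGFILE  = doc/mylib.tag
        # Link against documentation already generated for other projects:
        TAGFILES          = ../otherlib/doc/otherlib.tag=../../otherlib/doc/html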

    Read the article

  • Find a specific couple of lines of code from large git repo

    - by mustISignUp
    So I remember that I once did something in another project (and later removed it) that could be useful now. Thanks to some other SO post I managed to search for a half-remembered string: git grep halfRemeberedNameOfFunction $(git log -g --pretty=format:%h) and, yay, got some results: 2d0bcde:path/to/project/file.c: result = halfRemeberedNameOfFunction( data ); 65fc672:path/to/project/file.c: result = halfRemeberedNameOfFunction( data ); 24f2858:path/to/project/file.c: result = halfRemeberedNameOfFunction( data ); 252e3a5:path/to/project/file.c: result = halfRemeberedNameOfFunction( data, args ); b58bc0b:path/to/project/file.c: result = _halfRemeberedNameOfFunction( data, options ); dce8d9d:path/to/project/file.c: result = halfRemeberedNameOfFunction( data, moreData ); But how do I get that file at one of those revisions? Many thanks
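
    For reference, git show can print any file as it existed at a given commit; a short sketch using the hash and path from the grep output above:

        # Print the file as it was at that revision (redirect it to keep a working copy).
        git show 2d0bcde:path/to/project/file.c
        git show 2d0bcde:path/to/project/file.c > file_at_2d0bcde.c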

    Read the article

  • Handling very large lists of objects without paging?

    - by user246114
    Hi, I have a class which can contain many small elements in a list. It looks like: public class Farm { private ArrayList<Horse> mHorses; } I'm just wondering what will happen if the mHorses list grew to something crazy like 15,000 elements. I'm assuming that trying to write and read this from the datastore would be crazy, because I'd get killed in the serialization process. It's important that I can get the entire list in one shot without paging, and each Horse element may only have two string properties in it, so they are pretty lightweight: public class Horse { private String mId; private String mName; } I don't need these horses indexed at all. Does it sound reasonable to just store the mHorses list as a raw Text field, and force my clients to do the deserialization? Something like: public class Farm { private Text mHorsesSerialized; } Then whenever the client receives a Farm instance, it has to take the raw string of horses and split it in order to reinstantiate the list, something like: /* GWT client perhaps */ Farm farm = rpcCall.getMyFarm(); String horsesSerialized = farm.getHorses(); String[] horseBlocks = horsesSerialized.split(","); for (int i = 0; i < horseBlocks.length; i++) { /* .. continue deserializing the individual objects ... */ } Yeah... so hopefully it would be quick to read a Farm instance from the datastore, and the serialization penalty is paid by the client. Thanks
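
    A hedged sketch of the matching server-side packing step (the delimiters and accessor names are hypothetical, and it assumes ids and names never contain '|' or ','); com.google.appengine.api.datastore.Text simply wraps an unindexed string:

        StringBuilder sb = new StringBuilder();
        for (Horse h : mHorses) {
            // One record per horse: id|name, records separated by commas.
            sb.append(h.getId()).append('|').append(h.getName()).append(',');
        }
        Text horsesSerialized = new Text(sb.toString());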

    Read the article

  • Processing large recordsets in Rails

    - by japancheese
    Hello, I'm trying to perform a daily operation on a larger than normal dataset (2m+ records). However, Rails seems to take a very long time performing operations on such a dataset. Operations like Dataset.all.each do |data| ... end take a very long time to complete (I assume this is because it can't fit all the items into memory at once, right?). Does anyone have any strategies on how I could handle this situation? I know SQL would probably speed up the process, but I'm looking to use the Rails environment as I can do many more complicated things to the data than I can with just SQL statements.
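
    A hedged sketch of the usual Rails-side answer: ActiveRecord's find_each (or find_in_batches) walks the table in fixed-size batches instead of instantiating all 2M+ records at once, so memory stays flat while the per-record Ruby logic stays exactly as it was:

        # Loads 1,000 records at a time (the default); nothing inside the block changes.
        Dataset.find_each(:batch_size => 1000) do |data|
          # ... same per-record work as before ...
        end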

    Read the article

  • utl_file.FCLOSE() is slow with large files

    - by Dan
    We are using utl_file in Oracle 10g to copy a BLOB from a table row to a file on the file system, and when we call utl_file.fclose() it takes a long time. It's a 10 MB file, not very big, and it takes just over a minute to complete. Anyone know why this would be so slow? Thanks
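
    For comparison, a hedged sketch of a chunked BLOB-to-file loop (the directory object, table, and file name are hypothetical); writing with put_raw's autoflush argument pushes each chunk to disk as it is read, which makes it easier to see where the time actually goes rather than having everything settle at fclose:

        DECLARE
          v_blob   BLOB;
          v_file   utl_file.file_type;
          v_buffer RAW(32767);
          v_amount BINARY_INTEGER := 32767;
          v_pos    INTEGER := 1;
          v_len    INTEGER;
        BEGIN
          SELECT blob_col INTO v_blob FROM my_table WHERE id = 1;
          v_len  := dbms_lob.getlength(v_blob);
          v_file := utl_file.fopen('MY_DIR', 'out.bin', 'wb', 32767);
          WHILE v_pos <= v_len LOOP
            v_amount := 32767;
            dbms_lob.read(v_blob, v_amount, v_pos, v_buffer);
            utl_file.put_raw(v_file, v_buffer, TRUE);  -- autoflush each chunk
            v_pos := v_pos + v_amount;
          END LOOP;
          utl_file.fclose(v_file);
        END;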

    Read the article

  • Tools for viewing logs of unlimited size

    - by jkff
    It's no secret that application logs can go well beyond the limits of naive log viewers, and the desired viewer functionality (say, filtering the log based on a condition, highlighting particular message types, splitting it into sublogs based on a field value, merging several logs based on a time axis, bookmarking, etc.) is beyond the abilities of large-file text viewers. I wonder: whether decent specialized applications exist (I haven't found any), and what functionality one might expect from such an application. (I'm asking because my student is writing such an application, and the functionality above has already been implemented to a certain extent of usability.)

    Read the article
