Search Results

Search found 45804 results on 1833 pages for 'large files'.

  • Optimize included files and uses in Delphi

    - by Roland Bengtsson
    I am trying to improve the performance of Delphi 2007 and Code Insight. The application has 483 files added to the DPR file. It may be my imagination, but I feel I get better performance from Code Insight simply by re-adding all the files to the DPR. I also think (correct me if I'm wrong) that every file referenced in a uses clause should also be included in the DPR file for best performance. My question is: is there a tool that scans the whole project and lists which files are missing from the DPR file and which files can be removed? A list of uses entries that could be removed from the PAS files would also be nice. Regards

  • Hang during databinding of large amount of data to WPF DataGrid

    - by nihi_l_ist
    I'm using the WPF Toolkit DataGrid control and do the binding this way:

        <WpfToolkit:DataGrid x:Name="dgGeneral"
                             SelectionMode="Single" SelectionUnit="FullRow"
                             AutoGenerateColumns="False"
                             CanUserAddRows="False" CanUserDeleteRows="False"
                             Grid.Row="1"
                             ItemsSource="{Binding Path=Conversations}" >

        public List<CONVERSATION> Conversations
        {
            get { return conversations; }
            set
            {
                if (conversations != value)
                {
                    conversations = value;
                    NotifyPropertyChanged("Conversations");
                }
            }
        }

        public event PropertyChangedEventHandler PropertyChanged;

        public void NotifyPropertyChanged(string propertyName)
        {
            if (PropertyChanged != null)
            {
                PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
            }
        }

        public void GenerateData()
        {
            BackgroundWorker bw = new BackgroundWorker();
            bw.WorkerSupportsCancellation = bw.WorkerReportsProgress = true;
            List<CONVERSATION> list = new List<CONVERSATION>();
            bw.DoWork += delegate { list = RefreshGeneralData(); };
            bw.RunWorkerCompleted += delegate
            {
                try
                {
                    Conversations = list;
                }
                catch (Exception ex)
                {
                    CustomException.ExceptionLogCustomMessage(ex);
                }
            };
            bw.RunWorkerAsync();
        }

    Then, in the main window, I call GenerateData() after setting the window's DataContext to an instance of the class containing it. RefreshGeneralData() returns the list of data I want, and it returns quickly. Overall there are about 2,000 records and 6 columns (I'm not posting the code used during the grid's initialization, because I don't think it can be the cause), and the grid hangs for almost 10 seconds!
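    One thing worth ruling out first (a sketch, not a confirmed diagnosis): with only ~2,000 rows, a multi-second hang on assignment is usually row virtualization being defeated, for example by hosting the DataGrid in a StackPanel or ScrollViewer that gives it unlimited height, so every row is generated at once. Keeping the grid's height constrained and virtualization explicitly on looks like this:

        <!-- a sketch: the surrounding layout must give the grid a bounded
             height, or virtualization cannot take effect -->
        <WpfToolkit:DataGrid x:Name="dgGeneral"
                             EnableRowVirtualization="True"
                             VirtualizingStackPanel.VirtualizationMode="Recycling"
                             ItemsSource="{Binding Path=Conversations}" />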

  • Problem processing large data using Applet-Servlet communication

    - by Marquinio
    Hi everyone. I have an Applet that makes a request to a Servlet. On the servlet side, a PrintWriter writes the response back to the Applet:

        out.println("Field1|Field2|Field3|Field4|Field5......|Field10");

    There are about 15,000 records, so out.println() gets executed about 15,000 times. The problem is that when the Applet gets the response from the Servlet, it takes about 15 minutes to process the records. I placed System.out.println calls, and processing pauses at around 5,000; after 15 minutes it continues processing and then is done. Has anyone faced a similar problem? The servlet takes about 2 seconds to execute, so it seems the browser/Applet is too slow to process the records. Any ideas appreciated. Thanks.
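    A pattern that often produces exactly this kind of mid-stream stall on the applet side (a sketch; the URLConnection setup and the imports from java.io, java.net and java.util are assumed, not taken from the question) is reading the response unbuffered or accumulating it by string concatenation. Buffered, line-at-a-time reading sidesteps both:

        // a sketch: buffered reading of the pipe-delimited servlet response
        private List<String[]> readRecords(URLConnection connection) throws IOException {
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(connection.getInputStream()));
            List<String[]> records = new ArrayList<String[]>();
            String line;
            while ((line = in.readLine()) != null) {
                records.add(line.split("\\|"));  // ten pipe-separated fields
            }
            in.close();
            return records;
        }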

  • bitshift large strings for encoding QR Codes

    - by icekreaman
    As an example, suppose a QR Code data stream contains 55 data words (each one byte in length) and 15 error correction words (again one byte each). The data stream begins with a 12-bit header and ends with four 0 bits. So 12 + 4 bits of header/footer and 15 bytes of error correction leave me 53 bytes to hold 53 alphanumeric characters. The 53 bytes of data and 15 bytes of EC are supplied in a string of length 68 (str68). The problem seems simple enough: concatenate 2 bytes of (right-shifted) header data with str68 and then left-shift the entire 70 bytes by 4 bits. This is the first time in many years of programming that I have ever needed to do something like this; I am a C and bit-shifting noob, so please be gentle... I have done a little investigation and so far have not been able to figure out how to bit-shift 70 bytes of data; any help would be greatly appreciated. Larger QR codes can hold 2000 bytes of data...
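    A multi-byte shift can be done one byte at a time, pulling the carry from the following byte (a minimal sketch, assuming buf[0] is the most significant byte, as in the framing described above):

        #include <stddef.h>
        #include <stdint.h>

        /* Shift len bytes left by 4 bits, carrying the top nibble of each
           following byte into the low nibble of the current one. */
        static void shift_left_4(uint8_t *buf, size_t len)
        {
            for (size_t i = 0; i < len; i++) {
                uint8_t carry = (i + 1 < len) ? (uint8_t)(buf[i + 1] >> 4) : 0;
                buf[i] = (uint8_t)((buf[i] << 4) | carry);
            }
        }

    Called as shift_left_4(buf, 70), it works for any length, so the same routine covers the larger QR code sizes.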

  • Nginx: Disallow index.html in URL

    - by Martin Vilcans
    We're generating a site consisting of only static files (using Assemble). Having the .html extension on URLs looks so nineties, so we generate every static HTML file in its own directory and call it index.html. For example, the URL http://www.example.com/foo/bar/ is in the file /var/www/foo/bar/index.html. This works well, but there is one small thing nagging me: there are now two possible URLs to the same resource:

        http://www.example.com/foo/bar/            (slash URL)
        http://www.example.com/foo/bar/index.html  (index.html URL)

    By accident someone may link to the index.html form of the URL, which is bad for SEO and looks ugly (remember the nineties?). Is it possible in Nginx to give a 404 error on the index.html URL, but serve the slash URL? I tried this:

        location ~ /index\.html$ {
            return 404;
        }

    But it seems that Nginx does some internal rewrite of the slash URL to the index.html URL and then matches this location, so we get a 404 even on the slash URL. Note that to catch mistakes, we want index.html URLs to be an error, not just a redirect to the slash URL.
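    Since the internal index rewrite changes the URI that location matching sees, one way around it (a sketch, untested against this exact setup) is to test $request_uri, which always holds the original URI as the client sent it:

        # a sketch: $request_uri is the raw client request line, so the
        # internal rewrite to index.html does not affect this check
        if ($request_uri ~ "/index\.html$") {
            return 404;
        }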

  • filesize of large files in C

    - by endeavormac
    How can I get the size of a file in C when the file is larger than 4 GB? ftell returns a 4-byte signed long, limiting it to files of 2 GB. stat has a field of type off_t, which is also 4 bytes (not sure of the sign), so at most it can tell me the size of a 4 GB file. What if the file is larger than 4 GB?
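    On POSIX systems the usual fix (a sketch; details vary by platform) is to build with 64-bit file offsets so that off_t widens to 64 bits even on 32-bit targets, then use plain stat or ftello:

        /* a sketch: define _FILE_OFFSET_BITS before any include (or pass
           -D_FILE_OFFSET_BITS=64 to the compiler) so off_t is 64-bit */
        #define _FILE_OFFSET_BITS 64
        #include <sys/stat.h>

        long long file_size(const char *path)
        {
            struct stat st;
            if (stat(path, &st) != 0)
                return -1;              /* caller can inspect errno */
            return (long long)st.st_size;
        }

    On Windows the equivalents are _stati64 or GetFileSizeEx, which report the size as a 64-bit value.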

  • Timeout on Large mySQL Query

    - by Bob Stewart
    I have this query:

        $theQuery = mysql_query("SELECT phrase, date FROM wordList WHERE group='nouns'");
        while ($getWords = mysql_fetch_array($theQuery)) {
            echo "$getWords[phrase] created on $getWords[date]<br>";
        }

    The data table "wordList" contains 75,000 records in the group "nouns", and every time I load the code I am returned an error. Help!
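    One thing to check before blaming the row count (a sketch of the likely fix): GROUP is a reserved word in MySQL, so the column name must be backtick-quoted or the query fails outright, in which case mysql_query() returns false rather than a result resource:

        // a sketch: quote the reserved column name and surface any error
        $theQuery = mysql_query("SELECT phrase, date FROM wordList WHERE `group`='nouns'");
        if (!$theQuery) {
            die(mysql_error());
        }
        while ($getWords = mysql_fetch_array($theQuery)) {
            echo "{$getWords['phrase']} created on {$getWords['date']}<br>";
        }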

  • Python 3.1 - Memory Error during sampling of a large list

    - by jimy
    The input list can be more than 1 million numbers. When I run the following code with a smaller 'repeats' value, it's fine:

        import random

        def sample(x):
            length = 1000000
            new_array = random.sample(list(x), length)
            return new_array

        def repeat_sample(x):
            repeats = 100
            list_of_samples = []
            for i in range(repeats):
                list_of_samples.append(sample(x))
            return list_of_samples

        repeat_sample(large_array)

    However, using a high repeats value such as the 100 above results in a MemoryError. The traceback is as follows:

        Traceback (most recent call last):
          File "C:\Python31\rnd.py", line 221, in <module>
            STORED_REPEAT_SAMPLE = repeat_sample(STORED_ARRAY)
          File "C:\Python31\rnd.py", line 129, in repeat_sample
            list_of_samples.append(sample(x))
          File "C:\Python31\rnd.py", line 121, in sample
            new_array = random.sample(list(x), length)
          File "C:\Python31\lib\random.py", line 309, in sample
            result = [None] * k
        MemoryError

    I am assuming I'm running out of memory. I do not know how to get around this problem. Thank you for your time!
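    One easy saving (a sketch, with the assumption that the samples do not all need to be held at once): list(x) copies the entire million-element input on every call, so 100 calls allocate 100 throwaway copies on top of the samples themselves. Materializing the pool once helps, and a generator hands back one sample at a time:

        import random

        def repeat_sample(x, repeats=100, length=1000000):
            pool = list(x)  # copy the input once, not once per sample
            for _ in range(repeats):
                yield random.sample(pool, length)  # one sample alive at a time

        for s in repeat_sample(large_array):
            process(s)  # 'process' is a placeholder for the real consumer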

  • Assigning large UInt32 constants in VB.Net

    - by Kumba
    I inquired about VB's erratic behavior of treating all numerics as signed types back in this question, and from the accepted answer there was able to get by. Per that answer ("Visual Basic Literals"): keep in mind you can add literals to your code in VB.NET and explicitly state constants as unsigned. So I tried this:

        Friend Const POW_1_32 As UInt32 = 4294967296UI

    And VB.NET throws an Overflow error in the IDE. Pulling out the integer overflow checks doesn't seem to help -- this appears to be a flaw in the IDE itself. This, however, doesn't generate an error:

        Friend Const POW_1_32 As UInt64 = 4294967296UL

    So this suggests to me that the IDE isn't properly parsing the code and understanding the difference between Int32 and UInt32. Any suggested workarounds and/or possible clues on when MS will make unsigned data types intrinsic to the framework instead of the hacks they currently are?
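    For reference, a boundary sketch: UInt32.MaxValue is 4,294,967,295, so 4294967296 (2^32) is genuinely out of range for a UInt32 regardless of the literal suffix; the constant only fits once widened to 64 bits:

        ' a sketch of the UInt32 boundary
        Friend Const MAX_U32 As UInt32 = 4294967295UI   ' 2^32 - 1: compiles
        Friend Const POW_1_32 As UInt64 = 4294967296UL  ' 2^32: needs 64 bits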

  • Extract anything that looks like links from a large amount of data in Python

    - by Riz
    Hi, I have around 5 GB of HTML data which I want to process to find links to a set of websites and perform some additional filtering. Right now I use a simple regexp for each site and iterate over them, searching for matches. In my case links can be outside of "a" tags and be malformed in many ways (like a "\n" in the middle of a link), so I try to grab as many "links" as I can and check them later in other scripts (so no BeautifulSoup/lxml/etc.). The problem is that my script is pretty slow, so I am thinking about ways to speed it up. I am writing a set of tests to check different approaches, but hope to get some advice :) Right now I am thinking about getting all links without filtering first (maybe using a C module or a standalone app, which doesn't use regexps but a simple search to find the start and end of every link) and then using regexps to match the ones I need.
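    Before dropping to C, it may be worth folding the per-site patterns into one compiled alternation so the 5 GB is scanned once instead of once per site (a sketch; the site list, pattern shape, and chunk size are all assumptions):

        import re

        SITES = ["example.com", "example.org"]  # placeholder site list
        LINK_RE = re.compile(
            r"https?://(?:%s)[^\s\"'<>]*" % "|".join(map(re.escape, SITES)))

        def iter_links(path, chunk_size=1 << 20):
            """Scan a large file in chunks, yielding candidate links."""
            tail = ""
            f = open(path, "r", errors="ignore")
            while True:
                chunk = f.read(chunk_size)
                if not chunk:
                    break
                buf = tail + chunk
                for m in LINK_RE.finditer(buf):
                    yield m.group(0)
                tail = buf[-256:]  # overlap so links split across chunks
                                   # survive; may duplicate matches near the
                                   # border, so dedupe downstream
            f.close()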

  • Managing Large Database Entity Models

    - by ChiliYago
    I would like to hear how others are working (effectively or not) with the Visual Studio Entity Designer when many database tables exist. It seems to me that navigating the designer to find what you are looking for is tough enough with just a few tables, but what about a database with, say, 100 to 200 tables? When a table change is made at the database level, how is the model updated? Does it overwrite any manual changes you have made to the model? How would you quickly find an entity in the designer to make a change or inspect one? It seems unrealistic to be scrolling around looking for a specific entity. Thanks for your feedback!

  • ActionBar SpinnerAdapter Large Branding followed by selection (spinner)

    - by SatanEnglish
    I'm trying to implement a spinner in the action bar that has a brand name above it, using the ActionBar's setListNavigationCallbacks method if possible:

        actionBar.setNavigationMode(ActionBar.NAVIGATION_MODE_LIST);
        actionBar.setListNavigationCallbacks(mSpinnerAdapter, null);

    Can anyone give me an idea of how to do this? I would put some code here, but I have no idea where to begin, as I have not managed to find relevant information yet. Edit: using v4.0.
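    A basic list-navigation setup looks like the sketch below (the string-array resource name is an assumption); passing a real listener instead of null is what makes selections do anything, and for a brand line rendered above the selected item, a custom two-line item layout for the adapter is one option to explore:

        // a sketch: list navigation with a selection callback
        ActionBar actionBar = getActionBar();
        actionBar.setNavigationMode(ActionBar.NAVIGATION_MODE_LIST);
        SpinnerAdapter adapter = ArrayAdapter.createFromResource(this,
                R.array.sections, android.R.layout.simple_spinner_dropdown_item);
        actionBar.setListNavigationCallbacks(adapter,
                new ActionBar.OnNavigationListener() {
            @Override
            public boolean onNavigationItemSelected(int position, long id) {
                // react to the chosen item here
                return true;
            }
        });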

  • Value was either too large or too small for an Int16 error

    - by Barlow Tucker
    I am working on fixing a bug in VB that produces this error. I am new to VB, so there is some syntax that I do not fully understand. The code that throws the error is:

        .Row(itemIndex).Item("parentIndex") = CLng(oID) + 1000000

    I understand that adding 1000000 is too much for an Int16. I can't change that value (not right now, anyway). What I don't understand, and can't seem to find, is what .Row is referring to. Any ideas?
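    The leading dot means .Row resolves against whatever object the enclosing With statement names, so that is the place to look. A quick way to confirm the destination column really is 16-bit (a sketch; the table and column names are assumptions):

        ' a sketch: an Int16 column tops out at 32767, so any assignment
        ' above that overflows regardless of the right-hand side's type
        Dim col As DataColumn = myTable.Columns("parentIndex")
        Console.WriteLine(col.DataType.ToString())  ' System.Int16 would explain it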

  • PHP: imagepng is creating inordinately large files

    - by Rafael
    I'm using a simple thumbnailing script I wrote, and it's pretty standard:

        $imgbuffer = imagecreatetruecolor($thumbwidth, $thumbheight);
        switch ($type) {
            case 1:  $image = imagecreatefromgif($img); break;
            case 2:  $image = imagecreatefromjpeg($img); break;
            case 3:  $image = imagecreatefrompng($img); break;
            case 6:  $image = imagecreatefrombmp($img); break;
            case 15: $image = imagecreatefromwbmp($img); break;
            default: return log_error("Tried to create thumbnail from $img: not a valid image");
        }
        imagecopyresampled($imgbuffer, $image, 0, 0, 0, 0, $thumbwidth, $thumbheight, $width, $height);
        $output = imagepng($imgbuffer, "$album/thumbs/$imgname.png", 9);

    9 is the lowest quality setting, yet from a 400 x 600 JPEG image (at 56 kB) I'm getting a thumbnail 27 kB in size (140 x 140). Using imagejpeg (quality of 80) instead of imagepng, it's about 4 kB. How can this be, especially at the lowest quality setting for imagepng? I tried using imagecopy instead of imagecopyresampled, and imagecreate instead of the true color version; unfortunately the images come out mangled somehow. Is there any way to get PNG thumbnails of a reasonably small file size (about 4 kB at 140 x 140)? Or do I have to use JPEG?
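    Worth knowing here: PNG is a lossless format, so imagepng's third argument is the zlib compression level (0-9, with 9 producing the smallest file), not a quality setting, and a photographic thumbnail will not compress losslessly down to 4 kB. For photo-like sources, JPEG is the practical choice (a sketch of the one-line change):

        // a sketch: lossy JPEG at quality 80 for photographic thumbnails;
        // PNG remains the better pick for flat-color graphics
        $output = imagejpeg($imgbuffer, "$album/thumbs/$imgname.jpg", 80);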

  • "Thread was being aborted" 0n large dataset

    - by Donaldinio
    I am trying to process 114,000 rows in a dataset (populated from an Oracle database). I hit an error at around the 600 mark: "Thread was being aborted". All I am doing is reading the dataset, and I still hit the issue. Is this too much data for a dataset? It seems to load into the dataset OK, though. I welcome any better ways to process this amount of data.

        rootTermsTable = entKw.GetRootKeywordsByCategory(catID);
        for (int k = 0; k < rootTermsTable.Rows.Count; k++)
        {
            string keywordID = rootTermsTable.Rows[k]["IK_DBKEY"].ToString();
            ...
        }

        public DataTable GetKeywordsByCategory(string categoryID)
        {
            DbProviderFactory provider = DbProviderFactories.GetFactory(connectionProvider);
            DbConnection con = provider.CreateConnection();
            con.ConnectionString = connectionString;
            DbCommand com = provider.CreateCommand();
            com.Connection = con;
            com.CommandText = string.Format("Select * From icm_keyword WHERE (IK_IC_DBKEY = {0})", categoryID);
            com.CommandType = CommandType.Text;
            DataSet ds = new DataSet();
            DbDataAdapter ad = provider.CreateDataAdapter();
            ad.SelectCommand = com;
            con.Open();
            ad.Fill(ds);
            con.Close();
            DataTable dt = ds.Tables[0];
            return dt;
            //return ds.Tables[0].DefaultView;
        }
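    If this code runs inside an ASP.NET request, "Thread was being aborted" is typically the runtime aborting a request that exceeded its execution timeout, not a DataSet size limit. Streaming with a data reader avoids materializing all 114,000 rows before processing starts (a sketch; the provider and command setup are assumed to match the code above):

        // a sketch: process rows as they arrive instead of filling a DataSet
        con.Open();
        using (DbDataReader reader = com.ExecuteReader())
        {
            while (reader.Read())
            {
                string keywordID = reader["IK_DBKEY"].ToString();
                // handle one row at a time here
            }
        }
        con.Close();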

  • How to handle a large table in MySQL?

    - by Frantz Miccoli
    I have a database used to store items and properties about those items. The number of properties is extensible, so there is a join table to store each property value associated with an item:

        CREATE TABLE `item_property` (
            `property_id` int(11) NOT NULL,
            `item_id` int(11) NOT NULL,
            `value` double NOT NULL,
            PRIMARY KEY (`property_id`,`item_id`),
            KEY `item_id` (`item_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

    This database has two goals: storing (which has first priority and has to be very quick; I would like to perform many inserts, hundreds, in a few seconds), and retrieving data (selects using item_id and property_id), which is a second priority: it can be slower, but not too much, because that would ruin my usage of the DB. Currently this table holds 1.6 billion entries, and a simple count can take up to 2 minutes... Inserting isn't fast enough to be usable. I'm using Zend_Db to access my data, and would really be happy if you don't suggest that I develop any PHP-side part. Thanks for your advice!
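    On the insert side, one common lever (a sketch; the values are placeholders) is batching many rows into a single multi-row INSERT, which cuts client round-trips and lets InnoDB amortize index maintenance across the batch:

        -- a sketch: one statement for many rows instead of hundreds of
        -- single-row inserts
        INSERT INTO item_property (property_id, item_id, value)
        VALUES (1, 10, 0.5),
               (1, 11, 0.7),
               (2, 10, 1.25);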

  • Click to make body text larger | JavaScript

    - by Wayne
    Please note this is just an example:

        <img src="img/normal-font.png" onclick="javascript:document.body.style.fontSize = '13px';" />
        &nbsp;
        <img src="img/medium-font.png" onclick="javascript:document.body.style.fontSize = '14px';" />
        &nbsp;
        <img src="img/large-font.png" onclick="javascript:document.body.style.fontSize = '15px';" />

    The body text does indeed enlarge when I choose one of them, but what I'd like to add is remembering which option you've chosen by reading cookies. In fact, I have no experience creating cookies in JS, only in PHP. Could someone come up with the simplest example of cookies remembering my option? Whenever someone clicks another one, it should get rid of the cookie that was last set, e.g. if the cookie value holds 15px, it should be updated or replaced with a new value of 13px, and so on. Thanks :)
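    A minimal sketch (the cookie name and one-year lifetime are arbitrary choices): reusing a single cookie name means each click simply overwrites the previous value, and a loader re-applies it on every page view:

        // a sketch: one cookie, overwritten on each click
        function setFontSize(px) {
            document.body.style.fontSize = px;
            document.cookie = "fontSize=" + px + "; path=/; max-age=31536000";
        }

        function restoreFontSize() {
            var match = document.cookie.match(/(?:^|; )fontSize=([^;]+)/);
            if (match) document.body.style.fontSize = match[1];
        }

        window.onload = restoreFontSize;

    The images then call it directly, e.g. <img src="img/large-font.png" onclick="setFontSize('15px')" />.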

  • Strange behavior with large Object Types

    - by Peter Lang
    I noticed that calling a method on an Oracle object type takes longer when the instance gets bigger. The code below just adds rows to a collection stored in the object type and calls the empty dummy procedure in the loop. Calls take longer when more rows are in the collection. When I just remove the call to dummy, performance is much better (the collection still contains the same number of records). Timings in hundredths of a second:

        Calling dummy    Not calling dummy
        11               0
        81               0
        158              0

    Code to reproduce:

        Create Type t_tab Is Table Of VARCHAR2(10000);

        Create Type test_type As Object(
            tab t_tab,
            Member Procedure dummy
        );

        Create Type Body test_type As
            Member Procedure dummy As
            Begin
                Null; --# Do nothing
            End dummy;
        End;

        Declare
            v_test_type test_type := New test_type( New t_tab() );

            Procedure run_test As
                start_time NUMBER := dbms_utility.get_time;
            Begin
                For i In 1 .. 200 Loop
                    v_test_type.tab.Extend;
                    v_test_type.tab(v_test_type.tab.Last) := Lpad(' ', 10000);
                    v_test_type.dummy(); --# Removed this line in second test
                End Loop;
                dbms_output.put_line( dbms_utility.get_time - start_time );
            End run_test;
        Begin
            run_test;
            run_test;
            run_test;
        End;

    I tried with both 10g and 11g. Can anyone explain/reproduce this behavior?

  • Advice for keeping large C++ project modular?

    - by Jay
    Our team is moving into much larger projects, many of which use several open source projects within them. Any advice or best practices for keeping libraries and dependencies relatively modular and easily upgradable when new releases come out? To put it another way: say you make a program that is a fork of an open source project. As both projects grow, what is the easiest way to maintain and share updates to the core? Advice regarding what I'm asking only, please... I don't need "well, you should do this instead" or "why are you"... thanks.

  • Trace large C++ code base?

    - by anon
    Problem: I have just inherited a 200K LOC source code base. There's only a small part of it I need (and I want to rip everything else out). What I would like to do is:

    1) run the program a few times
    2) have something (here's where you come in) record which lines of code get executed
    3) then rip out all the irrelevant lines of code

    I realize this has "problems" in the form of "different args will take different paths", but for my needs it's very specific, and I just want something to get me started on ripping stuff out (I'll fine-tune the special cases later); see the sketch below. Thanks!
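    This is exactly what line-coverage tooling records; with gcc, a sketch of the workflow (file names are placeholders):

        # build instrumented, run a few times, then annotate the sources
        g++ --coverage -O0 -g -o app main.cpp other.cpp
        ./app
        gcov main.cpp other.cpp    # writes main.cpp.gcov etc.

    In the .gcov output, every source line is prefixed with its execution count; lines marked ##### were never executed, which gives the candidate list for removal.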

  • Database for managing large volumes of (system) metrics

    - by symcbean
    Hi, I'm looking at building a system for managing and reporting stats on web page performance. I'll be collecting a lot more stats than are available in the standard log formats (approx. 20 metrics), but compared to most types of database applications, the base data structure will be very simple. My problem is that I'll be accumulating a lot of data: in the region of 100,000 records (i.e. sets of metrics) per hour. Of course, resources are very limited! So that it's possible to interact with the data sensibly, I'd need to consolidate each metric into one-minute bins, broken down by URL; then, for anything more than 1 day old, consolidate into 10-minute bins; then, at 1 week, hourly bins. At the front end, I want to provide a view (preferably as plots) of the last hour of data, with the facility for users to drill up/down through defined hierarchies of URLs (which do not always map directly to the hierarchy expressed in the path of the URL) and to view different time frames. Rather than coding all this myself and using a relational database, I was wondering if there are tools available that would facilitate both the management of the data and the reporting. I had a look at Mondrian; however, I can't see from the documentation I've read whether it's possible to drop the more granular information while maintaining the consolidated views of the data. RRDTool looks promising in terms of managing the data consolidation, but seems rather limited in terms of querying the dataset as a multi-dimensional/relational database. What else should I be looking at?

  • How to write a large number of nested records in JSON with Python

    - by jamesmcm
    I want to produce a JSON file containing some initial parameters and then records of data, like this:

        {
            "measurement" : 15000,
            "imi" : 0.5,
            "times" : 30,
            "recalibrate" : false,
            {
                "colorlist" : [234, 431, 134],
                "speclist" : [0.34, 0.42, 0.45, 0.34, 0.78]
            }
            {
                "colorlist" : [214, 451, 114],
                "speclist" : [0.44, 0.32, 0.45, 0.37, 0.53]
            }
            ...
        }

    How can this be achieved using the Python json module? The data records cannot be added by hand, as there are very many.
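    One detail first: JSON objects hold only key/value pairs, so the bare record objects above are not valid JSON; nesting them under a key (here "records", an assumed name) makes the structure legal and easy to build programmatically:

        import json

        records = [
            {"colorlist": [234, 431, 134],
             "speclist": [0.34, 0.42, 0.45, 0.34, 0.78]},
            {"colorlist": [214, 451, 114],
             "speclist": [0.44, 0.32, 0.45, 0.37, 0.53]},
        ]  # in practice, built in a loop from the data source

        doc = {
            "measurement": 15000,
            "imi": 0.5,
            "times": 30,
            "recalibrate": False,
            "records": records,
        }

        with open("out.json", "w") as f:
            json.dump(doc, f, indent=4)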

  • Using Mercurial in a Large Organization

    - by Kristopher Johnson
    I've been using Mercurial for my own personal projects for a while, and I love it. My employer is considering a switch from CVS to SVN, but I'm wondering whether I should push for Mercurial (or some other DVCS) instead. One wrinkle with Mercurial is that it seems to be designed around the idea of having a single repository per "project". In this organization, there are dozens of different executables, DLLs, and other components in the current CVS repository, hierarchically organized. There are a lot of generic reusable components, but also some customer-specific components, and customer-specific configurations. The current build procedures generally get some set of subtrees out of the CVS repository. If we move from CVS to Mercurial, what is the best way to organize the repository/repositories? Should we have one huge Mercurial repository containing everything? If not, how fine-grained should the smaller repositories be? I think people will find it very annoying if they have to pull and push updates from a lot of different places, but they will also find it annoying if they have to pull/push the entire company codebase. Anybody have experience with this, or advice?

  • Stack Overflow Accessing Large Vector

    - by cam
    I'm getting a stack overflow on the first iteration of this for loop:

        for (int q = 0; q < SIZEN; q++) {
            cout << nList[q] << " ";
        }

    nList is a vector of type int with 376 items. The size of nList depends on a constant defined in the program. The program works for every value up to 376, then after 376 it stops working. Any thoughts?
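    Two quick checks worth making (a sketch; the rest of the program is assumed): a std::vector keeps its elements on the heap, so 376 ints cannot overflow the stack by themselves, which points at something else sized by the constant (a fixed stack array, recursion depth, etc.). And if SIZEN ever exceeds nList.size(), the loop reads out of bounds; .at() turns that into a catchable error instead of silent memory corruption:

        // a sketch: bounds-checked access makes an off-by-N loop fail loudly
        for (int q = 0; q < SIZEN; q++) {
            std::cout << nList.at(q) << " ";  // throws std::out_of_range on overrun
        }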
