Search Results

Search found 10417 results on 417 pages for 'large'.


  • Can I have one makefile to build a hierarchical project?

    - by saramah
    I have several hundred files in a non-flat directory structure. My Makefile lists each source file, which, given the size of the project and the fact that multiple developers work on it, creates annoyances when we forget to add a new file or remove old ones. I'd like to generalize my Makefile so that make can simply build all .cpp and .h files without me having to specify every filename, given some generic rules for different types of files. My question: given a large number of files in a directory with lots of subfolders, how do I tell make to build them all without having to specify each and every subfolder as part of the path? And how do I make it so that I can do this with only one Makefile in the root directory?
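    A minimal sketch of the kind of Makefile the question is after, assuming GNU make (for $(shell ...) and pattern rules) and the find utility on the PATH; the target name and compiler flags are placeholders:

        # discover every .cpp anywhere under the root, however deep
        SRCS := $(shell find . -name '*.cpp')
        OBJS := $(SRCS:.cpp=.o)

        CXX      := g++
        CXXFLAGS := -Wall -O2

        app: $(OBJS)                  # 'app' is a placeholder target name
        	$(CXX) $(CXXFLAGS) -o $@ $(OBJS)

        # one pattern rule covers every subfolder; recipe lines must start with a tab
        %.o: %.cpp
        	$(CXX) $(CXXFLAGS) -c $< -o $@

        clean:
        	rm -f app $(OBJS)

    New files are picked up on the next run with no Makefile edits. Header dependencies would still need handling (e.g. gcc -MMD), which is left out of the sketch.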

    Read the article

  • Favicon to PNG in PHP

    - by sailtheworld
    I need a PHP script to convert favicons to PNGs while keeping their original dimensions. I know Google has its icon converter - http://www.google.com/s2/favicons?domain=http://facebook.com/ - but this converts favicons to 16x16 even if they were originally larger. So basically I need this, minus the shrinking effect. I've also seen this - http://www.controlstyle.com/articles/programming/text/php-favicon/ - but I couldn't get it to work after hours of messing around with it. Basically I am trying to automatically grab the icon for a link at the largest size possible - a 48x48 PNG from a URL would be the perfect scenario, but I don't know of any way to do this, given that no websites happen to keep a 48x48 icon in a publicly accessible spot. Does anybody know of a script/service, or have a suggestion? Thanks!
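    A minimal sketch of one way to do the conversion, assuming the Imagick extension is installed with ICO support and that the site exposes /favicon.ico (both assumptions):

        <?php
        // fetch the raw .ico; an .ico container may hold several sizes (frames)
        $ico = file_get_contents('http://example.com/favicon.ico');

        $im = new Imagick();
        $im->readImageBlob($ico);

        // walk the frames and keep the largest one
        $best = null; $bestW = 0;
        foreach ($im as $frame) {
            if ($frame->getImageWidth() > $bestW) {
                $bestW = $frame->getImageWidth();
                $best  = $frame->getImage();
            }
        }

        $best->setImageFormat('png');   // re-encode only; dimensions are untouched
        $best->writeImage('favicon.png');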

    Read the article

  • What is the best design for these data base tables?

    - by Mohammed Jamal
    I need to find the best solution for keeping the DB normalized with the large amount of data expected. My site has a tags table (containing keyword, id), plus 4 types of data related to it (articles, resources, jobs, ...). The big question: for the relation with tags, what is the best solution for optimization and query speed?

    Option 1 - a table per relation, e.g. articlesToTags(ArticleID, TagID), jobsToTags(jobid, tagid), etc.
    Option 2 - one table for everything, e.g. tagsrelation(tagid, itemid, itemtype)

    I need your help. Please provide me with articles to help me with this design, and consider that in the future the site may contain new sections related to tags. Thanks
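    For concreteness, a sketch of option 2 in generic SQL (names are placeholders). The tradeoff: one table and one query path for every content type, but the database can no longer enforce a real foreign key on itemid, since its target table depends on itemtype:

        CREATE TABLE tags (
            id   INT PRIMARY KEY,
            name VARCHAR(64) NOT NULL
        );

        CREATE TABLE tagsrelation (
            tagid    INT     NOT NULL,
            itemid   INT     NOT NULL,
            itemtype TINYINT NOT NULL,   -- e.g. 1 = article, 2 = job, 3 = resource
            PRIMARY KEY (tagid, itemtype, itemid),
            FOREIGN KEY (tagid) REFERENCES tags(id)
        );

        -- "all articles for tag 42" stays one indexed lookup,
        -- and a new site section is just a new itemtype value:
        SELECT itemid FROM tagsrelation WHERE tagid = 42 AND itemtype = 1;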

    Read the article

  • Microsoft learning support for VS2010

    - by John
    OK, I am a big fan of WPF, and while it is a large area to fully understand, Microsoft has been great at posting loads of training videos at http://windowsclient.net/learn/videos_wpf.aspx However, with the release of 2010 it all seems to have gone very quiet. I expected a lot of the support to be updated for 2010, and I also expected a lot of new videos on the best way to use the new features in 2010. Currently I find myself working through videos based on 2008 (or even 2005) and trying to apply them to 2010. Don't get me wrong, it's not that I mind doing this; it's just that I fear I may be learning methods which have better or different solutions in 2010. Is it just me expecting too much of Microsoft, or have I missed out on a new website?

    Read the article

  • Unix: millionth number in the series 2 3 4 6 9 13 19 28 42 63 ... ?

    - by HH
    It takes about a minute to reach term 3000 on my computer, but I need to know the millionth number in the series. The definition is recursive, so I cannot see any shortcuts except to calculate everything before the millionth number. How can you quickly calculate the millionth number in the series?

    Series definition: n_{i+1} = floor(3/2 * n_i), with n_0 = 2. Interestingly, only one site lists the series according to Google: this one.

    Too-slow bash code:

        #!/bin/bash
        function serie {
            # bc emits backslash-newlines in very large numbers; tr/sed strip them
            n=$( echo "3/2*$n" | bc -l | tr '\n' ' ' | sed -e 's@\\@@g' -e 's@ @@g' );
            n=$( echo $n/1 | bc )   # dummy floor
        }
        n=2
        nth=1
        while [ true ]; # $nth -lt 500 ];
        do
            serie $n   # n gets its new value through the global variable
            echo $nth $n
            nth=$( echo $nth + 1 | bc )   # n++
        done
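    For comparison, a minimal sketch of the same recurrence in Python, whose built-in big integers make the floor exact and avoid spawning a bc process per step. It is still inherently sequential, and the millionth term has roughly 10^6 * log10(1.5), i.e. about 176,000 digits, so it won't be instant either:

        # n_{i+1} = floor(3/2 * n_i), n_0 = 2
        def nth_term(k):
            n = 2
            for _ in range(k):
                n = 3 * n // 2   # exact floor division on arbitrary-precision ints
            return n

        # sanity check against the terms in the title: 2 3 4 6 9 13 19 28 42 63
        print([nth_term(i) for i in range(10)])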

    Read the article

  • SQL query: use WHERE IN or foreach?

    - by phenevo
    Hi, I'm using a query which contains the clause ...where code in ('val1','val2',...). I have about 50k of these codes. It was working when I had 30k codes, but now I get: "The query processor ran out of internal resources and could not produce a query plan. This is a rare event and only expected for extremely complex queries or queries that reference a very large number of tables or partitions." I think the problem is related to IN... So now I'm planning to use foreach(string code in codes) with ...where code = code. Is that a good idea?
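    A common alternative (a sketch, untested against the asker's schema): load the 50k codes into a temporary table once and join against it, which keeps the plan small and avoids both the giant IN list and 50,000 round trips:

        -- SQL Server; table and column names are placeholders
        CREATE TABLE #codes (code VARCHAR(32) PRIMARY KEY);

        -- bulk-load the 50k codes here (e.g. SqlBulkCopy from .NET), then:
        SELECT t.*
        FROM   myTable t
        JOIN   #codes c ON c.code = t.code;

        DROP TABLE #codes;

    Looping with one query per code would also work, but it turns a single set-based query into 50k round trips.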

    Read the article

  • MATLAB - Delete elements of binary files without loading entire file

    - by Doresoom
    This may be a stupid question, but Google and MATLAB documentation have failed me. I have a rather large binary file (10 GB) that I need to open and delete the last forty million bytes or so. Is there a way to do this without reading the entire file to memory in chunks and printing it out to a new file? It took 6 hours to generate the file, so I'm cringing at the thought of re-reading the whole thing. EDIT: The file is 14,440,000,000 bytes in size. I need to chop it to 14,400,000,000.
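    For what it's worth, outside MATLAB the operating system can truncate a file in place, which neither reads nor copies the data; a minimal Python sketch with the size from the question (within MATLAB, shelling out to such a script, or reaching the same call through its Java bridge, would be options, untested here):

        path = 'data.bin'   # placeholder path
        with open(path, 'r+b') as f:     # open for in-place update, no reading
            f.truncate(14400000000)      # keep the first 14,400,000,000 bytes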

    Read the article

  • Is there a module that implements an efficient array type in Erlang?

    - by dsmith
    I have been looking for an array type with the following characteristics in Erlang:

        append(vector(), term())        O(1)
        nth(Idx, vector())              O(1)
        set(Idx, vector(), term())      O(1)
        insert(Idx, vector(), term())   O(N)
        remove(Idx, vector())           O(N)

    I normally use a tuple for this purpose, but the performance characteristics are not what I would want for large N. My testing shows the following performance characteristics:

        erlang:append_element/2   O(N)
        erlang:setelement/3       O(N)

    I have started on a module based on the clojure.lang.PersistentVector implementation, but if it's already been done I won't reinvent the wheel.

    Read the article

  • storing multiple values as binary in one field

    - by Enghoej
    Hi, I have a project where I need to store a large number of values. The data is a dataset holding 1024 2-byte unsigned integer values. Currently I store one value per row, together with a timestamp and a unique ID, and this data is continuously stored based on a time trigger. What I would like to do is store all 1024 values in one field. So would it be possible to have some routine that stores all 1024 2-byte integer values in one field as binary, maybe a BLOB field? Thanks. Br. Enghoej
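    A minimal sketch of the packing step, assuming Python and 1024 values in the 0-65535 range; the resulting 2048-byte string can be stored in a BLOB column through any DB driver:

        import struct

        values = list(range(1024))          # placeholder dataset of 1024 uint16s

        # '<1024H' = little-endian, 1024 unsigned 16-bit ints -> 2048 bytes
        blob = struct.pack('<1024H', *values)
        assert len(blob) == 2048

        # reading it back later:
        restored = list(struct.unpack('<1024H', blob))
        assert restored == values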

    Read the article

  • Can the SHA-1 algorithm be computed on a stream, with a low memory footprint?

    - by raoulsson
    I am looking for a way to compute SHA-1 checksums of very large files without having to load them fully into memory at once. I don't know the details of the SHA-1 implementation and therefore would like to know if it is even possible to do that. If you know the SAX XML parser, then what I'm looking for is something similar: computing the SHA-1 checksum by loading only a small part of the file into memory at a time. All the examples I found, at least in Java, depend on fully loading the file/byte array/string into memory. If you know of implementations (any language), please let me know!
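    SHA-1 is block-based, so it can be fed incrementally; in Java this is exactly what the standard MessageDigest API supports. A minimal sketch that keeps only an 8 KB buffer in memory:

        import java.io.FileInputStream;
        import java.io.InputStream;
        import java.security.MessageDigest;

        public class StreamSha1 {
            public static void main(String[] args) throws Exception {
                MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
                InputStream in = new FileInputStream(args[0]);
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    sha1.update(buf, 0, n);   // feed the digest chunk by chunk
                }
                in.close();

                // hex-encode the 20-byte digest
                StringBuilder hex = new StringBuilder();
                for (byte b : sha1.digest()) {
                    hex.append(String.format("%02x", b));
                }
                System.out.println(hex);
            }
        }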

    Read the article

  • SQL Server virtual memory usage and performance

    - by user365035
    Hello, I have a very large DB used mostly for analytics. The performance overall is very sluggish. I just noticed that when running the query below, the amount of virtual memory reported greatly exceeds the amount of physical memory available. Currently, physical memory is 10 GB (10,238 MB as reported by the query), whereas the virtual memory column returns significantly more: 8,388,607 MB. That seems really wrong, but I'm at a bit of a loss on how to proceed.

        USE [master];
        GO
        select cpu_count
             , hyperthread_ratio
             , physical_memory_in_bytes / 1048576 as 'mem_MB'
             , virtual_memory_in_bytes / 1048576 as 'virtual_mem_MB'
             , max_workers_count
             , os_error_mode
             , os_priority_class
        from sys.dm_os_sys_info

    Read the article

  • Do Hibernate table classes need to be Serializable?

    - by Scott Leis
    I have inherited a WebSphere Portal project that uses Hibernate 3.0 to connect to a SQL Server database. There are about 130 Hibernate table classes in this project. They all implement Serializable, but none of them declare a serialVersionUID field, so the Eclipse IDE shows a warning for every one of these classes. Is there any actual need for these classes to implement Serializable? If so, is there any tool to add a generated serialVersionUID field to a large number of classes at once (just to make the warnings go away)?
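    For reference, the Eclipse warning disappears once the field is declared; the value is arbitrary but must stay stable across compatible versions of the class (the entity name here is hypothetical):

        public class Customer implements java.io.Serializable {
            private static final long serialVersionUID = 1L;   // silences the warning
            // ... mapped fields ...
        }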

    Read the article

  • How can I explain to a programmer that CSS positioning has many benefits over table based layouts?

    - by Pat
    I have a friend who wishes to work as a freelance web developer, but insists that tables are the way forward for layouts. Several points he maintains in favour of tables:

    1. This is what was taught at the beginning of 10 years of programming & computer science degrees.
    2. Large companies use tables to achieve 'technical' things.
    3. It saves time.

    I have coded him some examples of CSS exactly matching table-based layouts, and provided many links to articles explaining the SEO and accessibility benefits. From the perspective of a client, I have explained to him that I wouldn't hire someone using outdated methods as their main strategy for layout. As he is my friend and I wish him every success, I believe it is important for him to get the best start when pitching for work. The question again: how can I explain to a programmer that CSS positioning has many benefits over table-based layouts?

    Read the article

  • irritating TortoiseSVN error - file or directory is corrupted and chkdsk at boot

    - by WalterJ89
    Can't move 'D:\Documents\Websites\blah.svn\tmp\entries' to 'D:\ ... .svn\entries': The file or directory is corrupted and unreadable. Any thoughts on what would cause this? This usually happens when trying to commit a large number of new files. Sometimes an update fixes it, but most of the time I have to delete the offending directory, re-download it, and attempt to add or update it again. EDIT: it seems my PC always wanting to run chkdsk at boot is related.

    Read the article

  • Using [Delphi] madExcept error handling with MS Exchange Server 2007

    - by Tony
    I currently use madExcept.MailAsSmtpClient to send my bug reports. However, a couple of large clients have upgraded to Exchange Server 2007 and we can't get the SMTP support for our app configured (the app runs on individual workstations, so the messages aren't all coming from one IP; we can configure an authenticated account in Exchange and access it via SMTP from other clients, but it rejects madExcept for some reason). So I have two questions: 1) has anyone successfully configured that combination? Or 2) is there an example somewhere of how to use the madExcept.UploadViaHTTP option?

    Read the article

  • How to remove svn folders over FTP on Windows hosting

    - by Loftx
    Hi there, I've accidentally copied a large part of a folder tree from my SVN working copy to my shared Windows web host via FTP. The site is now littered with .svn directories and I need some way of cleaning them out. The only access I have to the server is via FTP, or by running a script on the server. Does anyone have a script which can be run remotely to remove the files over FTP from my development machine (any language, Windows/Linux is fine), or a script in ASP, ASP.NET or PHP I can run directly on the Windows server to remove these directories? Thanks, Tom
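    A sketch of the run-it-from-the-development-machine option in Python (host, credentials, and start path are placeholders; it assumes directories can be told apart from files by attempting cwd, and nlst behaviour varies a little between FTP servers):

        import ftplib

        def remove_tree(ftp, name):
            """Recursively delete a directory over FTP."""
            ftp.cwd(name)                    # raises error_perm if 'name' is a file
            for entry in ftp.nlst():
                if entry in ('.', '..'):
                    continue
                try:
                    remove_tree(ftp, entry)
                except ftplib.error_perm:
                    ftp.delete(entry)        # cwd failed -> plain file
            ftp.cwd('..')
            ftp.rmd(name)

        def clean_svn(ftp, name):
            """Walk the tree and remove every .svn directory."""
            ftp.cwd(name)
            for entry in ftp.nlst():
                if entry in ('.', '..'):
                    continue
                if entry == '.svn':
                    remove_tree(ftp, entry)
                    continue
                try:
                    clean_svn(ftp, entry)    # recurse into subdirectories
                except ftplib.error_perm:
                    pass                     # a plain file; nothing to do
            ftp.cwd('..')

        ftp = ftplib.FTP('ftp.example.com', 'user', 'password')   # placeholders
        clean_svn(ftp, '/httpdocs')
        ftp.quit()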

    Read the article

  • How to sum up an array of integers in C#

    - by Filburt
    Is there a better, shorter way than iterating over the array?

        int[] arr = new int[] { 1, 2, 3 };
        int sum = 0;
        for (int i = 0; i < arr.Length; i++)
        {
            sum += arr[i];
        }

    Clarification: better primarily means cleaner code, but hints on performance improvement are also welcome (like the already-mentioned splitting of large arrays). It's not that I was looking for a killer performance improvement - I just wondered if this very kind of syntactic sugar wasn't already available: "There's String.Join - what the heck about int[]?".
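    For reference, a sketch of the kind of sugar the question is after, available since .NET 3.5 via LINQ's extension methods:

        using System;
        using System.Linq;

        class SumExample
        {
            static void Main()
            {
                int[] arr = new int[] { 1, 2, 3 };
                int sum = arr.Sum();          // LINQ's Enumerable.Sum over the array
                Console.WriteLine(sum);       // 6
            }
        }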

    Read the article

  • Preserving indentation when inserting HTML from MySQL

    - by Benjamin
    I am using MySQL and PHP to populate parts of a site, often with HTML stored in a TEXT field. I like to keep my HTML indented so that the source is neat and easy to read, for example:

        <body>
            <div>
                <p>Blahblah</p>
            </div>
        </body>

    However, when the HTML is pulled from MySQL and inserted into the page, the stored fragment loses its indentation relative to the surrounding markup, and I end up with something like:

        <body>
            <div>
        <p>Blahblahblah</p>
            </div>
        </body>

    This is quite ugly when there is a large amount of HTML being inserted into a DIV that is significantly indented. How can I stop this from happening? FYI, I use wordwrap() to keep each line from being too long.

    Read the article

  • How can I rename my CarrierWave file versions?

    - by AKWF
    Upon the upload of an image in my application, 4 different sizes are created and saved using CarrierWave's version functionality. However, I am converting all of these versions to JPEG; the source file that is uploaded remains unchanged. So I can upload a TIFF file, and CarrierWave will create :large, :medium, :small, and :thumb versions. My problem is that these files all still end in .tif, yet they are indeed JPEG files, as I've verified with the file command. How can I write the filenames correctly for each version, and ensure that CarrierWave will report each version's name correctly?
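    A sketch of the approach commonly suggested for this (untested here; the processing method and the model's mount name are placeholders): override full_filename inside each version block so CarrierWave reports and stores the converted extension:

        class ImageUploader < CarrierWave::Uploader::Base
          version :large do
            process :convert_to_jpeg                      # hypothetical process step

            # report/store this version's name with a .jpg extension
            def full_filename(for_file = model.image.file)
              super.chomp(File.extname(super)) + '.jpg'
            end
          end
          # repeat the override for :medium, :small and :thumb
        end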

    Read the article

  • ASP.NET application stopping event?

    - by Barguast
    I have an ASP.NET application which implements a custom in-memory cache. I'm using this as opposed to ASP.NET's caching mechanism because I needed a more complex way of deciding what to drop from the cache. Part of this custom cache is a separate thread which occasionally searches for data to drop from the cache whenever it gets too large. What I need to do is signal this cache maintenance thread to stop whenever the ASP.NET application 'exits', which I guess basically amounts to when the web site is stopped in IIS. Is there a pre-existing event I can utilise to do this? Thanks.
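    A minimal sketch of the standard hook: Application_End in Global.asax fires when the application domain unloads (note it also fires on app-pool recycles and idle timeouts, not only an explicit stop in IIS); the cache type is a placeholder:

        // Global.asax.cs
        using System;

        public class Global : System.Web.HttpApplication
        {
            protected void Application_End(object sender, EventArgs e)
            {
                // signal the maintenance thread to stop and let it finish
                MyCustomCache.Instance.StopMaintenanceThread();   // hypothetical API
            }
        }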

    Read the article

  • Integrating my new program with Windows

    - by Carlos
    I've written a log parser, with some generous and insightful help from the SO community: http://stackoverflow.com/questions/2906630/keeping-the-ui-responsive-while-parsing-a-very-large-logfile Now, I'd like to be able to right-click one of these logs, select "MyNewLogParser" from "Open With...", and see it open in my new program. This would require me to:

    1. Change something about my XP installation to show my program in the dropdown list.
    2. Change the program so that it knows to open the selected file and run the parsing.

    What do you call these things, and how is it done? I don't know what to search for...
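    The search term for part 1 is "file association" (registered under HKEY_CLASSES_ROOT, or via the "Open With... Choose Program... Always use the selected program" dialog). Part 2 is just a command-line argument: Windows passes the file's full path as args[0]. A minimal C# sketch, where the form class is hypothetical:

        using System;
        using System.Windows.Forms;

        static class Program
        {
            [STAThread]
            static void Main(string[] args)
            {
                // "Open With..." launches:  MyNewLogParser.exe "C:\path\to\file.log"
                string logPath = args.Length > 0 ? args[0] : null;

                Application.EnableVisualStyles();
                Application.Run(new MainForm(logPath));   // hypothetical main form
            }
        }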

    Read the article

  • Putting a dollar value on code quality

    - by Chris Nelson
    As noted in another thread, "In most businesses, code quality is defined in dollars." So my company has an opportunity to acquire a large-ish C code base. Obviously, if the code quality is good, the code base is worth more than if it's poor. That is, if we can readily read, understand, and update the code, it's worth more to us than if it's a spaghetti-coded mess. Without being able to see the code ahead of time, we'd like to set some objective measure as an acceptance criterion, like "If the XXX measure is below [some threshold], the price will be discounted YY%." What criteria can or should we measure, and what tool can we use to measure them?

    Read the article

  • compute mean in python for a generator

    - by nmaxwell
    Hi, I'm doing some statistics work. I have a (large) collection of random numbers to compute the mean of, and I'd like to work with generators, because I just need to compute the mean, so I don't need to store the numbers. The problem is that numpy.mean breaks if you pass it a generator. I can write a simple function to do what I want, but I'm wondering if there's a proper, built-in way to do this? It would be nice if I could say sum(values)/len(values), but len doesn't work for generators, and sum has already consumed the values. Here's an example:

        import numpy

        def my_mean(values):
            n = 0
            Sum = 0.0
            try:
                while True:
                    Sum += next(values)
                    n += 1
            except StopIteration:
                pass
            return float(Sum)/n

        X = [k for k in range(1,7)]
        Y = (k for k in range(1,7))
        print numpy.mean(X)
        print my_mean(Y)

    These both give the same, correct, answer, but my_mean doesn't work for lists, and numpy.mean doesn't work for generators. I really like the idea of working with generators, but details like this seem to spoil things. Thanks for any help - nick

    Read the article

  • Monitor and Terminate Python script based on system resource use

    - by Vincent
    What is the "right" or "best" way to monitor the system resources a Python script is using, and to terminate it if the resource use exceeds some predetermined values? In my case memory usage is of concern. I am not asking how to measure the system resource use, although I am open to suggestions. As a simple example, let's assume I have a function that finds prime numbers less than some large number and adds them to a list based on some condition. I don't know ahead of time how many prime numbers will satisfy the condition, so I want to be sure to terminate the function if I use up too much system memory (8 GB, let's say). I know that there are ways to monitor the size of Python objects. What I don't know is whether the proper way to monitor the size of the list and exit is to just include a size test in the prime-function loop and exit if it exceeds 8 GB, or whether there is an "external" way to monitor and exit.
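    One in-process option (a Unix-only sketch; the threshold and the check interval are arbitrary) is to poll the interpreter's peak resident size through the resource module:

        import resource
        import sys

        LIMIT_BYTES = 8 * 1024 ** 3     # the 8 GB cap from the question

        def memory_ok():
            # ru_maxrss is reported in KB on Linux (bytes on macOS)
            used = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss * 1024
            return used < LIMIT_BYTES

        primes = []
        candidate = 2
        while True:
            if candidate % 1000 == 0 and not memory_ok():
                sys.exit('memory limit reached with %d primes' % len(primes))
            # ... primality test and the condition check would go here ...
            candidate += 1

    An "external" alternative on Unix is resource.setrlimit(resource.RLIMIT_AS, ...), which has the OS enforce the cap by making further allocations fail (surfacing as MemoryError) instead of relying on polling.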

    Read the article

  • Embarrassingly parallel workflow creates too many output files

    - by Hooked
    On a Linux cluster I run many (N > 10^6) independent computations. Each computation takes only a few minutes and the output is a handful of lines. When N was small I was able to store each result in a separate file to be parsed later. With large N, however, I find that I am wasting storage space (on file-creation overhead) and that simple commands like ls require extra care due to internal limits of bash: -bash: /bin/ls: Argument list too long. Each computation is required to run through a qsub scheduling algorithm, so I am unable to create a master program which simply aggregates the output data into a single file. The simple solution of appending to a single file fails when two programs finish at the same time and interleave their output. I have no admin access to the cluster, so installing a system-wide database is not an option. How can I collate the output data from this embarrassingly parallel computation before it gets unmanageable?
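    One common fix, sketched under the assumption that the nodes share a POSIX filesystem on which advisory locks actually work (not guaranteed on every NFS setup): each job takes an exclusive flock before appending its few lines, so simultaneous finishers serialize instead of interleaving:

        import fcntl

        def append_result(path, lines):
            """Append one job's output atomically with respect to other jobs."""
            with open(path, 'a') as f:
                fcntl.flock(f, fcntl.LOCK_EX)        # block until we hold the lock
                f.write('\n'.join(lines) + '\n')
                f.flush()
                fcntl.flock(f, fcntl.LOCK_UN)

        append_result('results.txt', ['job 12345', 'answer = 42'])   # placeholder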

    Read the article
