Search Results

Search found 3758 results on 151 pages for 'efficient'.

Page 94 of 151

  • How do I Put Several Select Statements into Different Columns

    - by Russ Bradberry
    I basically have 7 SELECT statements whose results I need output into separate columns. Normally I would use a crosstab for this, but I need a fast, efficient way to go about it because there are over 7 billion rows in the table. I am using the Vertica database system. Below is an example of my statements:

        SELECT COUNT(user_id) AS '1' FROM event_log_facts WHERE date_dim_id=20100101
        SELECT COUNT(user_id) AS '2' FROM event_log_facts WHERE date_dim_id=20100102
        SELECT COUNT(user_id) AS '3' FROM event_log_facts WHERE date_dim_id=20100103
        SELECT COUNT(user_id) AS '4' FROM event_log_facts WHERE date_dim_id=20100104
        SELECT COUNT(user_id) AS '5' FROM event_log_facts WHERE date_dim_id=20100105
        SELECT COUNT(user_id) AS '6' FROM event_log_facts WHERE date_dim_id=20100106
        SELECT COUNT(user_id) AS '7' FROM event_log_facts WHERE date_dim_id=20100107

    Read the article

  • How to properly catch a 404 error in .NET

    - by Luke101
    I would like to know the proper way to catch a 404 error with C# ASP.NET. Here is the code I'm using:

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(
            String.Format("http://www.gravatar.com/avatar/{0}?d=404", hashe));
        // execute the request
        try
        {
            //TODO: test for good connectivity first
            //so it will not update the whole database with bad avatars
            HttpWebResponse response = (HttpWebResponse)request.GetResponse();
            Response.Write("has avatar");
        }
        catch (Exception ex)
        {
            if (ex.ToString().Contains("404"))
            {
                Response.Write("No avatar");
            }
        }

    This code works, but I would like to know whether it is the most efficient way.
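
    A more direct route than string-matching the exception text is to catch System.Net.WebException and inspect the HTTP status code of the failed response. A minimal sketch of that approach, reusing the request object from the question:

        try
        {
            using (var response = (HttpWebResponse)request.GetResponse())
            {
                Response.Write("has avatar");
            }
        }
        catch (WebException ex)
        {
            var error = ex.Response as HttpWebResponse;
            if (error != null && error.StatusCode == HttpStatusCode.NotFound)
            {
                Response.Write("No avatar");   // a genuine 404 from gravatar.com
            }
            else
            {
                throw;   // connectivity problem or some other HTTP failure
            }
        }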

    Read the article

  • Copying a 14bit grayscale image (saved in long[]) to a pictureBox

    - by Itsik
    My camera gives me 14-bit grayscale images, but the API's function returns a long* to the image data (so I'm assuming 4 bytes for each pixel). My application is written in C++/CLI, and the pictureBox is of .NET type. I am currently using the BitmapData.LockBits() mechanism to gain pointer access to the image data, and using memcpy(bmpData.Scan0.ToPointer(), imageData, sizeof(long)*height*width) to copy the image data to the Bitmap. For now, the only PixelFormat that works is 32-bit RGB, and the image appears in shades of blue with contours. Trying to initialize the Bitmap as 16bppGrayscale isn't working. I would ideally want to cast the array from long to word and use a 16-bit format (hoping that the 14-bit data will be displayed properly), but I'm not sure if this works. Also, I don't want to iterate over the image data, so finding the min/max and then histogram-stretching to [0..255] isn't an option for me (the display must be as efficient as possible). Thanks
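
    The question is C++/CLI, but the idea translates; below is a C# sketch of one common workaround, assuming the goal is display rather than full precision. GDI+ generally cannot render Format16bppGrayScale, so the 14-bit values are shifted down to 8 bits and written into a 24bpp bitmap. It does touch every pixel once, but with a fixed shift instead of a min/max or histogram pass. The helper and its names are illustrative, not the original API.

        using System;
        using System.Drawing;
        using System.Drawing.Imaging;
        using System.Runtime.InteropServices;

        // Hypothetical helper: map 14-bit samples (delivered as 32-bit values) to an 8-bit grey ramp.
        static Bitmap ToDisplayBitmap(int[] raw, int width, int height)
        {
            var bmp = new Bitmap(width, height, PixelFormat.Format24bppRgb);
            BitmapData data = bmp.LockBits(new Rectangle(0, 0, width, height),
                                           ImageLockMode.WriteOnly, PixelFormat.Format24bppRgb);
            var row = new byte[data.Stride];
            for (int y = 0; y < height; y++)
            {
                for (int x = 0; x < width; x++)
                {
                    byte v = (byte)(raw[y * width + x] >> 6);   // 14-bit -> 8-bit by dropping the low 6 bits
                    row[x * 3] = row[x * 3 + 1] = row[x * 3 + 2] = v;
                }
                Marshal.Copy(row, 0, new IntPtr(data.Scan0.ToInt64() + (long)y * data.Stride), data.Stride);
            }
            bmp.UnlockBits(data);
            return bmp;
        }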

    Read the article

  • Save UIView's representation to file.

    - by fish potato
    What is the easiest way to save a UIView's representation to a file? My solution is:

        UIGraphicsBeginImageContext(someView.frame.size);
        [someView drawRect:someView.frame];
        UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        NSString *pathToCreate = @"sample.png";
        NSData *imageData = [NSData dataWithData:UIImagePNGRepresentation(image)];
        [imageData writeToFile:pathToCreate atomically:YES];

    but it seems tricky, and I think there must be a more efficient way to do this.

    Read the article

  • Help me find some good 'Reflection' reading materials?

    - by IbrarMumtaz
    If it wasn't for you guys I would never have discovered Albahari.com's free threading ebook. My apologies if I come off as extremely lazy, but I need to make efficient use of my time and make sure I'm not just swallowing any garbage off the web. I am therefore looking for the best, most informative material, with a fair bit of detail, for the Reflection chapter in .NET. Reflection is something that comes up time and time again, and I want to extend my reading beyond what I already know from the official 70-536 book. I'm not a big fan of MSDN, but at the moment that's all I'm using. Has anyone got any other good published reading material off the web that can help with revision for the entrance exam? It would be greatly appreciated! Thanks in advance, Ibrar

    Read the article

  • mysql filtering result using left outer join

    - by user288178
    My query:

        SELECT content.*, activity_log.content_id
        FROM content
        LEFT JOIN activity_log
            ON content.id = activity_log.content_id AND sess_id = '$sess_id'
        WHERE activity_log.content_id IS NULL
            AND visibility = $visibility
            AND content.reported < ".REPORTED_LIMIT."
            AND content.file_ready = 1
        LIMIT 1

    The purpose of the query is to get 1 row from the content table that has not been viewed by the user (identified by session_id), but it still returns content that has been viewed. What is wrong? (I have checked the table, making sure that the content_ids are there.) Note: I think this is more efficient than using subqueries; thoughts?

    Read the article

  • Best way to get record counts grouped by month, adjusted for time zone, using SQL or LINQ to SQL

    - by Jeff Putz
    I'm looking for the most efficient way to pull a series of monthly record counts out of my database, adjusted for time zone, since the times are actually stored as UTC. I would like my result set to be a series of objects that include month, year and count. I have LINQ to SQL objects that look something like this:

        public class MyRecord
        {
            public int ID { get; set; }
            public DateTime TimeStamp { get; set; }
            public string Data { get; set; }
        }

    I'm not opposed to using straight SQL, but LINQ to SQL would at least keep the code a lot cleaner. The time zone adjustment is available as an integer (-5, for example). Again, the result set I'm looking for is objects containing the month, year and count, all integers. Any suggestions? I can think of several ways to do it straight, but not with a time zone adjustment.
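
    A sketch of one LINQ to SQL approach, assuming a data context named db with a MyRecords table: shift each timestamp by the hour offset before grouping, since DateTime.AddHours, .Year and .Month all translate to SQL.

        int tzOffsetHours = -5;   // hypothetical time-zone adjustment

        var monthlyCounts =
            (from r in db.MyRecords
             group r by new
             {
                 Year  = r.TimeStamp.AddHours(tzOffsetHours).Year,
                 Month = r.TimeStamp.AddHours(tzOffsetHours).Month
             } into g
             select new { g.Key.Year, g.Key.Month, Count = g.Count() })
            .ToList();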

    Read the article

  • C#: Efficiently search a large string for occurences of other strings

    - by Jon
    Hi, I'm using C# to continuously search for multiple string "keywords" within large strings, which are >= 4 KB. This code is constantly looping, and sleeps aren't cutting down CPU usage enough while maintaining a reasonable speed. The bottleneck is the keyword-matching method. I've found a few possibilities, and all of them give similar efficiency.

    1) http://tomasp.net/articles/ahocorasick.aspx - I do not have enough keywords for this to be the most efficient algorithm.

    2) Regex, using an instance-level, compiled regex - provides more functionality than I require, and not quite enough efficiency.

    3) String.IndexOf - I would need to do a "smart" version of this for it to provide enough efficiency; looping through each keyword and calling IndexOf doesn't cut it.

    Does anyone know of any algorithms or methods that I can use to attain my goal?
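
    One detail worth checking before swapping algorithms: the single-argument String.IndexOf uses culture-sensitive comparison, which is noticeably slower than an ordinal scan. A minimal sketch of option 3 with ordinal comparison forced (the method and names are illustrative):

        static bool ContainsAnyKeyword(string text, string[] keywords)
        {
            foreach (string keyword in keywords)
            {
                // Ordinal comparison avoids the culture-aware lookup that dominates the naive loop.
                if (text.IndexOf(keyword, StringComparison.Ordinal) >= 0)
                    return true;
            }
            return false;
        }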

    Read the article

  • Bypassing Windows Copy design

    - by Scott S
    I have been trying to figure out a way to streamline copying files from one drive (network or external, in my case) to my main system drives. Short of creating a program that I have to activate each time and then choose all the info in it, I have no really good design to bypass the built-in copy. I have been wondering if there is a way to intercept a straightforward Windows copy with a program that does it my way. The basic design would be that upon grabbing the copy (actually, for this to be efficient, a group of different copies), the program would organize all the separate copies into a single stream of copies. I've been wanting to do this because recently I have needed to make backups of a lot of data and move it around a lot, as all my drives seem to have been failing over the past few months.

    Read the article

  • Faster way to convert from 24 bit wav pcm format to float?

    - by LMO
    I need to read data in from a wav file in 24-bit PCM format and convert it to float. I'm using Python 2.7.2. The wave package reads the data in as a string, so what I've tried is:

        # read in entire wav file
        wdata = f.readframes(nFrames)

        # unpack into signed integers and convert to float
        data = array.array('f')
        for i in range(0, nFrames*3, 3):
            data.append(float(struct.unpack('<i', '\x00' + wdata[i:i+3])[0]))

        # normalize sample values
        data = data / 0x800000

    This is quite a bit faster than my earlier approaches, but still quite slow. Can anyone suggest a more efficient method?

    Read the article

  • Methods of pulling data from a database

    - by kingrichard2005
    I'm getting ready to start a C# web application project and just wanted some opinions regarding pulling data from a database. As far as I can tell, I can either use C# code to access the database from the code-behind of my web app (i.e. LINQ), or I can call a stored procedure that collects all the data and then read it with a few lines of code in my code-behind. I'm curious to know which of these two approaches, or any other approach, would be the most efficient, elegant, future-proof and easiest to test.
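
    For concreteness, a sketch of the two options being compared; the table, stored-procedure and connection-string names are made up, and the second half assumes System.Data.SqlClient.

        // Option 1: LINQ to SQL - the query is composed in the code-behind.
        var active = (from c in db.Customers
                      where c.IsActive
                      select c).ToList();

        // Option 2: stored procedure - the query lives in the database, the code-behind only calls it.
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("GetActiveCustomers", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // map columns to objects here
                }
            }
        }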

    Read the article

  • What is it about Fibonacci numbers?

    - by Ian Bishop
    Fibonacci numbers have become a popular introduction to recursion for Computer Science students, and there's a strong argument that they persist within nature. For these reasons, many of us are familiar with them. They also exist elsewhere within Computer Science, in surprisingly efficient data structures and algorithms based upon the sequence. There are two main examples that come to mind: Fibonacci heaps, which have better amortized running time than binomial heaps, and Fibonacci search, which shares O(log N) running time with binary search on an ordered array. Is there some special property of these numbers that gives them an advantage over other numerical sequences? Is it a density quality? What other possible applications could they have? It seems strange to me, as there are many natural number sequences that occur in other recursive problems, but I've never seen a Catalan heap.
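
    As an illustration of the second example, here is a sketch of Fibonacci search in C# (variable names are mine): probes sit Fibonacci distances past the current offset, so the range shrinks by roughly a golden-ratio factor each step using only additions and subtractions.

        // Search a sorted array; returns the index of target, or -1 if absent.
        static int FibonacciSearch(int[] a, int target)
        {
            int fib2 = 0, fib1 = 1, fib = fib1 + fib2;          // F(k-2), F(k-1), F(k)
            while (fib < a.Length) { fib2 = fib1; fib1 = fib; fib = fib1 + fib2; }

            int offset = -1;                                     // everything up to offset is already eliminated
            while (fib > 1)
            {
                int i = Math.Min(offset + fib2, a.Length - 1);   // probe F(k-2) elements past the offset
                if (a[i] < target)      { fib = fib1; fib1 = fib2; fib2 = fib - fib1; offset = i; }
                else if (a[i] > target) { fib = fib2; fib1 = fib1 - fib2; fib2 = fib - fib1; }
                else return i;
            }
            if (fib1 == 1 && offset + 1 < a.Length && a[offset + 1] == target)
                return offset + 1;                               // one candidate may remain
            return -1;
        }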

    Read the article

  • Get notified/receive explicit intents when an activity starts

    - by qtips
    Hi, I am developing an application that gets notified when an activity chosen by the user is started. For this to work, the best approach would be to register a BroadcastReceiver for ACTION_MAIN explicit intents, which as far as I know doesn't work (because these intents have specific targets). Another, probably less efficient, approach is to use the system ActivityManager and poll getRunningTasks(), which returns a list of all tasks running at the moment. The polling can be done by a background service. By monitoring the changes in this list, I can see whether an activity is running or not, so that my application can get notified. The downside is of course the polling. I have not tried this yet, but I think this last approach will probably work. Does anyone know of a better approach, or suggestions that are less intensive?

    Read the article

  • Unlock a file with unlocker from a c# App?

    - by netadictos
    I am trying to unlock a file from a C# program, using Unlocker. In my UI, I put a button to unlock the file the app couldn't delete. When the user pushes the button, I want Unlocker (the famous app) to be opened. I have read about this on the Unlocker website, and there are some explanations about the command line to use, but nothing works. I run the following and nothing happens:

        "c:\Program Files\unlocker\unlocker.exe" -L "PATHFORTHEFILE.doc"

    I have tried without parameters and with -LU. Any idea? Is there something more efficient than Unlocker that I could integrate with my software?
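
    One common pitfall is handing the whole quoted command line to Process.Start as a single file name. A sketch that separates the executable from its arguments (the switch and path are the ones quoted in the question, not verified here):

        using System.Diagnostics;

        var psi = new ProcessStartInfo
        {
            FileName  = @"c:\Program Files\unlocker\unlocker.exe",
            Arguments = "-L \"PATHFORTHEFILE.doc\"",   // same switch and path as in the question
            UseShellExecute = false
        };
        Process.Start(psi);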

    Read the article

  • Ways to divide the high/low byte from a 16bit address?

    - by Grissiom
    Hello, I'm developing software on an 8051 processor. A frequent job is to split the high and low byte of a 16-bit address. I want to see how many ways there are to achieve it. The ways I have come up with so far are (say ptr is a 16-bit pointer, and int is a 16-bit int):

        ADDH = (unsigned int) ptr >> 8;
        ADDL = (unsigned int) ptr & 0x00FF;

    and

        ADDH = ((unsigned char *)&ptr)[0];
        ADDL = ((unsigned char *)&ptr)[1];

    Does anyone have any other bright ideas? ;) And can anyone tell me which way is more efficient?

    Read the article

  • Mercurial - How to stop tracking modified file but keep the first version in repository.

    - by teerapap
    I created the hg repository with my source tree. I want to keep the first version of some files, such as Makefile, in the repository and then have hg not see them as modified even though I modify them. The original problem is that ./configure usually modifies the Makefile, but I don't want the build files to be committed to the repository. So I want to keep only the first version of configure and Makefile in the repository, so that everybody who clones my repository can run ./configure themselves without disturbing the repository. I tried hg remove and hg forget, but those stop tracking and also delete the files in the next revision of the repository. .hgignore doesn't do the trick either. I have thought of running hg revert every time I run ./configure or make, but that's not an efficient way. Are there any better ways?

    Read the article

  • CUDA small kernel 2d convolution - how to do it

    - by paulAl
    I've been experimenting with CUDA kernels for days to perform a fast 2D convolution between a 500x500 image (though I could also vary the dimensions) and a very small 2D kernel (a Laplacian 2D kernel, so it's a 3x3 kernel, too small to take huge advantage of all the CUDA threads). I created a classic CPU implementation (two for loops, as easy as you would think) and then I started creating CUDA kernels. After a few disappointing attempts to perform a faster convolution, I ended up with this code: http://www.evl.uic.edu/sjames/cs525/final.html (see the Shared Memory section). It basically lets a 16x16 thread block load all the convolution data it needs into shared memory and then performs the convolution. No luck: the CPU is still a lot faster. I didn't try the FFT approach because the CUDA SDK states that it is efficient with large kernel sizes. Whether or not you read everything I wrote, my question is: how can I perform a fast 2D convolution between a relatively large image and a very small kernel (3x3) with CUDA?

    Read the article

  • Updating Silverlight with data. JSON or WCF?

    - by Alastair Pitts
    We will be using custom Silverlight 4.0 controls on our ASP.NET MVC web page to display data from our database, and I was wondering what the most efficient method is. We will have return values of up to 100k records (of 2 properties per record). We have a test that uses the HTML Bridge from JavaScript to Silverlight: first we perform a POST request to a controller action in the MVC web app and return JSON; this JSON is then passed to Silverlight, where it is parsed and the UI updated. This seems rather slow, with the stored procedure (the select) taking about 3 seconds and the entire update in the browser about 10-15 seconds. Having had a brief look on the net, it seems that WCF is another option, but not having used it, I wasn't sure of its capability or suitability. Does anyone have any experiences or recommendations?
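
    For reference, the WCF option amounts to defining a service and data contract that Silverlight can generate a proxy for. A minimal sketch is below; the names are placeholders, and for payloads of this size a custom binding with binary message encoding (and paging, as hinted in the contract) is usually worth testing against the JSON route.

        using System.Collections.Generic;
        using System.Runtime.Serialization;
        using System.ServiceModel;

        [ServiceContract]
        public interface IRecordService
        {
            // Paging keeps any single response well under the 100k-record worst case.
            [OperationContract]
            List<RecordDto> GetRecords(int pageIndex, int pageSize);
        }

        [DataContract]
        public class RecordDto
        {
            [DataMember] public int Id { get; set; }
            [DataMember] public string Value { get; set; }
        }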

    Read the article

  • How to store static content across branches in a single location in version control

    - by Shravan
    [Just a random thought] I have a PDF doc that is downloaded when the user clicks 'help' on my website. Now, this is a pretty huge document, and it is saved in version control (SVN) and thus copied for every branch that exists in SVN. This is static content, something that developers are not working on, and it does not change often. Is there a more efficient way to store it (one that would not hamper local deployments) that would make SVN checkouts and updates relatively faster? I know the benefit we would get is not huge; this is just something that came to my head nonetheless.

    Read the article

  • How do I request a single random row from a force.com database in SOQL?

    - by Ollie C
    The total row count is in the range 10k-100k rows. Can I use RAND() on force.com? Unfortunately, although all the rows have a unique numeric identifier, there are many gaps, and I'd often want to select a random row from a filtered subset anyway. I suspect there's no particularly efficient way to do this, but is it possible at all? Ultimately all I want to do is extract one row from a table (or a subset based on specific filter criteria) at random. If force.com doesn't let me select a random row, can I query the rows to select from, assign sequential IDs to all the rows (say 1-1,035), then pick a random number in that range locally (say 349) and get row 349?

    Read the article

  • asp.net regex to find anchor tags and replace their url

    - by ace
    Hi - I'm trying to find all the anchor tags and append a variable to each href value. For example,

        <a href="/page.aspx">link</a>

    will become

        <a href="/page.aspx?id=2">link</a>

    and

        <A hRef='http://www.google.com'><img src='pic.jpg'></a>

    will become

        <A hRef='http://www.google.com?id=2'><img src='pic.jpg'></a>

    I'm able to match all the anchor tags and href values using regex, and then I manually replace the values using string.Replace. However, I don't think that's the most efficient way to do this. Is there a solution where I can use something like regex.Replace(html, newurlvalue)?
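
    Regex.Replace can do the append in one pass when given a MatchEvaluator. A sketch along those lines (the pattern assumes quoted href values, and the helper name is illustrative):

        using System.Text.RegularExpressions;

        static string AppendId(string html, string idValue)
        {
            // Capture the tag up to the opening quote, the URL itself, and the matching closing quote.
            var href = new Regex(
                "(?<pre><a\\b[^>]*?\\bhref\\s*=\\s*(?<q>[\"']))(?<url>[^\"']*)(?<post>\\k<q>)",
                RegexOptions.IgnoreCase);

            return href.Replace(html, m =>
            {
                string url = m.Groups["url"].Value;
                string sep = url.Contains("?") ? "&" : "?";   // don't double up the query-string marker
                return m.Groups["pre"].Value + url + sep + "id=" + idValue + m.Groups["post"].Value;
            });
        }

    With the example markup above, AppendId(html, "2") rewrites href="/page.aspx" to href="/page.aspx?id=2", and would use &id=2 when a query string is already present.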

    Read the article

  • For Qt 4.6.x, how to auto-size text to fit in a specified width?

    - by darenchow
    Inside my QGraphicsRectItem::paint(), I am trying to draw the name of the item within its rect(). However, the different items can be of variable width and, similarly, names can be of variable length. Currently I start with a maximum font size, check whether the text fits, and decrement the size until I find one that fits. So far, I haven't been able to find a quick and easy way to do this. Is there a better or more efficient way? Thanks!

        void checkFontSize(QPainter *painter, const QString& name)
        {
            // check the font size - need a better algorithm... this could take a while
            while (painter->fontMetrics().width(name) > rect().width())
            {
                int newsize = painter->font().pointSize() - 1;
                painter->setFont(QFont(painter->font().family(), newsize));
            }
        }

    Read the article

  • cutting a text file into multiple parts in emacs

    - by Gaurish Telang
    Hi, I am using the GNU Emacs 23 editor. I have a huge text file containing about 10,000 lines which I want to chop into multiple files. Using the mouse to select the required text and paste it into another file is really painful, and prone to errors too. I want to divide the text file according to line numbers into, say, 4 files:

        first file:  lines 1-2500
        second file: lines 2500-5000
        third file:  lines 5000-7500
        fourth file: lines 7500-10000

    How do I do this? At the very least, is there an efficient way to copy large regions of the file just by specifying line numbers?

    Read the article

  • Resetting Objects vs. Constructing New Objects

    - by byronh
    Is it considered better practice and/or more efficient to create a 'reset' function for a particular object that clears/defaults all the necessary member variables to allow further operations, or to simply construct a new object from outside? I've seen both methods employed a lot, but I can't decide which one is better. Of course, for classes that represent database connections, you'd have to use a reset method rather than constructing a new one, to avoid needless connecting/disconnecting, but I'm talking more in terms of abstraction classes. Can anyone give me some real-world examples of when to use each method? In my particular case I'm thinking mostly in terms of ORM, or the Model in MVC: for example, when I want to retrieve a bunch of database objects for display and modify them in one operation.

    Read the article

  • Reverse massive text file in Java

    - by DanJanson
    What would be the best approach to reverse a large text file that is uploaded asynchronously to a servlet, in a scalable and efficient way?

    - The text file can be massive (gigabytes long).
    - You can assume a multiple-server/clustered environment to do this in a distributed manner.
    - Open source libraries are encouraged.

    I was thinking of using Java NIO to treat the file as an array on disk (so that I don't have to treat the file as a string buffer in memory). Also, I am thinking of using MapReduce to break up the file and process it on separate machines. Any input is appreciated. Thanks. Daniel

    Read the article
