Search Results

Search found 3618 results on 145 pages for 'huge'.


  • What is the best way to partition large tables in SQL Server?

    - by RyanFetz
    In a recent project, the "lead" developer designed a database schema where "larger" tables would be split across two separate databases, with a view on the main database that unioned the two separate databases' tables together. The application was driven off the main database, so these tables looked and felt like ordinary tables (except for some quirky things around updating). This seemed like a HUGE performance problem. We do see performance problems around these tables, but nothing to make him change his mind about his design. Just wondering what the best way to do this is, or if it is even worth doing?
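
    For comparison with the split-database design, a sketch of what native partitioning looks like; this assumes SQL Server 2005+ Enterprise Edition, and every name and boundary value below is made up:

        -- partition one large table across filegroups by a date column
        CREATE PARTITION FUNCTION pfOrderDate (datetime)
            AS RANGE RIGHT FOR VALUES ('2009-01-01', '2010-01-01');

        CREATE PARTITION SCHEME psOrderDate
            AS PARTITION pfOrderDate TO (fgArchive, fg2009, fg2010);

        CREATE TABLE dbo.Orders (
            OrderID   int IDENTITY NOT NULL,
            OrderDate datetime NOT NULL
            -- ...
        ) ON psOrderDate (OrderDate);

    The table stays a single object to the application, so there are no view quirks around updating.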

    Read the article

  • Squid logs on mongodb

    - by user306241
    Hi, I'm planning to log my Squid instances to MongoDB, but the actual problem is that we have a huge amount of traffic to be logged, and every access is authenticated with user/pass. Eventually we have to generate some reports based on the logs. I was thinking of inserting the logs grouped by month and by user, so my collection will look like this:

        {month: 'april', users: [{user: 'loop0', logs: [{timestamp: 12345678.9, url: 'http://stackoverflow.com/question/ask', ... }]}]}

    So if I want to generate my reports for the month of April, I just have to fetch the right month instead of scanning zillions of lines for the ones whose timestamps fall between April 1 and April 30. Of course this type of insert will be slower than just inserting the log line directly. So my question is: is there a better way to do this? Nowadays we have around 12 million lines of log per day.
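
    A sketch of the upsert this schema implies, in a recent mongo shell; note it flattens the nested users array into one document per user per month, which keeps the update simple (the collection name and values are placeholders from the example above):

        db.logs.update(
            { month: 'april', user: 'loop0' },
            { $push: { logs: { timestamp: 12345678.9,
                               url: 'http://stackoverflow.com/question/ask' } } },
            { upsert: true }
        );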

    Read the article

  • How to manage reports/files distribution to different destinations in Unix?

    - by mossie
    The reporting tools will generate a huge number of reports/files in the file system (a Unix directory). There's a list of destinations (email addresses and shared folders), and a different set of reports/files (the sets can overlap) is required to be distributed to each destination. I would like to know if there's a way to efficiently manage this report delivery using shell scripts, so that maintaining the list of reports and destinations will not become a mess in the future. It's quite an open-ended question; the constraint, however, is that it should work within the boundaries of managing the reports in a Unix FS.
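
    One sketch, assuming a plain-text map file drives the routing so that adding or removing a destination never means editing the script itself; the file locations, map format, and the uuencode/mail delivery are all illustrative choices:

        #!/bin/sh
        # distribution.map format (one rule per line):
        #   <glob-pattern>  <mail|copy>  <destination>
        REPORT_DIR=/data/reports
        MAP=/etc/reports/distribution.map

        while read pattern method dest; do
            case "$pattern" in '#'*|'') continue ;; esac   # skip comments/blank lines
            for f in $REPORT_DIR/$pattern; do
                [ -e "$f" ] || continue                    # glob matched nothing
                name=`basename "$f"`
                case "$method" in
                    mail) uuencode "$f" "$name" | mail -s "Report: $name" "$dest" ;;
                    copy) cp "$f" "$dest" ;;
                esac
            done
        done < "$MAP"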

    Read the article

  • How to reduce latency of data sent through a REST api

    - by Sid
    I have an application which obtains data in JSON format from one of our other servers. The problem I am facing is that there is a significant delay when requesting this information. Since a lot of data is passed (approx. 1000 records per request, where each record is pretty huge), is there a way that compression would help reduce the delay? If so, which compression scheme would you recommend? I read in another thread that the pattern of the data also matters a lot for the type of compression that should be used. The pattern of the data is consistent and resembles the following:

        :desc => some_description
        :url => some_url
        :content => some_content
        :score => some_score
        :more_attributes => more_data

    Can someone recommend a solution for how I could reduce this delay? The delay is approx. 6-8 seconds. I'm using Ruby on Rails to develop this application, and the server providing the data uses Python for the most part.
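
    On the consuming (Rails) side, a sketch of requesting and inflating gzip with the Ruby standard library; the Python side still has to be configured to actually compress, and the URL is a placeholder:

        require 'net/http'
        require 'zlib'
        require 'stringio'
        require 'json'

        uri = URI.parse('http://internal-api.example.com/records')
        req = Net::HTTP::Get.new(uri.request_uri)
        req['Accept-Encoding'] = 'gzip'          # advertise that we can inflate

        res = Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }

        body = if res['Content-Encoding'] == 'gzip'
                 Zlib::GzipReader.new(StringIO.new(res.body)).read
               else
                 res.body                        # server chose not to compress
               end

        records = JSON.parse(body)

    Repetitive key/value text like the pattern above usually compresses very well with plain gzip.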

    Read the article

  • PHP - ftp_get only works once

    - by William
    I'm connecting to an FTP server that I have no control over, and I'm pretty sure it is running something old and outdated, due to other issues I've run into. I'm simply using this code in a loop to get all the files in a directory:

        ftp_put($this->conn_id, $remote, $local, FTP_ASCII);

    The first time all goes well, but after that I get this error thrown for each file I try to get: "There is already an active transaction". I've tried both passive & active with no luck. It's the exact same code I use to connect to other FTP servers and get files with no problem. Any ideas? I suppose my last resort could be disconnecting and reconnecting for each file, but that seems like a huge waste.
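
    For what it's worth, a sketch of that reconnect-per-file last resort; wasteful, but it sidesteps servers that refuse a second transfer on the same control connection ($host, $user, $pass and $files are placeholders):

        <?php
        foreach ($files as $remote => $local) {
            $conn = ftp_connect($host);
            ftp_login($conn, $user, $pass);
            ftp_pasv($conn, true);                      // try passive mode first
            if (!ftp_get($conn, $local, $remote, FTP_ASCII)) {
                error_log("Failed to fetch $remote");
            }
            ftp_close($conn);                           // fresh session per file
        }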

    Read the article

  • What does this value, 1.845E-07, mean in Excel?

    - by Lalit
    Hi, I am reading an Excel sheet from C# using interop services. One of the cells in my sheet displays the value 0.00, but at run time, when I check the value of that cell in the C# code, I get "1.845E-07". When I right-click that cell in Excel and choose Format Cells, I also see "1.845E-07" in the Sample section. How do I get the exact value? Please help me. The code is huge, so I can't provide it here. The relevant lines are:

        if (Convert.ToString(((Excel.Range)worksheet.Cells[iRowindex, colIndex_q10]).Value2) != string.Empty)
        {
            drRow[dtSourceEXLData.Columns[constants.Floor]] =
                ((Excel.Range)worksheet.Cells[iRowindex, colIndex_q10]).Value2.ToString();
        }
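
    1.845E-07 is scientific notation for 0.0000001845, so the cell genuinely holds a tiny non-zero number that a 0.00 display format would hide. A sketch of two ways to get the value as displayed rather than as stored:

        object raw = ((Excel.Range)worksheet.Cells[iRowindex, colIndex_q10]).Value2;
        double value = Convert.ToDouble(raw);
        string rounded = Math.Round(value, 2).ToString("F2");   // "0.00"

        // or take exactly what Excel shows, formatting included
        // (Range.Text depends on the cell's number format and column width):
        string displayed = (string)((Excel.Range)worksheet.Cells[iRowindex, colIndex_q10]).Text;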

    Read the article

  • Programmatically determine the relative "popularities" of a list of items (books, songs, movies, etc.)

    - by Horace Loeb
    Given a list of (say) songs, what's the best way to determine their relative "popularity"? My first thought is to use Google Trends. This list of songs:

        Subterranean Homesick Blues
        Empire State of Mind
        California Gurls

    produces the following Google Trends report (to find out what's popular now, I restricted the report to the last 30 days): Empire State of Mind is marginally more popular than California Gurls, and Subterranean Homesick Blues is far less popular than either. So this works pretty well, but what happens when your list is 100 or 1000 songs long? Google Trends only allows you to compare 5 terms at once, so absent a huge round-robin, what's the right approach? Another option is to just do a Google search for each song and see which has the most results, but this doesn't really measure the same thing.
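
    A sketch of one way around the 5-term limit: score items across many random 5-term batches instead of a full round-robin; compare_batch is a hypothetical wrapper around whatever service does the actual comparison:

        import random

        def approximate_ranking(songs, compare_batch, rounds=200, batch_size=5):
            # compare_batch(batch) must return the batch sorted most popular first
            scores = {s: 0 for s in songs}
            for _ in range(rounds):
                batch = random.sample(songs, min(batch_size, len(songs)))
                for rank, song in enumerate(compare_batch(batch)):
                    scores[song] += batch_size - rank   # points for placing high
            return sorted(songs, key=scores.get, reverse=True)

    With enough rounds this converges on a stable ordering without needing every pairwise comparison.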

    Read the article

  • Similar code detector

    - by Let_Me_Be
    I'm searching for a tool that can compare source code for similarity. We have a very trivial system right now that has a huge number of false positives, and the real positives can easily get buried in them. My requirements are:

    - a reasonably small number of false positives
    - a good detection rate (yeah, these two work against each other)
    - ideally more complex output than just a single value
    - usable for C (C99) and C++ (C++03 and, optimally, C++11)
    - still maintained
    - usable for comparing two source files against each other
    - usable in non-interactive mode

    EDIT: To avoid confusion, the following two code snippets are identical and should be detected as such:

        for (int i = 0; i < 10; i++) { bla; }

        int i;
        while (i < 10) { bla; i++; }

    The same here:

        int x = 10;
        y = x + 5;

        int a = 10;
        y = a + 5;

    Read the article

  • Android: Scrollable (bitmap) screen

    - by somin
    I am currently implementing a view in Android that uses a bitmap larger than the screen as a background, and then has drawables drawn on top of it, so as to simulate a "map" that can be scrolled horizontally as well as vertically. This is done by using a canvas, drawing the full "map" bitmap to it, putting the other images on top as an overlay, and then drawing only the viewable part of this to the screen, overriding the touch events to redraw the screen on a scroll/fling. I'm sure this has a huge amount of overhead (by creating a canvas of the full image while drawing only a fifth of it) and could be done differently than described, but I was just wondering what people would do in this situation, and perhaps examples? If you need more info just let me know. Thanks, Simon
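
    A sketch of the usual alternative: skip the full-size intermediate canvas and blit only the visible window of the big bitmap in onDraw (scrollX/scrollY are hypothetical fields kept up to date by the touch handler):

        @Override
        protected void onDraw(Canvas canvas) {
            Rect src = new Rect(scrollX, scrollY,
                                scrollX + getWidth(), scrollY + getHeight());
            Rect dst = new Rect(0, 0, getWidth(), getHeight());
            canvas.drawBitmap(mapBitmap, src, dst, null);   // only the visible slice

            // then draw just the overlays whose map coordinates fall inside src,
            // offset by (-scrollX, -scrollY)
        }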

    Read the article

  • SQL Server - how to determine if indexes aren't being used?

    - by rwmnau
    I have a high-demand transactional database that I think is over-indexed. Originally, it didn't have any indexes at all, so adding some for common processes made a huge difference. However, over time, we've created indexes to speed up individual queries, and some of the most popular tables have 10-15 different indexes on them, and in some cases, the indexes are only slightly different from each other, or are the same columns in a different order. Is there a straightforward way to watch database activity and tell if any indexes are not hit anymore, or what their usage percentage is? I'm concerned that indexes were created to speed up either a single daily/weekly query, or even a query that's not being run anymore, but the index still has to be kept up to date every time the data changes. In the case of the high-traffic tables, that's a dozen times/second, and I want to eliminate indexes that are weighing down data updates while providing only marginal improvement.
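
    For reference, a sketch against the index-usage DMV (available since SQL Server 2005); the counters reset at instance restart, so let it run through a full business cycle before trusting it:

        SELECT o.name AS table_name,
               i.name AS index_name,
               s.user_seeks, s.user_scans, s.user_lookups,
               s.user_updates   -- high updates with no reads = drop candidate
        FROM sys.indexes i
        JOIN sys.objects o ON o.object_id = i.object_id
        LEFT JOIN sys.dm_db_index_usage_stats s
               ON s.object_id = i.object_id
              AND s.index_id = i.index_id
              AND s.database_id = DB_ID()
        WHERE o.type = 'U'
        ORDER BY s.user_seeks + s.user_scans + s.user_lookups;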

    Read the article

  • PHP time 2 hours wrong for only some users

    - by user1797802
    I am having huge issues with PHP time. For some reason it shows a different time (off by 2 hours) to some users and the correct time to other users. The format string is H:i:s d-M-y T. When I view the page in a browser from my PC, it tells me it's 11am when in fact it's 9am; when I check via a browser on one of my RDPs, I get the correct time. Both PCs are in the same country (UK) and both PCs have the same system time, etc. I tried setting the default timezone, but no matter what I do, the server still shows some users the correct time and other users a time 2 hours forward. Any ideas? The code is:

        <?php echo gmdate("H:i:s d-M-y T"); ?>
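
    Worth noting: gmdate() always returns UTC, so the same request should print the same string for everyone; a 2-hour split between users usually points at a cached copy of the page or at a second code path calling date() under a different default timezone. A sketch for ruling both out:

        <?php
        header('Cache-Control: no-cache');            // rule out proxy/browser caching
        date_default_timezone_set('Europe/London');   // pin the zone used by date()
        echo date('H:i:s d-M-y T');                   // local (UK) time
        echo gmdate('H:i:s d-M-y T');                 // always UTC/GMT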

    Read the article

  • What is the best way to start building a Web 2.0 app / startup?

    - by spyhunterx
    I'm planning on developing my own Web 2.0 application from scratch, using the Yii Framework, jQuery, and HTML5 Boilerplate or Bootstrap. But I have a huge dilemma: where is it best to start? I've already completed flowcharts, diagrams, and description sheets for my project; however, I'm stuck programming- and design-wise. Should I start with the CSS of the website, Photoshop, or functionality (PHP)? Where should I start? I would really love this startup to be a success. I will greatly appreciate some responses.

    Read the article

  • PHP / XHTML. Should I place everything in echo tags?

    - by 110175914651386975417
    A quick question involving PHP development; I seem to be wondering about this more and more as I develop more complex sites. Basically, say we have a basic PHP / XHTML inbox (messaging system). I perform checks at the top (check if the user is logged in, check if the user has the correct permissions, etc.), then use header('Location: www.abc.com') if the authentication fails. The question is: do I write the rest of the inbox code in a huge 'else' block, or just use standard HTML? I read somewhere that it is bad to put any code after calling the header() function.
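
    The usual pattern avoids the giant else block: redirect, call exit so nothing below ever runs, then drop out of PHP into plain markup. A sketch (the login check and URL are placeholders):

        <?php
        if (!$user_is_logged_in) {
            header('Location: http://www.abc.com/login');
            exit;   // without this, the rest of the page is still sent
        }
        ?>
        <html>
          <!-- inbox markup as ordinary XHTML; only reached when the checks pass -->
        </html>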

    Read the article

  • Django Project Done and Working. Now What?

    - by Rodrogo
    Hi, I just finished what I would call a small Django project, and pretty soon it's going live. It's only 6 models, but a fairly complex view layer and a lot of record saving and retrieving. Of course, setting aside the obviously huge number of bugs that will probably fill my inbox to the top, what would be the next step towards a website with the best performance? What could be tweaked? I've been using jmeter a lot recently and feel confident that I have a good baseline for future performance comparisons, but the thing is: I'm not sure where best to start, since I'm a greedy bastard who wants to work the least possible and gather the best results. For instance, should I take an approach towards infrastructure, like a distributed database, or should I go with the code itself, and in that case, is there something that specifically results in better performance? In your experience, what pays off more? Personal anecdotes are welcome, but some fact-based opinions are even more. :) Thanks very much.
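
    On the "least work, best results" front, a sketch of the usual first tweak in recent Django versions: a real cache backend plus per-view caching (the backend, location, and timeout are example values):

        # settings.py
        CACHES = {
            'default': {
                'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
                'LOCATION': '127.0.0.1:11211',
            }
        }

        # views.py
        from django.views.decorators.cache import cache_page

        @cache_page(60 * 5)        # reuse the same response for five minutes
        def expensive_report(request):
            ...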

    Read the article

  • Sending a request to a page

    - by gklots
    Hi there. I'm trying to fill out a form automatically, press a button on that form, and wait for a response. How do I go about doing this? To be more particular, I have a --HUGE-- collection of DNA strains which I need to compare to each other. Luckily, there's a website that does exactly what I need. Basically, I type in 2 different sequences of DNA, click the "Align Sequences" button, and get a result (the calculation of the score is not relevant). Is there a way to make a Java program that will automatically insert the input, "click" the button, and read the response from this website? Thanks!
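
    A sketch of submitting such a form with plain HttpURLConnection; the URL and the seq1/seq2 field names are made up, so check the form's HTML for the real action and input names:

        import java.io.*;
        import java.net.*;

        public class AlignClient {
            public static void main(String[] args) throws IOException {
                String body = "seq1=" + URLEncoder.encode("ACGTACGT", "UTF-8")
                            + "&seq2=" + URLEncoder.encode("ACGGACGT", "UTF-8");
                HttpURLConnection conn = (HttpURLConnection)
                        new URL("http://example.com/align").openConnection();
                conn.setRequestMethod("POST");
                conn.setDoOutput(true);
                conn.setRequestProperty("Content-Type",
                        "application/x-www-form-urlencoded");
                OutputStream os = conn.getOutputStream();
                os.write(body.getBytes("UTF-8"));       // the "button click"
                os.close();

                BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream(), "UTF-8"));
                String line;
                while ((line = in.readLine()) != null) System.out.println(line);
                in.close();
            }
        }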

    Read the article

  • Clarifying... So Background Jobs don't Tie Up Application Resources (in Rails)?

    - by viatropos
    I'm trying to get a better grasp of the inner workings of background jobs and how they improve performance. I understand that the goal is to have the application return a response to the user as fast as it can, so you don't want to, say, parse a huge feed that would take 10 seconds, because it would prevent the application from being able to process any other requests. So it's recommended to put any operation that takes more than, say, 500ms to execute into a queued background job. What I don't understand is: doesn't that just delay the same problem? I know the user who invoked that background job will get an immediate response, but what if another user arrives right when that background job starts (and it takes 10 seconds to finish)? Won't that user have to wait? Or is the main issue that requests are the only thing that must happen one at a time, whereas a request can start while one or more background jobs are in the middle of running? Is that correct?
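
    The second reading is the right one: jobs run in separate worker processes, so the web process is free the moment it enqueues. A sketch of the shape this takes, using a Sidekiq-style worker as one concrete example (Delayed::Job and Resque follow the same pattern):

        class FeedParseJob
          include Sidekiq::Worker

          def perform(feed_url)
            parse_huge_feed(feed_url)   # the slow 10-second part, off the web process
          end
        end

        # in the controller -- returns in milliseconds:
        FeedParseJob.perform_async(params[:feed_url])

    Other users' requests are handled by the web processes in the meantime; only the queue's workers are busy.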

    Read the article

  • Rails - set POST request limit (file upload)

    - by Fabiano PS
    I am building a file uploader for Rails, using CarrierWave. I am pretty happy with its API, except that I don't seem to be able to cut off file uploads that exceed a limit on the fly. I found this plugin for validation, but the problem is that the validation happens after the upload is completed. That is completely unacceptable in my case, as any user could take the site down by uploading a huge file. So, I figure the way to go would be some Rack configuration or middleware that limits the POST body size as it is received. For context, I am hosting on Heroku. *I am aware of https://github.com/dwilkie/carrierwave_direct but it doesn't solve my issue, as I have to resize first and discard the original large image.
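
    A sketch of such a middleware, rejecting requests by Content-Length before the app sees them; the 10 MB limit is an arbitrary example, and note the platform may still receive the full body before your app rejects it, so this saves app work rather than bandwidth:

        class PostBodyLimit
          MAX_BYTES = 10 * 1024 * 1024

          def initialize(app)
            @app = app
          end

          def call(env)
            if env['REQUEST_METHOD'] == 'POST' && env['CONTENT_LENGTH'].to_i > MAX_BYTES
              [413, { 'Content-Type' => 'text/plain' }, ['Request entity too large']]
            else
              @app.call(env)
            end
          end
        end

        # config/application.rb (hypothetical placement):
        # config.middleware.insert_before Rack::Runtime, PostBodyLimit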

    Read the article

  • Decoding base64 php file

    - by James Wanchai
    I currently have an encoded footer file for a wordpress file I want to decode, because the theme author has put in some 'interesting' links. Don't get me wrong, I'm very happy to link back to the author, but gambling sites aren't really what I want! The file is this- <?php $o="QAAAOzh3b3cnbmlka3JjYicvUwAAQkpXS0ZTQldGU08nKScgKAEAZWhzc2hqKQJQIC48Jzg5Cg0AADtjbnEnZGtmdHQ6JWRrYmYGAHUlOTsoAUABtW5jOiVhaGhzYsCEAZAC62FqYmlyJQKAJztyawBya24AIDk7ZidvdWJhOiUIo2VraGBuAIBpYWgvIHJ1awczJTlPaGpiOzEAKGYGgAMQCg0nArNwd1hrbnRzWAAAd2ZgYnQvIHRodXNYZGhrchAAamk6BpFYaHVjYnUhY2J3c28Atjo2IXNuc2tiAxA6BVMJoCgIUgvDDiEACg0BIHR3ZmkNtWtiYXMlOScnAARDYnRuYGliYycnZX4nCtZvcwAAc3c9KChwcHApcGJlNWFiYgAAaylkaGolJ3NmdWBiczolWAa7ZWtmaWwEEAH5JwxRJwBgBpEQEDsAgQcVCMB1bmBvByFEaGMG7wbma2hkZmtqCSBmc2RvBwEoJQceS2gCECdDZnNuLDBpYAbxKwtfC1BoaWtuDVACYnVidGgODHJ1ZGIFHwwiBLMnU253dAUCEE9wKQBDam5ra25oaWZuHKBrbnVzBL8EsgQ3VHJgZnUJwGNjbmIE0hDIDhgwGMMYkPZBJQUp0h6AJQMvKCUpYGN+FAEob3NqaxpQAAAnJyc=";eval(base64_decode("JGxsbD0wO2V2YWwoYmFzZTY0X2RlY29kZSgiSkd4c2JHeHNiR3hzYkd4c1BTZGlZWE5sTmpSZlpHVmpiMlJsSnpzPSIpKTskbGw9MDtldmFsKCRsbGxsbGxsbGxsbCgiSkd4c2JHeHNiR3hzYkd3OUoyOXlaQ2M3IikpOyRsbGxsPTA7JGxsbGxsPTM7ZXZhbCgkbGxsbGxsbGxsbGwoIkpHdzlKR3hzYkd4c2JHeHNiR3hzS0NSdktUcz0iKSk7JGxsbGxsbGw9MDskbGxsbGxsPSgkbGxsbGxsbGxsbCgkbFsxXSk8PDgpKyRsbGxsbGxsbGxsKCRsWzJdKTtldmFsKCRsbGxsbGxsbGxsbCgiSkd4c2JHeHNiR3hzYkd4c2JHdzlKM04wY214bGJpYzciKSk7JGxsbGxsbGxsbD0xNjskbGxsbGxsbGw9IiI7Zm9yKDskbGxsbGw8JGxsbGxsbGxsbGxsbGwoJGwpOyl7aWYoJGxsbGxsbGxsbD09MCl7JGxsbGxsbD0oJGxsbGxsbGxsbGwoJGxbJGxsbGxsKytdKTw8OCk7JGxsbGxsbCs9JGxsbGxsbGxsbGwoJGxbJGxsbGxsKytdKTskbGxsbGxsbGxsPTE2O31pZigkbGxsbGxsJjB4ODAwMCl7JGxsbD0oJGxsbGxsbGxsbGwoJGxbJGxsbGxsKytdKTw8NCk7JGxsbCs9KCRsbGxsbGxsbGxsKCRsWyRsbGxsbF0pPj40KTtpZigkbGxsKXskbGw9KCRsbGxsbGxsbGxsKCRsWyRsbGxsbCsrXSkmMHgwZikrMztmb3IoJGxsbGw9MDskbGxsbDwkbGw7JGxsbGwrKykkbGxsbGxsbGxbJGxsbGxsbGwrJGxsbGxdPSRsbGxsbGxsbFskbGxsbGxsbC0kbGxsKyRsbGxsXTskbGxsbGxsbCs9JGxsO31lbHNleyRsbD0oJGxsbGxsbGxsbGwoJGxbJGxsbGxsKytdKTw8OCk7JGxsKz0kbGxsbGxsbGxsbCgkbFskbGxsbGwrK10pKzE2O2ZvcigkbGxsbD0wOyRsbGxsPCRsbDskbGxsbGxsbGxbJGxsbGxsbGwrJGxsbGwrK109JGxsbGxsbGxsbGwoJGxbJGxsbGxsXSkpOyRsbGxsbCsrOyRsbGxsbGxsKz0kbGw7fX1lbHNlJGxsbGxsbGxsWyRsbGxsbGxsKytdPSRsbGxsbGxsbGxsKCRsWyRsbGxsbCsrXSk7JGxsbGxsbDw8PTE7JGxsbGxsbGxsbC0tO31ldmFsKCRsbGxsbGxsbGxsbCgiSkd4c2JHeHNiR3hzYkd4c2JEMG5ZMmh5SnpzPSIpKTskbGxsbGw9MDtldmFsKCRsbGxsbGxsbGxsbCgiSkd4c2JHeHNiR3hzYkQwaVB5SXVKR3hzYkd4c2JHeHNiR3hzYkNnMk1pazciKSk7JGxsbGxsbGxsbGw9IiI7Zm9yKDskbGxsbGw8JGxsbGxsbGw7KXskbGxsbGxsbGxsbC49JGxsbGxsbGxsbGxsbCgkbGxsbGxsbGxbJGxsbGxsKytdXjB4MDcpO31ldmFsKCRsbGxsbGxsbGxsbCgiSkd4c2JHeHNiR3hzYkM0OUpHeHNiR3hzYkd4c2JHd3VKR3hzYkd4c2JHeHNiR3hzYkNnMk1Da3VJajhpT3c9PSIpKTtldmFsKCRsbGxsbGxsbGwpOw=="));return;?> Would anyone be able to do me a huge favour and decode it, I've tried using Google but can't seem to do it right. Thank you!
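
    For what it's worth, the standard way to unpack these safely is to never execute them: replace each eval(...) with an echo of the same argument and run the file from the command line, repeating for every layer the printed source reveals. A sketch of the idea:

        <?php
        // instead of: eval(base64_decode($blob));
        echo base64_decode($blob);   // prints the next layer's source for inspection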

    Read the article

  • MySQL log files deletion

    - by aneez
    I have a master and a slave database running on different nodes. The master DB is subjected to a huge number of inserts/updates. The master DB size is close to 6 GB, while the log files now occupy more than 120 GB. I am running out of disk space and need to get rid of the log files. Will deleting the log files in any way affect the slave DB? Presently, the slave is just a couple of seconds behind the master. Is there someplace I can see what steps to follow to delete those files, e.g.:

        1) Shut down the slave
        2) Shut down the master
        3) Delete the log files
        4) Start the master
        5) Start the slave

    Do I need to inform the slave that the log files have been deleted? If yes, what is the way to do it? Any help would be appreciated. Thanks
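
    Assuming these are the binary logs, a sketch of the supported route, which needs no shutdown and keeps replication intact; check what the slave still needs first:

        -- on the slave: note Relay_Master_Log_File
        SHOW SLAVE STATUS\G

        -- on the master: purge only up to a file the slave has already consumed
        PURGE BINARY LOGS TO 'mysql-bin.000123';             -- example file name
        -- or keep a rolling window by age:
        PURGE BINARY LOGS BEFORE NOW() - INTERVAL 7 DAY;

    Deleting the files from the filesystem by hand confuses the server's log index, so PURGE is the safer path.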

    Read the article

  • Varnish waits for the complete page to load before sending a response to the browser

    - by Track
    I've set up Varnish to sit in front of a Tomcat server. What I've noticed is that Varnish seems to wait for the complete page to load (all CSS, JS, etc.) before it sends any response to the browser. This causes a huge lag before the user sees anything. If I bypass Varnish and go directly to the site, it responds immediately. While the total page load time might be similar, the perception is that the site is slow. Has anyone faced this?
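
    This sounds like Varnish's default fetch behaviour: it reads the whole backend response before sending the first byte. In Varnish 3.x there's a per-request switch for that, roughly:

        sub vcl_fetch {
            set beresp.do_stream = true;   # forward bytes as they arrive
        }

    This is a sketch; the knob's name and default depend on the Varnish version in use.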

    Read the article

  • New to Git. Made a big mistake with git commit and ended up at an older commit

    - by Ramario Depass
    I'm new to Git and I've made a huge mistake. Git kept prompting me with:

        ! [rejected] master -> master (non-fast-forward)

    But I still pushed, using --force. This was disastrous: the whole project changed back to the stage it was at about a week ago. I've lost so many changes. I seem to have been pushed back to an earlier commit. Is there any way I can get back to one of my newer commits? I have made an enormous amount of changes and need to get them back. Thanks.
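
    If the newer commits ever existed in this local repository, the reflog usually still has them; a sketch of the rescue (the SHA is a placeholder):

        git reflog                        # find the lost commit, e.g. abc1234
        git checkout -b rescue abc1234    # inspect it safely on a new branch
        git reset --hard abc1234          # run on master to move it back, once sure
        git push --force origin master    # re-publish the recovered state

    If the force push came from a machine that never had the new commits, run the reflog on whichever clone did.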

    Read the article

  • String manipulation appears to be inefficient

    - by user2964780
    I think my code is too inefficient. I'm guessing it has something to do with using strings, though I'm unsure. Here is the code:

        genome = FASTAdata[1]
        genomeLength = len(genome)

        # Hash table holding all the k-mers we will come across
        kmers = dict()

        # We go through all the possible k-mers by index
        for outer in range(0, genomeLength-1):
            for inner in range(outer+2, outer+22):
                substring = genome[outer:inner]
                if substring in kmers:
                    # if we already have this substring on record, increase
                    # its value (count of number of appearances) by 1
                    kmers[substring] += 1
                else:
                    kmers[substring] = 1   # otherwise record that it's here once

    This is to search through all substrings of length at most 20. Now this code seems to take forever and never terminates, so something has to be wrong here. Is using [:] on strings causing the huge overhead? And if so, what can I replace it with? And for clarity, the file in question is nearly 200 MB, so pretty big.

    Read the article

  • Check if class has been instantiated

    - by Holman716
    In my current program I have a main window and a secondary window that pops up when a button is pressed. If the secondary window is currently shown but doesn't have focus, the button instead brings it to focus. At the moment I create a new instance of the secondary window as the main window loads, and simply check its status with SubWindow.IsDisposed and SubWindow.CanFocus. I have found that if I do not create a new instance at the beginning, SubWindow.IsDisposed throws an exception; as long as I'd previously created an instance of SubWindow, the check runs fine. My question: the current version works fine, but is there a better way of doing this? It is not a huge concern, but it feels like it'd be better to be able to check for existence without having to guarantee that it has existed at least once before.
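
    A sketch of the usual null-check pattern, which treats "never created" and "closed" the same way and avoids the up-front throwaway instance:

        private SubWindow subWindow;        // starts as null; no eager instance

        private void ShowSubWindow()
        {
            if (subWindow == null || subWindow.IsDisposed)
            {
                subWindow = new SubWindow();
                subWindow.Show();
            }
            else
            {
                subWindow.Focus();          // already open: bring it to the front
            }
        }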

    Read the article

  • How to define large list of strings in Visual Basic

    - by Jenny_Winters
    I'm writing a macro in Visual Basic for PowerPoint 2010. I'd like to initialize a really big list of strings, like:

        big_ol_array = Array( _
            "string1", _
            "string2", _
            "string3", _
            "string4", _
            ..... _
            "string9999" _
        )

    ...but I get the "Too many line continuations" error in the editor. When I try to initialize the big array with no line breaks, the VB editor can't handle such a long line (1000+ characters). Does anyone know a good way to initialize a huge list of strings in VB? Thanks in advance!
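
    One sketch that dodges both limits: build the data as several ordinary concatenation statements (no continuations needed), then Split it once:

        Dim raw As String
        raw = "string1|string2|string3|string4"
        raw = raw & "|string5|string6|string7"   ' append in comfortable chunks
        ' ... repeat until string9999 ...
        raw = raw & "|string9999"

        Dim big_ol_array() As String
        big_ol_array = Split(raw, "|")           ' any delimiter not in the data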

    Read the article

  • Longest substring in a large set of strings

    - by user1516492
    I have a huge fixed library of text strings, and a frequently changing input string s. I need to find the longest matching substring from any string in the library to s, starting from the beginning of string s, in minimal time. In a perfect world, I would also return the next longest match from the library, and the next best, and so on. This is not the longest common string problem - I'm not looking for the longest common string for all the strings in the library... I just need a pairwise best substring between s and each string in the vast library as fast as possible.
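
    A naive-but-correct sketch of the pairwise step; binary search on the prefix length works because every prefix of a matching prefix also matches (a suffix automaton or generalized suffix tree over the library is the real speed-up for a fixed library):

        def longest_prefix_matches(s, library):
            # return (match_length, text) pairs, best match first
            results = []
            for text in library:
                lo, hi = 0, min(len(s), len(text))
                while lo < hi:
                    mid = (lo + hi + 1) // 2
                    if s[:mid] in text:   # does this prefix of s occur in text?
                        lo = mid
                    else:
                        hi = mid - 1
                results.append((lo, text))
            results.sort(reverse=True)    # longest match first, then runners-up
            return results

        print(longest_prefix_matches("abcde", ["xxabcxx", "abcd", "zz"]))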

    Read the article
