Search Results

Search found 30217 results on 1209 pages for 'website performance'.


  • Providing downloads on ASP.net website

    - by Dave
    I need to provide downloads of large files (upwards of 2 GB) on an ASP.net website. It has been some time since I've done something like this (I've been in the thick-client world for a while now), and I was wondering about current best practices. Ideally, I would like:

      - To be able to track download statistics: the number of downloads is essential; actual bytes sent would be nice.
      - To provide downloads in a way that "plays nice" with third-party download managers. Many of our users have unreliable internet connections, and being able to resume a download is a must.
      - To allow multiple users to download the same file simultaneously.

    My download files are not security-sensitive, so providing a direct link ("right-click to download...") is a possibility. Is just providing a direct link sufficient, letting IIS handle it, and then using some log analyzer service (any recommendations?) to compile and report the statistics? Or do I need to intercept the download request, store some info in a database, and then send a custom Response? Or is there an ASP.net user control (built-in or third-party) that does this? I appreciate all suggestions.
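    One low-tech way to get both statistics and resumable downloads is to keep serving the file itself through IIS (which already answers HTTP Range requests, so download managers and resume keep working) and only route the click through a tiny handler that records the hit and redirects. A minimal sketch, assuming a /files/ virtual directory and an App_Data log file as a stand-in for a real statistics table:

        using System;
        using System.IO;
        using System.Web;

        // Records the download, then hands the actual byte transfer back to IIS,
        // which natively supports Range requests (resume, segmented downloading).
        public class DownloadHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                string file = context.Request.QueryString["file"];

                // Basic sanity check so the handler can't be used to probe the server.
                if (string.IsNullOrEmpty(file) || file.Contains(".."))
                {
                    context.Response.StatusCode = 400;
                    return;
                }

                // Placeholder statistics sink; a real version would INSERT into a downloads table.
                File.AppendAllText(context.Server.MapPath("~/App_Data/downloads.log"),
                    DateTime.UtcNow + "\t" + context.Request.UserHostAddress + "\t" + file + Environment.NewLine);

                // Redirect to the direct link so IIS streams the bytes.
                context.Response.Redirect("/files/" + file, false);
            }
        }

    This counts download initiations rather than bytes; for actual bytes sent, the IIS W3C logs (the sc-bytes field) remain the authoritative source, and any log analyzer can aggregate them.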

    Read the article

  • Is JDBC or LDAP faster for basic read operations?

    - by Brandon
    I have a set of user data which I am trying to access. Due to the way our company's employee data is set up, the information is available both through LDAP and through a table in our DB. I was curious: for standard read operations, which would generally be the higher-performance query?

    Read the article

  • Can MySQL reasonably perform queries on billions of rows?

    - by haxney
    I am planning on storing scans from a mass spectrometer in a MySQL database and would like to know whether storing and analyzing this amount of data is remotely feasible. I know performance varies wildly depending on the environment, but I'm looking for the rough order of magnitude: will queries take 5 days or 5 milliseconds?

    Input format

    Each input file contains a single run of the spectrometer; each run is comprised of a set of scans, and each scan has an ordered array of datapoints. There is a bit of metadata, but the majority of the file is comprised of arrays of 32- or 64-bit ints or floats.

    Host system

    | OS             | Windows 2008 64-bit           |
    | MySQL version  | 5.5.24 (x86_64)               |
    | CPU            | 2x Xeon E5420 (8 cores total) |
    | RAM            | 8 GB                          |
    | SSD filesystem | 500 GiB                       |
    | HDD RAID       | 12 TiB                        |

    There are some other services running on the server using negligible processor time.

    File statistics

    | number of files  | ~16,000      |
    | total size       | 1.3 TiB      |
    | min size         | 0 bytes      |
    | max size         | 12 GiB       |
    | mean             | 800 MiB      |
    | median           | 500 MiB      |
    | total datapoints | ~200 billion |

    The total number of datapoints is a very rough estimate.

    Proposed schema

    I'm planning on doing things "right" (i.e. normalizing the data like crazy) and so would have a runs table, a spectra table with a foreign key to runs, and a datapoints table with a foreign key to spectra.

    The 200 billion datapoint question

    I am going to be analyzing across multiple spectra and possibly even multiple runs, resulting in queries which could touch millions of rows. Assuming I index everything properly (which is a topic for another question) and am not trying to shuffle hundreds of MiB across the network, is it remotely plausible for MySQL to handle this?

    UPDATE: additional info

    The scan data will be coming from files in the XML-based mzML format. The meat of this format is in the <binaryDataArrayList> elements where the data is stored. Each scan produces >= 2 <binaryDataArray> elements which, taken together, form a 2-dimensional (or more) array of the form [[123.456, 234.567, ...], ...]. These data are write-once, so update performance and transaction safety are not concerns.

    My naïve plan for a database schema is:

    runs table

    | column name | type        |
    | id          | PRIMARY KEY |
    | start_time  | TIMESTAMP   |
    | name        | VARCHAR     |

    spectra table

    | column name    | type        |
    | id             | PRIMARY KEY |
    | name           | VARCHAR     |
    | index          | INT         |
    | spectrum_type  | INT         |
    | representation | INT         |
    | run_id         | FOREIGN KEY |

    datapoints table

    | column name | type        |
    | id          | PRIMARY KEY |
    | spectrum_id | FOREIGN KEY |
    | mz          | DOUBLE      |
    | num_counts  | DOUBLE      |
    | index       | INT         |

    Is this reasonable?
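    For a sense of what the ingest side involves: each <binaryDataArray> payload in mzML is a base64-encoded packed array. A minimal decoding sketch, assuming an uncompressed little-endian 64-bit float array (mzML also allows zlib compression and 32-bit precision, which this ignores):

        using System;

        class MzmlDecode
        {
            // Decodes the base64 payload of one <binaryDataArray> into doubles.
            static double[] DecodeBinaryArray(string base64Payload)
            {
                byte[] raw = Convert.FromBase64String(base64Payload);
                double[] values = new double[raw.Length / sizeof(double)];
                Buffer.BlockCopy(raw, 0, values, 0, raw.Length);
                return values;
            }

            static void Main()
            {
                // Round-trip two sample datapoints to show the encoding.
                byte[] sample = new byte[16];
                Buffer.BlockCopy(new[] { 123.456, 234.567 }, 0, sample, 0, 16);
                foreach (double v in DecodeBinaryArray(Convert.ToBase64String(sample)))
                    Console.WriteLine(v);
            }
        }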

    Read the article

  • How to handle very frequent updates to a Lucene index

    - by fsm
    I am trying to prototype an indexing/search application which uses very volatile indexing data sources (forums, social networks, etc.). Here are some of the performance requirements:

    1. Very fast turn-around time (by this I mean that any new data, such as a new message on a forum, should be available in the search results very soon, less than a minute).
    2. I need to discard old documents on a fairly regular basis to ensure that the search results are not dated.
    3. Last but not least, the search application needs to be responsive (latency on the order of 100 milliseconds, and it should support at least 10 qps).

    All of the requirements I have currently can be met without using Lucene (and that would let me satisfy all of 1, 2 and 3), but I am anticipating other requirements in the future (like search relevance, etc.) which Lucene makes easier to implement. However, since Lucene is designed for use cases far more complex than the one I'm currently working on, I'm having a hard time satisfying my performance requirements. Here are some questions:

    a. I read that the optimize() method in the IndexWriter class is expensive and should not be used by applications that do frequent updates. What are the alternatives?

    b. In order to do incremental updates, I need to keep committing new data and also keep refreshing the index reader to make sure it has the new data available. These are going to affect 1 and 3 above. Should I try duplicate indices? What are some common approaches to solving this problem?

    c. I know that Lucene provides a delete method which lets you delete all documents that match a certain query. In my case, I need to delete all documents which are older than a certain age; one option is to add a date field to every document and use that to delete documents later. Is it possible to do range queries on document ids (I can create my own id field, since I think that the one created by Lucene keeps changing) to delete documents? Is that any faster than comparing dates represented as strings?

    I know these are very open questions, so I am not looking for a detailed answer; I will try to treat all of your answers as suggestions and use them to inform my design. Thanks! Please let me know if you need any other information.
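    For (a) and (b), one commonly suggested pattern is to skip optimize() entirely, keep a single IndexWriter open, and pull near-real-time readers straight from the writer instead of reopening the index from disk; for (c), index a numeric timestamp and age documents out with a range-query delete. A minimal sketch of that pattern, written against Lucene.NET 3.0.x (whose API mirrors Java Lucene; names are worth double-checking against your version):

        using System;
        using Lucene.Net.Analysis.Standard;
        using Lucene.Net.Documents;
        using Lucene.Net.Index;
        using Lucene.Net.Search;
        using Lucene.Net.Store;
        using Version = Lucene.Net.Util.Version;

        class NearRealTimeSketch
        {
            static void Main()
            {
                var dir = new RAMDirectory();
                var writer = new IndexWriter(dir, new StandardAnalyzer(Version.LUCENE_30),
                                             IndexWriter.MaxFieldLength.UNLIMITED);

                // Index a new forum message with its arrival time as a numeric field.
                var doc = new Document();
                doc.Add(new Field("body", "new forum message", Field.Store.YES, Field.Index.ANALYZED));
                doc.Add(new NumericField("stamp", Field.Store.YES, true).SetLongValue(DateTime.UtcNow.Ticks));
                writer.AddDocument(doc);

                // Near-real-time reader: sees the new document without a full commit/reopen cycle.
                using (var reader = writer.GetReader())
                {
                    var searcher = new IndexSearcher(reader);
                    var hits = searcher.Search(new TermQuery(new Term("body", "forum")), 10);
                    Console.WriteLine("hits: " + hits.TotalHits);
                }

                // Age out documents older than 24 hours with a numeric range delete; no optimize() involved.
                long cutoff = DateTime.UtcNow.AddDays(-1).Ticks;
                writer.DeleteDocuments(NumericRangeQuery.NewLongRange("stamp", long.MinValue, cutoff, true, true));
                writer.Commit();
            }
        }

    A numeric (trie-encoded) timestamp field also speaks to the last part of (c): range deletes on it are generally much cheaper than comparing dates stored as strings.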

    Read the article

  • Connecting to database suddenly started throwing exception for my website

    - by Nirvan
    I have an MVC3 application hosted by a third-party hosting provider. The site has been running well for the past 3 months without any problems. Today the application suddenly started throwing the following exception, part of which (as recorded in my logs) is shown below:

        System.Data.ProviderIncompatibleException: The provider did not return a ProviderManifestToken string.
        ---> System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to
        completion of the operation or the server is not responding.

    The message is self-explanatory, and I first thought I should increase the connect timeout, but the exception was still thrown, which suggests the other part (the server not responding). I contacted my hosting provider and he said there was nothing wrong on his end. So I am stuck with a down website and don't know what to do.

    Any ideas why the provider is throwing the exception listed above? Also, is it possible for me to remotely connect to the database on the hosting server with limited authority? Any tools for that? I don't have much exposure to databases, apart from application programming.

    regards, Nirvan
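    One way to separate "my timeout is too low" from "the server is not answering" is a bare connection test from outside the web app, assuming the host allows remote SQL Server connections and you have the connection string from web.config (hostname and credentials below are placeholders):

        using System;
        using System.Data.SqlClient;
        using System.Diagnostics;

        class DbPing
        {
            static void Main()
            {
                // Substitute the connection string from your web.config.
                const string cs = "Server=db.myhost.example;Database=mydb;" +
                                  "User Id=appuser;Password=secret;Connect Timeout=15";

                var sw = Stopwatch.StartNew();
                try
                {
                    using (var conn = new SqlConnection(cs))
                    using (var cmd = new SqlCommand("SELECT 1", conn))
                    {
                        conn.Open();
                        cmd.ExecuteScalar();
                        Console.WriteLine("OK in " + sw.ElapsedMilliseconds + " ms");
                    }
                }
                catch (SqlException ex)
                {
                    // A timeout here points at the server or network, not at the MVC app.
                    Console.WriteLine("Failed after " + sw.ElapsedMilliseconds + " ms: " + ex.Message);
                }
            }
        }

    If the host permits remote connections, SQL Server Management Studio pointed at the same server and credentials answers the "can I connect remotely with limited authority" question the same way.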

    Read the article

  • Login to website and use cookie to get source for another page

    - by Stu
    I am trying to login to the TV Rage website and get the source code of the My Shows page. I am successfully logging in (I have checked the response from my post request), but when I try to perform a get request on the My Shows page, I am redirected to the login page. This is the code I am using to login:

        private string LoginToTvRage()
        {
            string loginUrl = "http://www.tvrage.com/login.php";
            string formParams = string.Format("login_name={0}&login_pass={1}", "xxx", "xxxx");
            string cookieHeader;
            WebRequest req = WebRequest.Create(loginUrl);
            req.ContentType = "application/x-www-form-urlencoded";
            req.Method = "POST";
            byte[] bytes = Encoding.ASCII.GetBytes(formParams);
            req.ContentLength = bytes.Length;
            using (Stream os = req.GetRequestStream())
            {
                os.Write(bytes, 0, bytes.Length);
            }
            WebResponse resp = req.GetResponse();
            cookieHeader = resp.Headers["Set-cookie"];
            String responseStream;
            using (StreamReader sr = new StreamReader(resp.GetResponseStream()))
            {
                responseStream = sr.ReadToEnd();
            }
            return cookieHeader;
        }

    I then pass the cookieHeader into this method, which should get the source of the My Shows page:

        private string GetSourceForMyShowsPage(string cookieHeader)
        {
            string pageSource;
            string getUrl = "http://www.tvrage.com/mytvrage.php?page=myshows";
            WebRequest getRequest = WebRequest.Create(getUrl);
            getRequest.Headers.Add("Cookie", cookieHeader);
            WebResponse getResponse = getRequest.GetResponse();
            using (StreamReader sr = new StreamReader(getResponse.GetResponseStream()))
            {
                pageSource = sr.ReadToEnd();
            }
            return pageSource;
        }

    I have been using this previous question as a guide, but I'm at a loss as to why my code isn't working.
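    A frequent cause of exactly this symptom: copying the raw Set-Cookie header loses cookies when the server sends several of them (they get folded into one comma-separated string that does not round-trip cleanly, and attributes like path and expires go along for the ride). Letting HttpWebRequest manage a shared CookieContainer sidesteps the hand parsing entirely; a sketch of the same two calls rewritten that way:

        using System;
        using System.IO;
        using System.Net;
        using System.Text;

        class TvRageClient
        {
            // One container shared by both requests, so session cookies flow automatically.
            private static readonly CookieContainer Cookies = new CookieContainer();

            static string Post(string url, string formParams)
            {
                var req = (HttpWebRequest)WebRequest.Create(url);
                req.CookieContainer = Cookies;
                req.Method = "POST";
                req.ContentType = "application/x-www-form-urlencoded";

                byte[] bytes = Encoding.ASCII.GetBytes(formParams);
                req.ContentLength = bytes.Length;
                using (Stream os = req.GetRequestStream())
                    os.Write(bytes, 0, bytes.Length);

                using (var resp = (HttpWebResponse)req.GetResponse())
                using (var sr = new StreamReader(resp.GetResponseStream()))
                    return sr.ReadToEnd();
            }

            static string Get(string url)
            {
                var req = (HttpWebRequest)WebRequest.Create(url);
                req.CookieContainer = Cookies; // same container: login cookies are sent back
                using (var resp = (HttpWebResponse)req.GetResponse())
                using (var sr = new StreamReader(resp.GetResponseStream()))
                    return sr.ReadToEnd();
            }

            static void Main()
            {
                Post("http://www.tvrage.com/login.php", "login_name=xxx&login_pass=xxxx");
                Console.WriteLine(Get("http://www.tvrage.com/mytvrage.php?page=myshows"));
            }
        }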

    Read the article

  • Resources for performance testing

    - by munna
    Our small firm has been entrusted with creating an application on ASP.NET with a client-server model. As we are almost done with development, we are putting together a small team for performance testing. I have googled the topic but without much help. If any of you can share the 'how, what and why' of performance testing, it would be a great help.

    Read the article

  • Microsoft Access query speed...

    - by V.S.
    Hello everyone! I am writing a report about MS Access and I can't find any information about its performance in comparison to alternatives such as Microsoft SQL Server, MySQL, Oracle, etc. It's obvious that MS Access is going to be the slowest of the lot, but there are no solid documents confirming this other than forum threads, and I don't have the time or resources to do the research myself :( Hoping for your help, V.S.

    Read the article

  • C# - Fast and simple multi-dimensional data structures?

    - by Jeremy Rudd
    I need to store multi-dimensional data consisting of numbers in a manner that's easy to work with. I'm capturing data in real time, and once it is processed I would destroy and GC the older data. This data structure must be fast so it won't hurt my overall app performance. The faster the better. What are my choices in terms of platform-supported data structures? I'm using VS 2010 and .NET 4.
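    If the dimensions are known up front, a common pattern for real-time capture is a flat array used as a ring buffer: one allocation for the lifetime of the capture, O(1) reads and writes, and no per-sample garbage for the GC to chase. A minimal sketch (rows are samples, columns are channels; all names illustrative):

        using System;

        // Fixed-capacity 2D ring buffer over a single flat array. Old rows are
        // overwritten in place, so nothing is allocated (or collected) per sample.
        class RingBuffer2D
        {
            private readonly double[] data;
            private readonly int channels;
            private readonly int capacity;
            private long written; // total rows ever written

            public RingBuffer2D(int capacity, int channels)
            {
                this.capacity = capacity;
                this.channels = channels;
                data = new double[capacity * channels];
            }

            public void Add(double[] sample)
            {
                int row = (int)(written % capacity);
                Array.Copy(sample, 0, data, row * channels, channels);
                written++;
            }

            // Value at (rowsBack samples ago, channel); rowsBack = 0 is the newest row.
            public double Get(int rowsBack, int channel)
            {
                int row = (int)((written - 1 - rowsBack) % capacity);
                return data[row * channels + channel];
            }
        }

        class Demo
        {
            static void Main()
            {
                var buf = new RingBuffer2D(1000, 3);
                buf.Add(new[] { 1.0, 2.0, 3.0 });
                Console.WriteLine(buf.Get(0, 1)); // prints 2
            }
        }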

    Read the article

  • IE7 (sometimes) not showing website properly

    - by Ra y Mon
    We are a bit desperate... We have launched our website http://www.buscounviaje.com. We tested all browsers (IE6-8, Firefox, Safari, Chrome, ...) to make sure everything was OK. However, some users (IE7 and IE6) are complaining that they see everything 'white' with black letters (i.e. CSS styles not being applied). One user said he was getting an "Error 0: Object expected". However, we do not see that error in Firebug, nor on our local installations of IE6 & 7. Other users with IE6 & 7 are visualizing the web correctly. We have no idea where the problem could be, and we cannot test it because our IE6 & 7 work fine. Does anyone see the web page without styles, and can you give us a hint on where the problem might be? Reasons we can think of:

      - We are compressing JS and CSS, and some versions of IE6 & 7 are not able to decompress them.
      - We are trying to use a non-existing object in JavaScript, and some versions of IE6 & 7 do not like it.
      - The cache does not seem to be the problem: we guided a user through emptying his cache and he could still not see the web site correctly.

    Read the article

  • Windows Phone - failing to get a string from a website with login information

    - by jumantyn
    I am new to accessing web services with Windows Phone 7/8. I'm using a WebClient to get a string from a PHP website. The site returns a JSON string, but at the moment I'm just trying to put it into a TextBox as a normal string, just to test if the connection works. The PHP page requires authentication, and I think that's where my code is failing. Here's my code:

        WebClient client = new WebClient();
        client.Credentials = new NetworkCredential("myUsername", "myPassword");
        client.DownloadStringCompleted += new DownloadStringCompletedEventHandler(client_DownloadStringCompleted);
        client.DownloadStringAsync(new Uri("https://www.mywebsite.com/ba/php/jsonstuff.php"));

        void client_DownloadStringCompleted(object sender, DownloadStringCompletedEventArgs e)
        {
            try
            {
                string data = e.Result;
                this.jsonText.Text = data;
            }
            catch (Exception ex)
            {
                System.Diagnostics.Debug.WriteLine(ex.Message);
            }
        }

    This returns first a WebException and then a TargetInvocationException. If I replace the Uri with, for example, "http://www.google.com/index.html", the jsonText TextBox gets filled with HTML text from Google (oddly enough, this also works even when the WebClient credentials are still set). So is the problem in the setting of the credentials? I couldn't find any good results when searching for guides on how to access PHP pages with credentials, only without them. Then I found a short mention somewhere of the WebClient.Credentials property. But should it work some other way?

    Update: here's what I can get out of the WebException:

        System.Net.WebException: The remote server returned an error: NotFound.
        ---> System.Net.WebException: The remote server returned an error: NotFound.
           at System.Net.Browser.ClientHttpWebRequest.InternalEndGetResponse(IAsyncResult asyncResult)
           at System.Net.Browser.ClientHttpWebRequest.<>c__DisplayClasse.<>b__d(Object sendState)
           at System.Net.Browser.AsyncHelper.<>c__DisplayClass1.<>b__0(Object sendState)
           --- End of inner exception stack trace ---
           at System.Net.Browser.AsyncHelper.BeginOnUI(SendOrPostCallback beginMethod, Object state)
           at System.Net.Browser.ClientHttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
           at System.Net.WebClient.GetWebResponse(WebRequest request, IAsyncResult result)
           at System.Net.WebClient.DownloadBitsResponseCallback(IAsyncResult result)
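    One thing worth knowing here: on the Windows Phone stack, WebClient.Credentials are only sent in response to an HTTP 401 challenge. If the PHP page does its own form/session login (or the server never issues a challenge), the credentials are simply never transmitted and the site serves its error page, which surfaces as the generic NotFound WebException. If the site really uses HTTP Basic auth, a workaround people use is attaching the Authorization header preemptively; a sketch (worth verifying that the phone's client HTTP stack permits setting this header):

        using System;
        using System.Net;
        using System.Text;

        class PreemptiveBasicAuth
        {
            static void Download()
            {
                var client = new WebClient();

                // Send Basic credentials up front instead of waiting for a 401 challenge.
                string token = Convert.ToBase64String(
                    Encoding.UTF8.GetBytes("myUsername:myPassword"));
                client.Headers[HttpRequestHeader.Authorization] = "Basic " + token;

                client.DownloadStringCompleted += (s, e) =>
                {
                    if (e.Error != null)
                        Console.WriteLine(e.Error.Message); // still NotFound => probably not Basic auth
                    else
                        Console.WriteLine(e.Result);
                };
                client.DownloadStringAsync(new Uri("https://www.mywebsite.com/ba/php/jsonstuff.php"));
            }
        }

    If the login turns out to be form-based instead, the fix is to POST the login form and carry the session cookie over to the JSON request, the way a browser would.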

    Read the article

  • Is Mono fast enough for Mac OS X?

    - by prosseek
    I have to use .NET/C# for the next company project. As I develop on a Mac, I looked into Mono as the development environment/tool. Is Mono for Mac OS X fast enough? I mean, what is the performance of running an assembly compared to running the same code on .NET under a Windows machine? Do I have to buy a PC laptop for developing C#/.NET, in a practical sense?

    Read the article

  • How does an application or website find your IP?

    - by johnkills
    I think there are only two ways an application or a server could get your IP. If it is an application (Java/Flash), it could check your network settings locally and send your IP back to the server; then the server would know it. The other way is that the server could analyze the packet headers and find your IP information there. But suppose I wanted to stop it from doing this. If it were reading my IP locally, I could block that packet or change its contents so the website would be confused about the IP information. If it were analyzing packet headers, and if I knew which packets it was analyzing (since it won't analyze every packet), I could stop sending those packets. Example: websites that check your IP, how do they do it? If you are not downloading any application, you can exclude the first scenario; then the only possibility is that the site is analyzing packet headers, but which packets? This was more than one question, but if anyone knows something about this, I would like to know too. :) Thanks
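    The second case needs no special "analysis" at all: the source IP is part of every TCP connection, and the standard socket API hands it to the server application directly. You cannot withhold it either, because without a valid source address the reply packets would have nowhere to go. A minimal C# illustration of how trivially a server sees it:

        using System;
        using System.Net;
        using System.Net.Sockets;

        class WhoIsCalling
        {
            static void Main()
            {
                var listener = new TcpListener(IPAddress.Any, 8080);
                listener.Start();
                Console.WriteLine("listening on :8080 ...");

                while (true)
                {
                    using (TcpClient client = listener.AcceptTcpClient())
                    {
                        // The OS extracted this from the connection itself; no packet inspection needed.
                        var remote = (IPEndPoint)client.Client.RemoteEndPoint;
                        Console.WriteLine("visitor IP: " + remote.Address);
                    }
                }
            }
        }

    (What a website sees may of course be a NAT gateway's or proxy's address rather than your machine's own, but that address is supplied by the network, not extracted from anything you choose to send.)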

    Read the article

  • What's the fastest lookup algorithm for a <key, value> pair data structure (i.e., a map)?

    - by truncheon
    In the following example a std::map structure is filled with 26 values from A-Z (for the key) and 0-26 (for the value). The time taken (on my system) to look up the last entry (10000000 times) is roughly 250 ms for the vector, and 125 ms for the map. (I compiled using release mode, with the O3 option turned on, for g++ 4.4.)

    But if for some odd reason I wanted better performance than the std::map, what data structures and functions would I need to consider using? I apologize if the answer seems obvious to you, but I haven't had much experience in the performance-critical aspects of C++ programming.

        #include <ctime>
        #include <map>
        #include <vector>
        #include <iostream>

        struct mystruct
        {
            char key;
            int value;
            mystruct(char k = 0, int v = 0) : key(k), value(v) { }
        };

        int find(const std::vector<mystruct>& ref, char key)
        {
            for (std::vector<mystruct>::const_iterator i = ref.begin(); i != ref.end(); ++i)
                if (i->key == key)
                    return i->value;
            return -1;
        }

        int main()
        {
            std::map<char, int> mymap;
            std::vector<mystruct> myvec;
            for (int i = 'a'; i < 'a' + 26; ++i)
            {
                mymap[i] = i - 'a';
                myvec.push_back(mystruct(i, i - 'a'));
            }

            int pre = clock();
            for (int i = 0; i < 10000000; ++i)
            {
                find(myvec, 'z');
            }
            std::cout << "linear scan: milli " << clock() - pre << "\n";

            pre = clock();
            for (int i = 0; i < 10000000; ++i)
            {
                mymap['z'];
            }
            std::cout << "map scan: milli " << clock() - pre << "\n";
            return 0;
        }

    Read the article

  • .htaccess trickery for a multi-language website

    - by user1658741
    I have a website right now that uses two languages (French and English). The way it works right now is that if someone goes to mysite.com/folder/file.php, for example, file.php is simply a script that figures out which language to use, gets its own path and filename (file.php), and serves up mysite.com/en/folder/file.php (if the language is English). However, what shows up in the URL is still mysite.com/folder/file.php. The same script is used for any folder and any file. If I want to add a new file, I have to add it to the folder the user types into the browser, as well as to the en and fr folders.

    Could I do some .htaccess trickery so that, whatever URL is typed, one .php file gets opened that checks the language and what folder/file was requested, and then serves up the correct language file?

    Here's the PHP file that is served up for any file in the URL:

        <?php
        // Get current document path, which is mirrored in the language folders
        $docpath = $_SERVER['PHP_SELF'];

        // Get current document name (used when switching languages so that the
        // same current page is shown when the language is changed)
        $docname = GetDocName();

        // Call up lang.php, which handles display of the appropriate language webpage.
        // lang.php uses $docpath and $docname to give out the proper $langfile.
        // $docpath/$docname is mirrored in the /lang/en and /lang/fr folders
        $langfile = GetDocRoot()."/lang/lang.php";
        include("$langfile"); // Call up the proper language file to display

        function GetDocRoot()
        {
            $temp = getenv("SCRIPT_NAME");
            $localpath = realpath(basename(getenv("SCRIPT_NAME")));
            $localpath = str_replace("\\", "/", $localpath);
            $docroot = substr($localpath, 0, strpos($localpath, $temp));
            return $docroot;
        }

        function GetDocName()
        {
            $currentFile = $_SERVER["SCRIPT_NAME"];
            $parts = explode('/', $currentFile);
            $dn = $parts[count($parts) - 1];
            return $dn;
        }
        ?>

    Read the article

  • Remove in-between folder structure from the URL in a PHP website

    - by pabz
    I have a PHP website with the following (basic) folder structure:

        project_name
            app
                controller
                model
                view
            css
            js
            img
            index.php

    So when I view index.php in WAMP, the URL is http://localhost/project_name/. But when I go inside the site (e.g. login.php, which resides under the view folder), the URL is like this: http://localhost/project_name/app/view/login.php

    I found that using .htaccess we can change the URLs. So I tried this (in .htaccess):

        RewriteEngine on
        RewriteBase /
        Redirect 301 /project_name/app/view/login.php /project_name/login.php
        RewriteRule ^/project_name/login.php$ /project_name/app/view/login.php [L]

    Now the URL is http://localhost/project_name/login.php, which is correct. But it seems PHP does not use the original link to grab the file (i.e. from /project_name/app/view/login.php) but rather /project_name/login.php, so it throws a 404 error. What should I change? Please help me; I am just trying to hide the /app/view/ part from the URL so that users won't see my folder structure. I have read about various ways of doing this for about 9 hours today but still couldn't get anything working correctly. I hope my question is clear enough. Any help is greatly appreciated!

    Read the article

  • Website badge system

    - by linkyndy
    I am currently working on a widget-based website, built entirely on user socialization. Since a reputation system pays off for attracting users, I decided to implement one. Now, I would like to hear some solutions on how this should be implemented the right way (take, for example, Foursquare's badge system). Basically, I need to be able to do the following:

      - Have a badges table, where I can add, edit and delete badges.
      - Be able to enable and disable a badge.
      - Be able to introduce a new badge without writing new code: simply give some parameters to the "add badge" form regarding what should be tracked in order for a user to receive the badge.
      - Be able to give badges in real time, meaning that whenever a user accomplishes whatever is needed to receive a badge, the system should give the badge to that user immediately.
      - Also, the system should not be overloaded with "badge listeners": I believe interrogating every user request against every badge's requirements is time-consuming.

    That being said, I would like to hear your opinions on the right way to implement a badge system (logic, database schema, methods, etc.). Thank you very much!
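    One common shape for this is a data-driven rule engine: each badge row stores which event type it watches and a threshold, a single listener bumps a per-user counter when an event fires, and only the badges watching that event type are checked, so requests are never interrogated against the whole badge catalogue. A sketch of the core logic (the in-memory collections are stand-ins for tables; schema and names are illustrative):

        using System;
        using System.Collections.Generic;

        // A badge is pure data: which event it watches and how many occurrences earn it.
        // Introducing a new badge means inserting a row like this, not writing new code.
        class Badge
        {
            public int Id;
            public string Name;
            public string EventType; // e.g. "post_created"
            public int Threshold;    // occurrences required
            public bool Enabled;
        }

        class BadgeEngine
        {
            // Stand-ins for the badges / user_events / user_badges tables.
            private readonly List<Badge> badges = new List<Badge>
            {
                new Badge { Id = 1, Name = "First Post", EventType = "post_created", Threshold = 1, Enabled = true },
                new Badge { Id = 2, Name = "Socializer", EventType = "friend_added", Threshold = 10, Enabled = true }
            };
            private readonly Dictionary<string, int> counters = new Dictionary<string, int>();
            private readonly HashSet<string> awarded = new HashSet<string>();

            // Called once per user action; awards any badge whose threshold was just crossed.
            public void OnEvent(int userId, string eventType)
            {
                string key = userId + "|" + eventType;
                int count;
                counters.TryGetValue(key, out count);
                counters[key] = ++count;

                foreach (Badge b in badges)
                {
                    if (b.Enabled && b.EventType == eventType && count >= b.Threshold
                        && awarded.Add(userId + "|" + b.Id))
                    {
                        Console.WriteLine("user " + userId + " earned '" + b.Name + "'");
                    }
                }
            }
        }

        class Demo
        {
            static void Main()
            {
                var engine = new BadgeEngine();
                engine.OnEvent(42, "post_created"); // => user 42 earned 'First Post'
            }
        }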

    Read the article

  • What would be better, (1 database + 4 tables) or (2 databases + 2 tables each)?

    - by griseldas
    Hi there, I would like to be advised on what would be better (in regard to performance): A) 1 database with 4 tables, or B) 2 databases (on the same server), each with 2 tables. The table sizes and usage are more or less similar, so the 2 tables in database 1 would be similar in usage/size to the 2 tables in database 2. The tables could have over 500,000 records, and the 2 tables in each database are not related (no join queries, etc., between them). Thanks in advance for your comments.

    Read the article

  • Maximum capabilities of MySQL

    - by cdated
    How do I know when a project is just too big for MySQL, and when I should use something with a better reputation for scalability? Is there a maximum database size for MySQL before performance degrades? What factors contribute to MySQL not being a viable option compared to a commercial DBMS like Oracle or SQL Server?

    Read the article

  • What does "performant" software actually mean?

    - by Roddy
    I see it used a lot, but haven't seen a definition that makes complete sense. Wiktionary says "characterized by an adequate or excellent level of performance or efficiency", which isn't much help. Initially I thought performant just meant "fast", but others seem to think it's also about stability, code quality, memory use/footprint, or some combination of all those. I think this is a "real" question, but if enough people reckon it is a subjective question, that's an answer in itself.

    Read the article
