Search Results

Search found 13550 results on 542 pages for 'processing js'.

Page 230/542

  • reporting tool/viewer for large datasets

    - by FrustratedWithFormsDesigner
    I have a data processing system that generates very large reports on the data it processes. By "large" I mean that a "small" execution of this system produces about 30 MB of reporting data when dumped into a CSV file, and a large dataset is about 130-150 MB (I'm sure someone out there has a bigger idea of "large", but that's not the point... ;)

    Excel has the ideal interface for the report consumers in the form of its Data Lists: users can filter and segment the data on the fly to see the specific details they are interested in. They can also add notes and markup to the reports, create charts, graphs, etc. They know how to do all this, and it's much easier to let them do it if we just give them the data. Excel was great for the small test datasets, but it cannot handle these large ones. Does anyone know of a tool that can provide a similar interface to Excel data lists, but that can handle much larger files?

    The next tool I tried was MS Access, and found that the Access file bloats hugely (a 30 MB input file leads to about a 70 MB Access file, and after I open the file, run a report and close it, the file is at 120-150 MB!). The import process is also slow and very manual (currently, the CSV files are created by the same PL/SQL script that runs the main process, so there's next to no intervention on my part). I also tried an Access database with linked tables to the database tables that store the report data, and that was many times slower (for some reason, SQL*Plus could query and generate the report file in a minute or so, while Access would take anywhere from 2-5 minutes for the same data).

    (If it helps, the data processing system is written in PL/SQL and runs on Oracle 10g.)

    Read the article

  • How can I kill off a Python web app on GAE early following a redirect?

    - by Mike Hayes
    Disclaimer: completely new to Python, from a PHP background.

    I'm using Python on Google App Engine with Google's webapp framework. I have a function which I import, as it contains things which need to be processed on each page:

        def some_function(self):
            if data['user'].new_user and not self.request.path == '/main/new':
                self.redirect('/main/new')

    This works fine when I call it, but how can I make sure the app is killed off after the redirection? I don't want anything else processing. For example, I will do this:

        class Dashboard(webapp.RequestHandler):
            def get(self):
                some_function(self)
                # Continue with normal code here
                self.response.out.write('Some output here')

    I want to make sure that once the redirect is made in some_function() (which works fine), no processing is done in the get() function following the redirection, nor is "Some output here" outputted. What should I be looking at to make this all work properly? I can't just exit the script, because the webapp framework needs to run.

    I realise that more than likely I'm just doing things in completely the wrong way for a Python app, so any guidance would be a great help. Hopefully I have explained myself properly and someone will be able to point me in the right direction. Thanks
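
    For what it's worth, one common shape for this (a minimal sketch, not necessarily the idiomatic webapp way; it assumes the data global from the question) is to have the shared function report whether it redirected, so each handler can return early:

        def some_function(handler):
            # Returns True if a redirect was issued, so the caller can stop.
            if data['user'].new_user and handler.request.path != '/main/new':
                handler.redirect('/main/new')
                return True
            return False

        class Dashboard(webapp.RequestHandler):
            def get(self):
                if some_function(self):
                    return  # redirect already sent; skip the rest of the handler
                self.response.out.write('Some output here')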

    Read the article

  • iOS UIImageView dataWithContentsOfURL returning empty

    - by user761389
    I'm trying to display an image from a URL in a UIImageView, and I'm seeing some very peculiar results. The bit of code that I'm testing with is below:

        imageURL = @"http://images.shopow.co.uk/image/user_dyn/1073/32/32";
        imageURL = @"http://images.shopow.co.uk/assets/profile_images/default/32_32/avatar-male-01.jpg";
        NSURL *imageURLRes = [NSURL URLWithString:imageURL];
        NSData *imageData = [NSData dataWithContentsOfURL:imageURLRes];
        UIImage *image = [UIImage imageWithData:imageData];
        NSLog(@"Image Data: %@", imageData);

    In its current form I can see data in the output window, which is what I'd expect. However, if I comment out the second imageURL so I'm referencing the first, I'm getting empty data, and therefore nil is returned by imageWithData. What is possibly more confusing is that the first image is basically the same as the second, but it's been through a PHP processing script. I'm nearly certain that it isn't the script that's causing the issue, because if I use this instead:

        imageURL = @"http://images.shopow.co.uk/image/product_dynimg/389620/32/32";

    the image is displayed, and this uses the same image processing script. I'm struggling to find any difference in the images that would cause this to occur. Any help would be appreciated.
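
    Since the working and failing URLs differ only in how the server produces them, one way to narrow it down is to compare the raw HTTP responses outside the app. A small debugging sketch (in Python rather than Objective-C; the URLs are the question's own and may no longer resolve):

        import urllib.request

        # Both URLs are taken from the question and may no longer resolve.
        urls = [
            "http://images.shopow.co.uk/image/user_dyn/1073/32/32",
            "http://images.shopow.co.uk/assets/profile_images/default/32_32/avatar-male-01.jpg",
        ]

        for url in urls:
            with urllib.request.urlopen(url) as resp:
                body = resp.read()
                # A mismatch in status, Content-Type or length between the
                # two URLs points at the server side, not the iOS code.
                print(url)
                print("  status:", resp.status)
                print("  content-type:", resp.headers.get("Content-Type"))
                print("  bytes:", len(body))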

    Read the article

  • How can I take advantage of IObservable/IObserver to get rid of my "god object"?

    - by Will
    In a system I'm currently working on, I have many components which are defined as interfaces and base classes. Each part of the system has some specific points where it interacts with other parts of the system. For example, the data-readying component readies some data which eventually needs to go to the data processing portion, the communications component needs to query different components for their status for relaying to the outside, etc.

    Currently, I glue these parts of the system together using a "god object": an object with intimate knowledge of different parts of the system. It registers with events over here and shuttles the results to methods over there, creates a callback method here and returns the result of that method over there, and passes many requests through a multi-threaded queue for processing because it "knows" certain actions have to run on STA threads, etc. While it's convenient, it concerns me that this one type knows so much about how everybody else in the system is designed. I'd much prefer a more generic hub that can be given instances which can expose events or methods or callbacks, or that can consume these.

    I've been seeing more about the IObservable/IObserver features of the Reactive Framework, which are being rolled into .NET 4.0 (I believe). Can I leverage this pattern to help replace my "god object"? How should I go about doing this? Are there any resources on using this pattern for this specific purpose?
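
    Purely to illustrate the hub idea, here is the underlying publish/subscribe shape as a Python sketch (deliberately not the Rx IObservable<T>/IObserver<T> API): the hub knows nothing about any concrete component, and each component only knows topic names:

        from collections import defaultdict

        class Hub:
            """A generic event hub with no knowledge of concrete components."""

            def __init__(self):
                self._subscribers = defaultdict(list)

            def subscribe(self, topic, handler):
                self._subscribers[topic].append(handler)

            def publish(self, topic, payload):
                for handler in self._subscribers[topic]:
                    handler(payload)

        # Components know topic names, never each other.
        hub = Hub()
        hub.subscribe("data.ready", lambda batch: print("processing", batch))
        hub.publish("data.ready", {"rows": 42})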

    Read the article

  • How can I efficiently manipulate 500k records in SQL Server 2005?

    - by cdeszaq
    I am getting a large text file of updated information from a customer that contains updates for 500,000 users. However, as I am processing this file, I often run into SQL Server timeout errors.

    Here's the process I follow in my VB application that processes the data (in general):

    1. Delete all records from the temporary table (to remove last month's data), e.g. DELETE * FROM tempTable
    2. Rip the text file into the temp table
    3. Fill in extra information in the temp table, such as the user's organization_id, user_id, group_code, etc.
    4. Update the data in the real tables based on the data computed in the temp table

    The problem is that I often run commands like:

        UPDATE tempTable
        SET user_id = (SELECT user_id FROM myUsers WHERE external_id = tempTable.external_id)

    and these commands frequently time out. I have tried bumping the timeouts up as far as 10 minutes, but they still fail. Now, I realize that 500k rows is no small number of rows to manipulate, but I would think that a database purported to be able to handle millions and millions of rows should be able to cope with 500k pretty easily. Am I doing something wrong with how I am going about processing this data? Please help. Any and all suggestions welcome.
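
    A correlated subquery evaluated per row over 500k rows is often what times out; the set-based alternative is a join update, optionally batched so each statement stays well under the timeout. A hedged sketch (Python with pyodbc purely as the driver; it assumes an index on myUsers.external_id, that unmatched rows keep a NULL user_id, and the 10,000 batch size is arbitrary):

        import pyodbc

        CONN_STR = "DSN=mydb"  # placeholder connection string

        conn = pyodbc.connect(CONN_STR)
        conn.autocommit = True  # each batch commits on its own
        cur = conn.cursor()

        # Set-based join update, limited to 10,000 rows per statement so no
        # single command approaches the timeout.
        batch_sql = """
            UPDATE TOP (10000) t
            SET t.user_id = u.user_id
            FROM tempTable AS t
            JOIN myUsers AS u ON u.external_id = t.external_id
            WHERE t.user_id IS NULL
        """

        while True:
            cur.execute(batch_sql)
            if cur.rowcount == 0:
                break  # nothing left to update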

    Read the article

  • How Best to Replace Ugly Queries and Dynamic PL/SQL with C#?

    - by Mike
    Hi, I write a lot of one-off Oracle SQL queries (in Toad), and sometimes they can get complex, involving lots of unions, joins, and subqueries, and sometimes requiring dynamic SQL. That is, sometimes SQL queries require set-based processing along with significant procedural processing. This is what PL/SQL is custom-made for, but as a language it does not begin to compare to C#.

    Now and then I convert a PL/SQL procedure to C#, and am always amazed at how much cleaner and easier to both read and write the C# version is. The C# program might, for example, construct a SQL query string piece by piece and/or run several queries and process them as needed. The C# version is usually much faster as well, which must mean that I'm not very good at PL/SQL either. I do not currently have access to LINQ.

    My question is: how best to package all these little C# programs, which are really just mini reports, that is, replacements for ugly SQL queries? Right now I'm actually using NUnit to hold them, and calling each report a [Test], even though they aren't really tests. NUnit just happens to provide a convenient packaging framework.
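
    Purely as an illustration of one alternative to the NUnit packaging, a console runner with a registry of named reports, here is the shape sketched in Python (in C# the analogue might be a Dictionary<string, Action> consulted from Main; "stale-orders" is a made-up report name):

        import sys

        REPORTS = {}

        def report(name):
            """Register a function as a named, runnable report."""
            def register(fn):
                REPORTS[name] = fn
                return fn
            return register

        @report("stale-orders")  # hypothetical report name
        def stale_orders():
            # build the SQL string piece by piece, run it, print the rows
            print("stale orders report...")

        if __name__ == "__main__":
            name = sys.argv[1] if len(sys.argv) > 1 else ""
            REPORTS.get(name, lambda: print("known reports:", *REPORTS))()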

    Read the article

  • Data structure choices for high-speed, memory-efficient detection of duplicate strings

    - by Jonathan Holland
    I have an interesting problem that could be solved in a number of ways:

    - I have a function that takes in a string.
    - If this function has never seen this string before, it needs to perform some processing.
    - If the function has seen the string before, it needs to skip processing.
    - After a specified amount of time, the function should accept duplicate strings again.
    - This function may be called thousands of times per second, and the string data may be very large.

    This is a highly abstracted explanation of the real application; I'm just trying to get down to the core concept for the purpose of the question.

    The function will need to store state in order to detect duplicates. It also will need to store an associated timestamp in order to expire duplicates. It does NOT need to store the strings; a unique hash of the string would be fine, provided there are no false positives due to collisions (use a perfect hash?) and the hash function is performant enough.

    The naive implementation would be simply (in C#):

        Dictionary<String, DateTime>

    though in the interest of lowering the memory footprint and potentially increasing performance, I'm evaluating custom data structures to handle this instead of a basic hashtable. So, given these constraints, what would you use?

    EDIT - some additional information that might change proposed implementations:

    - 99% of the strings will not be duplicates.
    - Almost all of the duplicates will arrive back to back, or nearly sequentially.
    - In the real world, the function will be called from multiple worker threads, so state management will need to be synchronized.
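
    As a point of reference before reaching for a custom structure, a minimal sketch of the naive approach with expiry and thread safety (in Python rather than C#; SHA-256 stands in for the hash, and the TTL and the absence of eviction are illustrative simplifications):

        import hashlib
        import threading
        import time

        class RecentlySeen:
            def __init__(self, ttl_seconds):
                self.ttl = ttl_seconds
                self.seen = {}  # hash digest -> last-seen timestamp
                self.lock = threading.Lock()

            def is_duplicate(self, s):
                digest = hashlib.sha256(s.encode()).digest()
                now = time.monotonic()
                with self.lock:
                    last = self.seen.get(digest)
                    self.seen[digest] = now
                    # Note: expired digests are never evicted here; a real
                    # version would also prune old entries periodically.
                    return last is not None and (now - last) < self.ttl

        seen = RecentlySeen(ttl_seconds=60)
        for s in ("a", "a", "b"):
            if not seen.is_duplicate(s):
                print("processing", s)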

    Read the article

  • How do I efficiently parse a CSV file in Perl?

    - by Mike
    I'm working on a project that involves parsing a large CSV-formatted file in Perl, and am looking to make things more efficient. My approach has been to split() the file by lines first, and then split() each line again by commas to get the fields. But this is suboptimal, since at least two passes over the data are required (once to split by lines, then once again for each line). This is a very large file, so cutting processing in half would be a significant improvement to the entire application.

    My question is: what is the most time-efficient means of parsing a large CSV file using only built-in tools?

    Note: each line has a varying number of tokens, so we can't just ignore lines and split by commas only. Also, we can assume fields will contain only alphanumeric ASCII data (no special characters or other tricks). Also, I don't want to get into parallel processing, although it might work effectively.

    Edit: It can only involve built-in tools that ship with Perl 5.8. For bureaucratic reasons, I cannot use any third-party modules (even if hosted on CPAN).

    Another edit: Let's assume that our solution is only allowed to deal with the file data once it is entirely loaded into memory.

    Yet another edit: I just grasped how stupid this question is. Sorry for wasting your time. Voting to close.

    Read the article

  • What's the *right* way to handle a POST in FP?

    - by Malvolio
    I'm just getting started with FP, and I'm using Scala, which may not be the best way, since I can always fall back to an imperative style if the going gets tough. I'd just rather not. I've got a very specific question that points to a broader lacuna in my understanding of FP.

    When a web application is processing a GET request, the user wants information that already exists on the web site. The application only has to process and format the data in some way. The FP way is clear. When a web application is processing a POST request, the user wants to change the information held on the site. True, the information is not typically held in application variables (it's in a database or a flat file), but still, I get the feeling I'm not grokking FP properly.

    Is there a pattern for handling updates to static data in an FP language? My vague picture of this is that the application is handed the request and the then-current site state. The application does its thing and returns the new site state. If the current site state hasn't changed since the application started, the new state becomes the current state and the reply is sent back to the browser (this is my dim image of Clojure's style); if the current state has been changed (by another thread, well, something else happens ...
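
    The retry scheme gestured at above (essentially Clojure's atom) can be written out minimally; this Python sketch assumes the request handler is a pure function from (state, request) to a new state value:

        import threading

        class Atom:
            """Holds state; updates retry until no other thread got in first."""

            def __init__(self, value):
                self._value = value
                self._lock = threading.Lock()

            def deref(self):
                return self._value

            def swap(self, pure_fn, *args):
                while True:
                    old = self._value
                    new = pure_fn(old, *args)  # pure: no side effects
                    with self._lock:
                        if self._value is old:  # unchanged since we read it
                            self._value = new
                            return new
                    # state moved under us: recompute from the fresh value

        site = Atom({"posts": ()})

        def handle_post(state, request):
            return {"posts": state["posts"] + (request,)}

        site.swap(handle_post, {"title": "hello"})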

    Read the article

  • How to repair Java in Ubuntu after trying to switch to Java 6 using update-java-alternatives

    - by Kau-Boy
    I tried to switch from Java 5 to Java 6 using the "update-java-alternatives" command, as explained on this page: https://help.ubuntu.com/community/Java

    But afterwards I get the following error when I try to execute java:

        root@webserver:~# java
        Error occurred during initialization of VM
        Could not reserve enough space for object heap
        Could not create the Java virtual machine.

    I also tried to reinstall the Java binaries using "apt-get", but I didn't succeed in reinstalling them. I would like to post the "apt-get" errors, but unfortunately I don't know how to print out the error messages in English rather than German. My system is an Ubuntu 8.04 root server. Here is the (Google-translated) English text from trying to install Java 6 again:

        root@server:~# apt-get install sun-java6-jdk
        Reading package lists ... Ready
        Dependency tree Reading state information ... Ready
        sun-java6-jdk is already the newest version.
        sun-java6-jdk set to manually installed.
        0 upgraded, 0 newly installed, 0 to remove and 86 not upgraded.
        1 not fully installed or removed.
        After this operation, 0B of additional disk space will be used.
        Set up a sun-java6-bin (6-03-0ubuntu2) ...
        Could not create the Java virtual machine.
        dpkg: error processing sun-java6-bin (- configure):
        Subprocess post-installation script returned error exit status 1
        Errors were encountered while processing:
        sun-java6-bin
        E: Sub-process / usr / bin / dpkg returned an error code (1)

    I hope this might help you help me.

    Read the article

  • Optimizing near-duplicate value search

    - by GApple
    I'm trying to find near-duplicate values in a set of fields in order to allow an administrator to clean them up. There are two criteria that I am matching on:

    1. One string is wholly contained within the other, and is at least 1/4 of its length.
    2. The strings have an edit distance less than 5% of the total length of the two strings.

    The pseudo-PHP code:

        foreach ($values as $value) {
            foreach ($values as $match) {
                if (
                    (
                        $value['length'] < $match['length']
                        && $value['length'] * 4 > $match['length']
                        && stripos($match['value'], $value['value']) !== false
                    ) || (
                        $match['length'] < $value['length']
                        && $match['length'] * 4 > $value['length']
                        && stripos($value['value'], $match['value']) !== false
                    ) || (
                        abs($value['length'] - $match['length']) * 20 < ($value['length'] + $match['length'])
                        && 0 < ($match['changes'] = levenshtein($value['value'], $match['value']))
                        && $match['changes'] * 20 <= ($value['length'] + $match['length'])
                    )
                ) {
                    $matches[] = &$match;
                }
            }
        }

    I've tried to reduce calls to the comparatively expensive stripos and levenshtein functions where possible, which has reduced the execution time quite a bit. However, as an O(n^2) operation this just doesn't scale to the larger sets of values, and it seems that a significant amount of the processing time is spent simply iterating through the arrays.

    Some properties of a few sets of values being operated on:

        Total    Strings        Matches per string
        strings  with matches   avg    median  max   Time (s)
        844      413            1.8    1       58    140
        593      156            1.2    1       5     62
        272      168            3.2    2       26    10
        157      47             1.5    1       4     3.2
        106      48             1.8    1       8     1.3
        62       47             2.9    2       16    0.4

    Are there any other things I can do to reduce the time to check the criteria, and more importantly, are there any ways for me to reduce the number of criteria checks required (for example, by pre-processing the input values), since there is such low selectivity?
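
    Both criteria put a bound on how different the two lengths can be: containment requires the shorter string to be at least 1/4 of the longer, and the 5% edit-distance test tolerates even less. So one cheap pre-processing step is to sort by length and stop each inner scan as soon as the length gap rules everything else out. A sketch of that loop-bounding idea in Python (the real matching checks stay as they are):

        def candidate_pairs(values):
            """Yield only pairs whose lengths are compatible with either
            criterion, shrinking the O(n^2) scan."""
            ordered = sorted(values, key=len)
            for i, shorter in enumerate(ordered):
                # Containment needs longer < 4 * shorter; the edit-distance
                # window (roughly 1.1 * shorter) is narrower still.
                limit = len(shorter) * 4
                for longer in ordered[i + 1:]:
                    if len(longer) >= limit:
                        break  # sorted by length, so no later string fits
                    yield shorter, longer

        # usage: for a, b in candidate_pairs(strings): run the real checks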

    Read the article

  • Right Language for the Job

    - by Manoj
    "Using the right language for the job is the key" - this is a comment I read on SO, and I also believe it's the right thing to do. Because of this we ended up using different languages for different parts of the project, like Perl, VBA (Excel macros), C#, etc. We have three to four languages currently in use inside the project.

    Using the right language for the job has made it immensely easier to automate a job, but of late people are complaining that any new person who has to take over the project will have to learn so many different languages to get started. It is also difficult to find such a person. Please note that there are at most one or two people working on the project at a given point in time.

    I would like to know if the method we are following is right, or whether we should converge on a single language and try to use it across all the jobs, even though another language might be better suited for some of them. Your experience related to this would also help.

    Languages used and their purpose:

    - Perl: processing large text files (log files)
    - C# with Silverlight: web-based reporting
    - LabVIEW: automation
    - Excel macros: processing data in Excel sheets, generating graphs, and exporting to PowerPoint

    Read the article

  • How do I call Matlab in a script on Windows?

    - by Benjamin Oakes
    I'm working on a project that uses several languages:

    - SQL for querying a database
    - Perl/Ruby for quick-and-dirty processing of the data from the database, and some other bookkeeping
    - Matlab for matrix-oriented computations
    - Various statistics languages (SAS/R/SPSS) for processing the Matlab output

    Each language fits its niche well, and we already have a fair amount of code in each. Right now, there's a lot of manual work to run all these steps that would be much better scripted. I've already done this on Linux, and it works relatively well. On Linux:

        matlab -nosplash -nodesktop -r "command"

    or

        echo "command" | matlab -nosplash -nodesktop

    ...opens Matlab in a "command line" mode. (That is, no windows are created; it just reads from STDIN, executes, and outputs to STDOUT/STDERR.)

    My problem is that on Windows (XP and 7), this same code opens up a window and doesn't read from or write to the command line. It just stares me blankly in the face, totally ignoring STDIN and STDOUT. How can I script running Matlab commands on Windows? I basically want something that will do:

        ruby database_query.rb
        perl legacy_code.pl
        ruby other_stuff.rb
        matlab processing_step_1.m
        matlab processing_step_2.m
        # etc, etc.

    I've found out that Matlab has an -automation flag on Windows to start an "automation server". That sounds like overkill for my purposes, and I'd like something that works on both platforms. What options do I have for automating Matlab in this workflow?
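
    On Windows the matlab launcher returns immediately and detaches from the console, but it does accept -wait (block until MATLAB exits) and -logfile (write the session output to a file), which together give a scriptable approximation of the Linux behaviour. A hedged sketch of the whole workflow driven from Python (flag behaviour varies by MATLAB version, so treat the exact switches as assumptions to verify):

        import subprocess

        def run_matlab(script_name):
            # -wait blocks until MATLAB exits; -logfile captures the output
            # that Windows refuses to put on STDOUT; -r runs a command.
            subprocess.run(
                ["matlab", "-nosplash", "-minimize", "-wait",
                 "-logfile", "matlab_out.log",
                 "-r", "run('%s'); exit" % script_name],
                check=True,
            )
            print(open("matlab_out.log").read())

        subprocess.run(["ruby", "database_query.rb"], check=True)
        subprocess.run(["perl", "legacy_code.pl"], check=True)
        subprocess.run(["ruby", "other_stuff.rb"], check=True)
        run_matlab("processing_step_1.m")
        run_matlab("processing_step_2.m")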

    Read the article

  • Template access of symbol in unnamed namespace

    - by Fred Larson
    We are upgrading our XL C/C++ compiler from V8.0 to V10.1 and found some code that now gives us an error, even though it compiled under V8.0. Here's a minimal example:

    test.h:

        #include <iostream>
        #include <string>

        template <class T>
        void f()
        {
          std::cout << TEST << std::endl;
        }

    test.cpp:

        #include <string>
        #include "test.h"

        namespace
        {
          std::string TEST = "test";
        }

        int main()
        {
          f<int>();
          return 0;
        }

    Under V10.1, we get the following error:

        "test.h", line 7.16: 1540-0274 (S) The name lookup for "TEST" did not find a declaration.
        "test.cpp", line 6.15: 1540-1303 (I) "std::string TEST" is not visible.
        "test.h", line 5.6: 1540-0700 (I) The previous message was produced while processing "f<int>()".
        "test.cpp", line 11.3: 1540-0700 (I) The previous message was produced while processing "main()".

    We found a similar difference between g++ 3.3.2 and 4.3.2. I also found that in g++, if I move the #include "test.h" to after the unnamed namespace declaration, the compile error goes away.

    So here's my question: what does the Standard say about this? When a template is instantiated, is that instance considered to be declared at the point where the template itself was declared, or is the Standard not that clear on this point? I did some looking through the n2461.pdf draft, but didn't really come up with anything definitive.

    Read the article

  • How to ask questions to an obstructionist?

    - by Rob Wells
    This is not related to my other recently posted question about "working with a star developer". In a similar vein, though: how do you work with someone who will only answer the specific question that you ask?

    I worked with someone who, when you asked a question on a specific aspect of the system, would give you the answer related only to the specific bit you'd asked about. For example, when processing radar messages, I'd ask about an aspect of message number RJ546, and he would answer just about that specific part of RJ546. He wouldn't mention anything about the other freaky parts of the message, or mention any related aspects of the other messages. Then you'd go off and work on the processing, and all of a sudden all this other freakiness would pop up.

    What's a good technique for working with this type of person?

    BTW, I later found out that the person I'd come in to replace had quit because he got sick and tired of having these surprises pop up due to the lack of information provided by this person.

    Edit: I forgot to add that the person was deliberately obstructionist and believed that job security came from hoarding knowledge rather than disseminating it.

    Read the article

  • Need help with jQuery JSON data transfer from a PHP file

    - by Scarface
    Hey guys, I am trying to return the latest 10 results of a query from a PHP file, in JSON format, to a jQuery getJSON function that prints the results. I am getting weird problems though. For example, I am only getting 8 entries returned, some are out of order, and sometimes nothing is returned at all. I am not really sure what I am doing wrong, so if anyone has any ideas I would really appreciate it.

    This is my query ($res):

        SELECT time, user, message FROM comments
        WHERE topic_id='$topic_id'
        ORDER BY time DESC LIMIT 10

    This is the processing of the results:

        while ($row = mysql_fetch_array($res)) {
            $message = $row['message'];
            $user = $row['user'];
            if ($row['message'] AND $row['time'] > $_GET['time'])
                $data[] = $row;
        }
        $out = json_encode($data);
        print $out;

    And this is the retrieval, where prepare is just a function that returns information into a div:

        $.getJSON(files + "processing.php?action=load&time=" + 0 + "&topic_id=" + topic_id + "&t=" + (new Date()),
            function(json) {
                if (json.length) {
                    for (i = 0; i < 10; i++) {
                        $('#comment-list').prepend(prepare(json[i]));
                        $('#list-' + count).fadeIn(1500);
                    }
                }
            });

        function prepare(response) {
            count++;
            var string = '<li class="comment-list" id="list-' + count + '">'
                // organize info into a div
                + '</li>';
            return string;
        }

    Read the article

  • How to open multiple socket connections and do callbacks in PHP

    - by Click Upvote
    I'm writing some code which processes a queue of items. The way it works is this:

    1. Get the next item flagged as needing to be processed from the MySQL database row.
    2. Request some info from a Google API using cURL, and wait until the info is returned.
    3. Do the remainder of the processing based on the info returned.
    4. Flag the item as processed in the db, and move on to the next item.

    The problem is with step 2: Google sometimes takes 10-15 seconds to return the requested info, and during this time my script has to remain halted and wait. I'm wondering if I could change the code to do the following instead:

    1. Get the next 5 items to be processed as usual.
    2. Request info for items 1-5 from Google, one after the other.
    3. When the info for item 1 is returned, a 'callback' should be done which calls a function, or otherwise runs some code which then does the remainder of the processing on items 1-5.
    4. The script then starts over until all pending items in the db are marked processed.

    How can something like this be achieved?
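
    What's being described is overlapping the slow network waits and handling each item as its response arrives; in PHP the usual tool for this is the curl_multi_* family. The shape of the idea, sketched in Python with fetch_info standing in for the Google API call:

        from concurrent.futures import ThreadPoolExecutor, as_completed

        def fetch_info(item):
            # placeholder for the 10-15 second Google API round trip
            return {"item": item, "info": "..."}

        def process(item, info):
            print("finishing item", item, "with", info)

        batch = [1, 2, 3, 4, 5]  # next 5 unprocessed items from the db

        with ThreadPoolExecutor(max_workers=5) as pool:
            futures = {pool.submit(fetch_info, item): item for item in batch}
            for done in as_completed(futures):  # fires as each reply lands
                process(futures[done], done.result())
        # then mark the batch processed and fetch the next 5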

    Read the article

  • IIS 7 returns 304 instead of 200

    - by Ola Herrdahl
    I have a strange issue with IIS 7. Sometimes it seems to return a 304 instead of a 200. Here is a sample request captured with Fiddler (note that the requested file is not in my browser's cache yet):

        GET https://[mysite]/Content/js/jquery.form.js HTTP/1.1
        Accept: */*
        Referer: https://[mysite]/Welcome/News
        Accept-Language: sv-SE
        User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.2; OfficeLiveConnector.1.4; OfficeLivePatch.1.3; .NET4.0C; .NET4.0E)
        Accept-Encoding: gzip, deflate
        Host: [mysite]
        Connection: Keep-Alive
        Cache-Control: no-cache
        Cookie: ...

    Note that there is no If-Modified-Since or If-None-Match in the request. But still the response is:

        HTTP/1.1 304 Not Modified
        Cache-Control: public
        Expires: Tue, 02 Mar 2010 06:26:08 GMT
        Last-Modified: Mon, 22 Feb 2010 21:58:44 GMT
        ETag: "1CAB40A337D4200"
        Server: Microsoft-IIS/7.5
        X-Powered-By: ASP.NET
        Date: Mon, 01 Mar 2010 17:06:34 GMT

    Does anyone have a clue what could be wrong here? I'm running IIS 7 on Windows Web Server 2008 R2.

    Read the article

  • How to push changes from Test server to Live server?

    - by anonymous
    As a beginner, I finally noticed the issue with making changes to the live server I've been working on, now that I have a couple of users on it, since I bring it down so often.

    I created an EC2 image of my live server and set up a separate instance on EC2, so now I have two EC2 instances, Stage and Production. I set up GitHub and push changes to Stage and test my code there, and when it's all done and working, I push it to the Production branch, and everything is good. There is a slight wrinkle here, since I name my files config_stage.js and config_production.js and set up .gitignore on each server; in my code, I have it read the ENV flags and set up the appropriate configs. Is this the correct approach?

    And my main question is: how do you keep track of non-code changes to the server? For example, I installed HAProxy, Stunnel, Redis, MongoDB and several other things onto the Stage server for testing, and now that it's all working and good, how do I deploy them to Production? Right now, I'm just keeping track of everything I installed and copying configuration files over, which is very tedious, and I'm afraid I may have missed a step somewhere. Is there a better way to port these changes over from my test server to my live server?

    Read the article

  • Apache Caching and Expires configuration

    - by mcondiff
    I'm looking for the best possible caching/expires configuration for my specific situation.

    I realize that some sites have advocated turning ETags off:

        Header unset ETag
        FileETag None

    I know that I should use either Expires or Cache-Control. In addition, I know that I should use either Last-Modified or ETags (per the YSlow docs). I inherited a client's server that uses the following in .htaccess:

        <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf|xml|txt|html|htm)$">
            Header set Cache-Control "max-age=172800, public, must-revalidate"
        </FilesMatch>

    With this server I am not going to be able to rely on staff to rename images, CSS and JS in web applications, so I do not want to set the expires far in the future without knowing (with good certainty) that most/all browsers will check to see if content has changed. What I do not want to happen is someone calling me to say the website is broken because they replaced an image and it's not showing up. But I do want to take as much advantage of caching and expires as I can, while still ensuring that almost all browsers will check with the server to see if components have changed.

    I have access to both .htaccess and the Apache .conf file, and it is a single server; the content is not deployed on multiple servers. What would be the best .htaccess or .conf configuration for me to achieve my goals for this client's server? Thanks for your help

    Read the article

  • Wrong CSS mime type with Roundcube 0.5 beta and nginx

    - by Julien Vehent
    I'm running into a CSS problem. This is a setup based on Debian Squeeze (nginx/0.7.67, php5/cgi) on which I installed the latest Roundcube 0.5 beta. PHP is properly processed and login works fine, but the CSS files are not loaded and Firefox throws the following errors:

        Error: The stylesheet https://webmail.example.net:10443/roundcube/skins/default/common.css?s=1290600165 was not loaded because its MIME type, "text/html", is not "text/css".
        Source File: https://webmail.example.net:10443/roundcube/?_task=login
        Line: 0

        Error: The stylesheet https://webmail.example.net:10443/roundcube/skins/default/mail.css?s=1290156319 was not loaded because its MIME type, "text/html", is not "text/css".
        Source File: https://webmail.example.net:10443/roundcube/?_task=login
        Line: 0

    As far as I understand, nginx doesn't see the .css extension (because of the ?s= argument) and thus sets the MIME type to the default value, text/html. Should I fix this in nginx (and how?), or is it Roundcube-related?

    Edit: It seems that it's nginx-related. The content type isn't set for any type other than text/html. I had to manually include the following declarations to force the CSS and JS content types. That's ugly, and I never had the problem before... any idea?

        location ~ \.css {
            add_header Content-Type text/css;
        }
        location ~ \.js {
            add_header Content-Type application/x-javascript;
        }

    Read the article

  • Hosting a JavaScript API file for third-party sites the way ShareThis, UserVoice, Analytics do it

    - by Dayson
    I'm preparing to launch a service soon which will provide third-party websites with a widget. The widget requires my JavaScript file in the website's code, exactly the same way services like Analytics, UserVoice, ShareThis, GetClicky, etc. provide you with a JavaScript snippet to add to your page. Therefore, my JavaScript file is going to be hotlinked by tons of websites, which possibly receive a lot of requests too. I need advice/opinions on the following aspects:

    1. What's the right location for hosting this file? Should I use a subdomain for it? I was thinking of something like http://api.myservice.com/js/foo.js . Remember, once websites start embedding this file, its location CANNOT change under any circumstances.

    2. Right now we can afford just one dedicated server. So I have minified my file, enabled gzip, and plan to use some good cache-control headers through Apache. Also, in the near future when the requests pick up, I will use an HTTP proxy like Varnish. Is this a good plan for the near future?

    3. Should I be considering a CDN in the future (since we can't afford it now)? If so, how do I make sure we're prepared to migrate to it without breaking services? What are the pros/cons of moving just this file to a CDN? Also, since it's just one JavaScript file (50 KB), is there any affordable CDN we could consider from the beginning?

    4. Any other word of advice I could use? Anything I shouldn't overlook at this stage which I would regret later? (Both in terms of the server and JavaScript/AJAX limitations.)

    Thanks in advance.

    Read the article

  • How to find the cause of the main file system going into read-only mode

    - by user606521
    Ubuntu 12.04. The file system goes into read-only mode frequently.

    First of all, I have already read the question "file system is going into read only mode frequently". But I need to know whether it's caused by something other than a dying hard drive.

    This is a server provided by my client, and I am just running some Node.js workers plus one Node.js server there, and I am using MongoDB. From time to time (every 20-50 hours) the system suddenly makes the filesystem read-only, the mongodb process fails (due to the read-only fs), and my node workers/server (which are started by forever) are just killed.

    Here is the log from dmesg. I can see some errors there, and messages that the FS is going read-only, and there is also some JOURNAL error, but I would like to find the cause of those errors: http://speedy.sh/Ux2VV/dmesg.log.txt

    Edit:

        smartctl -t long /dev/sda
        smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.5.0-23-generic] (local build)
        Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net
        SMART support is: Unavailable - device lacks SMART capability.
        A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options.

    What am I doing wrong? The same happens for sda2. Moreover, now when I type any command that does not exist in the shell, I get this:

        Sorry, command-not-found has crashed! Please file a bug report at:
        https://bugs.launchpad.net/command-not-found/+filebug
        Please include the following information with the report:

    Read the article

  • HTML Redirect issue with Apache2

    - by Vijit Jain
    I am facing an issue with ProxyPass on my Apache server on Ubuntu. I have configured Apache to deal with virtual hosts on my server. There is an application which runs on the server and uses ports 8001 and 8002. I need something like www.example.com/demo/origin to display the contents that I would see when I visit www.example.com:8000. The contents to be displayed are a host of HTML pages. This is the section of the virtual host config that has issues:

        ProxyPass /demo/vader http://www.example.com:8001/
        ProxyPassReverse /demo/vader http://www.example:8001/
        ProxyPass /demo/skywalker http://www.example.com:8002/
        ProxyPassReverse /demo/skywalker http://www.example.com:8002/

    Now when I visit example.com/demo/skywalker, I see the first page of port 8002, say the login.html page. The second page should have been www.example.com/demo/skywalker/userAction.html; instead the server shows www.example.com:8000/login.html. In the error logs I see something like:

        [Mon Nov 11 18:01:20 2013] [debug] mod_proxy_http.c(1850): proxy: HTTP: FILE NOT FOUND /htdocs/js/demo.72fbff3c9a97f15a4fff28e19b0de909.min.js

    I do not have any folder htdocs on the system. This is only an issue while viewing .html pages; otherwise, no such issue occurs. When I visit localhost:8001 it will show any and all contents without any errors or issues. www.example.com/demo/skywalker, www.example.com/demo/origin and www.example.com/demo/vader each display a different webpage. I have also tried one more combination:

        <Location /demo/origin/>
            ProxyPass http://localhost:8000/
            ProxyPassReverse http://localhost:8000/
            ProxyHTMLURLMap http://localhost:8000/ /
        </Location>

    This fails as well. I would greatly appreciate it if anyone can help me resolve this issue.

    Read the article

  • Unusual Caching Issue with IE 7/8 and IIS 7

    - by Daniel A. White
    We recently moved a site into production running Server 2008 x64 and IIS 7. The ASP.NET pages apparently load just fine, but when it comes to IE 7 and 8, a weird caching issue has cropped up with the CSS and JavaScript files on the page.

    On a very sporadic schedule, IE does not get all the files necessary to compose the page (i.e. CSS and JS files). When I manually go to the missing files from the address bar, they come back from local cache as empty. I F5 these source files and magically they come down properly. I refresh the site after loading a few files and the cache seems to hold.

    This problem has only been reproduced (again, sporadically) on IE 7 and 8 running XP. Chrome and Firefox appear to be immune. We have set IIS to use server-side kernel caching for CSS, JS and images. We have also set content for the App_Themes and Scripts directories to expire immediately. One initial thought was that it was a SWF loading an FLV on page load. These fixes have not remedied the problem. We had no problems on our staging server, which is using Server 2003 and IIS 6.

    Any ideas would be greatly appreciated.

    P.S. It sounds similar to this problem, but we do have the Static Content module installed: http://serverfault.com/questions/115099/iis-content-length-0-for-css-javascript-and-images

    Read the article
