Search Results

Search found 20904 results on 837 pages for 'disk performance'.

  • jQuery AJAX & an ASP.NET web service works locally but not remotely

    - by Alex
    Interesting one here. I have an ASP.NET 1.1 project that contains a web service. I'm using jQuery's AJAX functionality to call some of the services from the client. This is what my code looks like:

        $.ajax({
            type: "POST",
            url: 'foo.asmx/functionName',
            data: 'foo1=' + foo1 + '&foo2=' + foo2,
            dataType: "xml",
            success: function(xml) {
                // do something with my xml data
            },
            error: function(request, error) {
                // handle my error
            }
        });

    This works great when I run the site from my IDE on localhost. However, when I deploy the site to any other server, I get a parsererror from jQuery. It does not appear to even call my service: I dropped in some code to write a log file to disk, and the file never appears. The exact same XML should be returned from both localhost and the deployed server. Any ideas?

    Read the article

  • Calculate time taken by each cpp file to compile in VS2005?

    - by Rajiv Podar
    Hi guys, I am writing a tool to gather metrics on the current build performance of the project. For that I need the time taken to compile each file. I tried the following option, but it didn't give me what I need: Tools > Options > Projects and Solutions > VC++ Project Settings > Build Timing = Yes. That option gives me the total time taken to build the solution, but my problem is getting the time for each individual file. I am using VS2005. Does anyone have any ideas?
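
    One generic workaround, sketched below, is to time each compiler invocation yourself by putting a thin wrapper ahead of the real cl.exe (for example, a cl.bat placed first in the VC++ executable directories that invokes a script like this one). This is an assumption-laden sketch, not a VS2005 feature: the compiler path, log location and argument filtering are all placeholders.

        # time_cl.py -- hypothetical wrapper around the real cl.exe that records how
        # long each invocation takes. Paths below are placeholders.
        import subprocess, sys, time

        REAL_CL = r"C:\Program Files\Microsoft Visual Studio 8\VC\bin\cl.exe"  # assumed location
        LOG_FILE = r"C:\temp\compile_times.log"

        start = time.time()
        result = subprocess.call([REAL_CL] + sys.argv[1:])   # forward every argument unchanged
        elapsed = time.time() - start

        # Treat arguments ending in a C/C++ extension as the source files being compiled.
        sources = [a for a in sys.argv[1:] if a.lower().endswith((".cpp", ".cc", ".c"))]
        with open(LOG_FILE, "a") as log:
            log.write("%8.2fs  %s\n" % (elapsed, " ".join(sources) or "(no source arguments)"))

        sys.exit(result)

    Summing the logged lines after a full rebuild gives a rough per-file compile-time table.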

    Read the article

  • INSERT INTO ... SELECT ... vs dumping/loading a file in MySQL

    - by Daniel Huckstep
    What are the implications of using an INSERT INTO foo ... SELECT ... FROM bar JOIN baz ... style insert statement versus using the same SELECT statement to dump (bar, baz) to a file, and then inserting into foo by loading the file? In my messing around I haven't seen a huge difference. I would assume the former uses more memory, but the machine this runs on has 8 GB of RAM and I never see it go past half used. Are there any huge (or long-term) performance implications that I'm not seeing? Advantages/disadvantages of either?
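
    If you want to measure rather than guess, a rough timing harness helps. The sketch below assumes the mysql-connector-python driver and made-up table, column and file names; MySQL's own dump/load path is SELECT ... INTO OUTFILE followed by LOAD DATA INFILE.

        # Rough timing sketch: INSERT INTO ... SELECT vs dump-to-file + LOAD DATA.
        # Credentials, table names and the /tmp path are placeholders; truncate foo
        # between the two runs for a fair comparison.
        import time
        import mysql.connector  # any DB-API driver works the same way

        conn = mysql.connector.connect(user="user", password="pw", database="test")
        cur = conn.cursor()

        def timed(label, statements):
            start = time.time()
            for sql in statements:
                cur.execute(sql)
            conn.commit()
            print("%-18s %.1fs" % (label, time.time() - start))

        timed("insert...select", [
            "INSERT INTO foo SELECT b.id, z.name FROM bar b JOIN baz z ON z.bar_id = b.id",
        ])

        timed("outfile + load", [
            "SELECT b.id, z.name INTO OUTFILE '/tmp/foo.tsv' "
            "FROM bar b JOIN baz z ON z.bar_id = b.id",
            "LOAD DATA INFILE '/tmp/foo.tsv' INTO TABLE foo",
        ])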

    Read the article

  • Efficient job progress update in web application

    - by Endru6
    Hi, I'm creating a web application (Django in my case, but I think the question is more general) that administers a cluster of workers doing queued jobs, and I need to track each job's progress. When I did it with database UPDATEs (PostgreSQL in this case), it severely hit database performance, because each UPDATE creates a new row version in the table and in my case only vacuuming the DB removes the obsolete ones. With 30 jobs running and reporting progress every minute, the DB needs vacuuming (which means huge slowdowns on the front end for all the employees working with the system) every 10 days or so. Because the progress information isn't critical, i.e. it doesn't have to be persistent, how would you do the progress updates from jobs without the overhead a database implies? There are 30 worker servers, each doing 1 or 2 jobs simultaneously, 1 front-end server which serves the web application to users, and 1 database server.
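
    Since the data is throwaway, one common approach is to report progress into an in-memory store (memcached or Redis) shared by the workers and the front end, and never touch PostgreSQL for it. A minimal sketch using Django's cache framework, with made-up key names:

        # Worker side: write progress into the shared cache instead of the database.
        from django.core.cache import cache

        def report_progress(job_id, percent_done):
            # Expire after 10 minutes so stale entries clean themselves up.
            cache.set("job_progress:%s" % job_id, percent_done, timeout=600)

        # Front-end side: read it back for display; None means "no recent report".
        def get_progress(job_id):
            return cache.get("job_progress:%s" % job_id)

    This assumes a shared cache backend (e.g. memcached) configured in Django's settings and reachable from all 30 workers and the front-end server.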

    Read the article

  • When is BIG, big enough for a database?

    - by David ???
    I'm developing a Java application that has performance at its core. I have a list of some 40,000 "final" objects, i.e. initialization input data of 40,000 vectors. This data is unchanged throughout the program's run. I am always performing lookups against a single ID property to retrieve the proper vectors. Currently I am using a HashMap over a sub-sample of 1,000 vectors, but I'm not sure it will scale to production. When is BIG actually big enough to use a DB? One more thing: an SQLite DB is a viable option as no concurrency is involved, so I guess the "threshold" for DB use is perhaps lower.
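
    The question is about Java, but the trade-off is cheap to measure. The rough sketch below (in Python, with made-up vector sizes) compares lookups in an in-memory map against an indexed in-memory SQLite table; 40,000 small vectors is well within what a plain map handles comfortably.

        # Rough comparison: in-memory map lookups vs an indexed SQLite table.
        import random, sqlite3, time

        N = 40_000
        vectors = {i: [random.random() for _ in range(8)] for i in range(N)}

        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE vec (id INTEGER PRIMARY KEY, data TEXT)")
        db.executemany("INSERT INTO vec VALUES (?, ?)",
                       ((i, repr(v)) for i, v in vectors.items()))

        ids = [random.randrange(N) for _ in range(100_000)]

        start = time.time()
        for i in ids:
            _ = vectors[i]
        print("dict lookups:   %.3fs" % (time.time() - start))

        start = time.time()
        for i in ids:
            _ = db.execute("SELECT data FROM vec WHERE id = ?", (i,)).fetchone()
        print("sqlite lookups: %.3fs" % (time.time() - start))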

    Read the article

  • How to Get the Method/Function Call Trace for a Specific Run?

    - by JackWM
    Given a Java or JavaScript program, after its execution, print out the sequence of calls, in invocation order. E.g. given:

        main() { A(); }
        A() { B(); C(); }

    the call trace should be: main -> A() -> B() -> C(). Is there any tool that can profile a run and output this kind of information? It seems like a common need for debugging or performance tuning. I noticed that some profilers can do this, but I'd prefer a simpler, easier-to-use one. Thanks!
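
    For Java this is normally done by a profiler or an instrumenting agent rather than by hand. Just to show the shape of the technique (and of the output), here is the same idea sketched in Python, whose interpreter exposes a call-tracing hook directly:

        # Print every function call of a single run, indented by call depth.
        import sys

        depth = 0

        def tracer(frame, event, arg):
            global depth
            if event == "call":
                print("  " * depth + frame.f_code.co_name)
                depth += 1
            elif event == "return":
                depth -= 1
            return tracer        # keep receiving events for nested frames

        def B(): pass
        def C(): pass
        def A():
            B()
            C()
        def main():
            A()

        sys.settrace(tracer)
        main()
        sys.settrace(None)

    Running it prints main, A, B, C with indentation matching the invocation order above.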

    Read the article

  • What is the difference between these two ways of creating NSStrings?

    - by adame
        (1) NSString *myString = @"Hello";
        (2) NSString *myString = [NSString stringWithString:@"Hello"];

    I understand that method (1) creates a pointer to a string literal that lives in static memory (and cannot be deallocated), and that (2) creates an NSString object that will be autoreleased. Is using method (1) bad? What are the major differences? Are there any instances where you would want to use (1)? Is there a performance difference? P.S. I have searched extensively on Stack Overflow and while there are questions on the same topic, none of them answer the questions I have posted above.

    Read the article

  • The case of the mysterious MySQL caching across restarts

    - by shanusmagnus
    I found a very slow MySQL query in my web app. The weird thing is that the query is only slow the first time it's executed, despite the fact that the query_cache is set to its default (query_cache_size 0) like so:

        mysql> show variables like 'query%';
        +------------------------------+---------+
        | Variable_name                | Value   |
        +------------------------------+---------+
        | query_alloc_block_size       | 8192    |
        | query_cache_limit            | 1048576 |
        | query_cache_min_res_unit     | 4096    |
        | query_cache_size             | 0       |
        | query_cache_type             | ON      |
        | query_cache_wlock_invalidate | OFF     |
        | query_prealloc_size          | 8192    |
        +------------------------------+---------+

    The even weirder thing is that this speedup persists even after the MySQL server has been stopped and restarted (I'm using OSX, and perform this restart using the system preferences pane). The only way I can re-create the poor performance of the initial query is by rebooting the system. So my question is: how is this happening? Obviously some sort of caching is at work, but where? And how does it persist across database restarts? This query is mediated through our web app, which comes via PHP/Apache, but there are no extra bells and whistles, and the curious caching also persists across Apache restarts. Help?

    Read the article

  • How to track when my application has unexpectedly shut down?

    - by Vilx-
    I'm writing an application whose purpose involves a lot of logging of different events. Among those I would also like an event for the application shutting down - even unexpectedly, for example because of a power loss. Naturally, when the power goes out I don't get a chance to write anything anywhere. So my idea was to continuously write a timestamp to some known location (say, once per minute); when the application is next run, it can determine the approximate time of the unexpected shutdown. A precision of 1 minute is acceptable for me. However, I'm worried that caching at the OS and disk level might interfere with this approach. Is there a better way, or if not, how do I make sure that the data I just wrote is REALLY written out to the physical medium? Added: oh, almost forgot the buzzword line: Windows XP and above; .NET 3.5; C#.
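
    The heartbeat approach works as long as each write is pushed past both the runtime's buffer and the OS cache (on Windows that ultimately means the FlushFileBuffers call; in .NET you flush the stream and then ask the OS to flush as well). Here is a minimal sketch of the idea, written in Python for brevity, where os.fsync plays that role; the file name is made up:

        # Heartbeat sketch: write a timestamp once a minute and force it to disk.
        # On the next start-up, the last timestamp approximates when power was lost.
        import os, time

        HEARTBEAT_FILE = "heartbeat.txt"   # placeholder location

        def heartbeat_loop():
            while True:
                with open(HEARTBEAT_FILE, "w") as f:
                    f.write(str(int(time.time())))
                    f.flush()              # flush the runtime's buffer...
                    os.fsync(f.fileno())   # ...then ask the OS to write to the device
                time.sleep(60)

        def last_shutdown_estimate():
            try:
                with open(HEARTBEAT_FILE) as f:
                    return float(f.read())
            except (OSError, ValueError):
                return None                # no heartbeat recorded yet

    Note that a disk with write caching enabled can still lose the very last write on power failure; flushing narrows the window but cannot eliminate it.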

    Read the article

  • SQL alert for a stored procedure?

    - by superdupersomething
    I have a SQL Server 2005 setup and am rather new to it :) I've been cracking at this for a few hours and I just need some help. I have been able to set up alerts successfully for the standard "SQL Server performance" events (it's fun), so I already have email alerts working. However, I need the alert to run a stored procedure I have created and, depending on its output, alert me or not via email. So far I have been trying to use WMI events, but I keep getting the error "The @wmi_query could not be executed in the @wmi_namespace provided. Verify that an event class selected in the query exists in the namespace and that the query has the correct syntax." The query definitely works, so I have no idea what's wrong. Is there a different way to do this?

    Read the article

  • Is it possible to exclude folders from a web application project in vs 2010?

    - by JL
    I had previously asked this question; at the time I was working with VS 2008. To restate the question: I have a web application that generates thousands of small XML files in a certain directory. I would like to exclude this directory from the web application project in Visual Studio 2010. With VS 2008 it was not possible. Has anything changed? Besides the general wait while VS iterates through this directory and adds an item in Solution Explorer for each file, it also strains my system resources, so I would like to exclude it from the project, but the directory and files need to physically exist on disk, because they are part of the application. Any OOB VS 2010 solutions, or any good workarounds? Thanks. Update: this thread also sums up the issue nicely: http://forums.asp.net/t/1179077.aspx

    Read the article

  • Have a workaround for the 1,000 file limit in a directory on windows mobile 5?

    - by nateday76
    I need to download more than 1,000 files into a Windows Mobile 5 directory located on the storage card. If I copy the files onto the storage card via my desktop there is no problem, but when I try to download the files from the handheld device I get a disk-full error even though there is plenty of room, due to the 1,000-file limit. Has anyone run into this and found a workaround? I'm going to try zipping all of the files and then decompressing on the device, but I'm not sure that this will work.

    Read the article

  • Should I put a try-finally block after every Object.Create?

    - by max
    I have a general question about best practice in OO Delphi. Currently, I put try-finally blocks anywhere I create an object, to free that object after usage (and avoid memory leaks). E.g.:

        aObject := TObject.Create;
        try
          aObject.AProcedure();
          ...
        finally
          aObject.Free;
        end;

    instead of:

        aObject := TObject.Create;
        aObject.AProcedure();
        ..
        aObject.Free;

    Do you think it is good practice, or too much overhead? And what about the performance?

    Read the article

  • iPad: How can I implement a scrolling timeline using a static image?

    - by BeachRunnerJoe
    I'm diving into iOS development and I'm building a simple timeline app using a static timeline image that I already have. The timeline image won't fit on the screen: its width is about five times the width of the iPad screen, so I have to allow the user to scroll the image horizontally. Here's a mockup... For each item on the timeline, the user can tap it to get a description at the bottom of the screen. My questions are: 1. I was planning to use a UIScrollView with a PageControl at the bottom. Can a UIScrollView hold a single view containing the entire timeline image, or do I have to break the timeline image up into multiple views? 2. Are there any performance issues I need to consider when implementing this with a UIScrollView and a static image? 3. Are there other approaches to implementing this scrollable timeline that I should consider, other than a UIScrollView? Thanks so much in advance for your wisdom!

    Read the article

  • How to set up an HTTP(S) proxy with record/replay functionality?

    - by superb
    The use case: I want to do some Android app performance tests, and I want to pin down the data the app gets from the web. The solution I've come up with is to set up a local HTTP proxy that first records all HTTP traffic and later replays it while the app is running the perf tests. I found http://mitmproxy.org/, which has exactly the features I want, but it seems that with the default settings it cannot be used as an HTTPS proxy. I tried using it as a proxy and logging in to Facebook, but it doesn't work. I'm not familiar with the HTTPS protocol or how certificates work. Can anyone provide some help? Thanks a lot.
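
    mitmproxy can intercept HTTPS, but only if the client trusts the CA certificate mitmproxy generates; without that the TLS handshake fails, which is the usual reason the Facebook login breaks. The record/replay part is small enough to sketch. Below is a plain-HTTP record/replay proxy in Python using only the standard library (HTTPS is exactly the part such a toy cannot do without the certificate tricks mitmproxy provides); the cache directory and environment-variable switch are made up:

        # Minimal HTTP record/replay forward proxy. Point the device's HTTP proxy at
        # 127.0.0.1:8080. First run records responses to disk; run with REPLAY=1 to
        # serve the recorded responses instead of hitting the network.
        import hashlib, http.server, os, pickle, urllib.request

        CACHE_DIR = "recorded"
        REPLAY = os.environ.get("REPLAY") == "1"

        class RecordReplayProxy(http.server.BaseHTTPRequestHandler):
            def do_GET(self):
                # When used as a forward proxy, self.path is the absolute URL.
                key = hashlib.sha1(self.path.encode()).hexdigest()
                cache_file = os.path.join(CACHE_DIR, key)
                if REPLAY and os.path.exists(cache_file):
                    with open(cache_file, "rb") as f:
                        status, headers, body = pickle.load(f)
                else:
                    with urllib.request.urlopen(self.path) as resp:
                        status = resp.status
                        headers = list(resp.headers.items())
                        body = resp.read()
                    os.makedirs(CACHE_DIR, exist_ok=True)
                    with open(cache_file, "wb") as f:
                        pickle.dump((status, headers, body), f)
                self.send_response(status)
                for name, value in headers:
                    if name.lower() not in ("transfer-encoding", "connection"):
                        self.send_header(name, value)
                self.end_headers()
                self.wfile.write(body)

        if __name__ == "__main__":
            http.server.HTTPServer(("127.0.0.1", 8080), RecordReplayProxy).serve_forever()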

    Read the article

  • Is it possible to remove folders from a web application build process in vs 2010?

    - by JL
    I had previously asked this question; at the time I was working with VS 2008. To restate the question: I have a web application that generates thousands of small XML files in a certain directory. I would like to exclude this directory from the build process in Visual Studio 2010. With VS 2008 it was not possible. Has anything changed? Besides the general wait while VS iterates through this directory with each build, it also strains my system resources, so I would like to exclude it from the project, but the directory and files need to physically exist on disk, because they are part of the application. Any OOB VS 2010 solutions, or any good workarounds? Thanks

    Read the article

  • "Inlining" (kind of) functions at runtime in C

    - by fortran
    Hi, I was thinking about a typical problem that is very JIT-able but hard to approach with raw C. The scenario: setting up a series of function pointers that are going to be "composed" (as in mathematical function composition) once at runtime and then called lots and lots of times. Doing it the obvious way involves many virtual calls, which are expensive, and if there are enough nested functions to fill the CPU branch prediction table completely, performance will drop considerably. In a language like Lisp, I could probably process the code, substitute the "virtual" calls with the actual contents of the functions, and then call compile to get an optimized version, but that seems very hacky and error-prone to do in C, and using C is a requirement for this problem ;-) So, do you know if there's a standard, portable and safe way to achieve this in C? Cheers

    Read the article

  • Optimal search queries

    - by Macros
    Following on from my last question, http://stackoverflow.com/questions/2788082/sql-server-query-performance, and discovering that my method of allowing optional parameters in a search query is sub-optimal, does anyone have guidelines on how to approach this? For example, say I have an application table, a customer table and a contact details table, and I want to create an SP which allows searching on some, none or all of surname, home phone, mobile and application ID. I might use something like the following:

        select *
        from application a
        inner join customer c on a.customerid = c.id
        left join contact hp on (c.id = hp.customerid and hp.contacttype = 'homephone')
        left join contact mob on (c.id = mob.customerid and mob.contacttype = 'mobile')
        where (a.ID = @ID or @ID is null)
          and (c.Surname = @Surname or @Surname is null)
          and (hp.phonenumber = @Homephone or @Homephone is null)
          and (mob.phonenumber = @Mobile or @Mobile is null)

    The schema used above isn't real, and I wouldn't use select * in a real-world scenario; it is the construction of the where clause I am interested in. Is there a better approach, either dynamic SQL or an alternative which can achieve the same result, without the need for many nested conditionals? Some SPs may have 10-15 criteria used in this way.
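
    The usual alternative is to build only the predicates that were actually supplied, still fully parameterized, so the optimizer sees a plan per combination of criteria rather than one plan full of "or @X is null" branches. In T-SQL that means building the statement dynamically and running it with sp_executesql; the sketch below shows the same idea in Python (pyformat parameter style, names mirroring the example above):

        # Build the WHERE clause from only the criteria the caller supplied,
        # keeping every value parameterized.
        def build_search(criteria):
            filters = {
                "id":        "a.ID = %(id)s",
                "surname":   "c.Surname = %(surname)s",
                "homephone": "hp.phonenumber = %(homephone)s",
                "mobile":    "mob.phonenumber = %(mobile)s",
            }
            clauses = [sql for name, sql in filters.items()
                       if criteria.get(name) is not None]
            where = (" where " + " and ".join(clauses)) if clauses else ""
            sql = (
                "select a.*, c.Surname"
                " from application a"
                " inner join customer c on a.customerid = c.id"
                " left join contact hp on c.id = hp.customerid and hp.contacttype = 'homephone'"
                " left join contact mob on c.id = mob.customerid and mob.contacttype = 'mobile'"
                + where
            )
            params = {k: v for k, v in criteria.items() if v is not None}
            return sql, params

        # Usage: pass only the filters the user filled in.
        sql, params = build_search({"surname": "Smith", "mobile": None})
        # cursor.execute(sql, params)   # with any DB-API driver that uses pyformat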

    Read the article

  • InvalidCastException: System.Web.UI.PartialCachingControl -> MyCustomControl when OutputCaching

    - by marcinn
    The problem: I am unable to use output caching with my controls, which derive from MyCustomControl. The controls are loaded dynamically, using definitions from the database, with the Page.LoadControl method. When I add <%@ OutputCache VaryByParam="*" Duration="3600" %> to the ascx, an "InvalidCastException: System.Web.UI.PartialCachingControl -> MyCustomControl" exception is thrown. I am unable to modify the assembly which contains the dynamic control-loading logic. Is there any way to fix it in the derived controls? The second question is about IIS 7 and native output caching: does it resolve this problem? (I tried to set up several performance counters and I saw that the cache wasn't hit...)

    Read the article

  • Can a SQL Server 2008 database support both a REST and SOAP web services within two different endpoints?

    - by PaulDecember
    Say you have a SQL Server 2008 database. You build a SOAP web service, then deploy or publish it using Visual Studio 2010 to one website. Now, using the same database, you build a REST web service in a different solution and deploy it to another website. Can you consume the endpoints and/or .svc files of both the SOAP and REST web services, even though they reference the same SQL Server 2008 database? I don't see why not, but before I go down this path and spend days on it I'd like to make sure. Also, is there a performance hit to the database if it is serving both SOAP and REST services at the same time? Again, I don't see why it would matter, but I must make sure. Thanks.

    Read the article

  • Drupal 6: moving localhost to server | clear cached data

    - by artmania
    Hi friends, I built my Drupal site on localhost and then moved it to the server. Steps: 1. Clear cache tables (Site configuration > Performance > Clear cached data button) on localhost. 2. Export the DB and then import it on the live server. 3. Move files/folders to the live server. 4. Edit settings.php to reflect the live server config. Everything was working great until I clicked Clear Cached Data on the server; then my custom theme, custom front page, etc. all mess up :( What can be the problem? I appreciate the help so much!! Thanks! !!SORTED!! I used a Zen subtheme named mgf, and had taken a backup of it as mgf-. Somehow, after clearing cached data, Drupal linked to this mgf- copy, which is old and doesn't have the latest styling. I just removed that folder from themes, and it linked to the right, latest mgf.

    Read the article

  • Creating AVI files in OpenCV

    - by user80003
    Hello. I have been trying to create an application using OpenCV and Visual Studio 2008, to capture images from a webcam, apply a filter to them, and then write them to an AVI file. Everything works, except creating the AVI file. The problem is that it works on my computer, but it doesn't work on my colleague's computer. The reason for that (I think) is that he does not have the necessary video encoders for OpenCV to use. The cvCreateVideoWriter function does not return NULL, but I end up with a 0kb file on the disk.
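
    The symptom (a writer that is created "successfully" but produces a 0 KB file) usually means the requested codec is not available on that machine, so asking for a specific FOURCC and checking whether the writer actually opened is the first thing to try. A sketch using OpenCV's Python bindings (the question uses the C API, but the parameters map across directly; Motion-JPEG is assumed to be present):

        # Capture from the default webcam and write an AVI with an explicit codec.
        import cv2

        cap = cv2.VideoCapture(0)
        fourcc = cv2.VideoWriter_fourcc(*"MJPG")       # request Motion-JPEG explicitly
        writer = cv2.VideoWriter("out.avi", fourcc, 20.0, (640, 480))

        if not writer.isOpened():
            raise RuntimeError("No encoder for this FOURCC on this machine")

        for _ in range(200):                           # ~10 seconds at 20 fps
            ok, frame = cap.read()
            if not ok:
                break
            frame = cv2.resize(frame, (640, 480))      # frame size must match the writer
            writer.write(frame)

        cap.release()
        writer.release()                               # finalizes the AVI header/index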

    Read the article

  • Which DB Server should I use?

    - by Alex
    I have to develop a new (desktop) app for a small business. This business currently has an Access database with millions of records; the file size is about 1.5 GB. The boss told me that searching this DB is very slow. The DB consists of a single table with about 20 fields, and I also think the overall DB design isn't great. I thought of using another DB server with a new design to improve both performance and efficiency. Considering this is a relatively small business, I don't want to spend much on a DB license, so I want to ask what you would do: 1. Continue to use Access, maybe improving and optimizing the DB in some way. 2. Buy a DB server license (in this case, which one?). 3. ? (any ideas?)
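
    Whichever engine you choose, the first thing worth checking is whether the columns being searched are indexed; on a 1.5 GB single table, a missing index explains "very slow" more often than the engine itself does. As a rough, engine-agnostic experiment (table and column names are made up), the exported data could be loaded into the free SQLite engine and timed with and without an index:

        # Quick experiment: compare search time before and after adding an index.
        import sqlite3, time

        db = sqlite3.connect("business.db")     # assumes the data was imported here

        def timed_search(label, value):
            start = time.time()
            rows = db.execute(
                "SELECT * FROM records WHERE customer_name = ?", (value,)
            ).fetchall()
            print("%-12s %6d rows in %.3fs" % (label, len(rows), time.time() - start))

        timed_search("no index", "ACME Ltd")
        db.execute("CREATE INDEX IF NOT EXISTS idx_customer ON records(customer_name)")
        timed_search("with index", "ACME Ltd")

    The same indexing advice applies whether you stay on Access, move to the free SQL Server Express edition, or pick another engine.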

    Read the article

  • Facing a problem configuring Reporting Services

    - by idrees99
    Hi all, I am using SQL Server 2005 Express Edition and I want to install and configure Reporting Services on my local machine. I have installed Reporting Services, but I am unable to configure it properly. Whenever I try to start the Reporting Services service, it gives me the following message: "The SQL Server Reporting Services (SQLEXPRESS) service on Local Computer started and then stopped. Some services stop automatically if they have no work to do, for example, the Performance Logs and Alerts service." I am using Windows XP Professional. Please help me out, as I have just started using SQL Server and I don't have any idea what is wrong.

    Read the article

  • How can I edit a js file sent by the server before it gets to my browser?

    - by pstone
    During a normal browsing session I want to edit a specific JavaScript file before the browser receives it, since once it gets there it's impossible to edit. Is there any tool for this? For what I need, I can't just save it and edit it on my disk. I'm ready to learn how to program it myself, but if anyone can point out more or less what I have to do I'd be very grateful. I'd have to intercept the packets until I have the whole file, while blocking the browser from receiving any part of it, then edit it manually and forward it to the same port. I don't think I can do this by just using pcap; I've read a bit about scapy but I'm not sure if it can help me either. Thanks in advance.
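
    Rather than working at the packet level with pcap/scapy, the usual approach is an intercepting HTTP proxy: the browser is pointed at the proxy, the proxy fetches the real response, patches the body, and only then hands it to the browser, so there is no need to block packets yourself. A minimal sketch in Python, in the same spirit as the record/replay proxy above (plain HTTP only; the target URL fragment and the string being replaced are made up):

        # Tiny rewriting HTTP proxy: configure the browser to use 127.0.0.1:8080
        # as its HTTP proxy; responses for the matching URL get patched in flight.
        import http.server, urllib.request

        TARGET = "example.com/static/app.js"      # placeholder URL fragment

        class RewritingProxy(http.server.BaseHTTPRequestHandler):
            def do_GET(self):
                with urllib.request.urlopen(self.path) as resp:
                    body = resp.read()
                    ctype = resp.headers.get("Content-Type", "application/octet-stream")
                if TARGET in self.path:
                    body = body.replace(b"oldFunction", b"patchedFunction")  # your edit here
                self.send_response(200)
                self.send_header("Content-Type", ctype)
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)

        if __name__ == "__main__":
            http.server.HTTPServer(("127.0.0.1", 8080), RewritingProxy).serve_forever()

    For HTTPS sites this only works with a proxy that can generate certificates the browser trusts (e.g. mitmproxy, Fiddler or Burp).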

    Read the article
