Search Results

Search found 32994 results on 1320 pages for 'second level cache'.

  • Windows 2003 Server Caching

    - by pablomedok
    We're experiencing table index corruption almost every day on Windows Server 2003. We are running an old application which uses DBF/CDX tables. Everything was fine for ages, but six months after we installed Advantage Database Server (which allows access to some tables from our website) we started to get index corruption problems, and we don't know whom to blame. We've tried to exclude all possible causes of this corruption. All users now work in terminal mode, so no network problems can cause it, and OpLocks also can't be the reason. We changed the hardware, network cards and switches, reinstalled the server, and even moved to a new dedicated server. The only thing we can't exclude is ADS, because it has to keep working.

    Is it possible that local read/write caching causes the problem? E.g. one user or process uses cached data, later another user/process changes it, and later the first user changes it again without knowing about the other change. Is that possible theoretically? Is it possible that this problem is caused by improper file server or caching settings? Is it possible that normal users use non-cached data while ADS uses cached data, or vice versa? Is it possible that each terminal user has its own cache? Or maybe the problem is RAID caching somehow interfering with Windows Server caching? Or maybe there are special Windows Server settings for working with DBF tables that are written simultaneously by several terminal users? Maybe there is a way to turn off caching for certain files so we can check?

    Sometimes we get index corruption twice a day; sometimes everything is fine for five days in a row. Today only one user was working with the database in the evening (usually 30-50 users work simultaneously during working hours), so the load on the server was almost zero. Synchronization with the website is performed every 5 minutes during work hours and every 15 minutes in the evening and on weekends. We've done file-access auditing and it shows that during website synchronizations the ADS server opens the table and index files for ReadEA and WriteEA even though it performs only SELECT queries. ADS does run UPDATE/INSERT queries, but less frequently - not during regular synchronizations, only when an order is placed by a website visitor.

    Please help me. We have been struggling with this problem for almost a year and still can't find any pattern or any clue. Here is my previous question about this issue on DBA: http://dba.stackexchange.com/questions/8646/foxpro-dbf-index-corruption

  • Cache Simulator in C

    - by DuffDuff
    OK, this is only my second question, and it's quite a doozy. It's for a school assignment, but no one (including the TAs) seems to be able to help me. It's kind of a tall order, but I'm not sure where else to turn. Essentially the assignment was to make a cache simulator. This version is direct-mapped and is actually only a small portion of the whole project, but if I can't even get this down I have no chance with the other associativities. I'm posting my whole code because I don't want to make any assumptions about where the problem is. This is the test case: http://www.mediafire.com/?ty5dnihydnw And you run the following command:

        ./sims 512 direct 32 fifo wt pinatrace.out

    You're supposed to get:

        hits: 604037  misses: 138349  writes: 239269  reads: 138349

    But I get:

        Hits: 587148  Misses: 155222  Writes: 239261  Reads: 155222

    If anyone could at least point me in the right direction it would be greatly appreciated. I've been stuck on this for about 12 hours.

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <math.h>

        struct myCache {
            int valid;
            char *tag;
            char *block;
        };

        /* sim [-h] <cache size> <associativity> <block size> <replace alg> <write policy> <trace file> */

        /* God willing I'll come up with a better hex-to-bin conversion that maintains the leading 0s... */
        void hex2bin(char input[], char output[])
        {
            int i;
            int a = 0, b = 1, c = 2, d = 3, x = 4;
            int size = strlen(input);

            for (i = 0; i < size; i++) {
                if      (input[i] == '0') { output[i*x+a]='0'; output[i*x+b]='0'; output[i*x+c]='0'; output[i*x+d]='0'; }
                else if (input[i] == '1') { output[i*x+a]='0'; output[i*x+b]='0'; output[i*x+c]='0'; output[i*x+d]='1'; }
                else if (input[i] == '2') { output[i*x+a]='0'; output[i*x+b]='0'; output[i*x+c]='1'; output[i*x+d]='0'; }
                else if (input[i] == '3') { output[i*x+a]='0'; output[i*x+b]='0'; output[i*x+c]='1'; output[i*x+d]='1'; }
                else if (input[i] == 'x') { output[i*x+a]='0'; output[i*x+b]='1'; output[i*x+c]='0'; output[i*x+d]='0'; }
                else if (input[i] == '5') { output[i*x+a]='0'; output[i*x+b]='1'; output[i*x+c]='0'; output[i*x+d]='1'; }
                else if (input[i] == '6') { output[i*x+a]='0'; output[i*x+b]='1'; output[i*x+c]='1'; output[i*x+d]='0'; }
                else if (input[i] == '7') { output[i*x+a]='0'; output[i*x+b]='1'; output[i*x+c]='1'; output[i*x+d]='1'; }
                else if (input[i] == '8') { output[i*x+a]='1'; output[i*x+b]='0'; output[i*x+c]='0'; output[i*x+d]='0'; }
                else if (input[i] == '9') { output[i*x+a]='1'; output[i*x+b]='0'; output[i*x+c]='0'; output[i*x+d]='1'; }
                else if (input[i] == 'a') { output[i*x+a]='1'; output[i*x+b]='0'; output[i*x+c]='1'; output[i*x+d]='0'; }
                else if (input[i] == 'b') { output[i*x+a]='1'; output[i*x+b]='0'; output[i*x+c]='1'; output[i*x+d]='1'; }
                else if (input[i] == 'c') { output[i*x+a]='1'; output[i*x+b]='1'; output[i*x+c]='0'; output[i*x+d]='0'; }
                else if (input[i] == 'd') { output[i*x+a]='1'; output[i*x+b]='1'; output[i*x+c]='0'; output[i*x+d]='1'; }
                else if (input[i] == 'e') { output[i*x+a]='1'; output[i*x+b]='1'; output[i*x+c]='1'; output[i*x+d]='0'; }
                else if (input[i] == 'f') { output[i*x+a]='1'; output[i*x+b]='1'; output[i*x+c]='1'; output[i*x+d]='1'; }
            }
            output[32] = '\0';
        }

        int main(int argc, char *argv[])
        {
            FILE *tracefile;
            char readwrite;
            int trash;
            int cachesize, blocksize, setnumber, blockbytes, setbits, blockbits, tagsize, m;
            int count = 0, count2 = 0, count3 = 0;
            int i, j, xindex, jindex, kindex, lindex;
            int setadd, totalset;
            int writeMiss = 0, writeHit = 0, cacheMiss = 0, cacheHit = 0;
            int read = 0, write = 0;
            int size, extra;
            char bbits[100], sbits[100], tbits[100], output[100], input[100], origtag[100];

            if (argc != 7) {
                if (strcmp(argv[0], "-h")) {
                    printf("./sim2 <cache size> <associativity> <block size> <replace alg> <write policy> <trace file>\n");
                    return 0;
                } else {
                    fprintf(stderr, "Error: wrong number of parameters.\n");
                    return -1;
                }
            }

            tracefile = fopen(argv[6], "r");
            if (tracefile == NULL) {
                fprintf(stderr, "Error: File is NULL.\n");
                return -1;
            }

            /* Determine the size of the set bits, block bits and tag */
            cachesize = atoi(argv[1]);
            blocksize = atoi(argv[3]);
            setnumber = (cachesize / blocksize);
            printf("setnumber: %d\n", setnumber);
            setbits = (round((log(setnumber)) / (log(2))));
            printf("sbits: %d\n", setbits);
            blockbits = log(blocksize) / log(2);
            printf("bbits: %d\n", blockbits);
            tagsize = 32 - (blockbits + setbits);
            printf("t: %d\n", tagsize);

            struct myCache newCache[setnumber];

            /* Allocate space for the tag bits, initialising tags and valid bits to 0 */
            for (i = 0; i < setnumber; i++) {
                newCache[i].tag = (char *)malloc(sizeof(char) * (tagsize + 1));
                for (j = 0; j < tagsize; j++)
                    newCache[i].tag[j] = '0';
                newCache[i].valid = 0;
            }

            while (fgetc(tracefile) != '#') {
                setadd = 0;
                totalset = 0;

                /* read in the file */
                fseek(tracefile, -1, SEEK_CUR);
                fscanf(tracefile, "%x: %c %s\n", &trash, &readwrite, origtag);

                /* shift the input hex */
                size = strlen(origtag);
                extra = (10 - size);
                for (i = 0; i < extra; i++)
                    input[i] = '0';
                for (i = extra, j = 0; i < (size - (2 - extra)); j++, i++)
                    input[i] = origtag[j+2];
                input[8] = '\0';

                /* convert hex to binary */
                hex2bin(input, output);

                /* resolve the address into tbits, sbits, bbits */
                for (xindex = 0, jindex = (32 - blockbits); jindex < 32; jindex++, xindex++)
                    bbits[xindex] = output[jindex];
                bbits[xindex] = '\0';
                for (xindex = 0, kindex = (32 - (blockbits + setbits)); kindex < 32 - (blockbits); kindex++, xindex++)
                    sbits[xindex] = output[kindex];
                sbits[xindex] = '\0';
                for (xindex = 0, lindex = 0; lindex < (32 - (blockbits + setbits)); lindex++, xindex++)
                    tbits[xindex] = output[lindex];
                tbits[xindex] = '\0';

                /* convert the set bits from a char array into an int */
                for (xindex = 0, kindex = (setbits - 1); xindex < setbits; xindex++, kindex--) {
                    if (sbits[xindex] == '1') setadd = 1;
                    if (sbits[xindex] == '0') setadd = 0;
                    setadd = setadd * pow(2, kindex);
                    totalset += setadd;
                }

                /* calculate hits and misses */
                if (newCache[totalset].valid == 0) {
                    newCache[totalset].valid = 1;
                    strcpy(newCache[totalset].tag, tbits);
                } else if (newCache[totalset].valid == 1) {
                    if (strcmp(newCache[totalset].tag, tbits) == 0) {
                        if (readwrite == 'W') { cacheHit++; write++; }
                        if (readwrite == 'R') cacheHit++;
                    } else {
                        if (readwrite == 'R') { cacheMiss++; read++; }
                        if (readwrite == 'W') { cacheMiss++; read++; write++; }
                        strcpy(newCache[totalset].tag, tbits);
                    }
                }
            }

            printf("Hits: %d\n", cacheHit);
            printf("Misses: %d\n", cacheMiss);
            printf("Writes: %d\n", write);
            printf("Reads: %d\n", read);
        }
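    As an aside for readers following the address-splitting logic above, here is a minimal sketch (added for illustration, not part of the original question) of how a direct-mapped lookup usually splits the address with integer shifts and masks instead of character strings; the set_bits/block_bits values mirror the 512-byte cache with 32-byte blocks from the command line above:

        #include <stdint.h>
        #include <stdio.h>

        /* Sketch: split a 32-bit address into tag / set index / block offset.
           Assumes set_bits = log2(number of sets) and block_bits = log2(block size)
           are already computed, as in the question's code. */
        static void split_address(uint32_t addr, int set_bits, int block_bits,
                                  uint32_t *tag, uint32_t *set, uint32_t *offset)
        {
            *offset = addr & ((1u << block_bits) - 1);               /* low block_bits bits */
            *set    = (addr >> block_bits) & ((1u << set_bits) - 1); /* next set_bits bits  */
            *tag    = addr >> (block_bits + set_bits);               /* everything above    */
        }

        int main(void)
        {
            uint32_t tag, set, offset;
            /* 512-byte cache, 32-byte blocks -> 16 sets -> set_bits = 4, block_bits = 5 */
            split_address(0x0040b040u, 4, 5, &tag, &set, &offset);
            printf("tag=%x set=%u offset=%u\n", tag, set, offset);
            return 0;
        }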

  • PHP File Upload second file does not upload, first file does without error

    - by Curtis
    So I have a script I've been using and it generally works well with multiple files... When I upload a very large file as part of a multiple-file upload, only the first file is uploaded, and I am not seeing any errors as to why. I figure this is related to a timeout setting but cannot figure it out - any ideas? I have the following set in my .htaccess file:

        php_value post_max_size 1024M
        php_value upload_max_filesize 1024M
        php_value memory_limit 600M
        php_value output_buffering on
        php_value max_execution_time 259200
        php_value max_input_time 259200
        php_value session.cookie_lifetime 0
        php_value session.gc_maxlifetime 259200
        php_value default_socket_timeout 259200
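    As a first diagnostic step, a minimal sketch (added for illustration; the attachments[] field name is a placeholder) that dumps PHP's per-file upload error codes - UPLOAD_ERR_INI_SIZE and UPLOAD_ERR_FORM_SIZE point at a size limit, while UPLOAD_ERR_PARTIAL indicates an interrupted (e.g. timed-out) transfer:

        <?php
        // Report the error code PHP recorded for each file in a multi-file upload.
        foreach ($_FILES['attachments']['error'] as $i => $code) {
            if ($code === UPLOAD_ERR_OK) {
                echo "file $i uploaded OK\n";
            } else {
                echo "file $i failed with error code $code\n"; // compare against the UPLOAD_ERR_* constants
            }
        }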

  • HttpPostAttribute not found? worked a second ago

    - by Dejan.S
    Hi, I just opened up a project I did in MVC a while back; it fired right up and I looked at it in the browser, but now all of a sudden it just won't find [HttpPost] or [HttpPostAttribute]. What can the problem be? The error message is:

        The type or namespace name 'HttpPost' could not be found (are you missing a using directive or an assembly reference?)
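    For reference, [HttpPost] ships with ASP.NET MVC 2 and later and lives in the System.Web.Mvc namespace, so a controller that compiles looks roughly like the sketch below (the controller and action names are placeholders):

        using System.Web.Mvc; // HttpPostAttribute is defined here; a missing using or MVC 2 reference produces the error above

        public class AccountController : Controller
        {
            [HttpPost] // shorthand for [HttpPostAttribute]
            public ActionResult LogOn(string userName, string password)
            {
                // placeholder body
                return RedirectToAction("Index", "Home");
            }
        }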

  • Load and Web Performance Testing using Visual Studio Ultimate 2010-Part 3

    - by Tarun Arora
    Welcome back once again. In Part 1 of Load and Web Performance Testing using Visual Studio 2010 I talked about why performance testing an application is important, the test tools available in Visual Studio Ultimate 2010 and various test rig topologies. In Part 2 I discussed the details of web performance and load tests, as well as why it's important to follow a goal-based pattern while performance testing your application. In Part 3 I'll be discussing test result analysis, test result drill-through, test report generation, test run comparison, the ASP.NET Profiler and some closing thoughts.

    Test Results - I see some creepy worms! In Part 2 we put together a web performance test and a load test; let's run the load test to see how the web site responds to the load simulation. While the load test is running you will be able to see close to real-time analysis in the Load Test Analyser window. You can use the Load Test Analyser to conduct load test analysis in three ways:

    - Monitor a running load test: a condensed set of the performance counter data is maintained in memory. To prevent the memory requirements of the results from growing unbounded, up to 200 samples are maintained for each performance counter - 100 evenly spaced samples that span the current elapsed time of the run, plus the most recent 100 samples.

    - After the load test run is completed: the test controller spools all collected performance counter data to a database while the test is running. Additional data, such as timing details and error details, is loaded into the database when the test completes. The performance data for a completed test is loaded from the database and analysed by the Load Test Analyser. Below you can see a screenshot of the summary view, which presents the key results in a format that is compact and easy to read. You can also print the load test summary; this is generated after the test has completed or been stopped.

    - Analyse the load test results of a previously run load test: we'll see this in the section where I discuss the comparison between two test runs.

    The performance counters can be plotted on the graphs. You also have the option to highlight a selected part of the test and view details, and to drill down to the user activity chart, where you can hover over the data to see more details of the test run.

    Generate Report => Test Run Comparisons. The level of reporting you can generate using the Load Test Analyser is astonishing. You have the option to create Excel reports and conduct side-by-side analysis of two test results, or to track trend analysis. The tool also allows you to export the graph data either to MS Excel or to a CSV file. You can view the ASP.NET Profiler report to conduct further analysis as well. 'View Data and Diagnostic Attachments' opens the 'Choose Diagnostic Data Adapter Attachment' dialog box to select an adapter for analysing the result type; for example, you can select an IntelliTrace adapter, click OK and open the IntelliTrace summary for the test agent that was used in the load test.

    Compare results: this creates a set of reports that compares the data from two load test results using tables and bar charts. I have taken these screenshots from the MSDN documentation; I would highly recommend exploring the wealth of knowledge available on MSDN.
    Leaving Thoughts. While load testing the application with an excessive load for a long duration, I managed to bring IIS to its knees by piling up a huge queue of requests waiting to be processed. This clearly means that IIS had run out of threads, as all the threads were busy processing existing requests. One easy way of fixing this is to increase the default number of allocated threads, but that might just escalate the problem; the better approach is to drill down to the actual root cause. Whenever garbage collection runs it stops the processing of pages, so all requests that come in during that period are queued up - although realistically a garbage collection completes in a fraction of a second.

    To understand this better, let's look at the .NET heap. It is divided into the large object heap and the small object heap; anything greater than 85 KB in size is allocated on the large object heap. The large object heap is non-compacting, and remember that large objects are expensive to move around - so if you are allocating something on the large object heap, make sure you really need it! The small object heap, on the other hand, is divided into generations: objects that are expected to be short-lived live in Gen-0, and long-lived objects eventually move to Gen-2 as garbage collections go through.

    As you can see in the picture below, all objects smaller than 85 KB are first assigned to Gen-0. When Gen-0 fills up and a new object comes in to find it full, the garbage collection process starts: it identifies all the dead objects, marks them as candidates for deletion to free up memory, and promotes the remaining Gen-0 objects to Gen-1. So from then on, whenever you clean up Gen-1 you have to clean up Gen-0 as well. When Gen-0 fills up again, the dead objects in Gen-1 are collected, the survivors are moved to Gen-2, and the Gen-0 objects are moved to Gen-1 to free up Gen-0 - but by this time the garbage collection process has started to take much longer than usual. Now, as I mentioned earlier, while garbage collection runs, all incoming page requests are queued up. Does this explain why page requests might be getting queued? Apart from this, it could also be that you are waiting for a long-running database process to complete.

    Let's explore the heap a bit more... The real crisis case is when objects live just long enough to make it to Gen-2 and then die - that is definitely a high-cost operation. Sometimes, of course, you need objects to stay in memory; for example, when you cache data you hold on to the objects because you need them right across the user session, which is acceptable. But if you want to see what extreme caching can do to your server, write a simple application that chucks a lot of data into the cache and run a load test over it for about 10-15 minutes, forcing a lot of data into memory and causing the heap to run out of memory. If you reach a state where memory is running out, IIS, as a recovery mechanism, restarts the worker process. That is a great way to free up all the memory in the heap, but it also clears the cache. The problem is that if a customer had 10 items in their shopping basket and that data was stored in the application cache, their basket will now be empty - forcing them either to get frustrated and go to a competitor's website or, if they are really patient, to give it another try!
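    To make the generation mechanics described above concrete, here is a minimal console sketch (added for illustration; exact promotion behaviour can vary with the GC mode and runtime version):

        using System;

        class GenerationDemo
        {
            static void Main()
            {
                var small = new byte[1024];        // well under 85 KB: small object heap, starts in Gen-0
                var large = new byte[100 * 1024];  // over the ~85 KB threshold: large object heap

                Console.WriteLine(GC.GetGeneration(small)); // 0
                GC.Collect();                               // survivors of a collection get promoted
                Console.WriteLine(GC.GetGeneration(small)); // 1
                GC.Collect();
                Console.WriteLine(GC.GetGeneration(small)); // 2
                Console.WriteLine(GC.GetGeneration(large)); // LOH objects report Gen-2
            }
        }
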
    How can you address this? Well, there are two ways of addressing it.

    1. Workaround - An x86 (32-bit) processor only allows a maximum of 4 GB of RAM to be addressed, which means the machine effectively has around 3.4 GB of RAM available. The OS needs about 1.5 GB of that to run efficiently, and IIS and the .NET framework also need their share of memory, leaving you a heap of around 800 MB to play with. Because team builds by default compile your application with the 'Any CPU' platform target, the application runs in x86 mode on an x86 processor and in x64 mode on an x64 processor. The problem is that not all applications are really x64 compatible, especially if you are using COM objects or external libraries. So, as a quick win, if you compile your application as x86 - by changing the 'Any CPU' selection to 'x86' in the team build - you can run it on an x64 machine in x86 mode (WOW64, Windows-on-Windows), and what that means is the machine can use 8 GB+ of RAM. Taking away everything else, your application will roughly get a heap of at least 4 GB to play with, which is immense. If you need a heap of more than 4 GB, you have either built software for NASA or there is something fundamentally wrong with your application.

    2. Solution - Now that you have a workaround in place, IIS will not restart the worker process as often, which means you can take a breather and start working towards the root cause of the memory leak. But this begs the question: "How do I identify possible memory leaks in my application?" Well, I won't say there is a single tool that can tell you where the memory leak is, but trust me, performance profiling is a great starting point - it definitely gets you moving in the right direction. Let's have a look at how.

    Performance Wizard - Start the Performance Wizard and select Instrumentation; this lets you measure function call counts and timings. Before running the performance session, right-click the performance session settings and choose Properties from the context menu to bring up the performance session properties page and, as shown in the screenshot below, check the boxes in the '.NET memory profiling collection' group, namely 'Collect .NET object allocation information' and 'Also collect the .NET object lifetime information'.

    Now, if you fire off the profiling session on your pages, you will notice that the results allow you to view 'Object Lifetime', which shows you the number of objects that made it to Gen-0, Gen-1, Gen-2, the large object heap, and so on. Another great feature of the profiler is that if more than 5% of your application's objects die right after making it to Gen-2, a threshold alert is generated to warn you. Since you also have the option to view the most expensive methods, and you can drill in by capturing IntelliTrace data, you can narrow down to the line of code that is the root cause of the problem.

    So now we have seen how crucial memory management is and how easy Visual Studio Ultimate 2010 makes it to identify and reproduce such problems with the best-of-breed tools in the product.

    Caching. One of the main ways to improve performance is caching, which basically means that instead of going to the database for each request, the web server keeps the data locally and serves it from there when the user asks for it. BUT that can have consequences!
    Let's look at some code - and trust me, caching code is not very intuitive. I define a cache key for almost all searches made through the common search page and cache the results. The approach works fine: the first time I get the data from the database, and the second time the data is served from the cache - a significant performance improvement - EXCEPT when two users try to do the same operation and run into each other. It is easy to handle that by adding a lock, as in the snippet reproduced at the end of this section. Now, when a user comes in and finds the cache empty, the user takes the lock and starts building the cache, so there are no more concurrency issues. But say you are processing 10 requests per second: by the time I have taken the lock to get the results from the database, 9 other users have come in and found the cache key null, so after I have come out and populated the cache they will still go on to fetch the results again. The application will still be faster, because the next set of 10 users, and so on, will get the data from the cache. BUT if we add another null check - after taking the lock and before the actual call to the database - then the 9 users who follow me will not make the extra trip to the database at all, and that really increases the performance. Didn't I say the code won't be very intuitive? Maybe you should leave a comment - you don't want another developer to come along, take you for a fresher, and wonder why you are checking the cache key for null twice!

    The downside of caching is that you are storing data outside the database, and that data can become wrong: updates applied to the database leave the data cached on the web server out of sync. So how do you invalidate the cache? If you had only one way of updating the data - a single entry point for data updates - you could write logic to set the cache object to null every time new data is entered. But that approach breaks down as soon as there are several ways of feeding data into the system, or the system is scaled out across a farm of web servers. The practical solution is micro-caching: you cache the query results for a set time duration and invalidate the cache after that duration. The advantage is that whenever a user queries for that data within the period for which the results are cached, no calls are made to the database and the data is served straight from the server, which makes the response immensely quick. Figuring out the appropriate time span for micro-caching really depends on the application. Say your website gets 10 requests per second: if you retain the cached results for even one minute, you will see immense performance gains - you would cut 90% of the search hits to the database.

    Ever wondered why, when you go to e-bookers.com or xpedia.com or yatra.com to book a flight and click the Book button because the fare seems too exciting, you get an error message telling you that the fare is no longer valid? Yes, exactly: that is a cache failure! These travel sites and price-comparison engines are not going to hit the database every time you hit the Compare button; the results are served from the cache, because the query results are micro-cached. It's a perfect trade-off: by micro-caching the results the site gains 100% of the performance benefit, but every once in a while it annoys a customer because the fare has expired.
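    A sketch of the pattern being described - the lock plus the second null check - assuming the ASP.NET HttpRuntime.Cache; the Product type, the repository call and the cache key are placeholders, not code from the post:

        using System;
        using System.Collections.Generic;
        using System.Web;
        using System.Web.Caching;

        public class Product { }
        public interface IProductRepository { IList<Product> Search(string query); }

        public class SearchService
        {
            private static readonly object CacheLock = new object();
            private readonly IProductRepository repository; // hypothetical data-access dependency

            public SearchService(IProductRepository repository) { this.repository = repository; }

            public IList<Product> GetSearchResults(string query)
            {
                string cacheKey = "search:" + query;
                var results = HttpRuntime.Cache[cacheKey] as IList<Product>;
                if (results == null)
                {
                    lock (CacheLock)
                    {
                        // Second check: a request that was waiting on the lock may find the
                        // cache already populated and can skip the trip to the database.
                        results = HttpRuntime.Cache[cacheKey] as IList<Product>;
                        if (results == null)
                        {
                            results = repository.Search(query);
                            HttpRuntime.Cache.Insert(cacheKey, results, null,
                                DateTime.UtcNow.AddMinutes(1),  // micro-cache: expire after a set duration
                                Cache.NoSlidingExpiration);
                        }
                    }
                }
                return results;
            }
        }
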
    But the trade-off works in favour of these sites: they are still able to process 30+ page requests per second and cater for the site traffic, maybe losing one customer every once in a while to a competitor who is probably using a similar caching technique anyway - and what are the odds that the user will not come back sooner or later?

    Recap

    Resources. Below are some key resources you might like to review. I would highly recommend the documentation, walkthroughs and videos available on MSDN. You can always make use of Fiddler to debug web performance tests, and some of the community test extensions and plug-ins available on CodePlex might also be of interest to you.

    The Road Ahead. Thank you for taking the time to read this blog post; you may also want to read Part I and Part II if you haven't done so far. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions, feedback, suggestions, etc. - please leave a comment. Next up is 'Load Testing in the Cloud', where I'll be exploring the possibilities of running the test controller and agents in the cloud. See you on the other side! Thank you!

  • user disallowed geolocation - notify user second time

    - by Dror
    When trying to get geolocation on the iPhone the first time, I declined the permission prompt. On every later attempt to get the location (before another reload of the page) I get no response at all - neither the error nor the success callback fires:

        navigator.geolocation.getCurrentPosition(
            function (location) {
                ig_location.lat = location.coords.latitude;
                ig_location.lng = location.coords.longitude;
                alert('got it!');
            },
            function (PositionError) {
                alert('failed!' + PositionError.message);
            }
        );

    Is there a way to notify the user every time I fail to get the location? (I do not need to use watchPosition...)
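    One avenue worth trying (an added sketch - whether iOS fires the timeout error after a denial is an assumption to verify) is passing a PositionOptions timeout so the error callback is invoked even when the request is otherwise silently ignored:

        navigator.geolocation.getCurrentPosition(
            function (location) {
                // success path unchanged
            },
            function (err) {
                if (err.code === err.PERMISSION_DENIED) {
                    alert('Location access is blocked - it can be re-enabled in Settings.');
                } else {
                    alert('failed: ' + err.message); // POSITION_UNAVAILABLE or TIMEOUT
                }
            },
            { timeout: 10000, maximumAge: 0 } // 10 s is an arbitrary illustrative value
        );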

  • jQuery Datepicker - Programmatically Limit Second Datepicker

    - by Dodinas
    Hello all, I've recently been using the following piece of code to limit my second datepicker (the end date) so that it cannot precede the date chosen in the first datepicker:

        $("#datepicker").datepicker({
            minDate: +5,
            maxDate: '+1M +10D',
            onSelect: function (dateText, inst) {
                var the_date = dateText;
                $("#datepicker2").datepicker('option', 'minDate', the_date);
            }
        });

        $("#datepicker2").datepicker({
            maxDate: '+1M +10D',
            onSelect: function (dateText, inst) {
            }
        });

    Lately, however, I wanted to format my datepickers using:

        dateFormat: 'yy-mm-dd'

    But now the second datepicker actually allows the user to pick a date one day earlier. For example, if the user picks 2010-04-03 as the first date, then when the second datepicker pops up they are able to pick 2010-04-02 (one day before their first selected date). I do not want the user to be able to pick a date before their first selected day. Any ideas why this isn't working after I added the dateFormat?
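    One plausible fix to try (an added sketch, assuming the formatted string is being misinterpreted when handed to minDate): parse the selected text back into a Date object with jQuery UI's $.datepicker.parseDate before passing it on:

        $("#datepicker").datepicker({
            minDate: +5,
            maxDate: '+1M +10D',
            dateFormat: 'yy-mm-dd',
            onSelect: function (dateText) {
                // convert '2010-04-03' into a Date so minDate is unambiguous
                var parsed = $.datepicker.parseDate('yy-mm-dd', dateText);
                $("#datepicker2").datepicker('option', 'minDate', parsed);
            }
        });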

  • long polling vs streaming for about 1 update/second

    - by jcee14
    Is streaming a viable option? Will there be a performance difference on the server end depending on which one I choose? Is one better than the other for this case? I am working on a GWT application with Tomcat running on the server end. To understand my needs, imagine updating the stock prices of several stocks concurrently.

  • Workflow Foundation 4 - DeclarativeServiceLibrary - Error while calling second ReceiveAndSendReply

    - by dotnetexperiments
    Hi, I have created a DeclarativeServiceLibrary using VS2010 Beta 2 - please check this image of the sequential service. The following is the code used to call the two activities:

        int? data = 123;
        ServiceReference1.ServiceClient client1 = new ServiceReference1.ServiceClient();
        string result1 = client1.GetData(data);
        string result2 = client1.Operation1(); // this line shows the error :(
        Response.Write(result1 + " :: ::" + result2);

    client1.GetData works perfectly, but client1.Operation1 shows the following error. Please let me know how to fix this.

        There is no context attached to the incoming message for the service and the current operation is not marked with "CanCreateInstance = true". In order to communicate with this service check whether the incoming binding supports the context protocol and has a valid context initialized.

  • Python error with IndentationError: unindent does not match any outer indentation level

    - by Vikrant Cornelio
        from tweepy import Stream
        from tweepy import OAuthHandler
        from tweepy.streaming import StreamListener

        ckey = 'W1VPPrau42ENAWP1EnDGpQ'
        csecret = 'qxtY2rYNN0QT0Ndl1L4PJhHcHuWRJWlEuVnHFDRSE'
        atoken = '1577208120-B8vGWIquxbmscb9xdu5AUzENv09kGAJUCddJXAO'
        asecret = 'tc9Or4XoOugeLPhwmCLwR4XK8oUXQHqnl10VnQpTBzdNR'

        class listener(StreamListener):
            def on_data(self, data):
                print data
                return True

            def on_error(self, status):
                print status

        auth = OAuthHandler(ckey, csecret)
        auth.set_access_token(atoken, asecret)
        twitterStream = Stream(auth, listener())
        twitterStream.filter(track=["car"])

    I typed this into my Python shell and got an error: IndentationError: unindent does not match any outer indentation level. Please help me!

  • App crashes on second execution, but only in Release configuration

    - by denbec
    Hey all, I know this is probably not an easy question to answer, as it's hard to describe offhand. I have an app that runs without problems on the device in the Debug configuration (also across multiple runs). Once I put it into the Release configuration (which I need before publishing?), the app starts without problems and I can proceed to the next page, where I show a core-plot graph - BUT only if I run it from Xcode. As soon as I end the app and start it again, it opens without problems, but on the next page it crashes. Now I don't have anything to debug other than the crash report:

        Exception Type:  EXC_BAD_ACCESS (SIGSEGV)
        Exception Codes: KERN_INVALID_ADDRESS at 0xcf10000a
        Crashed Thread:  0

        Thread 0 Crashed:
        0   libobjc.A.dylib   0x000026f2 objc_msgSend + 14
        1   StandbyCheck      0x0001fbea -[CPXYTheme newGraph] (CPXYTheme.m:36)
        2   StandbyCheck      0x00007c06 -[SCGraphCell initWithStyle:reuseIdentifier:] (SCGraphCell.m:28)
        3   StandbyCheck      0x00076b4a -[TTTableViewDataSource tableView:cellForRowAtIndexPath:] (TTTableViewDataSource.m:128)
        4   UIKit             0x0007797a -[UITableView(UITableViewInternal) _createPreparedCellForGlobalRow:withIndexPath:] + 514
        5   UIKit             0x000776b0 -[UITableView(UITableViewInternal) _createPreparedCellForGlobalRow:] + 28
        6   UIKit             0x00037e78 -[UITableView(_UITableViewPrivate) _updateVisibleCellsNow] + 940
        7   UIKit             0x000367d4 -[UITableView layoutSubviews] + 176
        8   StandbyCheck      0x000734b8 -[TTTableView layoutSubviews] (TTTableView.m:226)
        [...]

    Can someone point me in any direction? What are the differences between the Debug and Release modes? How could I possibly debug this failure? I've been searching for hours now, please help me :(

    Thanks, Dennis

  • CacheManager.getCacheFileBaseDir() always returns null

    - by Leon
    Hi, I've been trying to use the CacheManager for caching some HTTP requests, but it fails every time with a NullPointerException. After some digging I believe I found out why: CacheManager.getCacheFileBaseDir() always returns null, so when I try to use CacheManager.getCacheFile() or CacheManager.saveCacheFile() they fail. Meanwhile, CacheManager.cacheDisabled() returns false :S I hadn't created a cache partition via the AVD manager, so I thought the problem lay there. But after creating a cache partition, getCacheFile() still fails:

        03-16 00:25:16.321: ERROR/AndroidRuntime(296): Caused by: java.lang.NullPointerException
        03-16 00:25:16.321: ERROR/AndroidRuntime(296):     at android.webkit.CacheManager.getCacheFile(CacheManager.java:296)

    What could be the problem? I've posted the code here: http://pastebin.com/eaJwfXEK - but it's a bit messy because I've been trying tons of stuff. Why does CacheManager.getCacheFileBaseDir() return null and not a File object? Thanks in advance! Leon

  • How to prevent caching from jQuery Ajax?

    - by cynwong
    Hi, could anyone please help me with this? I have a web page using a .manifest file for offline storage caching. In that page, I use a jQuery ajax call to get data from the server. If I load the page for the first time, it is OK, and I can switch between online and offline. But the problem comes when I go back online and refresh the page: the jQuery ajax call cannot talk to the server any more. Is there a way for ajax to talk to the server, or to clear the offline cache? My ajax call looks like this:

        $.ajax({
            type: "GET",
            url: requestUrl,
            success: localSuccess,
            error: error,
            dataType: "text",
            cache: false
        });
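    One mechanism worth knowing about here (added for illustration; the file names and the /api/ path are hypothetical): in an HTML5 cache manifest, URLs listed under the NETWORK: section are never served from the application cache, so ajax endpoints placed there keep reaching the server whenever the browser is online:

        CACHE MANIFEST
        # Resources listed under CACHE: are stored for offline use.
        CACHE:
        index.html
        app.js

        # Resources under NETWORK: always bypass the offline cache.
        NETWORK:
        /api/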

  • Playing online video on iPhone web, the iPhone seems to cache the reference movie

    - by Mad Oxyn
    We are working on an online mobile video app and are trying to play a reference movie on an iPhone from the iPhone browser. The problem is that the movie plays the first time, but when we try to play a different movie the second time, it doesn't. QuickTime gives a general error message saying "Cannot Play Movie". More precisely, this is what we are trying to do: connect with an iPhone to a web server that serves a reference movie (generated with QuickTime Pro). The reference movie automatically gets downloaded to the iPhone by QuickTime. QuickTime then chooses one of the three references in the reference movie, based on the connection speed, and tries to download the designated movie via a relative path. A servlet gets called and forwards the relative path to the right movie. This all works the first time; however, the second time, when we want to download a different movie, we get the QuickTime error. Test case:

    - Open reference movie 1
    - Movie plays in QuickTime
    - Open reference movie 2
    - ERROR: QuickTime gives an error - Cannot Play Movie
    - Shut down the iPhone
    - Turn the iPhone on again
    - Open reference movie 2
    - iPhone plays the movie
    - Open movie
    - ERROR: QuickTime gives an error - Cannot Play Movie

    Did anyone encounter similar issues with QuickTime and the use of reference movies, and is there something we can do to work around this?

  • Getting Started with CacheMoney

    - by Matt Grande
    I recently installed cache-money. After some difficulties getting memcached and cache-money set up, I thought I had it working: it cached the one query on my login page fine. But when I log in and go to my message index page, I get this error:

        indices delegated to @cache_config.indices, but @cache_config is nil: Slug(id: integer, name: string, sluggable_id: integer, sequence: integer, sluggable_type: string, scope: string, created_at: datetime)

    Searching for the first part of that error message returns 0 hits on Google, so I'm at a loss as to where to even begin. Any suggestions?

  • How do I start a second console application in Visual Studio when one is already running

    - by Kettenbach
    Hi all, I am working through some examples in a WCF book. There are a Host project and a Client project within a single solution; both are console applications. The Host is the startup app, but the Client app doesn't open its console like the book says it should. The book says to run the Client while the Host is running, but the Run button is disabled because the solution is already running. The book example definitely has them in the same solution and a single instance of Visual Studio. Anyway, what am I missing here? I have done this with two instances of VS, but I have truly never done it in a single instance. Any help is always appreciated. Cheers, ~ck in San Diego

  • Creating a second form page for a has_many relationship

    - by Victor Martins
    I have an Organization model that has_many users through affiliations. In the organization's form (the standard edit) I use semantic_form_for and semantic_fields_for to display the organization fields and the affiliation fields. But I wish to create a separate form just to handle the affiliations of a specific organization. I was trying to add edit_team and update_team actions to the Organizations controller and then create routes for those pages, but it's getting messy and not working. Am I on the right track?

  • XML deserialization doesn't read in second level

    - by Andy
    Sorry if the title doesn't make much sense; I'm not too familiar with the terminology. My question is: I have a config.xml file for a program I'm working on. I created an .xsd file by running 'xsd.exe config.xml', then added that .xsd to the solution in Visual Studio. My last step used a program called xsd2code that turned the .xsd file into a class I can deserialize to. The problem is that it doesn't read more than one level deep into the XML tree. By this I mean the elements in the root node get deserialized into my object, but those inside a node within the root node are not. I found this out by putting a breakpoint after the deserialization and looking at my object. Any ideas? Let me know if this needs some clarification or if you need a snippet of something.
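    For comparison, a minimal sketch (the element names here are invented for illustration) of a nested mapping that XmlSerializer does read two levels deep - the inner node needs its own class with public properties, and the root class needs a property typed to it:

        using System;
        using System.IO;
        using System.Xml.Serialization;

        [XmlRoot("config")]
        public class Config
        {
            [XmlElement("name")]
            public string Name { get; set; }              // first-level element

            [XmlElement("database")]
            public DatabaseConfig Database { get; set; }  // nested node mapped to its own class
        }

        public class DatabaseConfig
        {
            [XmlElement("host")]
            public string Host { get; set; }

            [XmlElement("port")]
            public int Port { get; set; }
        }

        class Program
        {
            static void Main()
            {
                var xml = "<config><name>demo</name>" +
                          "<database><host>localhost</host><port>5432</port></database></config>";
                var serializer = new XmlSerializer(typeof(Config));
                var config = (Config)serializer.Deserialize(new StringReader(xml));
                Console.WriteLine("{0} -> {1}:{2}", config.Name, config.Database.Host, config.Database.Port);
            }
        }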

  • Memory mapped files causes low physical memory

    - by harik
    I have 2 GB of RAM and am running a memory-intensive application; the machine goes into a low-available-physical-memory state and the system stops responding to user actions, such as opening any application, invoking menus, etc. How do I trigger, or tell the system, to swap memory to the page file and free physical memory? I'm using Windows XP. If I run the same application on a 4 GB RAM machine this is not the case; system response is good. After exhausting available physical memory, that system automatically swaps to the page file and frees physical memory - not as bad as the 2 GB system. To overcome this problem (on the 2 GB machine) I attempted to use memory-mapped files for the large datasets allocated by the application. In this case the virtual memory of the application (process) is fine, but the system cache is high and the same problem occurs: physical memory is low. Even though the memory-mapped file is not mapped into the process's virtual memory, the system cache is high. Why???!!! :( Any help is appreciated. Thanks.

  • Need Database Help - A second opinion - thank you

    - by user287745
    I have designed an ER model, normalized it to BCNF, and converted it into tables using VS08. My problem is that I do not know where to get the normalized database checked, to confirm it has no normalization mistakes and cannot be further normalized. Please do not give answers such as "ask a friend" or "ask your professor" - I do not have these resources available, and it is very hard and time-consuming to wait for the relevant person to become available. So are there any sites where I can ask other designers - people like you - to check the normalized database? Please note it should be free. Sorry about my accept rate; I was not aware of accepting answers. All help is appreciated. Thank you.
