Search Results

Search found 3752 results on 151 pages for 'offline caching'.


  • NHibernate Performance Optimization | Suggestions invited!!!

    - by user336749
    Hi, I'm facing an issue with NHibernate performance and would welcome suggestions for optimization. Here is a short summary of my application architecture. I have a Windows service listening to a messaging bus. On receiving a message, the service creates an object, one property of which is the received XML snippet, and saves it to the DB (using NH). There is a WPF UI with a read-only connection to the DB; on refresh, it displays the objects on the screen. During a refresh, the UI retrieves the XML and deserializes it, and from that the object's properties are derived and bound to the screen. For example, assume an XML XXX is received by the service: it deserializes the XML, creates the Book object and saves it to the DB, where one property/column, SCHEMA, contains the XML snippet. The UI, when refreshed, searches all Book objects by ID and recreates them from the saved XML (yes, the XML is the constructor parameter). Now my issue is that the refresh takes more than 2 minutes to display, say, 50 Book objects. I analyzed it using the NHibernate profiler and found that the time spent within the DB is negligible; the time spent creating the entities, however, is proportionally huge (10 ms : 1990 ms). I suspect this is due to the fairly large XML snippets and their deserialization. My question is: how can I improve the performance? I dispose of sessions after every refresh and am not lazy loading (again, the time spent in the DB is negligible). On every refresh it's possible that all objects have been updated by some downstream systems, or that only one of them has. Can I implement some sort of caching mechanism in this case? Thanks in advance for any suggestions. Regards, -Mike
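
    Since the profiler points at deserialization rather than the DB, one hedged approach is to cache the deserialized objects keyed by ID plus a version (or last-updated) column, so a refresh only re-deserializes rows that actually changed. A minimal C# sketch, where Book, Version, Schema and BookViewModel.FromXml are placeholder names assumed for this example:

        // Cache the expensive XML -> object step, keyed by entity identity + version.
        using System;
        using System.Collections.Concurrent;

        public class BookViewCache
        {
            private readonly ConcurrentDictionary<int, Tuple<int, BookViewModel>> cache =
                new ConcurrentDictionary<int, Tuple<int, BookViewModel>>();

            public BookViewModel GetOrDeserialize(Book entity)
            {
                Tuple<int, BookViewModel> entry;
                // Skip deserialization when the row's version has not changed.
                if (cache.TryGetValue(entity.Id, out entry) && entry.Item1 == entity.Version)
                    return entry.Item2;

                BookViewModel view = BookViewModel.FromXml(entity.Schema); // the expensive step
                cache[entity.Id] = Tuple.Create(entity.Version, view);
                return view;
            }
        }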

    Read the article

  • Question about how AppFabric's cache feature can be used.

    - by Kevin Buchan
    I apologize for asking a question that I should be able to answer from the documentation, but I have read, re-read and searched without being able to answer it, which leads me to believe that I have a fundamentally flawed understanding of what AppFabric's caching capabilities are intended for. I work for a geographically dispersed company. We have a particular application that was originally written as a client/server application. It's so massive and business-critical that we want to baby-step converting it to a better-architected solution. One of our ideas was to convert the app to read its data using WCF calls to a co-located web server that would cache communication with the database in the United States. The nature of the application is such that everyone tends to view the same 2000 or so records, with only occasional updates, and those updates are made by a limited set of users. I was hoping that AppFabric's cache mechanism would allow me to set up one global cache: when a user in Asia, for example, requested data that was not in the cache or was stale, the web server would read from the database in the USA, provide the data to the user, then update the cache, which would propagate that data to the other web servers so they would know not to go back to the database themselves. Can AppFabric work this way, or should I just have the servers retrieve their own data from the database?
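
    For what it's worth, AppFabric is closer to the first model than to per-server propagation: the cache cluster holds one logical copy of each item, and every web server does a network lookup against that cluster rather than keeping its own synchronized replica (a local cache is an optional layer on top). A hedged cache-aside sketch against the AppFabric DataCache API, where "records", Record and LoadFromUsDatabase are placeholders:

        using Microsoft.ApplicationServer.Caching;

        public class RecordRepository
        {
            // One factory per app domain; reads the dataCacheClient config section.
            private static readonly DataCacheFactory Factory = new DataCacheFactory();
            private readonly DataCache cache = Factory.GetCache("records");

            public Record GetRecord(int id)
            {
                // Any server in any region asks the cache cluster first...
                Record record = cache.Get("record:" + id) as Record;
                if (record == null)
                {
                    // ...and only a miss (or expiry/eviction) goes back to the US database.
                    record = LoadFromUsDatabase(id);
                    cache.Put("record:" + id, record); // cached objects must be serializable
                }
                return record;
            }

            private Record LoadFromUsDatabase(int id)
            {
                /* hypothetical data-access call against the US database */
                return null;
            }
        }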

    Read the article

  • Ajax Update Panel Not Displaying Updated Image in IE, Chrome and FireFox

    - by jmease
    I have a parent page with an Ajax UpdatePanel that contains an image and a button. There is a hyperlink that opens a child page. When the child page is submitted, an OnClientClick event triggers a JavaScript function that clicks the button in the UpdatePanel on the parent page; the button's click event is both the trigger for the panel and the event that updates the image URL. When I use this on my Android tablet, it works perfectly. However, it doesn't work at all in any browser I've used on a PC (Windows XP). The image URL updates, but the updated image doesn't display without refreshing the entire page. In IE, I can right-click on the image and click Show Image and it updates. In Chrome and Firefox, I have to refresh the entire page. Why would an Ajax control only work properly on the Android OS, and what could I be doing wrong that causes the image not to redisplay on my PC without refreshing the page, even though the image URL is clearly being updated properly? I suspect a caching issue, but don't know how to correct it.
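
    If the URL stays the same between updates (same file name, new contents), desktop browsers will happily serve the image from their cache. A common hedged workaround is a cache-busting query string so each panel update yields a URL the browser has never seen; imgPreview and the path are placeholder names here:

        // In the button's server-side click handler inside the UpdatePanel:
        protected void btnRefresh_Click(object sender, EventArgs e)
        {
            // Ticks make the URL unique per update, defeating the browser image cache.
            imgPreview.ImageUrl = "~/images/preview.png?v=" + DateTime.UtcNow.Ticks;
        }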

    Read the article

  • How to send java.util.logging to log4j?

    - by matt b
    I have an existing application which does all of its logging against log4j. We use a number of other libraries that either also use log4j or log against Commons Logging, which ends up using log4j under the covers in our environment. One of our dependencies even logs against slf4j, which also works fine, since it eventually delegates to log4j as well. Now I'd like to add ehcache to this application for some caching needs. Previous versions of ehcache used commons-logging, which would have worked perfectly in this scenario, but as of version 1.6-beta1 they have removed the dependency on commons-logging and replaced it with java.util.logging instead. Not really being familiar with the built-in JDK logging available through java.util.logging: is there an easy way to have any log messages sent to JUL logged against log4j instead, so I can use my existing configuration and setup for any logging coming from ehcache? Looking at the javadocs for JUL, it looks like I could set a handful of system properties to change which LogManager implementation is used, and perhaps use that to wrap log4j Loggers in the JUL Logger class. Is this the correct approach? Kind of ironic that a library's use of the built-in JDK logging causes such a headache when (most of) the rest of the world is using third-party libraries instead.
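
    One hedged route that avoids writing a custom LogManager: SLF4J ships a jul-to-slf4j bridge, and with the slf4j-log4j12 binding already on the classpath, JUL records from ehcache end up in the existing log4j appenders. A minimal sketch (requires jul-to-slf4j.jar in addition to the SLF4J jars):

        import org.slf4j.bridge.SLF4JBridgeHandler;

        public class LoggingBootstrap {
            public static void init() {
                // Replace JUL's default console handler with the SLF4J bridge;
                // removeHandlersForRootLogger() exists in newer SLF4J releases,
                // otherwise detach the root handlers via the LogManager by hand.
                SLF4JBridgeHandler.removeHandlersForRootLogger();
                SLF4JBridgeHandler.install();
                // From here on, java.util.logging calls are routed to log4j.
            }
        }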

    Read the article

  • LINQ optimization

    - by Budda
    Here is a piece of code:

        void MyFunc(List<MyObj> objects)
        {
            MyFunc1(objects);
            foreach (MyObj obj in objects.Where(obj1 => obj1.Good))
            {
                // Do Action With Good Object
            }
        }

        void MyFunc1(List<MyObj> objects)
        {
            int iGoodCount = objects.Where(obj1 => obj1.Good).Count();
            BeHappy(iGoodCount);
            // do other stuff with 'objects' collection
        }

    Here the collection is analyzed twice, and each time the value of the 'Good' property is checked for each member: the first time when calculating the count of good objects, the second when iterating through all good objects. It would be desirable to optimize this, and here is a straightforward solution: before the call to MyFunc1, create an additional temporary collection of good objects only (goodObjects; it can be IEnumerable); get the count of these objects and pass it as an additional parameter to MyFunc1; in the MyFunc method, iterate not through objects.Where(...) but through the goodObjects collection. Not too bad an approach (as far as I can see), but an additional parameter has to be passed. Question: is there any LINQ out-of-the-box functionality that allows caching during the first Where().Count(), remembering the processed collection and reusing it in the next iteration? Any thoughts are welcome. Thanks.
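
    As far as I know, LINQ to Objects deliberately re-evaluates the query on each enumeration, so there is no built-in transparent cache for Where(); materializing the filtered sequence once is the idiomatic fix. A hedged sketch:

        // ToList() runs the filter exactly once; both the count and the loop
        // then reuse the materialized results instead of re-checking 'Good'.
        List<MyObj> goodObjects = objects.Where(obj1 => obj1.Good).ToList();
        BeHappy(goodObjects.Count);
        foreach (MyObj obj in goodObjects)
        {
            // Do Action With Good Object
        }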

    Read the article

  • php memory how much is too much

    - by Rob
    I'm currently re-writing my site using my own framework (it's very simple and does exactly what I need; I've no need for something like Zend or CakePHP). I've done a lot of work in making sure everything is cached properly, caching pages in files to avoid SQL queries and generally limiting the number of SQL queries. Overall it looks very speedy. The average time taken for the front page (measured over 100 requests) is 0.046152 seconds. But one thing I'm not sure about is whether I've done enough to reduce PHP memory usage. The only time I've ever encountered problems with it is when uploading large files. Using memory_get_peak_usage(TRUE), which I THINK returns the highest amount of memory used while the script has been running, the average (taken over 100 times) is 1572864 bytes (1.5 MB). Is that good? I realise you don't know what it is I'm doing (it's rather simple: get the 10 latest articles, the comment count for each, the user controls, popular tags in the sidebar, etc.). But would you be at all worried about a script using that sort of memory getting hit 50,000 times a day? Or once every second at peak times? I realise this is a very open-ended question. Hopefully you can understand that it's a bit of a stab in the dark, and I'm really just looking for some reassurance that it's not going to die horribly come re-launch day.

    Read the article

  • MVC3 Custom view engine

    - by eth0
    I have a custom view engine that derives from WebFormViewEngine. There's a lot of stuff going on in here, mostly caching. I want to be able to use WebFormViewEngine AND RazorViewEngine at the same time; is this possible? Ideally I'd like to do:

        ViewEngines.Add(new MyViewEngine<WebFormsViewEngine>());
        ViewEngines.Add(new MyViewEngine<RazorViewEngine>());

    so that if an .ascx/.aspx/.master file exists then WebForms is used, otherwise Razor is used if a .cshtml file exists. EDIT: I should have worded my question better. As my custom view engine derives from WebFormViewEngine, it obviously uses WebForms; I can't derive from two classes. I can derive from RazorViewEngine, but then I'll lose WebForms. I could duplicate my code entirely, derive from RazorViewEngine and edit the views' file extensions, etc., but as I said, I've got a lot of custom code in my view engine and would be duplicating hundreds of lines. WebFormViewEngine and RazorViewEngine derive from BuildManagerViewEngine, which in turn implements IViewEngine. The problem with that is I'd have to implement the methods CreatePartialView() and CreateView(), but how would I know what to return (WebForms/Razor?) using generics?
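
    Note that MVC's resolution order may already give the wanted behavior without generics: the framework asks each engine in ViewEngines.Engines in turn, and an engine only claims a view when a file with its extensions actually exists. A hedged sketch for Application_Start, where MyWebFormViewEngine stands in for the custom engine described above:

        ViewEngines.Engines.Clear();
        // Asked first: claims views that exist as .aspx/.ascx/.master.
        ViewEngines.Engines.Add(new MyWebFormViewEngine());
        // Falls through to Razor when no WebForms view file is found (.cshtml/.vbhtml).
        ViewEngines.Engines.Add(new RazorViewEngine());

    The shared caching logic could then live in a helper class that both engines delegate to, rather than in a common base class.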

    Read the article

  • How do I choose what and when to cache data with ob_start rather than query the database?

    - by Tim Santeford
    I have a home page that has several independent dynamic parts: a list of recent news from the company, a site statistics panel, and the online status of certain employees. The recent news changes monthly, site statistics change daily, and online statuses change on a per-minute basis. I would like to cache these panels so that the DB is not hit on every page load. Is using ob_start() then ob_get_contents() to cache these parts to a file the correct way to do this, or is there a better method in PHP5? In asking this question, I'm trying to answer these additional questions: How can I determine the correct approach for caching this data without doing extensive benchmarking? Does it make sense to cache these parts in different files and then join them together per request, or should I re-query the data and cache once per minute? I'm looking for a rule of thumb for planning pages, and for situations where testing is not cost-effective (the client is not paying enough for it, I mean). Thanks!
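
    Output-buffer fragment caching is a reasonable fit here, and per-panel files let each panel carry its own TTL (news: a month, stats: a day, statuses: a minute) instead of one page-wide policy. A minimal hedged sketch, assuming a writable cache/ directory; render_statuses and render_news are placeholder callbacks:

        <?php
        // Serve a cached fragment if it is fresh, otherwise render and store it.
        function cached_panel($key, $ttl, $render) {
            $file = __DIR__ . "/cache/$key.html";
            if (is_file($file) && (time() - filemtime($file)) < $ttl) {
                readfile($file);            // cache hit: no DB work at all
                return;
            }
            ob_start();
            $render();                      // callback queries the DB and echoes HTML
            $html = ob_get_clean();
            file_put_contents($file, $html, LOCK_EX);
            echo $html;
        }

        cached_panel('statuses', 60, 'render_statuses');      // per-minute panel
        cached_panel('news', 30 * 24 * 3600, 'render_news');  // monthly panel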

    Read the article

  • Refetching a previously visited page

    - by user613665
    All, I am having a hard time with page refetching; any help or pointers will be greatly appreciated! The behavior is a bit specific to mobile browsers. Problem: I have two pages, and created a shortcut link to pg#1 in the home page. Through a form submit button, the user is taken from pg#1 to pg#2. All that is working fine. Now, once I am on pg#2, I leave the browser and click the shortcut later. The browser stays on pg#2 and won't go to pg#1, even though the path in urls.py is different between the two views. It is almost as if Django decides that since I have already visited view#1, it doesn't need to fetch it again. This behavior doesn't occur if I move the same code handling the two views and the templates to a bare-bones test project. Setup: I am using django-registration and context sessions. I am not using any HTML caching tag. I already have DEBUG turned on in my settings.py. Are there other ways I can tell what the server is doing? Thanks in advance. pdxMobile. Update: here are the code snippets.

        def sendmsg(request):
            if request.method == 'POST':
                messages.add_message(request, messages.INFO, "Hello world")
                return redirect('rcvmsg')
            return render_to_response('sendMsg.html', RequestContext(request))

        def rcvmsg(request):
            '''view that receives the msg.'''
            printMsg = 'Didnt get a message'
            if messages:
                thisMsg = messages.get_messages(request)
                for rcvMsg in thisMsg:
                    printMsg = rcvMsg
            return render_to_response('rcvMsg.html', {'print_msg': printMsg}, RequestContext(request))

    URLs:

        url(r'^rcvMsg/', 'mydomain.mainApp.views.rcvmsg', name='rcvmsg'),
        (r'^sendMsg/code', 'mydomain.mainApp.views.sendmsg'),
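
    Since Django does not cache views on its own with DEBUG on, the usual suspect is the mobile browser re-showing its cached copy of the URL. One hedged way to rule that out is to mark both views uncacheable, so every response carries explicit Cache-Control headers:

        from django.views.decorators.cache import never_cache

        @never_cache
        def sendmsg(request):
            """Body unchanged; never_cache only adds no-cache response headers."""

        @never_cache
        def rcvmsg(request):
            """Body unchanged."""

    If the shortcut then loads pg#1 again, the refetching problem was client-side caching, not Django.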

    Read the article

  • geb StaleElementReferenceException

    - by Brian Mortenson
    I have just started using geb with WebDriver for automating testing. As I understand it, when I define content on a page, the page element should be looked up each time I invoke a content definition.

        // In the content block of SomeModule, which is part of a moduleList on the page:
        itemLoaded { waitFor { !loading.displayed } }
        loading { $('.loading') }

        // in the page definition
        moduleItems { index -> moduleList SomeModule, $("#module-list > .item"), index }

        // in a test on this page
        def item = moduleItems(someIndex)
        assert item.itemLoaded

    So in this code, I think $('.loading') should be called repeatedly to find the element on the page by its selector, within the context of the module's base element. Yet I sometimes get a StaleElementReferenceException at this point. As far as I can tell, the element does not get removed from the page, but even if it were, that should not produce this exception unless $ is doing some caching behind the scenes; and if that were the case, it would cause all sorts of other problems. Can someone help me understand what's happening here? Why is it possible to get a StaleElementReferenceException while looking up an element? A pointer to relevant documentation or geb source code would be useful as well.
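
    One plausible race, independent of any geb caching: $('.loading') is indeed re-run on each waitFor poll, but the node can still be replaced by the page between that lookup and the .displayed call on the underlying WebElement. A hedged guard that treats a vanished node as "finished loading":

        itemLoaded {
            waitFor {
                try {
                    !loading.displayed
                } catch (org.openqa.selenium.StaleElementReferenceException e) {
                    // The node was detached between lookup and the displayed check;
                    // for a loading indicator, gone means loaded.
                    true
                }
            }
        }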

    Read the article

  • MVC - DropDownList - Categories - SubCategories

    - by Frajer
    In my database I have tables Categories and SubCategories. I would like to create one DropDownList containing both of these. Something like:

        «Välj»
        <option value='1000' style='background-color:#dcdcc3;font-weight:bold;' id='cat1000'>
            -- FORDON --                /// this is from the Categories table
        </option>
        <option value='1020' id='cat1020'>
            Bilar                       /// this is from SubCategories
        </option>
        <option value='1040' id='cat1040'>
            Bildelar & Biltillbehör     /// this is from Categories
        </option>
        <option value='1060' id='cat1060'>
            Båtar                       /// this is from Categories
        </option>
        <option value='1080' id='cat1080'>
            Båtdelar & tillbehör        /// this is from Categories
        </option>

    Any samples of how I could solve this? Should I use helpers or an MVC user control? I think caching is important in this case. Help me out! Thanks!!
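
    One hedged approach is to flatten both tables into a single list of SelectListItem in the controller (category rows as bold group headers, subcategories beneath them) and cache the result, since the list changes rarely. A sketch assuming Category has Id/Name and a SubCategories collection on a db context:

        var items = new List<SelectListItem>();
        foreach (var cat in db.Categories.OrderBy(c => c.Name))
        {
            // Parent row; style it via CSS (e.g. keyed off an id or value convention).
            items.Add(new SelectListItem { Value = cat.Id.ToString(), Text = "-- " + cat.Name + " --" });
            foreach (var sub in cat.SubCategories.OrderBy(s => s.Name))
                items.Add(new SelectListItem { Value = sub.Id.ToString(), Text = sub.Name });
        }
        // Cache for an hour so the dropdown doesn't hit the DB on every request.
        HttpRuntime.Cache.Insert("categoryOptions", items, null,
            DateTime.UtcNow.AddHours(1), System.Web.Caching.Cache.NoSlidingExpiration);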

    Read the article

  • Wildcards in jnlp template file

    - by Andy
    Since the security changes in Java 7u40, it is required to sign a JNLP file. This can be done either by adding the final JNLP as JNLP-INF/APPLICATION.JNLP, or by providing a template JNLP as JNLP-INF/APPLICATION_TEMPLATE.JNLP in the signed main jar. The first way works well, but we would like to pass a previously unknown number of runtime arguments to our application. Therefore, our APPLICATION_TEMPLATE.JNLP looks like this:

        <?xml version="1.0" encoding="UTF-8"?>
        <jnlp codebase="*">
          <information>
            <title>...</title>
            <vendor>...</vendor>
            <description>...</description>
            <offline-allowed />
          </information>
          <security>
            <all-permissions/>
          </security>
          <resources>
            <java version="1.7+" href="http://java.sun.com/products/autodl/j2se" />
            <jar href="launcher/launcher.jar" main="true"/>
            <property name="jnlp...." value="*" />
            <property name="jnlp..." value="*" />
          </resources>
          <application-desc main-class="...">
            *
          </application-desc>
        </jnlp>

    The problem is the * inside the application-desc tag. It is possible to wildcard a fixed number of arguments using multiple argument tags (see the code below), but then it is not possible to provide more or fewer arguments to the application (Java Web Start will not start with an empty argument tag).

        <application-desc main-class="...">
          <argument>*</argument>
          <argument>*</argument>
          <argument>*</argument>
        </application-desc>

    Can someone confirm this problem and/or suggest a solution for passing a previously undefined number of runtime arguments to the Java application? Thanks a lot!
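
    One hedged workaround, if the template really cannot wildcard a variable-length argument list: declare exactly one wildcarded <argument> in the template, pack all runtime arguments into that single argument, and split it again in main(). A sketch (the delimiter is your choice; this one assumes individual arguments contain no whitespace):

        public static void main(String[] args) {
            // The launcher passed everything as one packed argument.
            String[] runtimeArgs = (args.length == 1) ? args[0].split("\\s+") : args;
            // ... hand runtimeArgs to the real entry point ...
        }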

    Read the article

  • IIS6 Log time recording problems

    - by Hafthor
    On three separate occasions, on two separate servers and at nearly the same times, 6.9 hours seemingly went by without any data being written to the IIS logs; on closer inspection, it appears it was all recorded at once. Here are the facts as I know them:

      - Windows Server 2003 R2 with IIS6
      - Logging uses GMT; server local time is GMT-7
      - The application was still operating, and I have SQL data to prove it
      - Time gaps appear within a log file, not across two
      - # headers appear at the gap
      - Load balancer pings every 30 seconds
      - No caching

    Here is the detail of a particular case: an entry appears for 2009-09-21 18:09:27, then # headers; the next entry is for 2009-09-22 01:21:54, and so are the next 1600 entries in this log file and the first 370 in the next log file. About half of the ~2000 entries stamped 2009-09-22 01:21:54 are load balancer pings (estimated at 2/min for 6.9 hrs = 828 pings); then entries are recorded as normal. I believe these events may coincide with me deploying an ASP.NET application update onto those machines. Here is some relevant content from the logs in question.

    ex090921.log, from line 3684:

        2009-09-21 17:54:40 GET /ping.aspx - 80 404 0 0 3733 122 0
        2009-09-21 17:55:11 GET /ping.aspx - 80 404 0 0 3733 122 0
        2009-09-21 17:55:42 GET /ping.aspx - 80 404 0 0 3733 122 0
        2009-09-21 17:56:13 GET /ping.aspx - 80 404 0 0 3733 122 0
        2009-09-21 17:56:45 GET /ping.aspx - 80 404 0 0 3733 122 0
        #Software: Microsoft Internet Information Services 6.0
        #Version: 1.0
        #Date: 2009-09-21 18:04:37
        #Fields: date time cs-method cs-uri-stem cs-uri-query s-port sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken
        2009-09-22 01:04:06 GET /ping.aspx - 80 404 0 0 3733 122 3078
        2009-09-22 01:04:06 GET /ping.aspx - 80 404 0 0 3733 122 109
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 278 122 3828
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 278 122 0
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 278 122 0
        ... continues until line 5449 ...
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0
        <eof>

    ex090922.log:

        #Software: Microsoft Internet Information Services 6.0
        #Version: 1.0
        #Date: 2009-09-22 00:00:16
        #Fields: date time cs-method cs-uri-stem cs-uri-query s-port sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0
        ... continues until line 367 ...
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0
        2009-09-22 01:04:30 GET /ping.aspx - 80 200 0 0 277 122 0
        ... back to normal behavior ...

    Note the seemingly correct date/time written to the # header of the new log file. Also note that /ping.aspx returned 404, then switched to 200 just as the problem started. I rename the "I'm alive" page so the load balancer stops sending requests to the server while I'm working on it; what you see here is me renaming it back so the load balancer will use the server again. So this problem definitely coincides with me re-enabling the server. Any ideas?

    Read the article

  • IRP_MJ_WRITE latency up to 15 seconds

    - by racitup
    We have written an application that performs small (22 kB) writes to multiple files at once (one thread performing asynchronous queued writes to multiple locations on behalf of other threads) on the same local volume (RAID1). 99.9% of the writes are low-latency, but occasionally (maybe every minute or two) we get one or two huge-latency writes (I have seen 10 seconds and above) without any real explanation. Platform: Win2003 Server with NTFS. Monitoring: Sysinternals Process Monitor (see link below) and our own application logging. We have tried multiple things to solve this, gleaned from a few websites, e.g.:

      - Making the first part of file names unique to aid 8.3 name generation
      - Writing files to multiple directories
      - Changing Intel Disk Write Caching
      - Windows File/Printer Sharing: Minimize memory used / Balance / Maximize data throughput for file sharing / Maximize data throughput for network applications (System > Advanced > Performance > Advanced)
      - NtfsDisableLastAccessUpdate: "fsutil behavior set disablelastaccess 1"
      - Disabling 8.3 name generation: "fsutil behavior set disable8dot3 1" + restart
      - Enabling a large file system cache
      - Disabling paging of the kernel code
      - IO Page Lock Limit
      - Turning the Indexing Service off (and on)

    But nothing seems to make much difference. There's a whole host of things we haven't tried yet, but we wondered if anyone had come across the same problem, a reason, and a solution (programmatic or not)? We can reproduce the problem using IOMeter and a simple setup:

      - Start IOMeter and remove all but the first worker thread in 'Topology' using the disconnect button.
      - Select the worker thread, put a cross in the box next to the disk you want to use in the Disk Targets tab, and put '2000000' in Maximum Disk Size (NOTE: you must have at least 1 GB free space; sector size is 512 bytes).
      - Create a new access specification and add it to the worker thread: Transfer Request Size = 22 kB, 100% Sequential, Percent of Access Spec = 100%, Percent Read/Write = 100% Write.
      - Change Results Display Update Frequency to 5 seconds, Test Setup Run Time to 20 seconds, and both 'Number of Workers to Spawn Automatically' settings to zero.
      - Select the worker thread in the Topology panel and hit the Duplicate Worker button 59 times to create 60 threads with identical settings.
      - Hit the 'Go' button (green flag) and monitor the Results tab.

    The 'Maximum I/O Response Time (ms)' always hits at least 3500 on our machine. Our machine isn't exactly slow (Xeon 8-core rack server with 4 GB and onboard RAID). I'd be interested to see what other people get. We have a feeling it might be something to do with the NTFS filesystem (ours is currently 75% full of fragmented files), and we are going to try a few things around this principle. But it is also related to disk performance, since we don't see it on a RAM disk and it's not as severe on a RAID10 array. Any help is much appreciated. Richard. ProcMon result (right-click and select 'Open Link in New Tab').

    Read the article

  • Certificate Validation Issues

    - by user298331
    Hi all, I am facing some certificate-related issues and need some help resolving them. Requirements:

      - security mode="TransportWithMessageCredential"
      - binding name="basicHttpEndpointBinding"
      - certificateValidationMode="ChainTrust"
      - revocationMode="Online"

    Certificates:

      - Service, transport level: XXXX.cer. The certificate name is my system's DNS name, and it chains to the root RootTrnCA.cer; this is used to enable HTTPS, but I am not validating transport-level certificates.
      - Service, message level: services.ca.iim (VXXXX.cer -> Act.Mac.Ca -> services.ca.iim)
      - Client, transport level: ZZZZ.cer. Again named after the system DNS name, chaining to RootTrnCA.cer; transport certificate errors are ignored in code.
      - Client, message level: client.ca.iim (VXXXX.cer -> Act.Mac.Ca -> client.ca.iim)

    Issues: 1) The response message does not contain the service certificate signature in the SOAP header, so I am not able to validate the server certificate details in the client code. 2) If I use TransportWithMessageCredential and ChainTrust, I get the error: "The revocation function was unable to check revocation because the revocation server was offline." So please verify the service and client config below and correct me if I am wrong. I attach the client certificate in code:

        objProxy.ChannelFactory.Credentials.ClientCertificate.SetCertificate(
            System.Security.Cryptography.X509Certificates.StoreLocation.LocalMachine,
            System.Security.Cryptography.X509Certificates.StoreName.My,
            X509FindType.FindBySubjectName,
            "client.ca.iim");

    Service config: Client config:

        <binding name="XXXXXServiceHost.Http" closeTimeout="00:01:00" openTimeout="00:01:00"
                 receiveTimeout="00:10:00" sendTimeout="00:01:00" allowCookies="false"
                 bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard"
                 maxBufferSize="65536" maxBufferPoolSize="524288" maxReceivedMessageSize="65536"
                 messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered"
                 useDefaultWebProxy="true">
          <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
                        maxBytesPerRead="4096" maxNameTableCharCount="16384" />
          <security mode="TransportWithMessageCredential">
            <transport clientCredentialType="None" proxyCredentialType="None" realm="" />
            <message clientCredentialType="Certificate" algorithmSuite="Default" />
          </security>
        </binding>
        ...
        <client>
          <endpoint address="https://XXXXXX/XXXServiceHost/MemberSvc.svc/soap11"
                    binding="basicHttpBinding" bindingConfiguration="XXXServiceHost.Http"
                    contract="ServiceReference1.IMemberIBA" name="XXXServiceHost.Http" />
        </client>

    Please verify both and help me resolve the above two issues. Thanks, Babu
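
    On issue 2, the error text is literal: ChainTrust with revocationMode="Online" makes Windows fetch the CRL for every certificate in the chain, and that fetch is failing (the CRL distribution point is unreachable from the machine, or the internal CA publishes no CRL). A hedged config-level workaround, with the usual security trade-off, is to relax the revocation check; the clientCredentials side looks analogous:

        <serviceCredentials>
          <clientCertificate>
            <authentication certificateValidationMode="ChainTrust"
                            revocationMode="NoCheck" />
          </clientCertificate>
        </serviceCredentials>

    Making the CRL reachable, or using revocationMode="Offline" with a locally cached CRL, keeps the check without the online dependency.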

    Read the article

  • Using an SSD with no AHCI [ICH7 base] - Windows 7 hangs frequently

    - by h4xnoodle
    I have a Shuttle Intel G31 + ICH7 (base, not M/R etc.) system. I just bought an OCZ Vertex 3 120 GB [VTX3-25SAT3-120G], which includes the Sandforce 2218 firmware. The ICH7 does not support AHCI. I understand that this can be a problem. What I don't understand is whether it's necessary for the proper performance of this drive. I know that without AHCI I may get a limited read/write speed; this is fine. My concern is the constant freezing/hangs I'm getting in Windows 7 on any disk activity. The 'Highest Active Time' flip-flops from 0 to 100% every minute or so, regardless of large or small files. EDIT: the threads/processes with the highest response time belong to the kernel. I've been reading about other people with Shuttle SG31G2s, and they seem to be using SSDs with no problem. Is this the controller's fault, i.e. the fact that I do not have AHCI enabled? It makes sense to me that if this SSD requires AHCI features, it would cause Windows to hang, but I would like to fully determine my situation before returning things/reformatting. To have my system recognise the SSD at all, I initially had to change the BIOS option for the SATA controller to Force Gen II instead of Auto. I then installed Windows with no problem. There were no errors in the event log related to disk usage, but watching perfmon I could see the highest active time and the processes (usually pagefile.sys being written to, or Chrome/Firefox caching) correlated with the hanging. So now what I need answered is: should I return this SSD and get one with a different controller, or return the SSD altogether, as it will never work out and I will continue to get these hangs? Posts I've read: Windows 7 New SSD SATA AHCI? (suggests using AHCI); http://forums.anandtech.com/showthread.php?t=2189868 (Sandforce issues); Windows 7 freezes with SSD, and the attached posts; Why does my Windows 7 PC / SSD drive keep freezing? (not the controller I have, but still a related issue); Windows 7 hangs after longer inactivity of user (also tried messing with power settings, with no luck; turning off HDDs was already set to 'Never').

    Read the article

  • Varnish configuration to only cache for non-logged in users

    - by davidsmalley
    I have a Ruby on Rails application fronted by varnish+nginx. As most of the site's content is static unless you are a logged-in user, I want to cache the site heavily with Varnish when a user is logged out, but only cache static assets when they are logged in. When a user is logged in, they will have the cookie 'user_credentials' present in their Cookie: header; in addition, I need to skip caching on /login and /sessions so that a user can get their 'user_credentials' cookie in the first place. Rails by default does not set a cache-friendly Cache-Control header, but my application sets "public,s-max-age=60" when a user is not logged in. Nginx is set to return 'far future' Expires headers for all static assets. The configuration I have at the moment totally bypasses the cache for everything when logged in, including static assets, and returns cache MISS for everything when logged out. I've spent hours going around in circles, and here is my current default.vcl:

        director rails_director round-robin {
          {
            .backend = {
              .host = "xxx.xxx.xxx.xxx";
              .port = "http";
              .probe = {
                .url = "/lbcheck/lbuptest";
                .timeout = 0.3 s;
                .window = 8;
                .threshold = 3;
              }
            }
          }
        }

        sub vcl_recv {
          if (req.url ~ "^/login") { pipe; }
          if (req.url ~ "^/sessions") { pipe; }
          # The regex used here matches the standard rails cache buster urls
          # e.g. /images/an-image.png?1234567
          if (req.url ~ "\.(css|js|jpg|jpeg|gif|ico|png)\??\d*$") {
            unset req.http.cookie;
            lookup;
          } else {
            if (req.http.cookie ~ "user_credentials") { pipe; }
          }
          # Only cache GET and HEAD requests
          if (req.request != "GET" && req.request != "HEAD") { pipe; }
        }

        sub vcl_fetch {
          if (req.url ~ "^/login") { pass; }
          if (req.url ~ "^/sessions") { pass; }
          if (req.http.cookie ~ "user_credentials") {
            pass;
          } else {
            unset req.http.Set-Cookie;
          }
          # cache CSS and JS files
          if (req.url ~ "\.(css|js|jpg|jpeg|gif|ico|png)\??\d*$") {
            unset req.http.Set-Cookie;
          }
          if (obj.status >= 400 && obj.status < 500) { error 404 "File not found"; }
          if (obj.status >= 500 && obj.status < 600) { error 503 "File is Temporarily Unavailable"; }
        }

        sub vcl_deliver {
          if (obj.hits > 0) {
            set resp.http.X-Cache = "HIT";
          } else {
            set resp.http.X-Cache = "MISS";
          }
        }
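
    A couple of hedged observations on the config above (Varnish 2.x syntax assumed): pipe hands the whole connection to the backend untouched, so for logged-in users pass is usually what's wanted; and anonymous page requests never reach lookup at all because only the static-asset branch calls it. A sketch of what the logged-in/logged-out fork in vcl_recv might look like instead:

        sub vcl_recv {
            if (req.url ~ "^/(login|sessions)") {
                return (pass);   # bare "pass;" in older 2.x syntax
            }
            if (req.http.cookie ~ "user_credentials") {
                return (pass);   # logged in: fetch per request, keep cookies
            }
            # Anonymous: strip cookies so all users share one cache object,
            # letting the app's "public,s-max-age=60" header drive the TTL.
            unset req.http.cookie;
            return (lookup);
        }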

    Read the article

  • Flash was "not designed to function across LANs". Any workarounds?

    - by Triynko
    See: http://helpx.adobe.com/flash/kb/problems-using-flash-authoring-across.html
    Issue: when using Adobe Flash across a local area network (LAN) and networked drives/folders, you may experience any of the following problems:

      - Flash crashes while performing a test movie on FLA files located on a networked drive or folder.
      - FLA files get corrupted when opening from or saving to networked drives or folders.
      - Flash does not reflect changes in a custom class after compiling.
      - Flash, Flash Video Encoder, or Adobe Media Encoder crashes or corrupts Flash Video (FLV) files while encoding a source located on a networked drive or folder.
      - Flash Video Encoder or Adobe Media Encoder crashes or corrupts FLV files where the output folder is a networked drive or folder.
      - Published Flash Player (SWF) files and projectors are unable to load content located on networked drives or folders.
      - More than one instance of a SWF or projector on client machines cannot play back FLV files located on a networked drive or folder.

    Reason: the Adobe Flash IDE, FLV Encoder, Adobe Media Encoder and Flash Player were not designed to function across LANs. Solution: use of Flash files across local networks is not supported in any context. Published content should access data through a web server. All file sources should be opened and saved on the local system. Using Flash in such a scenario for project collaboration or content deployment is highly discouraged and may corrupt your source files. If you need to work in a collaborative environment or store source files on a server, use the project panel and/or a third-party version control system.
    SERIOUSLY? I cannot work on files located on a mapped network drive? How did they mess that one up? Does the Flash IDE really open the source file and wipe it clean to do the saving, rather than saving a copy first and then replacing it as an atomic file-system operation? How hard would it be for them to make a dummy temporary file for saving and then issue a MOVE command? Are there any workarounds for this, like something that can make a network drive as stable as a local drive via some kind of automatic local caching and syncing?

    Read the article

  • getView() (for a custom ListView) doesn't get called on notifyDataSetChanged()

    - by hungson175
    Hi everyone, I have the following problem, and have searched for a while without getting any solution from the net. I have a custom ListView; each item has the following layout (I just post the essentials):

        <LinearLayout>
            <ImageView android:id="@+id/friendlist_iv_avatar" />
            <TextView  android:id="@+id/friendlist_tv_nick_name" />
            <ImageView android:id="@+id/friendlist_iv_status_icon" />
        </LinearLayout>

    And I have a class FriendRowItem, which is inflated from the above layout:

        public class FriendRowItem extends LinearLayout {
            private ImageView ivAvatar;
            private ImageView ivStatusIcon;
            private TextView tvNickName;

            public FriendRowItem(Context context) {
                super(context);
                RelativeLayout friendRow = (RelativeLayout) Helpers.inflate(context, R.layout.friendlist_row);
                this.addView(friendRow);
                ivAvatar = (ImageView) findViewById(R.id.friendlist_iv_avatar);
                ivStatusIcon = (ImageView) findViewById(R.id.friendlist_iv_status_icon);
                tvNickName = (TextView) findViewById(R.id.friendlist_tv_nick_name);
            }

            public void setPropeties(Friend friend) {
                // Avatar
                ivAvatar.setImageResource(friend.getAvatar().getDrawableResourceId());
                // Status
                Status.Type status = friend.getStatusType();
                if (status == Type.ONLINE) {
                    ivStatusIcon.setImageResource(R.drawable.online_icon);
                } else {
                    ivStatusIcon.setImageResource(R.drawable.offline_icon);
                }
                // Nickname
                String name = friend.getChatID();
                if (friend.hasName()) {
                    name = friend.getName();
                }
                tvNickName.setText(name);
            }
        }

    In the main activity I have a custom ListView, lvMainListView, with a custom adapter (whose class extends ArrayAdapter and, of course, overrides the getView method); the adapter's data set is ArrayList<Friend> friends:

        private class FriendRowAdapter extends ArrayAdapter<Friend> {
            public FriendRowAdapter(Context applicationContext, int friendlistRow, ArrayList<Friend> friends) {
                super(applicationContext, friendlistRow, friends);
            }

            @Override
            public View getView(int position, View convertView, ViewGroup parent) {
                Friend friend = getItem(position);
                FriendRowItem row = (FriendRowItem) convertView;
                if (row == null) {
                    row = new FriendRowItem(ShowFriendsList.this.getApplicationContext());
                }
                row.setPropeties(friend);
                return row;
            }
        }

    The problem is that when I change the status of a friend from OFFLINE to ONLINE and then call notifyDataSetChanged(), nothing happens: the status icon of that friend doesn't change. I tried debugging and saw that notifyDataSetChanged() gets called, but the custom getView() is not fired! Can you please tell me: is that normal in Android, or did I do something wrong? (I am using Android 1.5.) Thank you in advance, Son
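
    Two hedged things to check: notifyDataSetChanged() only triggers redraws when called on the adapter instance the ListView is actually bound to, from the UI thread, and only if the Friend you mutated lives in the same ArrayList the adapter was constructed with (replacing the list with a new instance silently breaks the link). A sketch, where setStatusType is an assumed setter on Friend and position/adapter are the obvious locals:

        // Mutate the object inside the adapter's own list, then notify on the UI thread.
        runOnUiThread(new Runnable() {
            public void run() {
                friends.get(position).setStatusType(Status.Type.ONLINE); // same list instance
                adapter.notifyDataSetChanged(); // the adapter passed to lvMainListView.setAdapter(...)
            }
        });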

    Read the article

  • How to troubleshoot performance issues of PHP, MySQL and generic I/O

    - by jbx
    I have a WordPress-based website running on shared hosting. Its response time is very decent (around 2 s to retrieve the HTML page and 5 s to load all the resources). I was planning to move it to a dedicated virtual server (Ubuntu 12.04 LTS), which should theoretically improve things and make them more consistent, given it's not shared. However, I observed severe performance degradation, with the page taking 10 seconds to be generated. I ruled out network issues by editing /etc/hosts on the server and mapping the domain to 127.0.0.1. I used the Apache load tester ab to fetch the HTML, so JS, CSS and images are all excluded; it still took 10 seconds. I have ZPanel installed on the server, which also uses MySQL, and its pages come up quite fast (1.5 s), as does phpMyAdmin. Performing some queries on the WordPress database directly through phpMyAdmin returns them quite fast too, with query times in the 10 to 30 millisecond region. Memory is also sufficient, with only 800 MB of the 1 GB physical memory being used, so it doesn't seem to be a swap issue either. I have also installed APC to try to improve PHP performance, but it didn't have any effect. What else should I look for? What could be causing this degradation in performance? Could it be some kind of I/O issue, since I am running on a cloud-based virtual server? I wish to raise the issue with my provider, but without actual data from some diagnosis I am afraid he will just blame my application. UPDATE, with sar output (every second) during an HTTP request:

        02:31:29    CPU    %user  %nice  %system  %iowait  %steal   %idle
        02:31:30    all     0.00   0.00     0.00     0.00    0.00  100.00
        02:31:31    all     2.22   0.00     2.22     0.00    0.00   95.56
        02:31:32    all    41.67   0.00     6.25     0.00    2.08   50.00
        02:31:33    all    86.36   0.00    13.64     0.00    0.00    0.00
        02:31:34    all    75.00   0.00    25.00     0.00    0.00    0.00
        02:31:35    all    93.18   0.00     6.82     0.00    0.00    0.00
        02:31:36    all    90.70   0.00     9.30     0.00    0.00    0.00
        02:31:37    all    71.05   0.00     0.00     0.00    0.00   28.95
        02:31:38    all    14.89   0.00    10.64     0.00    2.13   72.34
        02:31:39    all     2.56   0.00     0.00     0.00    0.00   97.44
        02:31:40    all     0.00   0.00     0.00     0.00    0.00  100.00
        02:31:41    all     0.00   0.00     0.00     0.00    0.00  100.00

    My suspicion that this is an I/O-related issue also comes from the fact that a caching plugin I use to reduce the number of database queries, by precompiling PHP pages, actually makes things worse instead of better. It seems that file access makes things worse instead.
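
    One hedged reading of that sar capture: %iowait stays at essentially zero while %user climbs past 90%, which points at CPU-bound PHP on a slow vCPU rather than disk I/O. WordPress can split the time itself with SAVEQUERIES; a minimal diagnostic sketch to produce data for the provider:

        <?php
        // In wp-config.php (diagnosis only; remove afterwards):
        define('SAVEQUERIES', true);

        // At the bottom of the theme's footer.php:
        global $wpdb;
        $db_time = 0.0;
        foreach ($wpdb->queries as $q) {
            $db_time += $q[1];   // each entry is [sql, seconds, calling function]
        }
        printf('<!-- %d queries, %.3fs in MySQL -->', count($wpdb->queries), $db_time);

    If the MySQL total stays tiny while the page still takes ~10 s, that is concrete evidence that CPU, not the application's queries, is the bottleneck.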

    Read the article

  • Malware - Technical analysis

    - by nullptr
    Note: please do not mod down or close. I'm not a novice PC user asking you to fix my PC problem; I am intrigued and am having a deep technical look at what's going on. I have come across a Windows XP machine that is sending unwanted P2P traffic. I have run a 'netstat -b' command, and explorer.exe is sending out the traffic. When I kill this process, the traffic stops and obviously Windows Explorer dies. Here is the header of the stream from the Wireshark dump (x.x.x.x is the machine's IP):

        GNUTELLA CONNECT/0.6
        Listen-IP: x.x.x.x:8059
        Remote-IP: 76.164.224.103
        User-Agent: LimeWire/5.3.6
        X-Requeries: false
        X-Ultrapeer: True
        X-Degree: 32
        X-Query-Routing: 0.1
        X-Ultrapeer-Query-Routing: 0.1
        X-Max-TTL: 3
        X-Dynamic-Querying: 0.1
        X-Locale-Pref: en
        GGEP: 0.5
        Bye-Packet: 0.1

        GNUTELLA/0.6 200 OK
        Pong-Caching: 0.1
        X-Ultrapeer-Needed: false
        Accept-Encoding: deflate
        X-Requeries: false
        X-Locale-Pref: en
        X-Guess: 0.1
        X-Max-TTL: 3
        Vendor-Message: 0.2
        X-Ultrapeer-Query-Routing: 0.1
        X-Query-Routing: 0.1
        Listen-IP: 76.164.224.103:15649
        X-Ext-Probes: 0.1
        Remote-IP: x.x.x.x
        GGEP: 0.5
        X-Dynamic-Querying: 0.1
        X-Degree: 32
        User-Agent: LimeWire/4.18.7
        X-Ultrapeer: True
        X-Try-Ultrapeers: 121.54.32.36:3279,173.19.233.80:3714,65.182.97.15:5807,115.147.231.81:9751,72.134.30.181:15810,71.59.97.180:24295,74.76.84.250:25497,96.234.62.221:32344,69.44.246.38:42254,98.199.75.23:51230

        GNUTELLA/0.6 200 OK

    So it seems that the malware has hooked into explorer.exe and hidden itself quite well, as a Norton scan doesn't pick anything up. I have looked in the Windows Firewall, and it shouldn't be letting this traffic through. I have looked at the messages explorer.exe is sending in Spy++, and the only related ones I can see are socket connections etc. My question is: what can I do to look into this more deeply? What does malware achieve by sending P2P traffic? I know the easiest way to fix the problem is to reinstall Windows, but I want to get to the bottom of it first, just out of interest. Edit: had a look at Dependency Walker and Process Explorer. Both great tools. Here is an image of the TCP connections for explorer.exe in Process Explorer: http://img210.imageshack.us/img210/3563/61930284.gif

    Read the article

  • How to store wiki sites (vcs)

    - by Eugen
    Hello, as a personal project I am trying to write a wiki with the help of Django. I'm a beginner when it comes to web development. I am at the (early) point where I need to decide how to store the wiki pages. I have three approaches in mind and would like your suggestions. Flat files: I considered a flat-file approach with a version control system like git or mercurial. Firstly, I would have some example wikis to look at, like http://hatta.sheep.art.pl/. Secondly, the VCS would probably deal with editing conflicts and keeping the edit history, so I would not have to reinvent the wheel. And thirdly, I could probably easily clone the wiki repository, so I (or for that matter others) could have an offline copy of the wiki. On the other hand, as far as I know, I cannot use Django models with flat files. Then, if I wanted to add fields to a wiki page, like a category, I would need to somehow keep a reference to the flat file in order to associate the database fields with it. Besides, I don't know if it is a good idea to have all the wiki pages in one repository; I imagine it is more natural to have something like a repository per wiki page or file. Last but not least, I'm not sure, but I think using flat files would limit my deployment options, because web hosts may not allow creating files (I'm thinking, for example, of Google App Engine). Storing in a database: by storing the wiki pages in the database, I can use Django models and associate arbitrary fields with each page. I would probably also have an easier life deploying the wiki. But I would not get VCS features like history and conflict resolution for free. I searched for Django extensions to help me, and I found django-reversion. However, I do not fully understand whether it fits my needs: does it track model changes, as in changes to the Django model file, or does it track the content of the models (which would fit my need)? Plus, I do not see whether django-reversion would help me with edit conflicts. Storing a VCS repository in a database field: this would be my ideal solution. It would combine the advantages of both previous approaches without the disadvantages. That is, I would have VCS features while saving the wiki pages in a database. The problem is: I have no idea how feasible that is. I just imagine saving a wiki page/source together with a git/mercurial repository in a database field. Yet I somehow doubt database fields work like that. So, I'm open to other approaches, but this is what I came up with. Also, if you're interested, you can find the crappy early test I'm working on at http://github.com/eugenkiss/instantwiki-test
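
    On the django-reversion question: it versions model content (the row data written on each save), not the Python model definitions, so it does match the stated need; whether its revision metadata is enough for merge-style conflict resolution is a separate question. A minimal hedged sketch of the API as exposed by recent django-reversion releases (older releases use a similar revision context), with WikiPage as a placeholder model:

        import reversion
        from django.db import models

        class WikiPage(models.Model):
            title = models.CharField(max_length=200)
            body = models.TextField()

        reversion.register(WikiPage)  # track content changes of WikiPage rows

        # Saves made inside a revision block become retrievable versions:
        with reversion.create_revision():
            page.body = new_body
            page.save()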

    Read the article

  • Xsd recursion of complexTypes

    - by Hatch
    I am just learning XML/XSD and am struggling to implement an XML schema that models a folder structure. What I had in mind was defining a complexType for the folder, which can hold additional folder instances representing subfolders. The online XSD schema validator I am using always reports that the schema is invalid. I tried defining the complexType up front and then using the ref keyword for subfolders:

        <xs:complexType name="tFolder">
          <xs:sequence>
            <xs:element name="Path" type="tFolderType" msdata:Ordinal="0" />
            <xs:element ref="Folder" minOccurs="0" maxOccurs="unbounded" />
            <xs:element name="File" nillable="true" minOccurs="0" maxOccurs="unbounded">
              <xs:complexType>
                <xs:simpleContent msdata:ColumnName="File_Text" msdata:Ordinal="0">
                  <xs:extension base="xs:string">
                  </xs:extension>
                </xs:simpleContent>
              </xs:complexType>
            </xs:element>
          </xs:sequence>
          <xs:attribute name="Type" type="tFolderType" />
        </xs:complexType>

    As for the element itself:

        <xs:element name="Folder" type="tFolder" />

    The error returned by the validator is: "Cannot resolve the name 'Folder' to a(n) 'element declaration' component." and it occurs at the line

        <xs:element ref="Folder" minOccurs="0" maxOccurs="unbounded" />

    Defining the complexType within the element itself yields the exact same error:

        <xs:element name="Folder">
          <xs:complexType>
            <xs:sequence>
              <xs:element name="Path" type="tFolderType" msdata:Ordinal="0" />
              <xs:element ref="Folder" minOccurs="0" maxOccurs="unbounded" />
              <xs:element name="File" nillable="true" minOccurs="0" maxOccurs="unbounded">
                <xs:complexType>
                  <xs:simpleContent msdata:ColumnName="File_Text" msdata:Ordinal="0">
                    <xs:extension base="xs:string">
                    </xs:extension>
                  </xs:simpleContent>
                </xs:complexType>
              </xs:element>
            </xs:sequence>
            <xs:attribute name="Type" type="tFolderType" />
          </xs:complexType>
        </xs:element>

    From what I've read, this kind of recursion should work using ref. Can anyone tell me what I've done wrong? Maybe the XSD validator is just faulty? If so, does anyone know a better alternative? I've tried the one from w3.org as well, but it seems to have been taken offline...
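
    For reference, here is a hedged minimal recursive schema that validators accept: the Folder element must be declared globally (a direct child of xs:schema) for ref="Folder" to resolve, and if the schema declares a targetNamespace, the ref must use that namespace's prefix. Path and File are simplified to xs:string here:

        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
          <xs:element name="Folder" type="tFolder" />
          <xs:complexType name="tFolder">
            <xs:sequence>
              <xs:element name="Path" type="xs:string" />
              <!-- recursion: a folder may contain any number of folders -->
              <xs:element ref="Folder" minOccurs="0" maxOccurs="unbounded" />
              <xs:element name="File" type="xs:string" minOccurs="0" maxOccurs="unbounded" />
            </xs:sequence>
          </xs:complexType>
        </xs:schema>

    If this validates while the original does not, the likely culprits in the real schema are a Folder declaration that is not global, an unresolved msdata namespace, or a targetNamespace missing from the ref.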

    Read the article

  • Google Hybrid OpenID+OAuth with dotnetopenauth

    - by Max Favilli
    I have spent probably more than 10 hours in the last two days trying to understand how to implement user login with Google Hybrid OpenID+OAuth (Federated Login) To trigger the authorization request I use: InMemoryOAuthTokenManager tm = new InMemoryOAuthTokenManager( ConfigurationManager.AppSettings["googleConsumerKey"], ConfigurationManager.AppSettings["googleConsumerSecret"]); using (OpenIdRelyingParty openid = new OpenIdRelyingParty()) { Realm realm = HttpContext.Current.Request.Url.Scheme + Uri.SchemeDelimiter + ConfigurationManager.AppSettings["googleConsumerKey"] + "/"; IAuthenticationRequest request = openid.CreateRequest(identifier, Realm.AutoDetect, new Uri(HttpContext.Current.Request.Url.Scheme + "://" + HttpContext.Current.Request.Url.Authority + "/OAuth/google")); var authorizationRequest = new AuthorizationRequest { Consumer = ConfigurationManager.AppSettings["googleConsumerKey"], Scope = "https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/userinfo.profile https://www.googleapis.com/auth/plus.me", }; request.AddExtension(authorizationRequest); request.AddExtension(new ClaimsRequest { Email = DemandLevel.Request, Gender = DemandLevel.Require }); request.RedirectToProvider(); } To retrieve the accesstoken I use: using (OpenIdRelyingParty openid = new OpenIdRelyingParty()) { IAuthenticationResponse authResponse = openid.GetResponse(); if (authResponse != null) { switch (authResponse.Status) { case AuthenticationStatus.Authenticated: HttpContext.Current.Trace.Write("AuthenticationStatus", "Authenticated"); FetchResponse fr = authResponse.GetExtension<FetchResponse>(); InMemoryOAuthTokenManager tm = new InMemoryOAuthTokenManager(ConfigurationManager.AppSettings["googleConsumerKey"], ConfigurationManager.AppSettings["googleConsumerSecret"]); ServiceProviderDescription spd = new ServiceProviderDescription { spd.RequestTokenEndpoint = new DotNetOpenAuth.Messaging.MessageReceivingEndpoint("https://accounts.google.com/o/oauth2/token", HttpDeliveryMethods.AuthorizationHeaderRequest | HttpDeliveryMethods.GetRequest); spd.AccessTokenEndpoint = new DotNetOpenAuth.Messaging.MessageReceivingEndpoint("https://accounts.google.com/o/oauth2/token", HttpDeliveryMethods.AuthorizationHeaderRequest | HttpDeliveryMethods.GetRequest); spd.UserAuthorizationEndpoint = new DotNetOpenAuth.Messaging.MessageReceivingEndpoint("https://accounts.google.com/o/oauth2/auth?access_type=offline", HttpDeliveryMethods.AuthorizationHeaderRequest | HttpDeliveryMethods.GetRequest); spd.TamperProtectionElements = new ITamperProtectionChannelBindingElement[] { new HmacSha1SigningBindingElement() }; WebConsumer wc = new WebConsumer(spd, tm); AuthorizedTokenResponse accessToken = wc.ProcessUserAuthorization(); if (accessToken != null) { HttpContext.Current.Trace.Write("accessToken", accessToken.ToString()); } else { } break; case AuthenticationStatus.Canceled: HttpContext.Current.Trace.Write("AuthenticationStatus", "Canceled"); break; case AuthenticationStatus.Failed: HttpContext.Current.Trace.Write("AuthenticationStatus", "Failed"); break; default: break; } } } Unfortunatelly I get AuthenticationStatus.Authenticated but wc.ProcessUserAuthorization() is null. What am I doing wrong? Thanks a lot for any help.

    Read the article

  • jQuery ajax error: cannot find URL outside of debug mode

    - by John Orlandella Jr.
    I inherited some code two weeks ago that uses the jQuery.ajax method to connect to a .NET web service. Here is the piece of code giving me trouble:

        if (MSCTour.AppSettings.OFFLINE !== 'TRUE') {
            $.ajax({
                url: url,
                data: json,
                type: "POST",
                contentType: "application/json",
                timeout: 10000,
                dataType: "json", // not "json" we'll parse
                success: function (res) {
                    if (!callback) { return; }
                    // *** Use json library so we can fix up MS AJAX dates
                    var result = "";
                    if (res !== "") {
                        try {
                            result = $.evalJSON(res);
                        } catch (e) {
                            result = {};
                            bare = true;
                        }
                    }
                    // *** Bare message IS result
                    if (bare) {
                        callback(result);
                        return;
                    }
                    // *** Wrapped message contains top level object node; strip it off
                    for (var property in result) {
                        callback(result[property]);
                        break;
                    }
                },
                error: function (xhr, status, error) {
                    if (status === 'parsererror') {} else { return error; }
                },
                complete: function (res, status) {
                    if (callback) {
                        if ((status != 'success' && status != 'error') || status === 'parsererror' ||
                            (status === 'timeout' && res !== '')) {
                            try {
                                result = $.secureEvalJSON(res);
                            } catch (e) {
                                result = {};
                                bare = true;
                            }
                            callback(res);
                        }
                    }
                    return;
                }
            });
        }

    The url variable at this point equals /testsite/service.svc/GetItems. Now here is where my problem lies: when running this site out of debug mode through Visual Studio, I have no problem connecting to the database through the web service and seeing all my data, both for viewing and updating. When I go through the normal web server for the same site, on the same page, no data shows up. When I put a break on the error portion of the code above in Firebug, I appear to be getting a 404 error, but when I look on the server, all of the files are in the right place. Coupled with the fact that it works in debug mode, I think I am slowly going crazy staring at these same lines of code trying to find the needle in the haystack. Any help, or just a direction to look in, would be greatly appreciated.
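
    A hedged first check for that 404: a root-relative URL like /testsite/service.svc/GetItems only works when the application actually lives under /testsite/ on that host, and the Visual Studio development server and the real IIS site often disagree about the application root. Emitting the base path from the server avoids hard-coding it; serviceBase is an assumed variable name here:

        // In the .aspx page, let ASP.NET resolve the app-relative path:
        //   <script>var serviceBase = '<%= ResolveUrl("~/service.svc/") %>';</script>
        // Then build the request URL from it instead of a hard-coded /testsite/ prefix:
        var url = serviceBase + "GetItems";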

    Read the article
