Search Results

Search found 7960 results on 319 pages for 'distributed cache'.


  • Facebook doesn't work on my computer, but works on a mobile device; both use the same router

    - by sasa
    I have a very strange problem and I'm thinking it may be a problem with DNS or something similar, but I'm not sure and don't know how to solve it. My computer is connected to a router and every site works fine except Facebook (Chrome and Firefox). Chrome shows "Error 101 (net::ERR_CONNECTION_RESET): The connection was reset." But on a mobile device connected to the same router, Facebook works fine (the Fb application and the Dolphin browser). Pinging facebook.com works fine. Clearing cookies and cache didn't help. I also ran antivirus and antimalware scans and found nothing. What could the problem be? Update: I also connected a notebook to that wifi router, and Facebook works fine on it.

        nslookup facebook.com
        Server:  UnKnown
        Address:  192.168.1.1

        Non-authoritative answer:
        Name:    facebook.com
        Addresses:  2a03:2880:2110:3f01:face:b00c::
                    2a03:2880:10:1f02:face:b00c:0:25
                    2a03:2880:10:8f01:face:b00c:0:25
                    69.171.224.37
                    69.171.229.11
                    69.171.242.11
                    66.220.149.11
                    66.220.158.11

    Read the article

  • Xdebug 2.0.5 with Zend Server CE PHP5.2.12 possible?

    - by notbrain
    I'm using Zend Server CE with PHP 5.2.12 on OS X Snow Leopard and want to use Xdebug. I've turned off Zend Data Cache, Zend Optimizer+, and Zend Debugger in the console. When I run

        $ cd ~/Downloads/xdebug-2.0.5
        $ /usr/local/zend/bin/phpize

    I get

        Configuring for:
        PHP Api Version:         20041225
        Zend Module Api No:      20060613
        Zend Extension Api No:   220060519

    The PHP API version, 20041225, doesn't match the documentation (i.e. it looks wrong). When I continue the installation with

        $ ./configure --with-php-config=/usr/local/zend/bin/php-config
        $ make
        $ sudo make install

    the installed xdebug.so seems to be the wrong one. Which version of Xdebug do I need for this PHP API version? The Zend API numbers are OK; I'm just confused about why the PHP API version doesn't match.

        PHP Warning:  PHP Startup: Unable to load dynamic library '/usr/local/zend/lib/php_extensions/xdebug.so' - (null) in Unknown on line 0
        PHP 5.2.12 (cli) (built: Feb 17 2010 13:39:36)

    Read the article

  • Can't see any YouTube videos

    - by André
    I have a problem watching YouTube videos. It says: "An error occurred, please try again later". I've tried loading different videos and that's what it says for every video I try to watch. I've tried using another browser, clearing cache and cookies, etc., but none of that really worked. My operating system is Windows 7 Home Premium and I use Google Chrome as my browser. YouTube videos worked earlier. I suspect it has something to do with this PC, since I've had YouTube working on my laptop before (not sure if it still works there, though). Hope I've given enough information for you to help me out with this problem. Feel free to ask if there's anything else you need to know. Anyone who can help me out?

    Read the article

  • F12 and Ctrl+F5 not working correctly

    - by ComatoseDuck
    When I came back to my work computer after being away for a week, I found that when I try to clear the cache and refresh the page via Ctrl+F5 (or just F5), I get the prompt "Type the Internet address of a document, and Internet Explorer will open it for you" with a drop-down list in IE. When I try F5 in Chrome and Firefox, it opens the "Open file" dialog box. When I press F12 for the dev tools in IE, Chrome and Firefox, it opens the print dialog box. Why is this happening, and what can I do to revert it back to the way it was?

    Read the article

  • Rails time stamps on images in CSS

    - by brad
    Just posted this on Stack but realized it may be more appropriate here. So, Rails timestamping is great. I'm using it to add expires headers to all files that end in the 10-digit timestamp. Most of my images, however, are referenced in my CSS. Has anyone come across a method that allows timestamps to be added to CSS-referenced images, or some funky rewrite rule that achieves this? I'd love for ALL images on my site, both inline and in CSS, to have this timestamp so I can tell the browser to cache them, but refresh any time the file itself changes. I couldn't find anything on the net about this, and I can't believe it isn't a more frequently discussed topic. I don't think my setup matters, because the actual expiring will hopefully happen the same way, based on the 10-digit timestamp, but I'm using Apache to serve all static content if that matters.
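    One approach (a sketch of mine, not from the post): post-process each stylesheet at deploy time and append the referenced file's mtime as a query string, which is the same stamping Rails applies to inline asset tags. The /images prefix, public root, and regex below are assumptions for illustration.

        import os
        import re

        # matches url(/images/foo.png), with or without quotes; adjust for your layout
        CSS_URL = re.compile(r"url\(['\"]?(/images/[^'\"\)\?]+)['\"]?\)")

        def stamp_css(css_path, public_root="public"):
            """Rewrite url(/images/foo.png) to url(/images/foo.png?1234567890)
            using the referenced file's mtime, mimicking Rails' asset timestamps."""
            with open(css_path) as f:
                css = f.read()

            def add_stamp(match):
                asset = match.group(1)
                file_on_disk = os.path.join(public_root, asset.lstrip("/"))
                if not os.path.exists(file_on_disk):
                    return match.group(0)  # leave unknown references untouched
                mtime = int(os.path.getmtime(file_on_disk))
                return "url(%s?%d)" % (asset, mtime)

            with open(css_path, "w") as f:
                f.write(CSS_URL.sub(add_stamp, css))

    Run it over the compiled stylesheets on each deploy; since the query string only changes when a file's mtime does, far-future expires headers stay safe.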

    Read the article

  • Server 2012 R2 DNS Conditional forwarding not working reliably, possible caching issue?

    - by Matt
    I have a bit of a home lab setup with a domain controller that is acting as the DNS server for my network. For everything else it works fine and forwards external DNS requests to my ISP. The household recently wanted to get Netflix going, and it seemed a DNS option was better than a VPN to get around the region locking, so I signed up for unblock-us.com. Since I have a Windows DNS server, I thought I'd be clever and make use of conditional forwarders, and added the Netflix domain to the list. Initially this worked well and all devices on the network could access Netflix; however, after about an hour, going to the Netflix site would result in a "page cannot be found". Doing an nslookup of netflix.com from my PC resulted in it not returning any IP addresses. As a test, I deleted the Netflix domain from the DNS server's cache and things started working again; devices could get to the site again, but the same thing happens after around half an hour to an hour. Have I missed something here that's causing it to stop working?

    Read the article

  • What is the best way to password-protect a folder/page using PHP without a DB or username

    - by Salt Packets
    What is the best way to password-protect a folder using PHP, without a database or user names? Basically I have a page that will list contacts for an organization, and I need to password-protect that folder without having an account for every user; just one password that gets changed every so often and distributed to the group. I understand that it is not very secure, but nevertheless I would like to know the best way to do this. It would be nice if the password were remembered for a while once the user has entered it correctly.
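    The usual shape of the answer is one shared secret checked server-side, with a session cookie so the visitor isn't re-prompted. The post asks for PHP; purely to illustrate the flow, here is the same logic sketched in Python with Flask (the route, password, and markup are placeholders):

        from flask import Flask, request, session
        import hmac

        app = Flask(__name__)
        app.secret_key = "change-me"           # signs the session cookie
        SHARED_PASSWORD = "rotate-me-monthly"  # placeholder; rotate and redistribute

        @app.route("/contacts", methods=["GET", "POST"])
        def contacts():
            if request.method == "POST":
                supplied = request.form.get("password", "")
                # constant-time comparison avoids timing leaks
                if hmac.compare_digest(supplied, SHARED_PASSWORD):
                    session["authorized"] = True  # remembered until the cookie expires
            if not session.get("authorized"):
                return ('<form method="post">'
                        '<input type="password" name="password">'
                        '<input type="submit"></form>')
            return "... the protected contact list ..."

    In PHP the equivalent pieces are session_start(), a hash_equals() comparison, and a $_SESSION flag.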

    Read the article

  • Memory is free, but still swapping?

    - by japancheese
    Hello, I'm sure this is a pretty basic question, but I'm just trying to get a grasp of what's going on with my Ubuntu (Hardy Heron) server (running a Rails-based site). It seems that I have free memory available, yet the system is reporting that it is still swapping memory (unless I'm reading this incorrectly?). Here is the free -m output:

                     total       used       free     shared    buffers     cached
        Mem:          1024        905        118          0         33        409
        -/+ buffers/cache:         462        561
        Swap:         2047         95       1952

    Could anyone explain some possible reasons that it is maintaining 95 MB of swap at all times (it is never less)? I'm just looking for some leads on things I could check out that would explain to me exactly how memory is utilized in Linux.

    Read the article

  • Load balancer - how to write one for a custom application?

    - by Poni
    Hi! I've written a simple server application which will run distributed on several machines. My question is: how does a network load balancer work, in general? I've heard of round-robin and other algorithms, but what I haven't found an answer to is how the process really goes, in socket terms. Does the client connect to one of the load balancer machines, ask for a "free-to-connect-to" server, and simply connect to it? That's the simplest way I can think of. Or does it use the load balancer as a proxy (which implies that all the LBs must stay connected to the application servers, and all data is transferred through them)? It's more of a general question. How would you do this? Thank you all!
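    To make the proxy variant concrete, here is a minimal sketch (my own illustration, not from the post) of a round-robin TCP proxy: the balancer accepts the client socket, picks the next backend, and pumps bytes in both directions. The backend addresses are placeholders.

        import itertools
        import socket
        import threading

        BACKENDS = itertools.cycle([("10.0.0.1", 9000), ("10.0.0.2", 9000)])  # placeholders

        def pump(src, dst):
            """Copy bytes from src to dst until EOF, then close both ends."""
            try:
                while True:
                    data = src.recv(4096)
                    if not data:
                        break
                    dst.sendall(data)
            finally:
                src.close()
                dst.close()

        def serve(listen_addr=("0.0.0.0", 8000)):
            listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            listener.bind(listen_addr)
            listener.listen(128)
            while True:
                client, _ = listener.accept()
                backend = socket.create_connection(next(BACKENDS))  # round robin
                # one thread per direction; real balancers use an event loop instead
                threading.Thread(target=pump, args=(client, backend)).start()
                threading.Thread(target=pump, args=(backend, client)).start()

        if __name__ == "__main__":
            serve()

    The "ask for a server, then connect directly" variant you describe exists too (it is basically what DNS-based or redirect-based balancing does), but it requires every client to be able to reach the backends directly.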

    Read the article

  • What are the mandatory Linux kernel modules to run inside of ESXi

    - by Marcin
    I'm used to rolling my own kernels for servers, as it nicely minimizes the number of exploits (and the resulting patches) to take care of. In a traditional (bare-metal) world, the whole process is about knowing what you have (hardware) and what you need (Ethernet, IPv4, iptables, etc.). In a virtualized environment, some things stay the same (you still need Ethernet and IPv4), some things go away (power management), and then there are some new needs (vmxnet3, or vmware-tools, even though that's compiled outside of the kernel). So my question mostly concerns itself with the last two categories: what can I remove completely, and what new stuff do I want? For example, which I/O scheduler do I want, if all my disk operations are going through another filesystem/scheduler/cache to get to the virtual disk? Do I need hyper-threading enabled, or is the VM going to present each thread to me as a CPU anyway? Do I need Large Receive Offload turned on, or is that something that the hypervisor's network drivers are going to do for me?

    Read the article

  • Making nginx withstand flood attacks

    - by Tiffany Walker
    How can I make it withstand attacks better? Are there plugins? Looking for a way to RATE LIMIT and remain up without slowing down. My setup:

        user nobody;
        # no need for more workers in the proxy mode
        worker_processes 4;
        worker_cpu_affinity 0001 0010 0100 1000;
        worker_priority -2;
        error_log /var/log/nginx/error.log info;
        worker_rlimit_nofile 40480;
        events {
            worker_connections 5120; # increase for busier servers
            use epoll; # you should use epoll here for Linux kernels 2.6.x
        }
        http {
            server_name_in_redirect off;
            server_names_hash_max_size 10240;
            server_names_hash_bucket_size 1024;
            include mime.types;
            default_type application/octet-stream;
            server_tokens off;
            disable_symlinks if_not_owner;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 5;
            gzip on;
            gzip_vary on;
            gzip_disable "MSIE [1-6]\.";
            gzip_proxied any;
            gzip_http_version 1.1;
            gzip_min_length 1000;
            gzip_comp_level 9;
            gzip_buffers 16 8k;
            # You can remove image/png image/x-icon image/gif image/jpeg if you have slow CPU
            gzip_types text/plain text/xml text/css application/x-javascript application/xml image/png image/x-icon image/gif image/jpeg application/xml+rss text/javascript application/atom+xml;
            ignore_invalid_headers on;
            client_header_timeout 3m;
            client_body_timeout 3m;
            send_timeout 3m;
            reset_timedout_connection on;
            connection_pool_size 256;
            client_header_buffer_size 256k;
            large_client_header_buffers 4 256k;
            client_max_body_size 200M;
            client_body_buffer_size 128k;
            request_pool_size 32k;
            output_buffers 4 32k;
            postpone_output 1460;
            proxy_temp_path /tmp/nginx_proxy/;
            client_body_in_file_only on;
            log_format bytes_log "$msec $bytes_sent .";
            include "/etc/nginx/vhosts/*";
        }

    vhost file:

        server {
            error_log /var/log/nginx/vhost-error_log warn;
            listen 194.145.208.19:80;
            server_name ipxnow.in www.ipxnow.in;
            access_log /usr/local/apache/domlogs/ipxnow.in-bytes_log bytes_log;
            access_log /usr/local/apache/domlogs/ipxnow.in combined;
            root /home/ipxnowin/public_html;
            location / {
                location ~.*\.(3gp|gif|jpg|jpeg|png|ico|wmv|avi|asf|asx|mpg|mpeg|mp4|pls|mp3|mid|wav|swf|flv|html|htm|txt|js|css|exe|zip|tar|rar|gz|tgz|bz2|uha|7z|doc|docx|xls|xlsx|pdf|iso)$ {
                    expires 7d;
                    try_files $uri @backend;
                }
                error_page 405 = @backend;
                add_header X-Cache "HIT from Backend";
                proxy_pass http://194.145.208.19:8081;
                include proxy.inc;
            }
            location @backend {
                internal;
                proxy_pass http://194.145.208.19:8081;
                include proxy.inc;
            }
            location ~ .*\.(php|jsp|cgi|pl|py)?$ {
                proxy_pass http://194.145.208.19:8081;
                include proxy.inc;
            }
            location ~ /\.ht {
                deny all;
            }
        }

    and proxy.inc:

        proxy_connect_timeout 59s;
        proxy_send_timeout 600;
        proxy_read_timeout 600;
        proxy_buffer_size 64k;
        proxy_buffers 16 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        proxy_pass_header Set-Cookie;
        proxy_redirect off;
        proxy_hide_header Vary;
        proxy_set_header Accept-Encoding '';
        proxy_ignore_headers Cache-Control Expires;
        proxy_set_header Referer $http_referer;
        proxy_set_header Host $host;
        proxy_set_header Cookie $http_cookie;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
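    For the rate limiting itself, nginx ships the built-in limit_req and limit_conn modules rather than plugins. A hedged sketch of how they might be wired into this config (the zone names, rates, and burst sizes are my assumptions and would need tuning against real traffic):

        # in the http block: track clients by IP, allow roughly 10 requests/second each
        limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;
        limit_conn_zone $binary_remote_addr zone=peripconn:10m;

        # in the vhost's "location /" block:
        limit_req zone=perip burst=20 nodelay;
        limit_conn peripconn 20;

    Requests beyond the burst are rejected instead of queueing behind legitimate traffic, which is what keeps the box responsive during a flood.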

    Read the article

  • Plugin architecture in .net: unloading

    - by henchman
    Hello everybody, I need to implement a plugin architecture in C#/.NET in order to load custom user-defined actions, data-type handling code for a custom data grid, conversions, and so on, from non-statically-linked assembly files. Because the application has to handle many custom user-defined actions, I need to unload them once executed in order to reduce memory usage. I found several good articles about plugin architectures, e.g. ExtensionManager, PluginArchitecture, ... but none of them gave me enough substance on properly unloading an assembly. As the program is to be distributed and the user-defined actions are (as the name states) user-defined: how do I prevent the assembly from executing malicious code (e.g. closing my program, deleting files)? Are there any other pitfalls one of you has encountered?
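    In the pre-.NET Core world this question targets, individual assemblies cannot be unloaded; the standard workaround (my sketch below, not from the post) is to load each plugin into its own AppDomain and unload the whole domain afterwards, which also gives you a sandboxing point. IPlugin and the paths are placeholder names.

        using System;

        // Placeholder contract shared between the host and the plugins.
        public interface IPlugin { void Execute(); }

        public static class PluginRunner
        {
            public static void RunIsolated(string assemblyPath, string typeName)
            {
                // Each plugin gets its own AppDomain so its assembly can be unloaded.
                AppDomain domain = AppDomain.CreateDomain("plugin-sandbox");
                try
                {
                    // The plugin type must derive from MarshalByRefObject so only
                    // a proxy crosses the domain boundary, not the assembly itself.
                    IPlugin plugin = (IPlugin)domain.CreateInstanceFromAndUnwrap(
                        assemblyPath, typeName);
                    plugin.Execute();
                }
                finally
                {
                    AppDomain.Unload(domain);  // releases the plugin's assembly
                }
            }
        }

    Restricting what the plugin may do means creating the domain with a limited PermissionSet (or, more robustly, running plugins in a separate process); the CLR does not support unloading single assemblies inside one domain.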

    Read the article

  • List of private iPhone APIs?

    - by diego nunes
    Hi there, everybody. I need to build an app to be distributed ad hoc (it doesn't need to go to the store), but I need to get information about "data usage" (GPRS/3G traffic). The information is available on the system, but there is no official API call to get it. One app made it through Apple's review, though (it's called "Download Meter"), and I emailed the developers to see if they would share the call, but they were not in that mood. Is there any list of private APIs or anything like that? Does anyone have any idea how I could get that info? Again: the app doesn't need to go to the store, but I need to install it on a stock iPhone (ad hoc will do). Thanks.

    Read the article

  • Would Mercurial help me work from 2 PCs?

    - by rikh
    I currently use Perforce for source control but want to start working on the code from two different PCs at the same time (desktop and laptop). The laptop would not be able to access the Perforce server very often, which makes Perforce a poor choice in this setup. Distributed source control tools like Mercurial seem better suited to the task, but I am still not clear on whether this would work. Does anyone have experience using Mercurial to work on two machines at once (e.g. desktop during the week, laptop in the evenings and on weekends)? Does it help, or is it still a pain in the butt to keep everything in sync and know what is going on?
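    It does help; this is the core DVCS workflow: every clone is a complete repository, so the laptop can commit offline and synchronize whenever the network is available. A minimal sketch of the round trip (hostnames and paths are placeholders):

        # one-time: clone the desktop repo onto the laptop
        hg clone ssh://desktop//home/you/project project

        # on either machine: work and commit locally, offline if need be
        hg commit -m "work from the train"

        # when back on the network: exchange changesets
        hg pull -u    # fetch the other machine's changesets and update
        hg push      # publish yours

    If both machines have new commits, hg pull creates two heads and a single hg merge reconciles them; Mercurial itself tracks which changesets each side has, so there is no manual bookkeeping.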

    Read the article

  • git: having 2 push/pull repos in sync (or 1 push/pull and 1 pull in sync)

    - by xavjuan
    Hello, we work at multiple geographically separate sites. Today our git clones all live on one site, A, and users from site B have to ssh over to do a git clone or to push in changes. These are bare repos that are updated through pushes. Ideally, for git clone/push performance, I'd like to avoid having to go over ssh. I'd like to have a copy of git repo X live on site A and site B, with some syncing mechanism between them; or to have X live on both sites but only allow pushing to A (and have that set up correctly at clone time on B). I'm worried about the case where someone on site A pushes changes to the repo at site A at the same time that someone on site B pushes a truly conflicting change to the repo at site B. Is there some syncing solution built into git for distributed open repos like this? Or a way to have a clone from X set the origin/parent to the X from the other site? Thanks, -John
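    git does not ship a multi-master sync that resolves truly conflicting simultaneous pushes, so the usual arrangement (my sketch, not from the post; paths invented) is your second option: A stays the only push target, and B keeps a read-only mirror refreshed on a schedule or by a hook on A:

        # on site B: create a mirror of A's bare repo
        git clone --mirror ssh://siteA/srv/git/X.git /srv/git/X.git

        # refresh it periodically (e.g. from cron) or from a hook on A
        cd /srv/git/X.git && git remote update --prune

        # clones made on site B read locally for speed but push to A
        git clone /srv/git/X.git
        cd X && git remote set-url --push origin ssh://siteA/srv/git/X.git

    With this split the conflicting-push race can't occur: a conflicting push to A is simply rejected as non-fast-forward, and the loser pulls and merges first.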

    Read the article

  • Languages and VMs: Features that are hard to optimize and why

    - by mrjoltcola
    I'm doing a survey of features in preparation for a research project. Name a mainstream language or language feature that is hard to optimize, and why the feature is or isn't worth the price paid; or instead, just debunk my theories below with anecdotal evidence. Before anyone flags this as subjective: I am asking for specific examples of languages or features, ideas for optimizing these features, or important features that I haven't considered, plus any references to implementations that prove my theories right or wrong. Top of my list of hard-to-optimize features, and my theories (some of them untested, based on thought experiments):

    1) Runtime method overloading (aka multi-method dispatch or signature-based dispatch). Is it hard to optimize when combined with features that allow runtime recompilation or method addition, or is it just hard anyway? Call-site caching is a common optimization for many runtime systems, but multi-methods add additional complexity as well as making it less practical to inline methods.

    2) Type morphing / variants (aka value-based typing as opposed to variable-based). Traditional optimizations simply cannot be applied when you don't know whether the type of something can change in a basic block. Combined with multi-methods, inlining must be done carefully, if at all, and probably only below a given threshold for the size of the callee; i.e. it is easy to consider inlining simple property fetches (getters/setters), but inlining complex methods may result in code bloat. The other issue is that I cannot just assign a variant to a register and JIT it to native instructions, because I have to carry the type info around, so every variable needs two registers instead of one. On IA-32 this is inconvenient, even if improved by x64's extra registers. This is probably my favorite feature of dynamic languages, as it simplifies so many things from the programmer's perspective.

    3) First-class continuations. There are multiple ways to implement them, and I have done so with both of the most common approaches: one is stack copying, the other is implementing the runtime to use continuation-passing style, cactus stacks, copy-on-write stack frames, and garbage collection. First-class continuations have resource-management issues: we must save everything in case the continuation is resumed, and I'm not aware of any language that supports leaving a continuation with "intent" (i.e. "I am not coming back here, so you may discard this copy of the world"). Having programmed in both the threading model and the continuation model, I know both can accomplish the same thing, but continuations' elegance imposes considerable complexity on the runtime and may also affect cache efficiency (the locality of the stack changes more with the use of continuations and coroutines). The other issue is that they just don't map to hardware. Optimizing continuations is optimizing for the less-common case, and as we know, the common case should be fast and the less-common cases should be correct.

    4) Pointer arithmetic and the ability to mask pointers (storing them in integers, etc.). I had to throw this in, but I could actually live without it quite easily.

    My feeling is that many high-level features, particularly in dynamic languages, just don't map to hardware. Microprocessor implementations have billions of dollars of research behind the optimizations on the chip, yet the choice of language feature(s) may marginalize many of them (features like caching, aliasing the top of stack to a register, instruction parallelism, return-address buffers, loop buffers, and branch prediction). Macro-applications of micro-features don't necessarily pan out the way some developers like to think, and implementing many languages in a VM ends up mapping native ops into function calls (i.e. the more dynamic a language is, the more we must look up and cache at runtime; nothing can be assumed, so our instruction mix contains a higher percentage of non-local branching than traditional, statically compiled code), and the only thing we can really JIT well is expression evaluation of non-dynamic types and operations on constant or immediate types. My gut feeling is that bytecode virtual machines and JIT cores are perhaps not always justified for certain languages because of this. I welcome your answers.
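    Since the post leans on call-site caching as the baseline optimization, a toy sketch of a monomorphic inline cache (entirely my illustration, with invented names) may make the multi-method problem concrete: the cache is valid only while the call site keeps seeing one receiver type, which is exactly what multi-methods break, because validity then depends on every argument's type.

        class InlineCache:
            """Toy monomorphic inline cache for single-dispatch calls."""
            def __init__(self, method_name):
                self.method_name = method_name
                self.cached_type = None
                self.cached_method = None

            def call(self, receiver, *args):
                rtype = type(receiver)
                if rtype is not self.cached_type:            # miss: full lookup
                    self.cached_type = rtype
                    self.cached_method = getattr(rtype, self.method_name)
                return self.cached_method(receiver, *args)   # hit: skip the lookup

        # With multi-methods the cache key would be the tuple of *all* argument
        # types, so it misses far more often and inlining one target is unsafe.
        site = InlineCache("upper")
        print(site.call("abc"))   # miss, then cached for str
        print(site.call("def"))   # hit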

    Read the article

  • Justifying a memory upgrade, take 2

    - by AngryHacker
    Previously I asked a question about which metrics I should measure (e.g. before and after) to justify a memory upgrade. Perfmon was suggested. I'd like to know which specific perfmon counters I should be measuring. So far I have:

        PhysicalDisk/Avg. Disk Queue Length (for each drive)
        PhysicalDisk/Avg. Disk Write Queue Length (for each drive)
        PhysicalDisk/Avg. Disk Read Queue Length (for each drive)
        Processor/% Processor Time
        SQLServer:BufferManager/Buffer cache hit ratio

    What other ones should I use?
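    A hedged addition (mine, not from the original answers): for justifying a memory upgrade specifically, the Memory and Paging File objects are the usual evidence, and typeperf can log them alongside the SQL counter; the interval and output file below are placeholders.

        typeperf "\Memory\Available MBytes" "\Memory\Pages/sec" ^
            "\Paging File(_Total)\% Usage" ^
            "\SQLServer:Buffer Manager\Page life expectancy" ^
            -si 15 -o memory_baseline.csv

    Low Available MBytes together with sustained Pages/sec and a short page life expectancy is the classic memory-pressure signature; repeating the same capture after the upgrade gives you the before/after comparison.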

    Read the article

  • Host a subdomain from an IIS7 server on another server

    - by brennanmur
    I am running a network with two servers. One runs SBS 2008, which hosts a majority of the network services (DNS, DHCP, Exchange, IIS, DC, AD...), including the primary webpage at mydomain.com. The other server is a Linux box running an Apache web server. I'm trying to set up the Apache server to host the site site2.mydomain.com. I have changed the DNS settings on the first server so that site2.mydomain.com can be accessed from the internal network, but I cannot access site2.mydomain.com from the internet. I have tried using server farms and the Application Request Routing cache, as described on this page: http://forums.iis.net/t/1156458.aspx, but I still can't access site2.mydomain.com from the internet. I don't have much experience with web servers; I usually just administer local networks, so I'm not sure if I'm going about this the right way. Any help or suggestions would be much appreciated.

    Read the article

  • Building Cocoa UIs for OS X with C# and Mono

    - by Antony Perkov
    Has anyone spent any time comparing the various Objective C bridges and associated Cocoa wrappers for Mono? I want to port an existing C# application to run on OS X. Ideally I'd run the application on Mono and build a native Cocoa UI for it. I'm wondering which bridge would be the best choice. In case it's useful to anyone, here are some links to bridges I've found so far:

        CocoSharp - distributed with Mono on OS X - www.cocoa-sharp.com
        Monobjc - better documentation than the others (in my opinion) - www.mono-project.com/CocoaSharp and www.monobjc.net
        NObjective - (apparently) faster than the others - code.google.com/p/nobjective
        MObjc / MCocoa - code.google.com/p/mobjc and code.google.com/p/mcocoa
        ObjC# - www.mono-project.com/ObjCSharp

    Read the article

  • Is it possible to partition more than one way at a time in SQL Server?

    - by meeting_overload
    I'm considering various ways to partition my data in SQL Server. One approach I'm looking at is to partition a particular huge table into 8 partitions, then within each of these partitions to partition on a different partition column. Is this even possible in SQL Server, or am I limited to defining one partition column + function + scheme per table? I'm interested in the general answer, but this strategy is one I'm considering for a Distributed Partitioned View, where I'd partition the data under the first scheme, using the DPV to distribute the huge amount of data over 8 machines, and then on each machine partition that portion of the full table on another partition key in order to be able to drop (for example) sub-partitions as required.
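    For reference, SQL Server binds a table to exactly one partition function and scheme, so the second level can't be declared on the same table; a minimal sketch of the per-machine piece (all names and boundary values invented):

        -- one partition function + scheme per table is the limit
        CREATE PARTITION FUNCTION pfByDate (datetime)
            AS RANGE RIGHT FOR VALUES ('2010-01-01', '2010-02-01');
        CREATE PARTITION SCHEME psByDate
            AS PARTITION pfByDate ALL TO ([PRIMARY]);
        CREATE TABLE HugeTable_Shard1 (
            Id int NOT NULL,
            CreatedAt datetime NOT NULL
        ) ON psByDate (CreatedAt);

    The first-level split across the 8 machines would then live in the DPV itself (CHECK constraints on the shard key plus UNION ALL across the servers), not in a second partition scheme.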

    Read the article

  • What hardware makes a good MongoDB Server ? Where to get it ?

    - by João Pinto Jerónimo
    Suppose you're on dell.com right now and you're buying a server to run the MongoDB database for your small startup. You will have to handle literally tens of thousands of writes and reads per minute (but small objects). Would you go for two processors? Invest more in RAM? I've heard (correct me if I'm wrong) that MongoDB keeps as much as it can in RAM and then flushes everything to disk; in that case I should invest in a CPU with a large L2 cache, probably 40GB of RAM, and a solid-state drive... right? Would I be better off with one high-end server (~$11,309, 2 expensive processors, 96GB of RAM) or 2x (~$6,419, 2 expensive processors, 12GB of RAM) servers? Is Dell OK or do you have better suggestions? (I'm outside the US, in Portugal.)

    Read the article

  • XML over HTTP with JMS and Spring

    - by Will Sumekar
    I have a legacy HTTP server to which I need to send an XML file over an HTTP POST request using Java (not a browser), and the server will respond with another XML in its HTTP response. It is similar to a web service, but there's no WSDL and I have to follow the existing XML structure to construct the XML to be sent. I did some research and found an example that matches my requirement here. The example uses HttpClient from Apache Commons. (There are also other examples I found, but they use the java.net networking package (like URLConnection), which is tedious, so I don't want to use them.) But it's also my requirement to use Spring and JMS. I know from Spring's reference that it's possible to combine HttpClient, JMS and Spring. My question is: how? Note that it's NOT in my requirements to use HttpClient; if you have a better suggestion, I'm open to it. I'd appreciate it. For your reference, here's the XML-over-HTTP example I've been talking about:

        /*
         * Copyright 2002-2004 The Apache Software Foundation
         * Licensed under the Apache License, Version 2.0;
         * see http://www.apache.org/licenses/LICENSE-2.0
         */
        import java.io.File;
        import java.io.FileInputStream;

        import org.apache.commons.httpclient.HttpClient;
        import org.apache.commons.httpclient.methods.InputStreamRequestEntity;
        import org.apache.commons.httpclient.methods.PostMethod;

        /**
         * This is a sample application that demonstrates
         * how to use the Jakarta HttpClient API.
         *
         * This application sends an XML document
         * to a remote web server using HTTP POST.
         *
         * @author Sean C. Sullivan
         * @author Ortwin Glück
         * @author Oleg Kalnichevski
         */
        public class PostXML {
            /**
             * Usage:
             *   java PostXML http://mywebserver:80/ c:\foo.xml
             *
             * @param args command line arguments
             *             Argument 0 is a URL to a web server
             *             Argument 1 is a local filename
             */
            public static void main(String[] args) throws Exception {
                if (args.length != 2) {
                    System.out.println(
                        "Usage: java -classpath <classpath> [-Dorg.apache.commons." +
                        "logging.simplelog.defaultlog=<loglevel>]" +
                        " PostXML <url> <filename>]");
                    System.out.println("<classpath> - must contain the " +
                        "commons-httpclient.jar and commons-logging.jar");
                    System.out.println("<loglevel> - one of error, " +
                        "warn, info, debug, trace");
                    System.out.println("<url> - the URL to post the file to");
                    System.out.println("<filename> - file to post to the URL");
                    System.out.println();
                    System.exit(1);
                }
                // Get target URL
                String strURL = args[0];
                // Get file to be posted
                String strXMLFilename = args[1];
                File input = new File(strXMLFilename);
                // Prepare HTTP post
                PostMethod post = new PostMethod(strURL);
                // Request content will be retrieved directly from the input stream.
                // Per default, the request content needs to be buffered
                // in order to determine its length.
                // Request body buffering can be avoided when
                // content length is explicitly specified.
                post.setRequestEntity(new InputStreamRequestEntity(
                    new FileInputStream(input), input.length()));
                // Specify content type and encoding.
                // If content encoding is not explicitly specified,
                // ISO-8859-1 is assumed.
                post.setRequestHeader(
                    "Content-type", "text/xml; charset=ISO-8859-1");
                // Get HTTP client
                HttpClient httpclient = new HttpClient();
                // Execute request
                try {
                    int result = httpclient.executeMethod(post);
                    // Display status code
                    System.out.println("Response status code: " + result);
                    // Display response
                    System.out.println("Response body: ");
                    System.out.println(post.getResponseBodyAsString());
                } finally {
                    // Release current connection to the connection pool
                    // once you are done
                    post.releaseConnection();
                }
            }
        }
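    As for combining the pieces, one common shape (a sketch of mine, not from the post) is to let Spring run a JMS listener container and have the listener perform the XML-over-HTTP POST when a message arrives. The queue wiring, URL, and class names here are assumptions.

        import javax.jms.Message;
        import javax.jms.MessageListener;
        import javax.jms.TextMessage;

        import org.apache.commons.httpclient.HttpClient;
        import org.apache.commons.httpclient.methods.PostMethod;
        import org.apache.commons.httpclient.methods.StringRequestEntity;

        // Sketch: JMS message in, XML-over-HTTP request out.
        public class XmlPostListener implements MessageListener {
            private final String targetUrl = "http://legacy-server/endpoint"; // placeholder

            public void onMessage(Message message) {
                try {
                    String xml = ((TextMessage) message).getText();
                    PostMethod post = new PostMethod(targetUrl);
                    post.setRequestEntity(
                        new StringRequestEntity(xml, "text/xml", "ISO-8859-1"));
                    try {
                        int status = new HttpClient().executeMethod(post);
                        String responseXml = post.getResponseBodyAsString();
                        // hand status/responseXml to whatever needs them
                    } finally {
                        post.releaseConnection();
                    }
                } catch (Exception e) {
                    throw new RuntimeException(e); // let the container handle redelivery
                }
            }
        }

    On the Spring side this bean would be plugged into a DefaultMessageListenerContainer pointing at your queue; the HttpClient part is unchanged from the example above.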

    Read the article

  • Is there any way to search within OneNote 2007 attachments

    - by jtolle
    I'm starting to use OneNote (2007) more. One thing I'd like to do is take notes on papers I have read. That is, I attach, say, a PDF file and then type in some notes about it. Sometimes I do other stuff, like copying some key text or figures from the paper, so OneNote is great for this because all that plus my own notes plus the file itself can all be in one place. However, OneNote search doesn't seem to be able to search within said PDF files. Windows Search finds things, but just in the OneNote cache, not the actual OneNote .one files. (Presumably that will only work for recently accessed stuff, and in any case it doesn't take me to my actual notes.) Is there a way to do what I want? If not, does anyone have a suggestion (or link) as to how to best use OneNote to store (and later search for!) this kind of content and notes?

    Read the article

  • clojure.algo.monads strange m-plus behaviour with parser-m - why is the second m-plus argument evaluated?

    - by Mark Fisher
    I'm getting unexpected behaviour in some monads I'm writing. I've created a parser-m monad with (def parser-m (state-t maybe-m)), which is pretty much the example given everywhere (here, here and here). I'm using m-plus to act as a kind of fall-through query mechanism: in my case, it first reads values from a cache (database), and if that returns nil, the next method reads from "live" (a REST call). However, the second value in the m-plus list is always called, even though its value is discarded (if the cache hit was good) and the final return is that of the first monadic function. Here's a cut-down version of the issue I'm seeing, and some solutions I found, but I don't know why they work. My questions are:

    Is this expected behaviour or a bug in m-plus? i.e. will the 2nd method in an m-plus list always be evaluated if the first item returns a value?

    Minor in comparison to the above, but if I remove the call _ (fetch-state) from checker, then when I evaluate that method it prints out the messages for the functions m-plus is calling (when I don't think it should). Is this also a bug?

    Here's a cut-down version of the code in question. It simply checks that the key/value pairs passed in are the same as the initial state values, and updates the state to mark what it actually ran.

        (ns monods.monad-test
          (:require [clojure.algo.monads :refer :all]))

        (def parser-m (state-t maybe-m))

        (defn check-k-v [k v]
          (println "calling with k,v:" k v)
          (domonad parser-m
                   [kv (fetch-val k)
                    _  (do (println "k v kv (= kv v)" k v kv (= kv v))
                           (m-result 0))
                    :when (= kv v)
                    _  (do (println "passed")
                           (m-result 0))
                    _  (update-val :ran #(conj % (str "[" k " = " v "]")))]
                   [k v]))

        (defn filler []
          (println "filler called")
          (domonad parser-m
                   [_ (fetch-state)
                    _ (do (println "filling") (m-result 0))
                    :when nil]
                   nil))

        (def checker
          (domonad parser-m
                   [_ (fetch-state)
                    result (m-plus
                            ;; (filler) ;; initially commented out deliberately
                            (check-k-v :a 1)
                            (check-k-v :b 2)
                            (check-k-v :c 3))]
                   result))

        (checker {:a 1 :b 2 :c 3 :ran []})

    When I run this as is, the output is:

        > (checker {:a 1 :b 2 :c 3 :ran []})
        calling with k,v: :a 1
        calling with k,v: :b 2
        calling with k,v: :c 3
        k v kv (= kv v) :a 1 1 true
        passed
        k v kv (= kv v) :b 2 2 true
        passed
        [[:a 1] {:a 1, :b 2, :c 3, :ran ["[:a = 1]"]}]

    I don't expect the line k v kv (= kv v) :b 2 2 true to show at all. The first function given to m-plus (as seen in the final output) is what is returned from it. Now, I've found that if I pass a filler into m-plus that does nothing (i.e. uncomment the (filler) line), then the output is correct: the :b value isn't evaluated. If I don't have the filler method and make the first test fail (i.e. change it to (check-k-v :a 2)), then again everything is good: I don't get a call to check :c, only a and b are tested. From my understanding of what the state-t maybe-m transformation gives me, the m-plus function should look like:

        (defn m-plus [left right]
          (fn [state]
            (if-let [result (left state)]
              result
              (right state))))

    which would mean that right isn't called unless left returns nil/false. I'd be interested to know if my understanding is correct or not, and why I have to put the filler method in to stop the extra evaluation (whose effects I don't want to happen). Apologies for the long-winded post!

    Read the article

  • Passenger, Apache and avoiding page caching

    - by user38382
    I'm hosting a Rack application with Passenger and Apache. The application is set up to cache the content of each request to the public directory after each request, which allows Apache to serve the content directly as a static page for future requests. I would like to tell Apache, presumably through some rewrite rules, that any request with query parameters should not be served from the cache but instead passed down to the Rack application. With a mongrel setup I would just redirect the request to the balancer if it meets my rewrite conditions. How do you do the same with Passenger?
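    A hedged sketch of the usual mod_rewrite arrangement for this kind of page caching (my illustration; the cached-file location is an assumption): rewrite to the cached file only when the query string is empty and the file exists, and let everything else fall through to Passenger, which picks up any request Apache doesn't map to a static file.

        RewriteEngine On
        # serve the cached copy only for query-less requests that have one on disk
        RewriteCond %{QUERY_STRING} ^$
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI}.html -f
        RewriteRule ^(.*)$ $1.html [L]

    Because the first RewriteCond fails whenever query parameters are present, those requests never match the rule and Passenger handles them dynamically.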

    Read the article
