Search Results

Search found 7128 results on 286 pages for 'httpcontext cache'.


  • What is a good robots.txt for WP?

    - by Steven
    What is the "best" setup for robots.txt? I'm using the following permalink structure in WordPress: /%category%/%postname%/. My robots.txt currently looks like this (copied from somewhere a long time ago):

        User-agent: *
        Disallow: /cgi-bin
        Disallow: /wp-admin
        Disallow: /wp-includes
        Disallow: /wp-content/plugins
        Disallow: /wp-content/cache
        Disallow: /wp-content/themes
        Disallow: /trackback
        Disallow: /comments
        Disallow: /category/*/*
        Disallow: */trackback
        Disallow: */comments

    I want my comments to be indexed, so I can remove the comment rules, right? Do I want to disallow indexing of categories because of my permalink structure? An article can have several tags and be in multiple categories, which may cause duplicate content in Google. How should I work around this? Would you change anything else here?
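
    For reference, a minimal sketch of the trimmed file the question describes: the trackback/comment rules dropped so comments can be indexed, with the category rule kept to limit duplicate content (whether to keep that last rule depends on how the duplicate-content concern above is resolved):

        User-agent: *
        Disallow: /cgi-bin
        Disallow: /wp-admin
        Disallow: /wp-includes
        Disallow: /wp-content/plugins
        Disallow: /wp-content/cache
        Disallow: /wp-content/themes
        Disallow: /category/*/*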

    Read the article

  • How to determine the path of the current web site

    - by Velika2
    I wanted to create a function which would return the path of the current web site. This is what I thought was working while running in the IDE:

        Public Shared Function WebsiteAbsoluteBaseUrl() As String
            Dim RequestObject As System.Web.HttpRequest = HttpContext.Current.Request
            Return "http://" & RequestObject.Url.Host & ":" & _
                   RequestObject.Url.Port & "/" & _
                   RequestObject.Url.Segments(1)
        End Function

    Does this seem like it should work? Is there a more straightforward way?
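
    For illustration, a hedged sketch of one simpler approach: take the authority part of the request URL plus the application root, instead of hard-coding the scheme and indexing into Url.Segments (shown in C#; the same calls exist in VB.NET):

        using System;
        using System.Web;

        public static class SiteUrl
        {
            public static string WebsiteAbsoluteBaseUrl()
            {
                HttpRequest request = HttpContext.Current.Request;
                // GetLeftPart(UriPartial.Authority) yields e.g. "http://host:8080";
                // ApplicationPath is the virtual root ("/MyApp" or "/").
                return request.Url.GetLeftPart(UriPartial.Authority)
                     + VirtualPathUtility.AppendTrailingSlash(request.ApplicationPath);
            }
        }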

    Read the article

  • ASP.NET MVC: Route to URL

    - by JamesBrownIsDead
    What's the easiest way to get the URL (relative or absolute) to a Route in MVC? I saw this code here on SO but it seems a little verbose and doesn't enumerate the RouteTable. Example:

        List<string> urlList = new List<string>();
        urlList.Add(GetUrl(new { controller = "Help", action = "Edit" }));
        urlList.Add(GetUrl(new { controller = "Help", action = "Create" }));
        urlList.Add(GetUrl(new { controller = "About", action = "Company" }));
        urlList.Add(GetUrl(new { controller = "About", action = "Management" }));

    With:

        protected string GetUrl(object routeValues)
        {
            RouteValueDictionary values = new RouteValueDictionary(routeValues);
            RequestContext context = new RequestContext(HttpContext, RouteData);
            string url = RouteTable.Routes.GetVirtualPath(context, values).VirtualPath;
            return new Uri(Request.Url, url).AbsoluteUri;
        }

    What's a better way to examine the RouteTable and get a URL for a given controller and action?
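
    One commonly suggested alternative, sketched here without claiming it is the canonical answer: UrlHelper wraps the same RouteTable lookup in a single call and returns null when no route matches:

        using System.Web.Mvc;
        using System.Web.Routing;

        public static class RouteUrls
        {
            // requestContext is available as ControllerContext.RequestContext inside a controller.
            public static string GetUrl(RequestContext requestContext, string controller, string action)
            {
                var url = new UrlHelper(requestContext);
                return url.Action(action, controller);   // consults RouteTable.Routes
            }
        }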

    Read the article

  • How to debug a WCF Service with an HTTP Context?

    - by JL
    I need to debug a WCF service, but it needs to have an HTTP context. Currently I have a solution with a WCF service web site; when I click on debug, it starts up and then brings up an HTML page that contains no test form. While the project is running, I tried starting wcftestclient manually and provided the address of my service. It finds the service, but when I invoke it, the call bypasses the IIS layer (or the development server), so the HttpContext is null... What is the correct way to debug a WCF service through an IIS context?
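
    Not a definitive fix, but the usual way to get a non-null HttpContext.Current inside a WCF service is ASP.NET compatibility mode; a minimal sketch (service and contract names are illustrative):

        using System.ServiceModel;
        using System.ServiceModel.Activation;
        using System.Web;

        [ServiceContract]
        public interface IEchoService
        {
            [OperationContract]
            string WhoAmI();
        }

        // Opt the service into the ASP.NET pipeline so HttpContext.Current is populated.
        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Required)]
        public class EchoService : IEchoService
        {
            public string WhoAmI()
            {
                // Non-null only when web.config also contains:
                //   <system.serviceModel>
                //     <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
                //   </system.serviceModel>
                return HttpContext.Current.Request.UserAgent ?? "unknown";
            }
        }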

    Read the article

  • ZFS for Database Log Files

    - by user12620111
    I've been troubled by drop outs in CPU usage in my application server, characterized by the CPUs suddenly going from close to 90% CPU busy to almost completely CPU idle for a few seconds. Here is an example of a drop out as shown by a snippet of vmstat data taken while the application server is under a heavy workload:

        # vmstat 1
         kthr      memory            page            disk          faults      cpu
         r b w   swap  free  re  mf pi po fr de sr s3 s4 s5 s6   in   sy   cs us sy id
         1 0 0 130160176 116381952 0 16 0 0 0 0  0  0  0  0  0 207377 117715 203884 70 21  9
        12 0 0 130160160 116381936 0 25 0 0 0 0  0  0  0  0  0 200413 117162 197250 70 20  9
        11 0 0 130160176 116381920 0 16 0 0 0 0  0  0  1  0  0 203150 119365 200249 72 21  7
         8 0 0 130160176 116377808 0 19 0 0 0 0  0  0  0  0  0 169826  96144 165194 56 17 27
         0 0 0 130160176 116377800 0 16 0 0 0 0  0  0  0  0  1  10245   9376   9164  2  1 97
         0 0 0 130160176 116377792 0 16 0 0 0 0  0  0  0  0  2  15742  12401  14784  4  1 95
         0 0 0 130160176 116377776 2 16 0 0 0 0  0  0  1  0  0  19972  17703  19612  6  2 92
        14 0 0 130160176 116377696 0 16 0 0 0 0  0  0  0  0  0 202794 116793 199807 71 21  8
         9 0 0 130160160 116373584 0 30 0 0 0 0  0  0 18  0  0 203123 117857 198825 69 20 11

    This behavior occurred consistently while the application server was processing synthetic transactions: HTTP requests from JMeter running on an external machine. I explored many theories trying to explain the drop outs, including:

      • Unexpected JMeter behavior
      • Network contention
      • Java garbage collection
      • Application server thread pool problems
      • Connection pool problems
      • Database transaction processing
      • Database I/O contention

    Graphing the CPU %idle led to a breakthrough: several of the drop outs were 30 seconds apart. With that insight, I went digging through the data again, looking for other outliers that were 30 seconds apart. In the database server statistics, I found spikes in the iostat "asvc_t" (average response time of disk transactions, in milliseconds) for the disk drive that was being used for the database log files. Here is an example:

                            extended device statistics
            r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
            0.0 2053.6    0.0  8234.3  0.0  0.2    0.0    0.1   0  24 c3t60080E5...F4F6d0s0
            0.0 2162.2    0.0  8652.8  0.0  0.3    0.0    0.1   0  28 c3t60080E5...F4F6d0s0
            0.0 1102.5    0.0 10012.8  0.0  4.5    0.0    4.1   0  69 c3t60080E5...F4F6d0s0
            0.0   74.0    0.0  7920.6  0.0 10.0    0.0  135.1   0 100 c3t60080E5...F4F6d0s0
            0.0  568.7    0.0  6674.0  0.0  6.4    0.0   11.2   0  90 c3t60080E5...F4F6d0s0
            0.0 1358.0    0.0  5456.0  0.0  0.6    0.0    0.4   0  55 c3t60080E5...F4F6d0s0
            0.0 1314.3    0.0  5285.2  0.0  0.7    0.0    0.5   0  70 c3t60080E5...F4F6d0s0

    Here is a little more information about my database configuration:

      • The database and application server were running on two different SPARC servers.
      • Storage for the database was on a storage array connected via 8 gigabit Fibre Channel.
      • Data storage and log files were on different physical disk drives.
      • Reliable low-latency I/O is provided by battery-backed NVRAM.
      • Highly available: two Fibre Channel links accessed via MPxIO, two mirrored cache controllers, and the log file physical disks mirrored in the storage device.
      • Database log files were on a ZFS filesystem with cutting-edge technologies, such as copy-on-write and end-to-end checksumming.

    Why would I be getting service time spikes in my high-end storage?
    First, I wanted to verify that the database log disk service time spikes aligned with the application server CPU drop outs, and they did. At first, I guessed that the disk service time spikes might be related to flushing the write-through cache on the storage device, but I was unable to validate that theory. After searching the WWW for a while, I decided to try using a separate log device:

        # zpool add ZFS-db-41 log c3t60080E500017D55C000015C150A9F8A7d0

    The ZFS log device is configured in a similar manner as described above: two physical disks mirrored in the storage array. This change to the database storage configuration eliminated the application server CPU drop outs. Here is the zpool configuration:

        # zpool status ZFS-db-41
          pool: ZFS-db-41
         state: ONLINE
         scan: none requested
        config:
                NAME                     STATE
                ZFS-db-41                ONLINE
                  c3t60080E5...F4F6d0    ONLINE
                logs
                  c3t60080E5...F8A7d0    ONLINE

    Now, the I/O spikes look like this:

                            extended device statistics
            r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
            0.0 1053.5    0.0  4234.1  0.0  0.8    0.0    0.7   0  75 c3t60080E5...F8A7d0s0
            0.0 1131.8    0.0  4555.3  0.0  0.8    0.0    0.7   0  76 c3t60080E5...F8A7d0s0
            0.0 1167.6    0.0  4682.2  0.0  0.7    0.0    0.6   0  74 c3t60080E5...F8A7d0s0
            0.0  162.2    0.0 19153.9  0.0  0.7    0.0    4.2   0  12 c3t60080E5...F4F6d0s0
            0.0 1247.2    0.0  4992.6  0.0  0.7    0.0    0.6   0  71 c3t60080E5...F8A7d0s0
            0.0   41.0    0.0    70.0  0.0  0.1    0.0    1.6   0   2 c3t60080E5...F4F6d0s0
            0.0 1241.3    0.0  4989.3  0.0  0.8    0.0    0.6   0  75 c3t60080E5...F8A7d0s0
            0.0 1193.2    0.0  4772.9  0.0  0.7    0.0    0.6   0  71 c3t60080E5...F8A7d0s0

    We can see the steady flow of 4k writes to the ZIL device from O_SYNC database log file writes. The spikes are from flushing the transaction group. Like almost all problems that I run into, once I thoroughly understand the problem, I find that other people have documented similar experiences. Thanks to all of you who have documented alternative approaches. Saved for another day: now that the problem is obvious, I should try "zfs:zfs_immediate_write_sz" as recommended in the ZFS Evil Tuning Guide.

    References:

      • The ZFS Intent Log
      • Solaris ZFS, Synchronous Writes and the ZIL Explained
      • ZFS Evil Tuning Guide: Cache Flushes
      • ZFS Evil Tuning Guide: Tuning ZFS for Database Performance

    Read the article

  • Migrating Virtual Iron guest to Oracle VM 3.x

    - by scoter
    As stated on the official site, in 2009 Oracle acquired Virtual Iron, a provider of server virtualization management software; you can find all the acquisition details at this link. The FAQ on the official site also notes that Oracle planned to fully integrate Virtual Iron technology into the Oracle VM products, with enhancements delivered as part of the combined solution; this is what is going on with Oracle VM 3.x. So, customers started asking us to migrate Virtual Iron guests to Oracle VM. IMPORTANT: This procedure needs a dedicated OVM-Server with no guests running on top; be careful while executing this procedure on production environments. In these few steps you will find how to migrate, as fast as possible, your guests between VI ( Virtual Iron ) and Oracle VM; keep in mind that Oracle VM has a built-in P2V utility ( Official Documentation ) that you can use to migrate guests between VI and Oracle VM.

    Concepts: VI repositories. On VI we have the same "repository" concept as in Oracle VM; the difference between these two products is that VI uses a raw LUN as repository ( instead of using ocfs2 and its capabilities, like ref-links ). In fact, on this raw LUN VI creates an LVM2 volume group. So, the relationships are:

        LVM2 volume group    <->  VI repository
        LVM2 logical volume  <->  VI guest virtual disk

    The first step is to present the VI repository ( raw LUN ) to your dedicated OVM-Server.

    Prepare the dedicated OVM-Server. On the OVM-Server ( OVS ) you need to discover the new LUN and, after that, discover the volume group and logical volumes contained in the VI repository; due to the default OVS configuration you need to edit the LVM2 configuration file /etc/lvm/lvm.conf and comment out the line starting with "filter":

        # By default for OVS we restrict every block device:
        # filter = [ "r/.*/" ]
    Now you have to discover the presented raw LUN and then activate the volume group and logical volumes:

        #!/bin/bash
        for HOST in `ls /sys/class/scsi_host`; do echo '- - -' > /sys/class/scsi_host/$HOST/scan; done
        CPATH=`pwd`
        cd /dev
        for DEVICE in `ls sd[a-z] sd?[a-z]`; do echo '1' > /sys/block/$DEVICE/device/rescan; done
        cd $CPATH
        cd /dev/mapper
        for PARTITION in `ls *[a-z] *?[a-z]`; do partprobe /dev/mapper/$PARTITION; done
        cd $CPATH
        vgchange -a y

    After that you will see a new device:

        [root@ovs01 ~]# cd /dev/6000F4B00000000000210135bef64994
        [root@ovs01 6000F4B00000000000210135bef64994]# ls -l 6000F4B0000000000061013*
        lrwxrwxrwx 1 root root 77 Oct 29 10:50 6000F4B00000000000610135c3a0b8cb -> /dev/mapper/6000F4B00000000000210135bef64994-6000F4B00000000000610135c3a0b8cb

    From your OVM-Manager, create a guest server with the same definition as on VI:

      • same core number as the VI source guest
      • same memory as the VI source guest
      • same number of disks as the VI source guest ( you can create the OVS virtual disks with a small size of 1GB because the "clone" will, eventually, extend the size of your new virtual disks )

    Summarizing:

        source virtual-disk path ( VI ):
        /dev/mapper/6000F4B00000000000210135bef64994-6000F4B00000000000610135c3a0b8cb
        dest virtual-disk path ( OVS ):
        /OVS/Repositories/0004fb00000300006cfeb81c12f12f00/VirtualDisks/0004fb000012000055e0fc4c5c8a35ee.img **

    ** = to identify your virtual disk you have to verify its name in the "vm.cfg" file of your new guest.

    Clone the VI virtual disk to the OVS virtual disk:

        dd if=/dev/mapper/6000F4B00000000000210135bef64994-6000F4B00000000000610135c3a0b8cb of=/OVS/Repositories/0004fb00000300006cfeb81c12f12f00/VirtualDisks/0004fb000012000055e0fc4c5c8a35ee.img

    Clean unsupported parameters and changes on OVS:

      1. Restore the original /etc/lvm/lvm.conf, uncommenting the line starting with "filter":

             # By default for OVS we restrict every block device:
             filter = [ "r/.*/" ]

      2. Force-stop the lvm2-monitor service:

             # service lvm2-monitor force-stop

      3. Restore the original /etc/lvm directories ( archive, backup and cache ):

             # cd /etc/lvm
             # rm -fr archive backup cache; mkdir archive backup cache

      4. Reboot OVS.

    Refresh the OVS repository and start your guest: from Oracle VM Manager refresh your repository, then start your "migrated" guest.

    Comments and corrections are welcome.

    Simon COTER

    Read the article

  • Do PHP-FPM (and other PHP handlers) need execute permissions on the PHP files they're serving?

    - by Andrew Cheong
    I read in a post at Server Fault that PHP-FPM needs execute permissions. However, the answer in "When creating a website, what permissions and directory structure?" only grants read and write permissions to PHP-FPM. Maybe I don't quite understand how PHP handlers (or CGI in general) work, but the two claims seem contradictory to me. As I understand it, when Apache / Nginx gets a request for foobar.php, it "passes" the file to an appropriate handler. That is, I imagine it's as if www-root (or apache or whoever the web server is running as) were to run some command, /usr/sbin/php-fpm foobar.php. Actually, no, that's naive, I just realized. PHP-FPM must be a running instance (if it's to be performant, and cache, etc.), so probably PHP-FPM is just being told, "Hey, quick, process this file for me!" In either case, I don't see why execute permissions are necessary. It's not like the web server needs to literally execute the file, i.e. ./foobar.php. Is the Server Fault answer simply mistaken?
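
    For what it's worth, a hedged illustration of permissions that are commonly sufficient: PHP-FPM reads and interprets the script, so the pool user needs read on the file and execute only on the directories above it for traversal (the user, group and paths below are assumptions):

        chown www-data:www-data /var/www/site/foobar.php   # pool user; often www-data
        chmod 640 /var/www/site/foobar.php                 # read is enough; no execute bit
        chmod 751 /var/www/site                            # directories need x to be traversed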

    Read the article

  • Switching to WIF SessionMode in ASP.NET

    - by Your DisplayName here!
    To make it short: to switch to SessionMode (cache to server) in ASP.NET, you need to handle an event and set a property. Sounds easy, but you need to set it in the right place. The most popular blog post about this topic is from Vittorio. He advises setting IsSessionMode in WSFederationAuthenticationModule_SessionSecurityTokenCreated. Now there were some open questions on the forums, like this one. So I decided to try it myself, and indeed it didn't work for me either. So I dug a little deeper, and after some trial and error I found the right place (in global.asax):

        void WSFederationAuthenticationModule_SecurityTokenValidated(
            object sender, SecurityTokenValidatedEventArgs e)
        {
            FederatedAuthentication.SessionAuthenticationModule.IsSessionMode = true;
        }

    Not sure if anything has changed since Vittorio's post, but this worked for me. While playing around, I also wrote a little diagnostics tool that allows you to look into the session cookie (for educational purposes). Will post that soon. HTH

    Read the article

  • Web Services Example - Part 2: Programmatic

    - by Denis T
    In this edition of the ADF Mobile blog we'll tackle part 2 of our Web Service examples. In this posting we'll take a look at using a SOAP Web Service, but calling it programmatically in code and parsing the return into a bean.

    Getting the sample code: Just click here to download a zip of the entire project. You can unzip it and load it into JDeveloper and deploy it either to iOS or Android. Please follow the previous blog posts if you need help getting JDeveloper or ADF Mobile installed. Note: This is a different workspace than WS-Part1.

    Defining our Web Service: Just like our first installment, we are using the same public weather forecast web service provided free by CDYNE Corporation. Sometimes this service goes down, so please ensure you know it's up before reporting that this example isn't working. We're going to concentrate on the same two web service methods, GetCityForecastByZIP and GetWeatherInformation.

    Defining the Application: The application setup is identical to the Weather1 version. There are some improvements to the data that is displayed as part of this example, though. Now we are able to show the associated image along with each forecast line when using the Forecast By Zip feature. We've also added the temperature Hi/Low values into the UI.

    Summary of Fundamental Changes in This Application: The most fundamental change is that we're binding the UI to the Bean Data Controls instead of directly to the Web Service Data Controls. This gives us much more flexibility to control the shape of the data and allows us to do caching of the data outside of the Web Service. This way if your application is, say, offline, your bean could still populate with data from a local cache and still show you some UI, as opposed to completely failing because you don't have any connectivity. In general we promote this type of programming technique with ADF Mobile to insulate your application from any issues with network connectivity.

    What's different with this example? We have set up the Web Service DC the same way, but now we have managed beans to process the data. The following classes define the "Model" of our application: CityInformation-CityForecast-Forecast and WeatherInformation-WeatherDescription. We use WeatherBean for UI interaction with the model layer. If you look through this example, we don't really do that much with the Java code except use it to grab the image URL from the weather description. In a more realistic example, you might be using some JDBC classes to persist the data to a local database.

    To have a good architecture it is always good to keep your model and UI layers separate. This gets muddied if you start to use bindings on a page invoked from Java code, and this Java code starts to become your "model" layer. Since bindings are page specific, your model layer starts to become entwined with your UI. Not good! To help with this, we've added some utility functions that let you invoke DC methods without having a binding, and thus execute methods from your "model" layer without requiring a binding in your page definition. We do this with the invokeDataControlMethod of the AdfmfJavaUtilities class. An example of this method call is available on line 95 of WeatherInformation.java and line 93 of CityInformation.java.

    What's a GenericType? Because Web Service Data Controls (and also URL Data Controls, AKA REST) use generic name/value pairs to define their structure and don't have strongly typed objects, these are actually stored internally as GenericType objects.
    The GenericType class is simply a property map of name/value pairs that can be hierarchical. There are methods like getAttribute where you supply the index of the attribute or its string property name. Why is this important to know? Because invokeDataControlMethod returns GenericType objects, and developers either need to parse these GenericType objects themselves or use one of our helper functions.

    GenericTypeBeanSerializationHelper: This class does exactly what its name implies. It's a helper class for developers to aid in serialization of GenericTypes to/from Java objects. This is extremely handy if you have a large GenericType object with many attributes (or you're just lazy like me!) and you just want to parse it out into a real Java object you can use more easily. Here you would use the fromGenericType method. This method takes the class of the Java object you wish to return and the GenericType as parameters. The method then parses through each attribute in the GenericType and uses reflection to set that same attribute in the Java class. Then the method returns that new object of the class you specified. This is obviously very handy to avoid a lot of shuffling code between GenericType and your own Java classes. The reverse method, toGenericType, is also available when you want to go the other way. In this case you supply the string that represents the package location in the DataControl definition (example: "MyDC.myParams.MyCollection") and then pass in the Java object you have that holds the data, and a GenericType is returned to you. Again, it will use reflection to calculate the attributes that match between the Java class and the GenericType and call the getters/setters on those.

    Issues and Possible Improvements: In the next installment we'll show you how to make your web service calls asynchronously, so your UI will fill dynamically when the service call returns, but in the meantime you show the data you have locally in your bean, fed from some local cache. This gives your users instant delivery of some data while you fetch other data in the background.

    Read the article

  • Unable to download microsoft excel files from a IIS SSL site

    - by Jeffrey
    The webmaster at my corporation added SSL to the web site, and now none of my users can download the Microsoft Word and Excel files the site generates. According to Microsoft, the following must be done: "Web sites that want to allow this type of operation should remove the no-cache header or headers." Typical of MS, they don't tell you what to do, how to do it, or what the best practice is. The webmaster says it's a web.config setting, but all I can find is:

        <configuration>
          <appSettings/>
          <connectionStrings/>
          <system.web>
            <httpRuntime sendCacheControlHeader="false"/>

    and I don't know if this is the best way to achieve the result. I would greatly appreciate some advice on this subject.
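
    A hedged, narrower alternative sketch: instead of changing the site-wide httpRuntime setting, relax the cache header only on the response that streams the file (the handler and file names below are illustrative):

        using System.Web;

        public class ExportHandler : IHttpHandler
        {
            public bool IsReusable { get { return false; } }

            public void ProcessRequest(HttpContext context)
            {
                // IE refuses to save files from SSL pages sent with
                // Cache-Control: no-cache; "private" avoids emitting that header.
                context.Response.Cache.SetCacheability(HttpCacheability.Private);
                context.Response.ContentType = "application/vnd.ms-excel";
                context.Response.AddHeader("Content-Disposition",
                    "attachment; filename=report.xls");
                // ... write the generated workbook to context.Response.OutputStream
            }
        }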

    Read the article

  • What norms/API for monitoring my servers?

    - by dystroy
    I have a dozen server applications installed on my customers' intranets (they can send HTTP requests over the internet but cannot be called from outside). They're written in various technologies, mainly Java and Go. I want them to regularly push information about their state to a central server which is visible on the internet. Some of this information is generic (is it ON?), some is specific (the size of a cache in an application, for example). The main goal is to be able to make a small web page on which I could instantly check the state of every server, and maybe later add some kind of notification in case of problems. Obviously I can do this by writing a few dozen lines of code on each side (or a little more if I put this data in a database), but in order to ease future evolution, it could be interesting to use some existing norms or libraries. So, what are the current open-source, free and light solutions to do this, preferably with no central configuration when I add a server? I'd prefer a norm over a library.

    Read the article

  • CPU Usage in Very Large Coherence Clusters

    - by jpurdy
    When sizing Coherence installations, one of the complicating factors is that these installations (by their very nature) tend to be application-specific, with some being large, memory-intensive caches, with others acting as I/O-intensive transaction-processing platforms, and still others performing CPU-intensive calculations across the data grid. Regardless of the primary resource requirements, Coherence sizing calculations are inherently empirical, in that there are so many permutations that a simple spreadsheet approach to sizing is rarely optimal (though it can provide a good starting estimate). So we typically recommend measuring actual resource usage (primarily CPU cycles, network bandwidth and memory) at a given load, and then extrapolating from those measurements. Of course there may be multiple types of load, and these may have varying degrees of correlation -- for example, an increased request rate may drive up the number of objects "pinned" in memory at any point, but the increase may be less than linear if those objects are naturally shared by concurrent requests. But for most reasonably-designed applications, a linear resource model will be reasonably accurate for most levels of scale.

    However, at extreme scale, sizing becomes a bit more complicated as certain cluster management operations -- while very infrequent -- become increasingly critical. This is because certain operations do not naturally tend to scale out. In a small cluster, sizing is primarily driven by the request rate, required cache size, or other application-driven metrics. In larger clusters (e.g. those with hundreds of cluster members), certain infrastructure tasks become intensive, in particular those related to members joining and leaving the cluster, such as introducing new cluster members to the rest of the cluster, or publishing the location of partitions during rebalancing. These tasks have a strong tendency to require all updates to be routed via a single member for the sake of cluster stability and data integrity. Fortunately that member is dynamically assigned in Coherence, so it is not a single point of failure, but it may still become a single point of bottleneck (until the cluster finishes its reconfiguration, at which point this member will have a similar load to the rest of the members).

    The most common cause of scaling issues in large clusters is disabling multicast (by configuring well-known addresses, aka WKA). This obviously impacts network usage, but it also has a large impact on CPU usage, primarily since the senior member must directly communicate certain messages with every other cluster member, and this communication requires significant CPU time. In particular, the need to notify the rest of the cluster about membership changes and corresponding partition reassignments adds stress to the senior member. Given that portions of the network stack may tend to be single-threaded (both in Coherence and the underlying OS), this may be even more problematic on servers with poor single-threaded performance.

    As a result of this, some extremely large clusters may be configured with a smaller number of partitions than ideal. This results in the size of each partition being increased. When a cache server fails, the other servers will use their fractional backups to recover the state of that server (and take over responsibility for their backed-up portion of that state).
    The finest granularity of this recovery is a single partition, and the single service thread cannot accept new requests during this recovery. Ordinarily, recovery is practically instantaneous (it is roughly equivalent to the time required to iterate over a set of backup backing map entries and move them to the primary backing map in the same JVM). But certain factors can increase this duration drastically (to several seconds): large partitions, sufficiently slow single-threaded CPU performance, many or expensive indexes to rebuild, etc. The solution of course is to mitigate each of those factors, but in many cases this may be challenging.

    Larger clusters also lead to the temptation to place more load on the available hardware resources, spreading CPU resources thin. As an example, while we've long been aware of how garbage collection can cause significant pauses, it usually isn't viewed as a major consumer of CPU (in terms of overall system throughput). Typically, the use of a concurrent collector allows greater responsiveness by minimizing pause times, at the cost of reducing system throughput. However, at a recent engagement, we were forced to turn off the concurrent collector and use a traditional parallel "stop the world" collector to reduce CPU usage to an acceptable level.

    In summary, there are some less obvious factors that may result in excessive CPU consumption in a larger cluster, so it is even more critical to test at full scale, even though allocating sufficient hardware may often be much more difficult for these large clusters.

    Read the article

  • Lean/Kanban *Inside* Software (i.e. WIP-Limits, Reducing Queues and Pull as Programming Techniques)

    - by Christoph
    Thinking about Kanban, I realized that the queuing theory behind the software development methodology obviously also applies to concurrent software. Now I'm looking for whether this kind of thinking is explicitly applied in some area. A simple example: we usually want to limit the number of threads to avoid cache thrashing (WIP limits). In the paper about the Disruptor pattern [1], one statement that I found interesting was that producers and consumers are rarely balanced, so when using queues, either consumers wait (queues are empty), or producers produce more than is consumed, resulting in either a full capacity-constrained queue or an unconstrained one blowing up and eating away memory. Both, in lean-speak, are waste, and increase lead time. Does anybody have examples of WIP limits, reducing/eliminating queues, pull, or single-piece flow being applied in programming? [1] http://disruptor.googlecode.com/files/Disruptor-1.0.pdf
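
    As one concrete, hedged illustration of a WIP limit in code (my own sketch, not from the paper): a capacity-constrained queue makes a fast producer block instead of letting work in progress grow without bound, which is exactly the pull-style backpressure described above:

        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;

        public class WipLimitDemo {
            public static void main(String[] args) throws InterruptedException {
                BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(4); // WIP limit = 4

                Thread producer = new Thread(() -> {
                    try {
                        for (int i = 0; i < 20; i++) queue.put(i); // blocks at 4 in flight
                    } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                });
                producer.start();

                for (int i = 0; i < 20; i++) {
                    Integer item = queue.take();                   // consumer pulls work
                    System.out.println("processed " + item);
                }
                producer.join();
            }
        }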

    Read the article

  • Any way for ubuntu to use more than one core of i7 cpu on my Asus laptop?

    - by G. He
    Newly installed Ubuntu 11.10 on a new Asus U46E laptop. /proc/cpuinfo correctly identified the CPU but shows only one core:

        processor       : 0
        vendor_id       : GenuineIntel
        cpu family      : 6
        model           : 42
        model name      : Intel(R) Core(TM) i7-2640M CPU @ 2.80GHz
        stepping        : 7
        cpu MHz         : 800.000
        cache size      : 4096 KB
        physical id     : 0
        siblings        : 1
        core id         : 0
        cpu cores       : 1
        apicid          : 0
        initial apicid  : 0
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 13
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc up arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 x2apic popcnt aes xsave avx lahf_lm ida arat epb xsaveopt pln pts dts tpr_shadow vnmi flexpriority ept vpid
        bogomips        : 5587.63
        clflush size    : 64
        cache_alignment : 64
        address sizes   : 36 bits physical, 48 bits virtual
        power management:

    I searched here and found an answer to one post suggesting removing the boot parameter 'nolapic'. However, on my particular laptop, Ubuntu won't boot without this nolapic parameter. Is there any way for Ubuntu to correctly utilize the full CPU power?

    Read the article

  • Accessing HttpRequest from Global.asax via a page

    - by Polymorphix
    I'm trying to get a property (ImpersonatePersonId) from a page in global.asax, but I get an HttpException saying 'Request is not available in this context'. I've been searching for some documentation on where in the pipeline the request is accessible, but as far as I can see, all Microsoft can produce as documentation is one-liners like "PostRequestHandlerExecute: Occurs when the ASP.NET event handler finishes execution.", which really doesn't give me much... I've tried placing the call in both PreRequestHandlerExecute and PostRequestHandlerExecute but with the same result. I wonder if anyone with experience in this would be so kind as to tell me where the Request object is available. My code from global.asax is below:

        ICanImpersonate page = HttpContext.Current.Handler as ICanImpersonate;
        ImpersonatedUser impersonatePerson = page != null ? page.ImpersonatePersonId : null;
        Response.Filter = new TagRewriter(Response,
            new TagProcessor(Context, impersonatePerson).Process);

    What I want to do is rewrite some HTML based on some request parameters.
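
    Not a definitive answer, but a hedged sketch of where the handler generally becomes visible: in the IIS integrated pipeline, Context.Handler is assigned during the MapRequestHandler step, so PostMapRequestHandler is the earliest event where the cast above can succeed (Request itself is available from BeginRequest onward):

        // In global.asax; Application_* methods are wired up by name.
        void Application_PostMapRequestHandler(object sender, EventArgs e)
        {
            // Context.Handler is non-null from this point in the pipeline on.
            ICanImpersonate page = Context.Handler as ICanImpersonate;
            // ... read page.ImpersonatePersonId and install the Response.Filter here
        }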

    Read the article

  • Calling a WCF from ASP.NET with same the single-signon user LogonUserIdentity

    - by Dennis Cheung
    I have an ASP.NET MVC page which calls WCF logic. The system is single sign-on using NTLM. Both the ASP.NET page and the WCF service will use the user identity to get user login information. Besides NTLM, I will also have forms-based authentication (with AD) in the same system. For the ASP.NET page it is simple: I can get it from HttpContext.Current.Request.LogonUserIdentity. However, it seems to be missing in the WCF service, which is called by the ASP.NET page, not by the browser. How do I configure things so the identity is passed from the ASP.NET page to the WCF service?
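
    A hedged sketch of the usual approach: turn on Windows credentials on the binding so the caller's NTLM/Kerberos token flows, then read it server-side (binding and service names are illustrative):

        // web.config of the WCF service:
        //   <basicHttpBinding>
        //     <binding name="winAuth">
        //       <security mode="TransportCredentialOnly">
        //         <transport clientCredentialType="Windows" />
        //       </security>
        //     </binding>
        //   </basicHttpBinding>

        using System.Security.Principal;
        using System.ServiceModel;

        [ServiceContract]
        public interface IWhoAmIService
        {
            [OperationContract]
            string WhoAmI();
        }

        public class WhoAmIService : IWhoAmIService
        {
            public string WhoAmI()
            {
                // The authenticated caller, as seen inside any service operation.
                WindowsIdentity caller = ServiceSecurityContext.Current.WindowsIdentity;
                return caller != null ? caller.Name : "anonymous";
            }
        }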

    Read the article

  • Install a Mirror without downloading all packages from official repository

    - by Sam
    Let me first explain the situation: I have a laptop which is connected to a wifi connection, and a desktop which cannot be connected to the internet (the modem is too far from it), and I want to install some software on the latter. (The two PCs are running Ubuntu 12.04, and are connected with an Ethernet cable.) I've already searched for a solution, but all I've found was the use of some tools that would have to be installed on the "internet-less PC" already (Keryx, APTonCD ...). What I want to do is to create a mirror on my laptop which contains the packages I already have on it (in /var/cache/apt/archives), and I don't want to download all the packages from the official repository; I actually don't need them. Can someone tell me if it is possible to do that? Thanks,
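
    A hedged sketch of one way this is commonly done, assuming dpkg-dev is installed on the laptop (the paths and the [trusted=yes] flag are illustrative):

        # On the laptop: index the .debs already sitting in APT's cache
        cd /var/cache/apt/archives
        dpkg-scanpackages . /dev/null | gzip -9 > Packages.gz

        # Copy the directory to the desktop (USB stick or the Ethernet link), then register it:
        echo 'deb [trusted=yes] file:///home/sam/archives ./' | sudo tee -a /etc/apt/sources.list
        sudo apt-get update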

    Read the article

  • Is MUMPS alive?

    - by ern0
    At my first workplace we were using Digital Standard MUMPS on a PDP-11 clone (TPA 440); then we switched to Micronetics Standard MUMPS running on a Hewlett-Packard machine, HP-UX 9, around the early 90's. Is MUMPS still alive? Is anyone using it? If yes, please write a few words about it: are you using it in character mode, does it act as a web server? etc. (I mean Caché, too.) If you've used it, what were your feelings about it? Did you like it?

    Read the article

  • Why is it taking so long to open the Ubuntu Help Center?

    - by Agmenor
    When I click on the Help Center icon in the 'System' menu, it takes more than a minute to launch the program. More than a minute, for a text-only program resembling a website! All my other programs work fine, and I saw this problem also on other computers. Is there a reason for this? Will it be fixed? I think it is an important issue for beginners. As a response to Scaine, the result of the command software-center is the following:

        Traceback (most recent call last):
          File "/usr/share/software-center/update-software-center-agent", line 72, in <module>
            db = xapian.WritableDatabase(pathname, xapian.DB_CREATE_OR_OVERWRITE)
          File "/usr/lib/python2.6/dist-packages/xapian.py", line 3195, in __init__
            _xapian.WritableDatabase_swiginit(self,_xapian.new_WritableDatabase(*args))
        xapian.DatabaseLockError: Unable to acquire database write lock on /home/agmenor/.cache/software-center/software-center-agent.db.tmp: already locked
        2011-01-11 19:57:24,495 - softwarecenter.app - INFO - software-center-agent finished with status 1

    Read the article

  • Ask the Readers: How Do You Score Free Wi-Fi While Traveling?

    - by Jason Fitzpatrick
    The holiday season is in full swing and that means many of us will be traveling–and searching for Wi-Fi nodes in the process. Help your fellow readers out by sharing your best Wi-Fi finding tips and tricks. Internet access is a necessity for the modern traveler but finding it is a bit more difficult than simply plugging into your home Wi-Fi. This week we want to hear all about your tips, tricks, and methods for scoring free Wi-Fi service in your interstate (and even international) travels. How do you keep the bounty of the internet flowing to your laptops, netbooks, tablets, and smart phones as you traverse the world? Sound off in the comments with your best tips and then check back on Friday for the What You Said roundup.

    Read the article

  • Advertising cookies ( ad sense ) on google chrome [closed]

    - by zack
    OK, first, I'm not sure if this is the right Stack Exchange site to ask on, but I'm not sure there's a better one, as the question is quite general. When I visit one site, and this site runs a Google AdSense campaign, I can see its ads on other sites. I suppose this is a cookie from that site, and the ads are then shown using the cookie. However, I removed this specific cookie and cleared the cache, and I can still see the same ad showing on other sites running AdSense. My question is: is there something wrong with Google Chrome? Is there a possibility that Google stores data within Google Chrome itself? If not, how else could they trace which ads to show?

    Read the article

  • In which directory to write game save files/data?

    - by Klaim
    I need a definite list of directories, one or more per platform, where to put game save files and other game-generated data, either based on the OS developer's specification, or because it is common usage where there is no recommendation. Please provide one answer per platform, with the different directories. Also, an example of how to get the directory location in C++ or C is best, as those are the languages where you'll have the hardest time. Locations:

      • Player's game data (saved games, config).
      • Shared game data (like high scores or config for all computer users).
      • Temporary game data (aka cache directory).
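
    Purely as a hedged illustration for the first location on Windows (not an answer covering every platform): the per-user roaming application-data folder is one conventional spot for saved games; the "MyGame" subfolder name is an assumption:

        /* Windows-only sketch in C; link against shell32. */
        #include <windows.h>
        #include <shlobj.h>
        #include <stdio.h>

        int main(void)
        {
            char path[MAX_PATH];
            /* CSIDL_APPDATA resolves to e.g. C:\Users\<name>\AppData\Roaming */
            if (SUCCEEDED(SHGetFolderPathA(NULL, CSIDL_APPDATA, NULL, 0, path)))
                printf("player save data: %s\\MyGame\n", path);
            return 0;
        }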

    Read the article

  • Caching strategies - LRU, MRU, Clock-Pro

    - by golgofa
    I am going to write a bachelor's science work on caching strategies and really can't find any links to specifications or full descriptions of some of them, only something like summaries from Wikipedia. Please help with some links on LRU and MRU caching, and on a newer one, Clock-Pro. Thanks a lot; all links are very useful to me. The purpose of the work is to compare different cache strategies to get more efficiency. It is based on a web application with EJB 2.0, so the algorithms will be implemented there, especially in ejbLoad() and ejbFindByPrimaryKey(). Also, one aspect of this application is that it will use a non-standard scheme of tables in the database; it is based on a metamodel. So, if you have any experience on this topic, I would be grateful for some of your knowledge.
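
    For a feel of the simplest of the three strategies, a minimal hedged LRU sketch in Java (Clock-Pro is considerably more involved; this is a textbook construction, not an implementation from any of the requested specs):

        import java.util.LinkedHashMap;
        import java.util.Map;

        // LinkedHashMap in access order evicts the least-recently-used entry
        // once the chosen capacity is exceeded.
        public class LruCache<K, V> extends LinkedHashMap<K, V> {
            private final int capacity;

            public LruCache(int capacity) {
                super(16, 0.75f, true);   // accessOrder = true -> LRU iteration order
                this.capacity = capacity;
            }

            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > capacity; // evict when over capacity
            }
        }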

    Read the article

  • How can I utilize or mimic Application OnStart in an HttpModule?

    - by Sailing Judo
    We are trying to remove the global.asax from our many web applications in favor of HttpModules that live in a common code base. This works really well for many application events such as BeginRequest and PostAuthentication, but there is no application-start event exposed in the HttpModule. I can think of a couple of smelly ways to overcome this deficit. For example, I can probably do this:

        protected virtual void BeginRequest(object sender, EventArgs e)
        {
            Log.Debug("Entered BeginRequest...");
            var app = HttpContext.Current.Application;
            var hasBeenSet = app["HasBeenSet"] == null ? false : true;
            if (!hasBeenSet)
            {
                app.Lock();
                // ... do app level code
                app.Add("HasBeenSet", true);
                app.Unlock();
            }
            // do regular begin request stuff ...
        }

    But this just doesn't smell right to me. What is the best way to invoke some application-begin logic without having a global.asax?
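
    One hedged alternative sketch: run the one-time logic from the module itself behind an atomic guard, avoiding Application state entirely (names are illustrative):

        using System;
        using System.Threading;
        using System.Web;

        public class StartupModule : IHttpModule
        {
            private static int started;   // 0 = not yet run, 1 = run

            public void Init(HttpApplication context)
            {
                // Init runs once per HttpApplication instance, and ASP.NET pools
                // several instances per AppDomain, so guard with a compare-exchange.
                if (Interlocked.CompareExchange(ref started, 1, 0) == 0)
                {
                    // ... application-start logic goes here, exactly once per AppDomain
                }
            }

            public void Dispose() { }
        }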

    Read the article
