Search Results

Search found 7128 results on 286 pages for 'httpcontext cache'.

Page 154 of 286

  • Building lirc package from source with patches

    - by joystick
    I'd like to build the latest lirc package for 12.04, with two patches from http://bit.ly/17779VW, to make the USB Infrared Toy v2 work. Running sudo apt-build source lirc gave me the following in /var/cache/apt-build/build:

        total 960
        drwxr-xr-x 10 root root   4096 Nov  5 07:07 lirc-0.9.0
        -rw-r--r--  1 root root 113909 May  5  2011 lirc_0.9.0-0ubuntu1.debian.tar.gz
        -rw-r--r--  1 root root   1553 May  5  2011 lirc_0.9.0-0ubuntu1.dsc
        -rw-r--r--  1 root root 857286 May  5  2011 lirc_0.9.0.orig.tar.bz2

    Running sudo apt-build build-source lirc then gave me "Some error occured building package", which is not really informative. I have successfully built patched lirc from source, but now I would like to get a .deb package. Where can I look at these 'some errors' in detail? Thank you, Alexei
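
    One way to surface the real failure behind apt-build's terse "Some error occured" message is to build the package by hand with the standard Debian tools and keep the full log. A hedged sketch (assumes build-essential is installed; the patch file names are placeholders):

        cd /var/cache/apt-build/build
        sudo apt-get build-dep lirc                 # install the build dependencies
        cd lirc-0.9.0
        patch -p1 < ../usb-irtoy-1.patch            # hypothetical patch file names
        patch -p1 < ../usb-irtoy-2.patch
        dpkg-buildpackage -us -uc 2>&1 | tee ../lirc-build.log   # full output saved for inspection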

  • Adoption of Exadata - Gartner research note

    - by Javier Puerta
    An independent research note by Gartner acknowledges that the Oracle Exadata Database Machine has achieved significant early adoption and acceptance of its database appliance value proposition. Analyst Merv Adrian looks at some of the main issues that IT professionals have solved as they assess or deploy the Oracle Exadata solution, including:
    - OLTP and DSS workload support
    - workload consolidation
    - increasing performance and scalability demands
    - data compression improvements
    Gartner reports that clients using Oracle Exadata experienced the following:
    - significant performance improvements
    - substantial amounts of cache memory, which greatly improves processing speed
    - Oracle Advanced Compression providing 2-4X data compression, delivering significant reductions in storage requirements and driving shorter times for backup operations; tables compressed with Oracle Advanced Compression automatically recompress as data is added or updated
    - one client specifically reported consolidating more than 400 applications onto the Oracle Exadata platform
    Read the full Gartner note.

  • How to install Chrome browser properly via command line?

    - by Bad Learner
    Setting up and managing an Ubuntu server all by myself in the coming months is part of my current plans. Hence, I am planning a switch from Windows to Linux (Ubuntu). I now need to get some grip on the command line, since I am used to Windows' GUI. Anyway... the most obvious start is installing apps on my computer, and I thought I should learn to do it via the CLI. This is what I did:

        $ apt-cache search chrome browser

    The results suggested that the proper term is "chrome-browser", so:

        $ sudo apt-get install chrome-browser

    And then "Y" for the Y/n question. But the installation threw errors. (I do not have my PC at hand, so I can't say what the error was exactly.) Does someone see anything wrong with the commands I issued? I am probably missing some command(s) in between, I think.
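
    For what it's worth, a hedged sketch of the two usual routes: the package in the Ubuntu archive is chromium-browser (the open-source build), while Google's own Chrome is installed from a .deb (the URL shown is Google's published one, but verify before use):

        # Chromium from the Ubuntu archive
        sudo apt-get update
        sudo apt-get install chromium-browser

        # or Google Chrome from Google's .deb
        wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
        sudo dpkg -i google-chrome-stable_current_amd64.deb
        sudo apt-get -f install    # resolve any missing dependencies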

  • SVG images grow and create scrollbars when on the server

    - by zuko
    Okay, so I embedded some SVG images into my page and opened it locally in Chrome, and it looked fine. I uploaded the same file to the server, looked at the page online, and the SVG images had grown by maybe 5-10% and were surrounded by scroll bars, as if they were overflowing. I think it probably has to do with my lack of knowledge of how SVG and embed work. What's really puzzling me, though, is that it works fine locally. (I have cache disabled.) Help? Thanks. Edit: the HTML:

        <embed type="image/svg+xml" src="content/web-logo.svg"/>

    There's no CSS on the image. I'm not sure if I was just wrong before or if I changed something I'm not aware of, but it doesn't appear to be actually changing size anymore. It just decides to stuff it into a scrollbox. pic: https://www.dropbox.com/s/wt1aufi7nl1fpyi/svg-problem.png

  • Scala and HttpClient: How do I resolve this error?

    - by Benjamin Metz
    I'm using Scala with Apache HttpClient, and working through examples. I'm getting the following error:

        /Users/benjaminmetz/IdeaProjects/JakartaCapOne/src/JakExamp.scala
        Error: line (16) error: overloaded method value execute with alternatives
          (org.apache.http.HttpHost,org.apache.http.HttpRequest)org.apache.http.HttpResponse <and>
          (org.apache.http.client.methods.HttpUriRequest,org.apache.http.protocol.HttpContext)org.apache.http.HttpResponse
        cannot be applied to (org.apache.http.client.methods.HttpGet,org.apache.http.client.ResponseHandler[String])
          val responseBody = httpclient.execute(httpget, responseHandler)

    Here is the code, with the line in question marked:

        import org.apache.http.client.ResponseHandler
        import org.apache.http.client.HttpClient
        import org.apache.http.client.methods.HttpGet
        import org.apache.http.impl.client.BasicResponseHandler
        import org.apache.http.impl.client.DefaultHttpClient

        object JakExamp {
          def main(args: Array[String]): Unit = {
            val httpclient: HttpClient = new DefaultHttpClient
            val httpget: HttpGet = new HttpGet("www.google.com")
            println("executing request..." + httpget.getURI)
            val responseHandler: ResponseHandler[String] = new BasicResponseHandler
            val responseBody = httpclient.execute(httpget, responseHandler)  // <-- error here
            println(responseBody)
            client.getConnectionManager.shutdown
          }
        }

    I can successfully run the example in Java...

  • Ubuntu 12.04 install problems: Installation Type screen shows no options [closed]

    - by Zaffiro
    Possible Duplicate: Only ‘sdb’ shows up when installing 12.04 on a new Dell Inspiron 14z. I am new to Linux and trying to install Ubuntu 12.04 on a new HP Pavilion DV6TQE (Ivy Bridge), and I am presented with the screen below, which I believe is incorrect. My disk is set up as a basic disk (not dynamic), and I tried both with a single C:\ partition and after creating a second partition in Windows, with no luck. Any ideas? UPDATE: I think I know what the problem is, but I don't know how to fix it yet: my hard drive has a 32 GB mSSD cache, which is listed as /dev/sdb, and for some reason this is causing the installation trouble.
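
    A quick, hedged way to confirm from the live session which device is the 32 GB mSSD cache before partitioning:

        sudo lsblk -d -o NAME,SIZE,MODEL    # one line per drive, with size and model
        sudo parted -l                      # partition tables of all detected disks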

  • Installing Ubuntu

    - by Mister AR
    I ran into a problem when installing Ubuntu 12.04 in a VMware system on my Windows 7 x64 machine: at the end of the installation, after retrieving files, it stopped and didn't move forward. Additionally, I got another problem when I wanted to install the packages I had updated; it gave me the error below:

        installArchives() failed: Error in function:
        Setting up libssl1.0.0 (1.0.1-4ubuntu5.2) ...
        locale: Cannot set LC_CTYPE to default locale: No such file or directory
        locale: Cannot set LC_MESSAGES to default locale: No such file or directory
        locale: Cannot set LC_ALL to default locale: No such file or directory
        debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
        dpkg: error processing libssl1.0.0 (--configure):
         subprocess installed post-installation script returned error exit status 1

    Please help me soon! Thank you all...
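
    The "config.dat is locked by another process" line usually means another package tool (or a stale leftover lock) is in the way. A hedged sketch for finding the holder and finishing the interrupted configuration (the locale name is an example):

        sudo fuser -v /var/cache/debconf/config.dat   # which process holds the lock?
        sudo dpkg --configure -a                      # finish half-configured packages
        sudo locale-gen en_US.UTF-8                   # regenerate the missing locale
        sudo dpkg-reconfigure locales                 # silences the LC_* warnings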

  • How can I add the version of a file to the file name with Tortoise-SVN?

    - by Eric Belair
    I would like to start giving unique names to "cache-able" files - i.e. *.css and *.js - in order to prevent caching, without requiring changes to the web-server settings (as is currently done in IIS). For instance, let's say I have a JavaScript file called global.js. Going forward, I would like it to have the name global.123.js when revision 123 is checked in. This would also require the following:
    - The previous version of the file - perhaps it was global.115.js - is removed when the file is deployed.
    - All references to the file are updated with the new file name.
    How do I go about doing this? What concerns do I need to consider?
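
    One common approach is to keep global.js under its plain name in the repository and generate the revision-stamped copy at deploy time. A hedged sketch of such a deploy step (paths and the sed pattern are illustrative; note that svnversion emits suffixed values like 123M for modified working copies):

        REV=$(svnversion -n .)                    # working-copy revision, e.g. 123
        rm -f deploy/global.*.js                  # drop the previously stamped copy
        cp js/global.js "deploy/global.$REV.js"
        sed -i "s/global\.[0-9][0-9]*\.js/global.$REV.js/g" deploy/*.html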

  • Cannot install nautilus elementary

    - by coklatua
    When I try apt-cache policy nautilus it shows this:

        Installed: 1:2.32.0-0ubuntu1-ppa1
        Candidate: 1:2.32.0-0ubuntu1-ppa1
        Version table:
       *** 1:2.32.0-0ubuntu1-ppa1 0
              100 /var/lib/dpkg/status
           1:2.32.0-0ubuntu6~ppa160 0
              500 http://ppa.launchpad.net/am-monkeyd/nautilus-elementary-ppa/ubuntu/ maverick/main amd64 Packages
           1:2.32.0-0ubuntu1.1 0
              500 http://archive.ubuntu.com/ubuntu/ maverick-updates/main amd64 Packages
           1:2.32.0-0ubuntu1 0
              500 http://archive.ubuntu.com/ubuntu/ maverick/main amd64 Packages

    As you can see, I already added the am-monkeyd PPA, but when I update & upgrade, nothing changes.
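
    The version table above may explain the behaviour: to dpkg, 1:2.32.0-0ubuntu1-ppa1 sorts higher than 1:2.32.0-0ubuntu6~ppa160, so upgrade considers the installed build the newest. A hedged sketch of checking the ordering and forcing the PPA version explicitly:

        dpkg --compare-versions "1:2.32.0-0ubuntu1-ppa1" gt "1:2.32.0-0ubuntu6~ppa160" && echo "installed sorts higher"
        sudo apt-get update
        sudo apt-get install nautilus=1:2.32.0-0ubuntu6~ppa160   # explicit downgrade to the PPA build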

  • Dynamic MMap ran out of room when trying to sudo apt-get anything

    - by user1610406
    I was having an error in Update Manager where it asks me to do a partial upgrade, and the upgrade fails. Now I can't sudo apt-get install anything. I tried to fix it, and now I can't sudo apt-get anything. Every time, I get this output:

        Reading package lists... Error!
        E: Dynamic MMap ran out of room. Please increase the size of APT::Cache-Limit. Current value: 25165824. (man 5 apt.conf)
        E: Error occurred while processing libuptimed0 (NewVersion1)
        E: Problem with MergeList /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_lucid_universe_binary-i386_Packages
        W: Unable to munmap
        E: The package lists or status file could not be parsed or opened.

    I have no idea why this is happening or how to fix it, and I fear that if I try something that probably doesn't work, it will make my problem worse. (Just for reference, I am currently running 10.04 (Lucid) on my machine.)
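
    The usual remedy is exactly what the message suggests - raise APT's cache limit - followed by clearing the half-merged package lists. A hedged sketch (the limit value and the conf file name are examples):

        echo 'APT::Cache-Limit "100000000";' | sudo tee /etc/apt/apt.conf.d/90cache-limit
        sudo rm -f /var/lib/apt/lists/*          # the 'partial' subdirectory will remain
        sudo apt-get update                      # rebuild the package lists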

  • How can I delay dropbox from starting, but not disable it?

    - by jgbelacqua
    When I log into my user account on Ubuntu 10.10, there is an unsatisfying delay before my system becomes usable. Even launching a terminal, I have to wait a few seconds before the bash prompt appears. During this start-up period, the top process seems to be dropbox. I'm not sure what it's doing exactly (functionality is still fine as far as I can see), but I do know it really doesn't need to be doing it while I'm waiting for the desktop to appear. (This is the standard Ubuntu with GNOME desktop, by the way.) What I would like is a static or even dependency-based delay for dropbox to start. It would be nice if it waited for, e.g., 10 minutes, or for my browser tabs to load and a typing pause. Then it could churn away on file status or cache-chewing, and I would be happy. Is there a way to do this? Thanks!
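
    A dependency-based trigger is harder, but a static delay is straightforward: wrap the autostart command in a sleep. A hedged sketch (the file name matches the usual Dropbox autostart entry on GNOME; 600 seconds is an example):

        # in ~/.config/autostart/dropbox.desktop, change the Exec line to:
        #   Exec=sh -c "sleep 600 && dropbox start -i"
        sed -i 's|^Exec=.*|Exec=sh -c "sleep 600 \&\& dropbox start -i"|' \
            ~/.config/autostart/dropbox.desktop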

  • Installing Cairo to get FastRWeb working for R gWidgetsWWW2 -pkg

    - by hhh
    I want to install FastRWeb for R, but it requires Cairo. How can I install Cairo? The build fails with:

        compilation terminated.
        make: *** [xlib-backend.o] Error 1
        ERROR: compilation failed for package 'Cairo'
        * removing '/home/xfz/R/i686-pc-linux-gnu-library/2.13/Cairo'
        ERROR: dependency 'Cairo' is not available for package 'FastRWeb'
        * removing '/home/xfz/R/i686-pc-linux-gnu-library/2.13/FastRWeb'
        The downloaded packages are in '/tmp/Rtmpno8hhF/downloaded_packages'
        Warning messages:
        1: In install.packages("FastRWeb", , "http://rforge.net/", type = "source") :
          installation of package 'Cairo' had non-zero exit status
        2: In install.packages("FastRWeb", , "http://rforge.net/", type = "source") :
          installation of package 'FastRWeb' had non-zero exit status

    I cannot find which Cairo is meant here; it is apparently some library, and a search turns up 16 entries:

        $ apt-cache search libcairo | wc
             16     132     996

    Perhaps related:
    http://stackoverflow.com/questions/9826128/r-making-r-rook-program-into-rscript-program-r
    http://stackoverflow.com/questions/9812547/r-gui-vizualiser-with-command-line-access-browser-based-letting-users-to-s
    Some related packages: FastRWeb and Rserve for the gWidgetsWWW2 pkg.
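
    The xlib-backend.o failure is typically missing development headers: the R Cairo package compiles against the system cairo (and X toolkit) headers. A hedged sketch for Ubuntu/Debian:

        sudo apt-get install libcairo2-dev libxt-dev
        # then retry inside R:
        #   install.packages("Cairo")
        #   install.packages("FastRWeb", , "http://rforge.net/", type = "source")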

  • How to determine the path of the current web site

    - by Velika2
    I wanted to create a function that would return the path of the current web site. This is what I thought was working while running in the IDE:

        Public Shared Function WebsiteAbsoluteBaseUrl() As String
            Dim RequestObject As System.Web.HttpRequest = HttpContext.Current.Request
            Return "http://" & RequestObject.Url.Host & ":" & _
                   RequestObject.Url.Port & "/" & _
                   RequestObject.Url.Segments(1)
        End Function

    Does this seem like it should work? Is there a more straightforward way?

  • Writing an image to ResponseBase.OutputStream does not work anymore with ASP.NET MVC2 RC2

    - by labilbe
    The following code worked nicely in ASP.NET MVC 1:

        public class ImageResult : ActionResult
        {
            public Image Image { get; set; }

            public override void ExecuteResult(ControllerContext context)
            {
                if (Image == null)
                {
                    return;
                }
                HttpResponseBase response = context.HttpContext.Response;
                response.ContentType = "image/png";
                Image.Save(response.OutputStream, ImageFormat.Png);
            }
        }

    I spent some time searching for answers but didn't find any. The error thrown is: OutputStream is not available when a custom TextWriter is used.

  • How can I determine which GPU card is running at PCI Express 2.0 x16 & which is using x8?

    - by M. Tibbits
    Is there a way to determine the speed of the PCI Express connection to a specific card? I have three cards plugged in:
    - two Nvidia GTX 480's (one at x16 and one at x8)
    - one Nvidia GTX 460 running at x8
    Is there some way, either by a function call in C or an option to lspci, that I can determine the bus speed of the graphics cards? When I only use one of the cards for my CUDA program, I'd like to use the one which is running at x16. Thanks! Note: lspci -vvv dumps out the following for the two GTX 480s; I don't see any differences that pertain to bus speed.

        03:00.0 VGA compatible controller: nVidia Corporation Device 06c0 (rev a3)
                Subsystem: eVga.com. Corp. Device 1480
                Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
                Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
                Latency: 0
                Interrupt: pin A routed to IRQ 16
                Region 0: Memory at d4000000 (32-bit, non-prefetchable) [size=32M]
                Region 1: Memory at b0000000 (64-bit, prefetchable) [size=128M]
                Region 3: Memory at bc000000 (64-bit, prefetchable) [size=64M]
                Region 5: I/O ports at df00 [disabled] [size=128]
                [virtual] Expansion ROM at b8000000 [disabled] [size=512K]
                Capabilities: <access denied>
                Kernel driver in use: nvidia
                Kernel modules: nvidia, nvidiafb, nouveau

        03:00.1 Audio device: nVidia Corporation Device 0be5 (rev a1)
                Subsystem: eVga.com. Corp. Device 1480
                Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
                Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
                Interrupt: pin B routed to IRQ 5
                Region 0: [virtual] Memory at d7ffc000 (32-bit, non-prefetchable) [disabled] [size=16K]
                Capabilities: <access denied>

        04:00.0 VGA compatible controller: nVidia Corporation Device 06c0 (rev a3)
                Subsystem: eVga.com. Corp. Device 1480
                Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
                Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
                Latency: 0
                Interrupt: pin A routed to IRQ 16
                Region 0: Memory at dc000000 (32-bit, non-prefetchable) [size=32M]
                Region 1: Memory at c0000000 (64-bit, prefetchable) [size=128M]
                Region 3: Memory at cc000000 (64-bit, prefetchable) [size=64M]
                Region 5: I/O ports at cf00 [size=128]
                [virtual] Expansion ROM at c8000000 [disabled] [size=512K]
                Capabilities: <access denied>
                Kernel driver in use: nvidia
                Kernel modules: nvidia, nvidiafb, nouveau

        04:00.1 Audio device: nVidia Corporation Device 0be5 (rev a1)
                Subsystem: eVga.com. Corp. Device 1480
                Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
                Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
                Latency: 0, Cache Line Size: 64 bytes
                Interrupt: pin B routed to IRQ 5
                Region 0: Memory at dfffc000 (32-bit, non-prefetchable) [size=16K]
                Capabilities: <access denied>

    And the only differences I see relate specifically to the memory mapping:

        myComputer:~> diff card1 card2
        3c3
        < Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        ---
        > Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        7,11c7,11
        < Region 0: Memory at d4000000 (32-bit, non-prefetchable) [size=32M]
        < Region 1: Memory at b0000000 (64-bit, prefetchable) [size=128M]
        < Region 3: Memory at bc000000 (64-bit, prefetchable) [size=64M]
        < Region 5: I/O ports at df00 [disabled] [size=128]
        < [virtual] Expansion ROM at b8000000 [disabled] [size=512K]
        ---
        > Region 0: Memory at dc000000 (32-bit, non-prefetchable) [size=32M]
        > Region 1: Memory at c0000000 (64-bit, prefetchable) [size=128M]
        > Region 3: Memory at cc000000 (64-bit, prefetchable) [size=64M]
        > Region 5: I/O ports at cf00 [size=128]
        > [virtual] Expansion ROM at c8000000 [disabled] [size=512K]
        18c18
        < Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        ---
        > Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        19a20
        > Latency: 0, Cache Line Size: 64 bytes
        21c22
        < Region 0: [virtual] Memory at d7ffc000 (32-bit, non-prefetchable) [disabled] [size=16K]
        ---
        > Region 0: Memory at dfffc000 (32-bit, non-prefetchable) [size=16K]
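
    The negotiated link speed and width live in the PCI Express capability block, which lspci hides from non-root users (hence the "Capabilities: <access denied>" lines above). A hedged sketch:

        sudo lspci -vv -s 03:00.0 | grep -E "LnkCap|LnkSta"
        sudo lspci -vv -s 04:00.0 | grep -E "LnkCap|LnkSta"
        # LnkSta reports the live negotiation, e.g. "Speed 5GT/s, Width x16"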

  • #OOW 2012: IaaS, Private Cloud, Multitenant Database, and X3H2M2

    - by Eric Bezille
    The title of this post is a summary of the four announcements made by Larry Ellison today during the opening session of Oracle OpenWorld 2012... To learn what's behind X3H2M2, you will have to wait a little, as I will go in order, beginning with the IaaS - Infrastructure as a Service - announcement.

    Oracle IaaS goes Public... and Private... Starting in 2004 with Fusion development, Oracle Cloud was launched last year to provide not only SaaS applications, based on standard development, but also the underlying PaaS required to build the specifics and the required interconnections between applications, in and outside of the Cloud. Still, to cover the end-to-end Cloud Services spectrum, we had to provide an Infrastructure as a Service, leveraging our servers, storage, OS, and virtualization technologies, all "Engineered Together". This Cloud Infrastructure was already available for our customers to rapidly build their own Private Cloud, either on SPARC/Solaris or x86/Linux... The second announcement made today takes that proposition a big step further: for cautious customers (like banks, or sensitive industries) who would like to benefit from the Cloud value of "as a Service" but don't want their data out in the Cloud, we propose to operate the same systems that provide our Public Cloud Infrastructure - Exadata, Exalogic & SuperCluster - behind their firewall, in a Private Cloud model.

    Oracle 12c Multitenant Database. This is also a major announcement made today on what's coming with Oracle Database 12c: the ability to consolidate multiple databases with no extra additional cost, especially in terms of memory needed on the server node, which is often THE limiting factor for consolidation. The principle can be compared to Solaris Zones: you have a Database Container, which "owns" the memory and database background processes, and "Pluggable" Databases inside that Database Container. This particular feature is a strong compelling event to evaluate Oracle Database 12c rapidly once it becomes available, as it is a major step forward into true database consolidation with multitenancy on a shared (optimized) infrastructure.

    X3H2M2, enabling the new Exadata X3 in-Memory Database. Here we are: X3H2M2 stands for X3 (the new version of Exadata, also announced today) Heuristic Hierarchical Mass Memory, providing the capability to keep most if not all of the data in the memory cache hierarchy. Of course, this is the major software enhancement of the new X3 Exadata machine, but as this is software, our current customers will be able to benefit from it on their existing systems by upgrading to the new release. But that's not the only thing that we did with X3; at the same time we upgraded everything: the CPUs, adding more cores per server node (16 vs. 12, with the arrival of Intel E5 / Sandy Bridge); the memory, now 512GB per node as well; and the new Flash Fire card, bringing up to 22 TB of flash cache. All of this - 4TB of RAM plus 22TB of flash - is used cleverly, not only for reads but also for writes, by the X3H2M2 algorithm... making a very big difference compared to a traditional storage flash extension. And what do those extra performances bring to an already very efficient system? Double the performance of the fastest storage array on the market today (including flash), while dividing your storage price by 10 at the same time... Something to consider closely these days... Especially since we also announced the availability of a new Exadata X3-2 eighth rack: a good starting point.

    As you have seen, a major opening for this year again, with true innovation. But that was not the only thing we saw today: before Larry's talk, Fujitsu introduced in more depth the upcoming new SPARC processor that they are co-developing with us. And as such, Andrew Mendelsohn - Senior Vice President, Database Server Technologies - came on stage to explain that the next step after I/O optimization for the Database with Exadata was to accelerate the Database at the execution level, by bringing functions into the SPARC processor silicon. All in all, to process more and more data... The big theme of the day... and of the Oracle User Groups conferences that were also happening today, where I had the opportunity to attend some interesting sessions on practical use cases of Big Data, one on finance and fraud profiling, and the other on a practical deployment of Oracle Exalytics for data analytics.

    In conclusion, one picture to try to convey the size of Oracle OpenWorld... and you can understand why, with such rich content... and this is only the first day!

  • All video thumbnails fail to be generated

    - by Forage
    Not a single video thumbnail is being generated and shown in Nautilus. The folder ~/.cache/thumbnails/fail/gnome-thumbnail-factory/ keeps getting filled for all of them. I tried removing all the thumbnails from that folder, reinstalling the gstreamer-plugins-... and totem packages, and changing the thumbnail settings (Always, 4 GB) in the Preview section of the Nautilus preferences, all to no avail. Some recommend installing packages like libxine1 and ffmpegthumbnailer, but this didn't solve it either. There used to be an .xsession-errors log file generated in the home folder in previous versions of Ubuntu, but that doesn't seem to be the case any more. I'm running Ubuntu 12.10 GNOME Remix (x64) with the GNOME3 PPA packages installed. What could be the cause of the problem, and how can I fix it?
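
    Two hedged things to try: clear the cached failures (a video already recorded under fail/ will not be retried), then run the thumbnailer by hand on one file to see its actual error output (the file name below is an example):

        rm -rf ~/.cache/thumbnails/fail/gnome-thumbnail-factory
        nautilus -q                                        # quit Nautilus so it restarts cleanly
        totem-video-thumbnailer -v some-video.avi /tmp/thumb.png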

  • how to check session upon start in masterpage or in global.asax

    - by user572276
    I am new to ASP.NET forms authentication and sessions. I would like to know how to save a session in the master page or in global.asax, how to clear the session, and how to better handle session timeout by redirecting to a page. These are my web.config session settings:

        <sessionState mode="InProc" cookieless="false" timeout="1"></sessionState>

    And this is the code in my master page:

        if (Request.Url.AbsolutePath.EndsWith("SessionExpired.aspx", StringComparison.InvariantCultureIgnoreCase))
        {
            HtmlMeta meta = new HtmlMeta();
            meta.HttpEquiv = "Refresh";
            meta.Content = "7; URL=./Login.aspx";
            Page.Header.Controls.Add(meta);
        }
        else
            HttpContext.Current.Response.AppendHeader("Refresh",
                Convert.ToString((Session.Timeout * 60)) + "; Url=./Public/SessionExpired.aspx");

  • CPU Usage in Very Large Coherence Clusters

    - by jpurdy
    When sizing Coherence installations, one of the complicating factors is that these installations (by their very nature) tend to be application-specific, with some being large, memory-intensive caches, with others acting as I/O-intensive transaction-processing platforms, and still others performing CPU-intensive calculations across the data grid. Regardless of the primary resource requirements, Coherence sizing calculations are inherently empirical, in that there are so many permutations that a simple spreadsheet approach to sizing is rarely optimal (though it can provide a good starting estimate). So we typically recommend measuring actual resource usage (primarily CPU cycles, network bandwidth and memory) at a given load, and then extrapolating from those measurements. Of course there may be multiple types of load, and these may have varying degrees of correlation -- for example, an increased request rate may drive up the number of objects "pinned" in memory at any point, but the increase may be less than linear if those objects are naturally shared by concurrent requests. But for most reasonably-designed applications, a linear resource model will be reasonably accurate for most levels of scale.

    However, at extreme scale, sizing becomes a bit more complicated as certain cluster management operations -- while very infrequent -- become increasingly critical. This is because certain operations do not naturally tend to scale out. In a small cluster, sizing is primarily driven by the request rate, required cache size, or other application-driven metrics. In larger clusters (e.g. those with hundreds of cluster members), certain infrastructure tasks become intensive, in particular those related to members joining and leaving the cluster, such as introducing new cluster members to the rest of the cluster, or publishing the location of partitions during rebalancing. These tasks have a strong tendency to require all updates to be routed via a single member for the sake of cluster stability and data integrity. Fortunately that member is dynamically assigned in Coherence, so it is not a single point of failure, but it may still become a single point of bottleneck (until the cluster finishes its reconfiguration, at which point this member will have a similar load to the rest of the members).

    The most common cause of scaling issues in large clusters is disabling multicast (by configuring well-known addresses, aka WKA). This obviously impacts network usage, but it also has a large impact on CPU usage, primarily since the senior member must directly communicate certain messages with every other cluster member, and this communication requires significant CPU time. In particular, the need to notify the rest of the cluster about membership changes and corresponding partition reassignments adds stress to the senior member. Given that portions of the network stack may tend to be single-threaded (both in Coherence and the underlying OS), this may be even more problematic on servers with poor single-threaded performance.

    As a result of this, some extremely large clusters may be configured with a smaller number of partitions than ideal. This results in the size of each partition being increased. When a cache server fails, the other servers will use their fractional backups to recover the state of that server (and take over responsibility for their backed-up portion of that state). The finest granularity of this recovery is a single partition, and the single service thread can not accept new requests during this recovery. Ordinarily, recovery is practically instantaneous (it is roughly equivalent to the time required to iterate over a set of backup backing map entries and move them to the primary backing map in the same JVM). But certain factors can increase this duration drastically (to several seconds): large partitions, sufficiently slow single-threaded CPU performance, many or expensive indexes to rebuild, etc. The solution of course is to mitigate each of those factors but in many cases this may be challenging.

    Larger clusters also lead to the temptation to place more load on the available hardware resources, spreading CPU resources thin. As an example, while we've long been aware of how garbage collection can cause significant pauses, it usually isn't viewed as a major consumer of CPU (in terms of overall system throughput). Typically, the use of a concurrent collector allows greater responsiveness by minimizing pause times, at the cost of reducing system throughput. However, at a recent engagement, we were forced to turn off the concurrent collector and use a traditional parallel "stop the world" collector to reduce CPU usage to an acceptable level.

    In summary, there are some less obvious factors that may result in excessive CPU consumption in a larger cluster, so it is even more critical to test at full scale, even though allocating sufficient hardware may often be much more difficult for these large clusters.

  • Black screen after selecting Install Ubuntu on a Dell Inspiron 14z

    - by Rodrigo
    I'm trying to set up a dual boot on my Dell Inspiron 14z notebook, but I always get a black screen after selecting Install Ubuntu. I've tried adding nomodeset and acpi_osi="Linux" to the boot options, but it doesn't change anything. The hardware:
    - 3rd Generation Intel® Core™ i7-3517U processor (4M Cache, up to 3.0 GHz)
    - 8GB Dual Channel DDR3 SDRAM at 1600MHz
    - 500GB 5400 RPM SATA HDD and 32GB mSATA SSD
    - AMD Radeon HD7570M 1GB
    This question isn't a duplicate; I've already tried all the tips in the following question: My computer boots to a black screen, what options do I have to fix it?

  • Any way for Ubuntu to use more than one core of the i7 CPU on my Asus laptop?

    - by G. He
    Newly installed Ubuntu 11.10 on a new Asus U46E laptop. /proc/cpuinfo correctly identifies the CPU but shows only one core:

        processor       : 0
        vendor_id       : GenuineIntel
        cpu family      : 6
        model           : 42
        model name      : Intel(R) Core(TM) i7-2640M CPU @ 2.80GHz
        stepping        : 7
        cpu MHz         : 800.000
        cache size      : 4096 KB
        physical id     : 0
        siblings        : 1
        core id         : 0
        cpu cores       : 1
        apicid          : 0
        initial apicid  : 0
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 13
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc up arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 x2apic popcnt aes xsave avx lahf_lm ida arat epb xsaveopt pln pts dts tpr_shadow vnmi flexpriority ept vpid
        bogomips        : 5587.63
        clflush size    : 64
        cache_alignment : 64
        address sizes   : 36 bits physical, 48 bits virtual
        power management:

    I searched here and found an answer to another post suggesting removing the 'nolapic' boot parameter. However, on my particular laptop, Ubuntu won't boot without this nolapic parameter. Is there any way for Ubuntu to correctly utilize the full CPU power?
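
    Since the machine only boots with nolapic, the kernel cannot use the local APICs and so brings up a single CPU; the real question is why APIC initialisation fails. A hedged sketch of checks before experimenting with alternatives such as "noapic" (and a BIOS update):

        cat /proc/cmdline           # confirm which boot flags are in effect
        nproc                       # how many CPUs the kernel brought online
        dmesg | grep -i apic        # look for the reason SMP bring-up failed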

  • Deserialization on client side in Domain Service

    - by ankit
    I have 2 classes: Person and Contact. The Person class has a property named ContactNumber which returns the Contact type, and this property is marked as a DataMember for serialization. I have marked the Contact type as a DataContract. On the client side I am able to get the values, but when I try to insert a value and then submit, I get the following exception:

        Failed to deserialize change-set. Failed to convert value of type 'Dictionary`2' to type 'Contact'

    Stack trace:

        at System.Web.Ria.DataServiceSubmitRequest.GetChangeSet(DomainService domainService)
        at System.Web.Ria.DataServiceSubmitRequest.Invoke(DomainService domainService)
        at System.Web.Ria.DataService.System.Web.IHttpHandler.ProcessRequest(HttpContext context)

    Can anyone give me the solution?

  • My colleague can't visit our website through her provider after a long downtime

    - by Peter Westerlund
    We did a front-page update some days ago that caused the site to crash. The site was down for several hours. After troubleshooting, we concluded that we needed to cache more content; the site had been running too many queries. After solving that and rebooting the server, we here in Sweden and Norway were again able to visit the site, but a colleague in Tunisia couldn't. It seems to work from another internet provider, but not her own. What could have happened? And what should we do? Edit: I should add that she is able to visit the site through a tunnel at anonymouse.org.
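
    Since the tunnel works, the failure is specific to her provider's path or DNS; a stale resolver cache after a long outage is a common culprit. A hedged sketch of checks to run from her machine (the hostname is a placeholder):

        dig www.example-site.com              # what her ISP's resolver returns
        dig www.example-site.com @8.8.8.8     # compare with a public resolver
        traceroute www.example-site.com       # where along the path packets stop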

  • Which hidden files and directories do I need?

    - by Sammy Black
    In a previous question, I explained my situation/plan: back up the home directory to an external drive, reformat the laptop drive, install 14.04, and put the home directory back. (It hasn't happened yet because I can't seem to find the downtime, in case things aren't working right away.) It occurred to me that maybe I don't want all of those hidden files and directories (e.g. .local/share/ubuntuone/syncdaemon/, .cache/google-chrome/, etc.). Just judging by how long the copy takes, I can tell that some of these hidden directories are large. Question: are there any hidden directories that I obviously don't need/want when I have the laptop running an updated distribution? Will they cause conflicts? (I plan on copying the backed-up directory tree back onto the laptop with the --no-clobber option.)
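
    One pragmatic, hedged approach: back everything up, but restore selectively with rsync and skip the big cache directories, since applications recreate them on first run (the paths and exclude list below are illustrative, not exhaustive):

        rsync -a --ignore-existing \
            --exclude='.cache/' \
            --exclude='.thumbnails/' \
            --exclude='.local/share/Trash/' \
            /media/backup/home/user/  /home/user/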
