Search Results

Search found 5046 results on 202 pages for 'satoru logic'.


  • Server Error in '/' Application. - The resource cannot be Found.

    - by Bigced_21
    I am new to ASP.NET MVC 2 and I do not understand why I am receiving this error. Is there something missing that I'm not referencing correctly? I'm trying to create a simple jQuery autocomplete search textbox and then view the details of the person that I select. My HomeController:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.Mvc;
        using System.Web.Mvc.Ajax;
        using DOC_Kools.Models;

        namespace DOC_Kools.Controllers
        {
            public class HomeController : Controller
            {
                private KOOLSEntities _dataModel = new KOOLSEntities();

                // GET: /Home/
                public ActionResult Index()
                {
                    ViewData["Message"] = "Welcome to ASP.NET MVC!";
                    return View();
                }

                // GET: /Home/
                public ActionResult getAjaxResult(string q)
                {
                    string searchResult = string.Empty;
                    var offenders = (from o in _dataModel.OffenderSet
                                     where o.LastName.Contains(q)
                                     orderby o.LastName
                                     select o).Take(10);
                    foreach (Offender o in offenders)
                    {
                        searchResult += string.Format("{0}|r\n", o.LastName);
                    }
                    return Content(searchResult);
                }

                [AcceptVerbs(HttpVerbs.Post)]
                public ActionResult Search(string searchTerm)
                {
                    if (searchTerm == string.Empty)
                    {
                        return View();
                    }
                    else
                    {
                        // if the search contains only one result return details, otherwise a list
                        var offenders = from o in _dataModel.OffenderSet
                                        where o.LastName.Contains(searchTerm)
                                        orderby o.LastName
                                        select o;
                        if (offenders.Count() == 0) { return View("not found"); }
                        if (offenders.Count() > 1) { return View("List", offenders); }
                        else { return RedirectToAction("Details", new { id = offenders.First().SPN }); }
                    }
                }

                // GET: /Home/Details/5
                public ActionResult Details(int id) { return View(); }

                // GET: /Home/Create
                public ActionResult Create() { return View(); }

                // POST: /Home/Create
                [AcceptVerbs(HttpVerbs.Post)]
                public ActionResult Create(FormCollection collection)
                {
                    try
                    {
                        // TODO: Add insert logic here
                        return RedirectToAction("Index");
                    }
                    catch { return View(); }
                }

                // GET: /Home/Edit/5
                public ActionResult Edit(int id) { return View(); }

                // POST: /Home/Edit/5
                [AcceptVerbs(HttpVerbs.Post)]
                public ActionResult Edit(int id, FormCollection collection)
                {
                    try
                    {
                        // TODO: Add update logic here
                        return RedirectToAction("Index");
                    }
                    catch { return View(); }
                }

                public ActionResult About() { return View(); }
            }
        }

    My route registration (Global.asax.cs):

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.Mvc;
        using System.Web.Routing;

        namespace DOC_Kools
        {
            // Note: For instructions on enabling IIS6 or IIS7 classic mode,
            // visit http://go.microsoft.com/?LinkId=9394801
            public class MvcApplication : System.Web.HttpApplication
            {
                public static void RegisterRoutes(RouteCollection routes)
                {
                    routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

                    routes.MapRoute(
                        "Default",                      // Route name
                        "{controller}/{action}/{id}",   // URL with parameters
                        new { controller = "Home", action = "Index", id = "" }  // Parameter defaults
                    );

                    routes.MapRoute(
                        "OffenderSearch",
                        "Offenders/Search/{searchTerm}",
                        new { controller = "Home", action = "Index", searchTerm = "" }
                    );

                    routes.MapRoute(
                        "OffenderAjaxSearch",
                        "Offenders/getAjaxResult/",
                        new { controller = "Home", action = "getAjaxResult" }
                    );
                }

                protected void Application_Start()
                {
                    AreaRegistration.RegisterAllAreas();
                    RegisterRoutes(RouteTable.Routes);
                }
            }
        }

    And the Index.aspx view:

        <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
            Inherits="System.Web.Mvc.ViewPage<DOC_Kools.Models.Offender>" %>

        $(document).ready(function() {
            $("#searchTerm").autocomplete("/Offenders/getAjaxResult/");
        });

        Home Page
        <%= Html.Encode(ViewData["Message"]) %>

        <h2>Look for an offender</h2>
        <form action="/Offenders/Search" method="post" id="searchForm">
            <input type="text" name="searchTerm" id="searchTerm" value="" size="10" maxlength="30" />
            <input type="submit" value="Search" />
        </form>
        <br />

    What do I have to do for the search textbox to display on the index page, and what else do I have to do for the autocomplete to function correctly? I have autocomplete.js and jquery.js added to the Index.aspx view. Any help will be appreciated so that I can get this working. Thanks!
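    One possible source of "The resource cannot be found" with these routes is ordering and defaults: "Offenders/Search" is matched first by the generic "{controller}/{action}/{id}" route, which then looks for an OffendersController that doesn't exist, and the "OffenderSearch" route above defaults to action = "Index" rather than "Search". A minimal sketch of a registration that sends those URLs to HomeController.Search; this is an assumption about the intent, not a confirmed fix:

        // Sketch of RegisterRoutes in Global.asax.cs (assumption: /Offenders/Search
        // is meant to reach HomeController.Search). Specific routes are registered
        // before the catch-all "Default" route so they get a chance to match first.
        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

            routes.MapRoute(
                "OffenderSearch",
                "Offenders/Search/{searchTerm}",
                new { controller = "Home", action = "Search", searchTerm = UrlParameter.Optional }
            );

            routes.MapRoute(
                "OffenderAjaxSearch",
                "Offenders/getAjaxResult",
                new { controller = "Home", action = "getAjaxResult" }
            );

            routes.MapRoute(
                "Default",
                "{controller}/{action}/{id}",
                new { controller = "Home", action = "Index", id = "" }
            );
        }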

    Read the article

  • Exposing BL as WCF service

    - by Oren Schwartz
    I'm working on a middle-tier project that encapsulates the business logic of a product deployed in a LAN (it uses a DAL layer and serves an ASP.NET web application server). The BL is a set of services and data objects that are invoked on user actions. At present the DAL runs as a separate application that the BL uses, but the BL itself is consumed by the web application as a DLL. The DAL and the web application are deployed on different servers inside the organization, and since the BL DLL is consumed by the web application, it resides on the same server as the web application. The worst thing about exposing the BL as a DLL is that we have lost track of what we expose. Deployment is not a big issue, since product versions are mostly deployed together. Would you recommend migrating from a DLL to a WCF service? If so, why? Do you know anyone who has had a similar experience?
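    For scale, a minimal sketch of what putting one BL operation behind an explicit WCF contract can look like. The names (IOrderService, OrderDto) and the self-hosting are placeholders for illustration, not taken from the product described above; the point is that the [ServiceContract] interface becomes the single place where "what we expose" is declared, which is exactly the visibility that is lost when the BL ships as a plain DLL.

        using System;
        using System.Runtime.Serialization;
        using System.ServiceModel;

        // Placeholder data contract; a real one would mirror an existing BL data object.
        [DataContract]
        public class OrderDto
        {
            [DataMember] public int Id { get; set; }
            [DataMember] public string Status { get; set; }
        }

        // The contract is the explicit, versionable surface of the BL.
        [ServiceContract]
        public interface IOrderService
        {
            [OperationContract]
            OrderDto GetOrder(int orderId);
        }

        public class OrderService : IOrderService
        {
            public OrderDto GetOrder(int orderId)
            {
                // Delegate to the existing BL/DAL classes here instead of reimplementing logic.
                return new OrderDto { Id = orderId, Status = "Unknown" };
            }
        }

        class Program
        {
            static void Main()
            {
                // Self-hosting for illustration; IIS/WAS hosting works just as well.
                using (var host = new ServiceHost(typeof(OrderService), new Uri("http://localhost:8731/orders")))
                {
                    host.AddServiceEndpoint(typeof(IOrderService), new BasicHttpBinding(), "");
                    host.Open();
                    Console.WriteLine("Service running. Press Enter to stop.");
                    Console.ReadLine();
                }
            }
        }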

    Read the article

  • Linux DNS Multi tenant

    - by spicyramen
    I need to set up a multi-tenant DNS solution on a Linux DNS server. I currently serve multiple companies: Company ABC, Company XYZ, etc. I need to create a) forward zones and b) reverse zones. I can easily create a forward zone for the domain abc.com. The challenge is that my customers' components all share the same IP address, so if I create the reverse zones I end up with something like this: abc.com 1.1.1.1 host.abc.com; xyz.com 1.1.1.1 host.xyz.com. A forward lookup on host.abc.com works fine, but if I do a reverse lookup on 1.1.1.1 I get round-robin responses: host.abc.com, host.xyz.com, host.abc.com. Any ideas? I want to add logic to the DNS configuration so that reverse lookups are answered based on the source machine and return the right hostname. Workaround: create multiple DNS servers, but this is not scalable.

    Read the article

  • 8051 MCU debug board function

    - by b-gen-jack-o-neill
    Hi, in school I have written many programs for an 8051-compatible CPU, but I never actually knew how our "debug" boards worked. We test our programs on special boards that let you very simply load a program into the CPU via the PC serial port. How does that work? I know there is a chip that adjusts the signal level from the PC serial port to TTL logic and is then connected to the serial line of the 8051, but that's all I know. Even my teacher doesn't know how it works, since the school bought it all. I suspect there is some program already running on the 8051 that handles the communication and stores your program into memory; am I right? But how can you make the 8051 execute instructions from a location other than ROM? Because, if I am right, you cannot write to ROM with any instruction, and the 8051 can only fetch instructions from ROM?

    Read the article

  • VMWare ESX Updates - Which to Apply?

    - by Aaron Alton
    Wondering what more experienced ESX admins typically do... I just brought our ESX hosts up to 3.5 Update 5 (Yes, I know we're behind still). I then applied the "Critical Host Updates" baseline in VMWare update manager, and found that we're still short on 14 "critical updates". My question is, do most people go ahead and apply any update flagged as critical, or do they evaluate each update one-by-one to determine whether or not the issue that has been addressed is likely to affect them. In the SQL Server world (my alma mater, so to speak), we regularly apply service packs, and sometimes cumulative updates, but we only apply hotfixes when the issue that they are targeted towards affects us. Does the same logic hold fast in VMWare land?

    Read the article

  • MacBook Pro Boot Camp SPDIF passthrough?

    - by Ryan Zink
    I'm using Windows 7 through Boot Camp on a unibody Macbook Pro and am having problems using the SPDIF output. I get the expected Dolby Digital or DTS in some movies, but in other movies and in games (Source engine, StarCraft 2) where the output is enabled to 5.1, the output invariably shows up as Dolby Pro Logic, which means (I think) that passthrough is not enabled. The boot camp drivers for the sound card don't have any sort of control panel, and the Windows settings for enabling DTS and Dolby seem to work when I test those outputs in the sound settings. Is there some other setting or utility I can use to enable SPDIF passthrough for all programs?

    Read the article

  • Outlook plugin/macro - "Got the wrong Bob"

    - by tumchaaditya
    I want to develop a feature in Outlook similar to Gmail's "Got the wrong Bob?" feature: my plugin/macro should check the addressees when I hit Send and alert me if I have included the wrong person. I need some ideas on how to build the logic for this feature, or how I should go about it. If such a plugin already exists, even better. P.S.: I have already checked SendGuard and it is not of much use in this case.
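    A minimal sketch of where such a check can hook in, assuming a C# VSTO add-in (VBA can attach to the same Application.ItemSend event). The "unexpected recipient" rule below is only a placeholder; real "wrong Bob" logic would compare the recipient list against groups of people the sender usually emails together.

        using System;
        using System.Windows.Forms;
        using Outlook = Microsoft.Office.Interop.Outlook;

        // Fragment of the ThisAddIn class in a VSTO Outlook add-in project.
        public partial class ThisAddIn
        {
            private void ThisAddIn_Startup(object sender, EventArgs e)
            {
                // Fires for every outgoing message, before it actually leaves the Outbox.
                this.Application.ItemSend += Application_ItemSend;
            }

            private void Application_ItemSend(object item, ref bool cancel)
            {
                var mail = item as Outlook.MailItem;
                if (mail == null) return;

                foreach (Outlook.Recipient recipient in mail.Recipients)
                {
                    // Placeholder rule: flag anything outside example.com.
                    // (For Exchange recipients, Address may be an X.500 DN rather than SMTP.)
                    string address = recipient.Address ?? string.Empty;
                    if (!address.EndsWith("@example.com", StringComparison.OrdinalIgnoreCase))
                    {
                        var answer = MessageBox.Show(
                            "Send to " + address + "?", "Check recipients", MessageBoxButtons.YesNo);
                        if (answer == DialogResult.No)
                        {
                            cancel = true;   // stop the send and return to the compose window
                            return;
                        }
                    }
                }
            }
        }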

    Read the article

  • tcpdump filter that excludes private ip traffic

    - by Kyle Brandt
    For a generic filter to exclude all traffic in my dump that is between private IP addresses, I came up with the following:

        sudo tcpdump -n ' (not ( (src net 172.16.0.0/20 or src net 10.0.0.0/8 or src net 192.168.0.0/16) and (dst net 172.16.0.0/20 or dst net 10.0.0.0/8 or dst net 192.168.0.0/16) ) ) and (not ( (dst net 172.16.0.0/20 or dst net 10.0.0.0/8 or dst net 192.168.0.0/16) and (src net 172.16.0.0/20 or src net 10.0.0.0/8 or src net 192.168.0.0/16) ) )' -w test2.dump

    It seems pretty excessive, but it also seems to work. Is this filter a lot longer than it needs to be, and is there a better way to express this logic, or is there anything wrong with the filter?

    Read the article

  • Why is this LTO4 Tape-drive not working

    - by Tim Haegele
    # modprobe mptsas # dmesg [ 4274.796796] scsi target7:0:0: mptsas: ioc1: delete device: fw_channel 0, fw_id 0, phy 0, sas_addr 0x50050763124b29ac [ 4274.939579] mptsas 0000:01:00.0: PCI INT A disabled [ 4280.934531] Fusion MPT SAS Host driver 3.04.12 [ 4280.934552] mptsas 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 4280.934692] mptbase: ioc2: Initiating bringup [ 4281.490183] ioc2: LSISAS1064E B3: Capabilities={Initiator} [ 4281.490203] mptsas 0000:01:00.0: setting latency timer to 64 [ 4293.555274] scsi8 : ioc2: LSISAS1064E B3, FwRev=011e0000h, Ports=1, MaxQ=277, IRQ=16 [ 4293.574906] mptsas: ioc2: attaching ssp device: fw_channel 0, fw_id 0, phy 0, sas_addr 0x50050763124b29ac [ 4293.576471] scsi 8:0:0:0: Sequential-Access IBM ULTRIUM-HH4 B6W1 PQ: 0 ANSI: 6 [ 4293.578549] st 8:0:0:0: Attached scsi tape st0 [ 4293.578550] st 8:0:0:0: st0: try direct i/o: yes (alignment 512 B) [ 4293.578577] st 8:0:0:0: Attached scsi generic sg5 type 1 # mt -f /dev/st0 status mt -f /dev/st0 status mt: /dev/st0: rmtopen failed: Input/output error # dd if=/dev/zero of=/dev/nst0 bs=1024 count=10 dd: opening `/dev/nst0': Input/output error I am running debian squeeze 2.6.32-5-amd64 #1 SMP Sun May 6 04:00:17 UTC 2012 x86_64 GNU/Linux Server is Fujitsu TX140 with Controller Symbios Logic SAS1064ET PCI-Express Fusion-MPT SAS Tape+Hardware is new.

    Read the article

  • Windows 2008 CAL vs RDS CAL

    - by g8keepa82
    Looking at the Win2k8 licensing page here and it appears to me that if I want to have a server to accept Remote Desktop Connections from say 30 users concurrently, I would require: Windows 2008 Server License & Windows 2008 CAL Is this correct logic? Or would I require RDS CALs instead? Or would I actually require RDS CALs on top of that? From what I can gather the RDS CALs are only required if I was to use the additional RDS services like App-V, etc. This question may have been answered here before but just wanted to clarify. Can anyone help?

    Read the article

  • Why are they putting "processors" on hard drives?

    - by Celeritas
    What does it mean when a hard drive has a processor on it, how does it work, and what benefit does it have? I don't understand: the CPU is the processor, and the hard drive just transfers its contents to RAM. Do these drives have additional processors that preprocess the data somehow? Here are some examples: Western Digital WD Black WD1002FAEX 1TB ("Dual processor speed") and NETGEAR ReadyNAS 312 2-Bay Diskless Network Attached Storage ("Dual-core Intel 2.1GHz processor and 2GB on-board memory"). Routers now have processors too; why is that necessary? I guess it sort of makes sense, since some logic needs to happen for the packets to be read in and sent out on the right ports, but why did old routers not need them? Example of a wireless router with a processor: "Dual-core processor".

    Read the article

  • Why save a PDF file to the hard disk first instead of opening it in the browser?

    - by Lernkurve
    Question: Is it correct that saving a PDF to the hard disk first, and then opening it from there with some PDF reader (not the browser), is safer than opening it directly with the browser plugin? My current understanding: I know that the PDF browser plugin might have a security hole and a manipulated PDF file might exploit it to get access to the user's computer. I recently heard that saving the PDF file first and then opening it was safer, but I don't understand why that should be. Can anyone explain? My logic would suggest that a manipulated file opened from the hard disk can just as well exploit a security hole, for instance in Adobe Acrobat Reader.

    Read the article

  • Struts 2 TypeConversion problem

    - by Parhs
    Hello... i have a big problem... i have this map HashMap<Long, String> examValues; It gets populated by textboxes automatically with name="examValues[id]" Although i explicily defined the element as String , it doesnt care!!! I want everything to be a String... The result is to get a java.lang.ClassCastException: java.lang.Long cannot be cast to java.lang.String when i try to System.out.print(examValues.get(examValue.getExam().getId()).getClass().getName()); INFO: java.lang.ClassCastException: java.lang.Long cannot be cast to java.lang.String at gr.medilab.logic.action.exams.Exams.fill_edit(Exams.java:204) at gr.medilab.logic.action.exams.Exams.fill(Exams.java:95) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.opensymphony.xwork2.DefaultActionInvocation.invokeAction(DefaultActionInvocation.java:441) at com.opensymphony.xwork2.DefaultActionInvocation.invokeActionOnly(DefaultActionInvocation.java:280) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:243) at com.opensymphony.xwork2.interceptor.DefaultWorkflowInterceptor.doIntercept(DefaultWorkflowInterceptor.java:165) at com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:87) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) at com.opensymphony.xwork2.validator.ValidationInterceptor.doIntercept(ValidationInterceptor.java:252) at org.apache.struts2.interceptor.validation.AnnotationValidationInterceptor.doIntercept(AnnotationValidationInterceptor.java:68) at com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:87) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) at com.opensymphony.xwork2.interceptor.ConversionErrorInterceptor.intercept(ConversionErrorInterceptor.java:122) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) at com.opensymphony.xwork2.interceptor.ParametersInterceptor.doIntercept(ParametersInterceptor.java:195) at com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:87) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) at com.opensymphony.xwork2.interceptor.ParametersInterceptor.doIntercept(ParametersInterceptor.java:195) at com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:87) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) at com.opensymphony.xwork2.interceptor.StaticParametersInterceptor.intercept(StaticParametersInterceptor.java:179) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) at org.apache.struts2.interceptor.MultiselectInterceptor.intercept(MultiselectInterceptor.java:75) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) at org.apache.struts2.interceptor.CheckboxInterceptor.intercept(CheckboxInterceptor.java:94) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) at org.apache.struts2.interceptor.FileUploadInterceptor.intercept(FileUploadInterceptor.java:235) at 
com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) at com.opensymphony.xwork2.interceptor.ModelDrivenInterceptor.intercept(ModelDrivenInterceptor.java:89) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) at com.opensymphony.xwork2.interceptor.ScopedModelDrivenInterceptor.intercept(ScopedModelDrivenInterceptor.java:130) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) at org.apache.struts2.interceptor.debugging.DebuggingInterceptor.intercept(DebuggingInterceptor.java:267) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) at com.opensymphony.xwork2.interceptor.ChainingInterceptor.intercept(ChainingInterceptor.java:126) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) at com.opensymphony.xwork2.interceptor.PrepareInterceptor.doIntercept(PrepareInterceptor.java:138) at com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:87) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) at com.opensymphony.xwork2.interceptor.I18nInterceptor.intercept(I18nInterceptor.java:165) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) at org.apache.struts2.interceptor.ServletConfigInterceptor.intercept(ServletConfigInterceptor.java:164) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) at com.opensymphony.xwork2.interceptor.AliasInterceptor.intercept(AliasInterceptor.java:179) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) at com.opensymphony.xwork2.interceptor.ExceptionMappingInterceptor.intercept(ExceptionMappingInterceptor.java:176) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) at org.apache.struts2.impl.StrutsActionProxy.execute(StrutsActionProxy.java:52) at org.apache.struts2.dispatcher.Dispatcher.serviceAction(Dispatcher.java:488) at org.apache.struts2.dispatcher.ng.ExecuteOperations.executeAction(ExecuteOperations.java:77) at org.apache.struts2.dispatcher.ng.filter.StrutsPrepareAndExecuteFilter.doFilter(StrutsPrepareAndExecuteFilter.java:91) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:215) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:277) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:188) at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:641) at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:97) at com.sun.enterprise.web.PESessionLockingStandardPipeline.invoke(PESessionLockingStandardPipeline.java:85) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:185) at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:332) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:233) at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:165) at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:791) at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:693) at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:954) at 
com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:170) at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:135) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:102) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:88) at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:76) at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:53) at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:57) at com.sun.grizzly.ContextTask.run(ContextTask.java:69) at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:330) at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:309) at java.lang.Thread.run(Thread.java:619) Any idea?

    Read the article

  • How do I prevent Mac OS X from using swap when there is still "Inactive" memory?

    - by Motin
    A common phenomena in my day to day usage (and several other's according to various posts throughout the internet) of OS X, the system seems to become slow whenever there is no more "Free" memory available. Supposedly, this is due to swapping, since heavy disk activity is apparent and that vm_stat reports many pageouts. (Correct me from wrong) However, the amount of "Inactive" ram is typically around 12.5%-25% of all available memory (^1.) when swapping starts/occurs/ends. According to http://support.apple.com/kb/ht1342 : Inactive memory This information in memory is not actively being used, but was recently used. For example, if you've been using Mail and then quit it, the RAM that Mail was using is marked as Inactive memory. This Inactive memory is available for use by another application, just like Free memory. However, if you open Mail before its Inactive memory is used by a different application, Mail will open quicker because its Inactive memory is converted to Active memory, instead of loading Mail from the slower hard disk. And according to http://developer.apple.com/library/mac/#documentation/Performance/Conceptual/ManagingMemory/Articles/AboutMemory.html : The inactive list contains pages that are currently resident in physical memory but have not been accessed recently. These pages contain valid data but may be released from memory at any time. So, basically: When a program has quit, it's memory becomes marked as Inactive and should be claimable at any time. Still, OS X will prefer to start swapping out memory to the Swap file instead of just claiming this memory, whenever the "Free" memory gets to low. Why? What is the advantage of this behavior over, say, instantly releasing Inactive memory and not even touch the swap file? Some sources (^2.) indicate that OS X would page out the "Inactive" memory to swap before releasing it, but that doesn't make sense now does it if the memory may be released from memory at any time? Swapping is expensive, releasing is cheap, right? Can this behavior be changed using some preference or known hack? (Preferably one that doesn't include disabling swap/dynamic_pager altogether and restarting...) I do appreciate the purge command, as well as the concept of Repairing disk permissions to force some Free memory, but those are ways to painfully force more Free memory than to actually fixing the swap/release decision logic... Btw a similar question was asked here: http://forums.macnn.com/90/mac-os-x/434650/why-does-os-x-swap-when/ and here: http://hintsforums.macworld.com/showthread.php?t=87688 but even though the OPs re-asked the core question, none of the replies addresses an answer to it... ^1. UPDATE 17-mar-2012 Since I first posted this question, I have gone from 4gb to 8gb of installed ram, and the problem remains. The amount of "Inactive" ram was 0.5gb-1.0gb before and is now typically around 1.0-2.0GB when swapping starts/occurs/ends, ie it seems that around 12.5%-25% of the ram is preserved as Inactive by osx kernel logic. ^2. For instance http://apple.stackexchange.com/questions/4288/what-does-it-mean-if-i-have-lots-of-inactive-memory-at-the-end-of-a-work-day : Once all your memory is used (free memory is 0), the OS will write out inactive memory to the swapfile to make more room in active memory. UPDATE 17-mar-2012 Here is a round-up of the methods that have been suggested to help so far: The purge command "Used to approximate initial boot conditions with a cold disk buffer cache for performance analysis. 
It does not affect anonymous memory that has been allocated through malloc, vm_allocate, etc". This is useful to prevent osx to swap-out the disk cache (which is ridiculous that osx actually does so in the first place), but with the downside that the disk cache is released, meaning that if the disk cache was not about to be swapped out, one would simply end up with a cold disk buffer cache, probably affecting performance negatively. The FreeMemory app and/or Repairing disk permissions to force some Free memory Doesn't help releasing any memory, only moving some gigabytes of memory contents from ram to the hd. In the end, this causes lots of swap-ins when I attempt to use the applications that were open while freeing memory, as a lot of its vm is now on swap. Speeding up swap-allocation using dynamicpagerwrapper Seems a good thing to do in order to speed up swap-usage, but does not address the problem of osx swapping in the first place while there is still inactive memory. Disabling swap by disabling dynamicpager and restarting This will force osx not to use swap to the price of the system hanging when all memory is used. Not a viable alternative... Disabling swap using a hacked dynamicpager Similar to disabling dynamicpager above, some excerpts from the comments to the blog post indicate that this is not a viable solution: "The Inactive Memory is high as usual". "when your system is running out of memory, the whole os hangs...", "if you consume the whole amount of memory of the mac, the machine will likely hang" To sum up, I am still unaware of a way of disabling Mac OS X from using swap when there still is "Inactive" memory. If it isn't possible, maybe at least there is an explanation somewhere of why osx prefers to swap out memory that may be released from memory at any time?

    Read the article

  • Zabbix - Some of the monitored items don't refresh

    - by Niro
    I'm experiencing a strange issue with Zabbix monitoring a MySQL server. Most of the data from the server, such as MySQL queries per second, MySQL uptime, buffers memory, etc., update nicely, while some items like CPU iowait time (avg1), host local time, MySQL number of threads and other items that were monitored in the past have a last check time of about a week ago. I can't find any logic in this; for example, MySQL number of threads and MySQL queries per second are obtained in a similar way, so it does not make sense that one of them is monitored and the other is not. Please help: how can I fix this? Update: I used zabbix_get from the Zabbix server to check one of the items on the Zabbix client and it works, so the problem must be on the Zabbix server side.

    Read the article

  • How do I make Linux recognize a new SATA /dev/sda drive I hot swapped in without rebooting?

    - by Philip Durbin
    Hot swapping out a failed SATA /dev/sda drive worked fine, but when I went to swap in a new drive, it wasn't recognized: [root@fs-2 ~]# tail -18 /var/log/messages May 5 16:54:35 fs-2 kernel: ata1: exception Emask 0x10 SAct 0x0 SErr 0x50000 action 0xe frozen May 5 16:54:35 fs-2 kernel: ata1: SError: { PHYRdyChg CommWake } May 5 16:54:40 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0) May 5 16:54:45 fs-2 kernel: ata1: device not ready (errno=-16), forcing hardreset May 5 16:54:45 fs-2 kernel: ata1: soft resetting link May 5 16:54:50 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0) May 5 16:54:55 fs-2 kernel: ata1: SRST failed (errno=-16) May 5 16:54:55 fs-2 kernel: ata1: soft resetting link May 5 16:55:00 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0) May 5 16:55:05 fs-2 kernel: ata1: SRST failed (errno=-16) May 5 16:55:05 fs-2 kernel: ata1: soft resetting link May 5 16:55:10 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0) May 5 16:55:40 fs-2 kernel: ata1: SRST failed (errno=-16) May 5 16:55:40 fs-2 kernel: ata1: limiting SATA link speed to 1.5 Gbps May 5 16:55:40 fs-2 kernel: ata1: soft resetting link May 5 16:55:45 fs-2 kernel: ata1: SRST failed (errno=-16) May 5 16:55:45 fs-2 kernel: ata1: reset failed, giving up May 5 16:55:45 fs-2 kernel: ata1: EH complete I tried a couple things to make the server find the new /dev/sda, such as rescan-scsi-bus.sh but they didn't work: [root@fs-2 ~]# echo "---" > /sys/class/scsi_host/host0/scan -bash: echo: write error: Invalid argument [root@fs-2 ~]# [root@fs-2 ~]# /root/rescan-scsi-bus.sh -l [snip] 0 new device(s) found. 0 device(s) removed. [root@fs-2 ~]# [root@fs-2 ~]# ls /dev/sda ls: /dev/sda: No such file or directory I ended up rebooting the server. /dev/sda was recognized, I fixed the software RAID, and everything is fine now. But for next time, how can I make Linux recognize a new SATA drive I have hot swapped in without rebooting? The operating system in question is RHEL5.3: [root@fs-2 ~]# cat /etc/redhat-release Red Hat Enterprise Linux Server release 5.3 (Tikanga) The hard drive is a Seagate Barracuda ES.2 SATA 3.0-Gb/s 500-GB, model ST3500320NS. 
Here is the lscpi output: [root@fs-2 ~]# lspci 00:00.0 RAM memory: nVidia Corporation MCP55 Memory Controller (rev a2) 00:01.0 ISA bridge: nVidia Corporation MCP55 LPC Bridge (rev a3) 00:01.1 SMBus: nVidia Corporation MCP55 SMBus (rev a3) 00:02.0 USB Controller: nVidia Corporation MCP55 USB Controller (rev a1) 00:02.1 USB Controller: nVidia Corporation MCP55 USB Controller (rev a2) 00:04.0 IDE interface: nVidia Corporation MCP55 IDE (rev a1) 00:05.0 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3) 00:05.1 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3) 00:05.2 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3) 00:06.0 PCI bridge: nVidia Corporation MCP55 PCI bridge (rev a2) 00:08.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3) 00:09.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3) 00:0a.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:0b.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:0c.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:0d.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:0e.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:0f.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration 00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map 00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller 00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control 00:19.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration 00:19.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map 00:19.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller 00:19.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control 03:00.0 VGA compatible controller: Matrox Graphics, Inc. MGA G200e [Pilot] ServerEngines (SEP1) (rev 02) 04:00.0 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06) 04:00.1 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06) Update: In perhaps a dozen cases, we've been forced to reboot servers because hot swap hasn't "just worked." Thanks for the answers to look more into the SATA controller. I've included the lspci output for the problematic system above (hostname: fs-2). I could still use some help understanding what exactly isn't supported hardware-wise in terms of hot swap for that system. Please let me know what other output besides lspci might be useful. The good news is that hot swap "just worked" today on one of our servers (hostname: www-1), which is very rare for us. 
Here is the lspci output: [root@www-1 ~]# lspci 00:00.0 RAM memory: nVidia Corporation MCP55 Memory Controller (rev a2) 00:01.0 ISA bridge: nVidia Corporation MCP55 LPC Bridge (rev a3) 00:01.1 SMBus: nVidia Corporation MCP55 SMBus (rev a3) 00:02.0 USB Controller: nVidia Corporation MCP55 USB Controller (rev a1) 00:02.1 USB Controller: nVidia Corporation MCP55 USB Controller (rev a2) 00:04.0 IDE interface: nVidia Corporation MCP55 IDE (rev a1) 00:05.0 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3) 00:05.1 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3) 00:05.2 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3) 00:06.0 PCI bridge: nVidia Corporation MCP55 PCI bridge (rev a2) 00:08.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3) 00:09.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3) 00:0b.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:0c.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:0f.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:18.0 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] HyperTransport Configuration 00:18.1 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Address Map 00:18.2 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] DRAM Controller 00:18.3 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Miscellaneous Control 00:18.4 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Link Control 00:19.0 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] HyperTransport Configuration 00:19.1 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Address Map 00:19.2 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] DRAM Controller 00:19.3 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Miscellaneous Control 00:19.4 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Link Control 03:00.0 VGA compatible controller: Matrox Graphics, Inc. MGA G200e [Pilot] ServerEngines (SEP1) (rev 02) 04:00.0 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06) 04:00.1 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06) 09:00.0 SCSI storage controller: LSI Logic / Symbios Logic SAS1064ET PCI-Express Fusion-MPT SAS (rev 04)

    Read the article

  • www a-record vs cname-record

    - by Sorin Buturugeanu
    Hi! I have a website that I will be hosting DNS for (testing purposes at first and then it will have some limited traffic). I have set up DNS so that site.tld has an A record to the actual IP but I don't know what to do about www.site.tld. Both site.tld and www.site.tld will point to the same server / application so my logic tells me to add a cname record so that www.site.tld becomes an alias for site.tld, BUT, I've been checking my settings with intodns.com and if I only add a CNAME for the www.site.tld it gives me the following error: ERROR: I could not get any A records for www.cexa.ro! (the error clears once I do an A record for www.site.tld to point to the actual IP) I don't know if there is a "rule" that "www." should always be an A record even though it's actually pointing to the same IP / application. Thanks for helping me understand this!

    Read the article

  • OpenVZ multiple networks on CTs

    - by picca
    I have a Hardware Node (HN) with 2 physical interfaces (eth0, eth1). I'm playing with OpenVZ and want to let my containers (CTs) have access to both of those interfaces. I'm using the basic configuration (venet). The CTs can reach eth0 (the public interface) fine, but I can't get them access to eth1 (the private network). I tried:

        # on HN
        vzctl set 101 --ipadd 192.168.1.101 --save
        vzctl enter 101
        ping 192.168.1.2   # no response here

    ifconfig on the CT returns lo (127.0.0.1), venet0 (127.0.0.1), venet0:0 (95.168.xxx.xxx), venet0:1 (192.168.1.101). I believe the main problem is that all packets flow through eth0 on the HN (figured out using tcpdump), so the problem might be in the routes on the HN. Or is my logic here all wrong? I just need access to both interfaces (networks) on the HN from the CTs. Nothing complicated.

    Read the article

  • Predictive vs Least Connection Load Balancing Techniques

    - by Mani
    I have a Windows-based desktop application that communicates via TCP with the application servers (Windows 2003). There are no sticky sessions between client calls. We have exactly 2 servers to load balance and we are thinking of using an F5 hardware load balancer. The application is a heavy-load type: there is not much business logic in the services, but they retrieve quite a large amount of data most of the time, maybe 5,000 to 10,000 records on average. It is used mainly for storing and retrieving data, with no special processing or calculations running on the server side. I am favouring 'Predictive', considering that my services sometimes take a while to return data, so tracking the feedback should yield better routing. I am not sure if this is enough information to go on, but considering the above, what would be some suggestions / things to consider when choosing between Predictive and Least Connections? Thanks.
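    For intuition only (this is not how the F5 implements its methods): Least Connections simply routes each new request to the member with the fewest in-flight connections, while Predictive also weights recent performance trends of each member. A toy sketch of the former, with invented member names:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Toy least-connections selector: track in-flight connections per member
        // and always hand the next request to the least-loaded one.
        class LeastConnectionsBalancer
        {
            private readonly Dictionary<string, int> _active = new Dictionary<string, int>();

            public LeastConnectionsBalancer(IEnumerable<string> members)
            {
                foreach (var m in members) _active[m] = 0;
            }

            public string PickServer()
            {
                // Choose the member with the fewest active connections.
                var choice = _active.OrderBy(kv => kv.Value).First().Key;
                _active[choice]++;
                return choice;
            }

            public void Release(string member)
            {
                if (_active.ContainsKey(member) && _active[member] > 0) _active[member]--;
            }
        }

        class Demo
        {
            static void Main()
            {
                var lb = new LeastConnectionsBalancer(new[] { "app01", "app02" });
                var server = lb.PickServer();   // goes to the idle member
                Console.WriteLine("Routed to " + server);
                lb.Release(server);             // call when the TCP session ends
            }
        }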

    Read the article

  • Does HDMI cable "quality" actually affect transmission?

    - by TheDeeno
    I really don't want to pay a ridiculous price for a "name brand" HDMI cable if it doesn't really do anything for me. I'm just curious: now that most transmission is digital (packetized) is there such a thing as a "quality" cable? I suspect that if the cable works at all, I'm safe saying I have a quality connection. I just want to double check. Some of these reviewers complain that generic cables "create noise, lack bandwidth, can't handle X, etc". I'm skeptical of these reviews. If the logic for HDMI cables and quality can be applied to cables in general, please elaborate on that as well.

    Read the article

  • What value does SenderID provide over SPF and DKIM?

    - by makerofthings7
    I understand that SPF "binds" a message envelope to a set of permitted IP addresses. SenderID (with the default pra option) "binds" the message headers to a set of permitted IPs, in addition to the SPF logic. DKIM "binds" the From address header (and any additional headers the sender chooses) and the body to a DNS domain name. I'm using the word "bind" above instead of "authorize" because it makes more sense to me. Questions: 1) If SPF already verifies the envelope FROM, why is there a need to check the headers? 2) When would verification of the envelope (SPF) need to differ from verification of the headers (SenderID)? 3) If I'm already verifying the headers with DKIM, why do I need SenderID? Most large companies I've checked don't disable SenderID with an explicit record; eBay is a notable example of one that does. What is the rationale for disabling SenderID "pra" processing of outbound messages?

    Read the article

  • Customizable mail server - what are my options? [closed]

    - by disappearedng
    This question was originally on SO but was closed as off topic. I am interested in building a mail service that lets you incorporate custom logic into your mail server. For example, user A can reply to [email protected] once, and subsequent emails from user A to [email protected] will not go through until certain actions are taken. I am looking for something simple and customizable, preferably open source. I am fluent in most modern languages. What mail servers would you recommend for this? Mailgun looks promising, but are there any simpler options?
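    A sketch of the kind of rule described, written as plain logic with no particular mail server API (the hook point and the persistence depend entirely on the server chosen): a sender gets one message through to a protected address, and further mail is held until that pairing is explicitly re-opened. The addresses in the demo are placeholders.

        using System;
        using System.Collections.Generic;

        // Toy policy: a sender may get one message through to a protected address;
        // later messages are held until the address is re-opened for that sender.
        class ReplyOncePolicy
        {
            private readonly HashSet<(string Sender, string Recipient)> _used =
                new HashSet<(string, string)>();

            public bool ShouldDeliver(string sender, string recipient)
            {
                if (_used.Contains((sender, recipient)))
                    return false;                  // hold or queue instead of delivering
                _used.Add((sender, recipient));
                return true;                       // first message goes through
            }

            public void Reopen(string sender, string recipient)
            {
                _used.Remove((sender, recipient)); // "certain actions are taken"
            }
        }

        class Demo
        {
            static void Main()
            {
                var policy = new ReplyOncePolicy();
                Console.WriteLine(policy.ShouldDeliver("usera@example.com", "list@example.org")); // True
                Console.WriteLine(policy.ShouldDeliver("usera@example.com", "list@example.org")); // False (held)
            }
        }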

    Read the article

  • Cisco FC SAN switch decision

    - by Chopper3
    I've got to buy a bunch of FC SAN switches in the next week or so, and I have to, and want to, buy Cisco MDSs. The servers are HP BL490c G6's in C7000 chassis with Virtual Connect Flex-10 Ethernet interconnects and VC FC interconnects (Emulex HBAs, by the way), all running ESX 3.5U4 (for now). I think I've only really got two choices: MDS 9509's with dual supervisors and a single 48-port 4Gb FC card, or MDS 9222i's with a single supervisor and the built-in 18-FC-port / 4-GigE-FCIP-port option. Both have the same functionality (I think; I'm buying the enterprise licence, by the way), and both have plenty of performance and adequate ports for now and the next three years. The 9222i's are about 55% of the price of the 9509's, so logic says get the 'i's, but will I really miss the dual supervisors? I've got lots of 9509's with dual supervisors that I'm very happy with, but I'm not sure I've ever benefited from the dual sups in the past, and they are nearly twice the price; on the other hand, if I don't buy them and then miss them, I can't retrofit them later. What are your thoughts?

    Read the article

  • EXSLT date:format-date template without document() XSLT 1.0

    - by DashaLuna
    Hello guys, I'm using date:format-date template EXSLT file I'm using XSLT 1.0 and MSXML3.0 as the processor. My date:format-date template EXSLT file's declaration is: <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:msxsl="urn:schemas-microsoft-com:xslt" xmlns:tui="http://www.travelbound.co.uk/" xmlns:date="http://exslt.org/dates-and-times" exclude-result-prefixes="msxsl tui date" version="1.0"> ... </xsl:stylesheet> I cannot use document() function due to the 3rd party restrictions. So I have changed the months and days (similarly) from XML snippet: <date:months> <date:month length="31" abbr="Jan">January</date:month> <date:month length="28" abbr="Feb">February</date:month> <date:month length="31" abbr="Mar">March</date:month> <date:month length="30" abbr="Apr">April</date:month> <date:month length="31" abbr="May">May</date:month> <date:month length="30" abbr="Jun">June</date:month> <date:month length="31" abbr="Jul">July</date:month> <date:month length="31" abbr="Aug">August</date:month> <date:month length="30" abbr="Sep">September</date:month> <date:month length="31" abbr="Oct">October</date:month> <date:month length="30" abbr="Nov">November</date:month> <date:month length="31" abbr="Dec">December</date:month> </date:months> to the variable: <xsl:variable name="months"> <month length="31" abbr="Jan">January</month> <month length="28" abbr="Feb">February</month> <month length="31" abbr="Mar">March</month> <month length="30" abbr="Apr">April</month> <month length="31" abbr="May">May</month> <month length="30" abbr="Jun">June</month> <month length="31" abbr="Jul">July</month> <month length="31" abbr="Aug">August</month> <month length="30" abbr="Sep">September</month> <month length="31" abbr="Oct">October</month> <month length="30" abbr="Nov">November</month> <month length="31" abbr="Dec">December</month> </xsl:variable> And correspondingly, I've changed the code that originally uses document() function from: [from month processing bit of EXSLT stylesheet] <xsl:variable name="month-node" select="document('')/*/date:months/date:month[number($month)]" /> to use MSXML3.0 node-set function: <xsl:variable name="month-node" select="msxsl:node-set($months)/month[number($month)]" /> So I assumed that this would work. According to the EXLT instructions "The format pattern string is interpreted as described for the JDK 1.1 SimpleDateFormat class." [I used current version]. I'm specifing the month in accordance to SimpleDateFormat class as 'dd MMMMM yyyy' so that the month will be the full month's name, for example January. But it doesn't work :( I've looked in EXSLT stylesheet and it has got the logic to do that. Also there is logic to display the name of the week for a day using 'E' pattern, which doesn't work for me. Maybe changing from using document() to variables broke it. Would really appreciate any help. Many thanks!

    Read the article
