Search Results

Search found 17336 results on 694 pages for 'developer events'.

Page 566 of 694

  • 412 Precondition Failed error only occurs on certain networks

    - by Andy
    One of my favorite websites, http://jessiejofficial.com (yes, I'm a Jessie J fan :')), has recently started displaying the error message "412 Precondition Failed" whenever I visit it from my home network, even when I use Tor Browser. At first I thought this was an issue with the whole website, but I contacted the web developer and he said there have been plenty of hits within the last 48 hours. Plus, I discovered tonight that I can access the website from my phone through the mobile network. So it appears to be just my network, as all of the devices in my house connected to the WiFi display the same error when I try to visit any page of the site. However, there have been no changes to our network that we are aware of since the website was last accessible, and I have just heard that another person in a different part of the country is experiencing the same difficulty. Any help/advice/suggestions would be greatly appreciated. Update: When trying to ping 'jessiejofficial.com' in the Windows command prompt, the request times out on all four attempts, on any computer connected to the wireless network. I can now also confirm that the same thing occurs on my MacBook Pro.
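
    A few command-line checks, run once from the affected home network and once from a network that works (the hostname here is the one from the question), can narrow down whether DNS, routing, or an intermediary returning the 412 is to blame. A rough sketch:
        dig +short jessiejofficial.com                        # does DNS resolve to the same IP on both networks?
        traceroute jessiejofficial.com                        # does the route differ, or die at the same hop as the ping?
        curl -sv http://jessiejofficial.com/ 2>&1 | head -40  # full 412 response; a filtering proxy or the site's firewall often identifies itself in the Server: or Via: headers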

    Read the article

  • Is the Windows VPN secure?

    - by Tor Haugen
    I have used a few VPN solutions over the years. Most are hard to set up, slow to connect and/or rather ill-behaved (replacing system drivers, disrupting each other, etc.). One solution I had never used before is the one built into Windows. This is mostly because the infrastructure guys always refuse to use it, claiming it's 'not secure'. Now I have finally had the chance to use it (on Windows 7), and wow, it's a breeze! Easy to set up, well-behaved, it connects almost instantly, automatically authenticates with my logged-in credentials, and integrates excellently with the UI. I have to say, unless it really isn't secure, I'll be happy if I never have to use another VPN product ever again. I gather the Windows VPN used to rely on PPTP, which is not considered secure. But in Windows 7/2008 it supports L2TP/IPSec, SSTP and IKEv2, and authenticates with EAP or CHAP/CHAPv2. That seems pretty up to date to me. But I'm just a lowly developer. Can someone in the know give me the lowdown on this?

    Read the article

  • Server 2008 email on Event variables

    - by Jeff Miles
    One of the new features of Server 2008 is the ability to attach a task to a specific event in the event logs. One of the actions available is to send an email through an SMTP server. This is working great, however it would be ideal if the event contents could be placed in the message body. I have tried using $eventdescription and %eventdescription%, but those are just shots in the dark. Any amount of Googling produces no results. Does anyone know if this is possible? Update: Sparks' suggestion below is a step in the right direction I believe, however that method doesn't seem to work for all values. For example, I can pull the RecordID, Severity and Channel as shown, but I can't use the same method to retrieve the EventID or, most importantly, the description. Here's the raw XML from one event:
        <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
          <System>
            <Provider Name="DFSR" />
            <EventID Qualifiers="16384">4412</EventID>
            <Level>4</Level>
            <Task>0</Task>
            <Keywords>0x80000000000000</Keywords>
            <TimeCreated SystemTime="2009-05-14T18:18:09.000Z" />
            <EventRecordID>45692</EventRecordID>
            <Channel>DFS Replication</Channel>
            <Computer>servername.domain.com</Computer>
            <Security />
          </System>
          <EventData>
            <Data>9046C3F4-843E-4A53-B941-4B20764072E5</Data>
            <Data>D:\departments\Geomatics\Plan Quality\Data Processing\CG3533017 2009-05-13 KT FIXED</Data>
            <Data>D:\departments</Data>
            <Data>{26D5F604-E603-4F87-8EC3-DE9A945DA8FD}-v927199</Data>
            <Data>Departments</Data>
            <Data>swg.ca\files\departments</Data>
            <Data>B8242CE2-F5EB-47DA-BA5B-1DD2F7EE3AB9</Data>
            <Data>DFAA7A54-66CB-4C31-81A0-0F861382C32C</Data>
            <Data>CG3533017 2009-05-13-{26D5F604-E603-4F87-8EC3-DE9A945DA8FD}-v927199</Data>
          </EventData>
        </Event>
    I have tried using a ValueQuery for EventData, but it returns no data.

    Read the article

  • New to server admin, Diagnosing Memory and CPU issues on DV

    - by G Thompson
    Sorry for my ignorance and lack of knowledge. I'm a PHP/front-end developer just now venturing into very minor server management/diagnostics. I have a Media Temple DV account. I have 2 sites that run a PHP script through a subscription service to an API. Basically the API hits the site with said script; the script runs, gathers data from the API, and saves the data to a SQL database. I noticed that these sites seemed to be causing memory overages on my server (not sure why), so I temporarily disabled them. The memory overage alerts stopped but my CPU still sits really high, like at 115% and above. I'm trying to diagnose this with tutorials and resources but just can't seem to find a solution. I'll attach screenshots (taken without the PHP scripts I assume are responsible for the memory issues) that I think are important to the diagnosis, but if anyone can point me in the right direction to start A. figuring out if and why the PHP script may be causing memory overages and B. why my CPU is always over 100%, that would be great. Thanks guys! Links to screenshots (can't post images with low points): http://i.stack.imgur.com/A64k4.png http://i.stack.imgur.com/qm1rV.png
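
    Before digging into the PHP side, it is worth confirming which processes actually account for the CPU; a few standard commands (run over SSH on the DV, assuming shell access) give a quick picture:
        ps -eo pid,user,pcpu,pmem,etime,comm --sort=-pcpu | head -15   # top CPU consumers right now
        top -b -n 1 | head -20                                         # load average plus per-CPU idle/wait figures
        vmstat 5 5                                                     # is the box CPU-bound, I/O-bound (wa), or swapping (si/so)?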

    Read the article

  • Laptop seemingly randomly "freezes" to the point of no longer executing applications

    - by Aierou
    After upgrading to Windows 8 Pro on my Samsung Series 7 Chronos NP700Z5C-S04US (may be relevant, I'm not sure), my computer began to stop allowing the execution of any service or application, as well as discontinuing the update of the clock, until a hard shutdown was performed. This seems to occur randomly after periods of inactivity and I have no idea of the cause. These are measures I have already taken to try to stop this:
    - Obviously Googling potential answers to this problem
    - Updating all drivers
    - Researching all events that have occurred around the time of the failure to respond (with no results)
    - Applying "bcdedit /set disabledynamictick no", which was a hotfix for what seemed to be the same error but was not
    Here is some more, potentially related, information about the error:
    - No BSOD (actually, I haven't experienced a BSOD with Windows 8 at all)
    - The computer has a problem shutting down/restarting most of the time (it hangs at the point where it should completely turn off)
    - New sound instances cannot play, but previously loaded containers function properly
    - As mentioned before, the clock freezes at the time of the error
    - USB devices function properly
    - Servers that I was running fail to respond on my end, but stay online
    If you require more information, please request it specifically and I will be happy to oblige. Thanks.

    Read the article

  • Kerberos issues after new server of same name joined to domain

    - by MentalBlock
    Environment: Windows Server 2012, 2 Domain Controllers, 1 domain. A server called Sharepoint1 was joined to the domain (running SharePoint 2013 using NTLM). A fresh install of Sharepoint1 (OS and SharePoint) is performed, set up for Kerberos, and joined to the domain using the same name. Two SPNs are added, HTTP/sharepoint1 and HTTP/sharepoint1.somedomain.net, for the account SPFarm. Active Directory shows a single, non-duplicate computer account with a create date of the first server and a modify date of the second server creation. A separate server, also on the domain, has the server added to All Servers in Server Manager. This server shows a local error in the events exactly like this one from Technet (Kerberos error 4 - KRB_AP_ERR_MODIFIED). Question: Can someone help me understand if the problem is:
    - The computer account is still the old account, causing a Kerberos ticket mismatch (granted, some housekeeping in AD might have prevented this)
    - (In my limited understanding of Kerberos and SPNs) the SPFarm account used for the SPNs is somehow mismatched with the HTTP calls made by the remote server management tools services in Windows Server 2012
    - Something completely different?
    I am leaning towards the first one, since I tested the same SPNs on another server and it didn't seem to cause the same issue. If this is the case, can it be easily and safely repaired? Is there a proper way to either reset the account or, better yet, delete and re-add it? Although it sounds simple enough with some PowerShell or clicking around in AD Users and Computers, I am uncertain what impact this might have on an existing server, particularly one running SharePoint. What is the safest and simplest way to proceed? Thanks!

    Read the article

  • Access an external SSH server through a restrictive proxy [on hold]

    - by Cyrille
    I'm a software developer. I wish to access my computer at home through SSH, for example when I need to check my personal projects' source code to see how I handled a specific problem. Unfortunately, I currently work behind an over-restrictive and anti-productive proxy that wastes a hell of a lot of everyone's time (we often have to visit websites from our smartphones or use a web proxy to check very legitimate websites for answers, and don't get me started on the other "security" overkill features we have to cope with...). Back to the subject: I can access my home computer from my phone (SSH, ports 22 and 80 both redirected by the router to port 22). It works, but it's quite uncomfortable. From my office computer, this is what I tried so far:
        export http_proxy=http://user:pass@proxyip:8080
        echo "user:pass" > ~/.corkscrew-auth
        echo "ProxyCommand corkscrew proxyip 8080 %h %p /home/me/.corkscrew-auth" > ~/.ssh/config
        ssh 82.23.34.56 -l me -p 80
        Proxy could not open connnection to 82.23.34.56: Forbidden
        ssh_exchange_identification: Connection closed by remote host
    The same happens without -p 80. Without corkscrew:
        ssh: connect to host 82.23.34.56 port 80: Connection timed out
        ssh: connect to host 82.23.34.56 port 22: Connection timed out
    Any other ideas?
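
    Many corporate proxies only allow CONNECT to port 443, so one common workaround (assuming you can also forward external port 443 to port 22 on the home router, in addition to 22 and 80) is to tunnel over 443 and keep the corkscrew settings in a proper Host block; a sketch using the proxy and home IP from the question:
        # ~/.ssh/config on the office machine
        Host home
            HostName 82.23.34.56
            Port 443
            User me
            ProxyCommand corkscrew proxyip 8080 %h %p /home/me/.corkscrew-auth
        # then connect with:
        ssh home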

    Read the article

  • Application that will identify percentage of your system disk bandwidth used on a user-application by user-application basis?

    - by Warren P
    I always (subjectively) feel my computer is far too slow (however fast it is), and so I'm always looking for ways to measure and understand what my computer is actually doing that makes it seem "slow" to me. It has been my observation that my software-developer workload is most often disk-bound (I am waiting for disk I/O) rather than CPU-bound. What has made it worse is that I am using a corporate PC that has in-memory active-scanning anti-virus software that I do not have control over, and also some IT-department-mandated services that seem to suck up a lot of the available hard-disk bandwidth. The best tool I have seen (in Windows 7) is the Resource Monitor, which I usually access from the button in the Task Manager. The disk I/O page, however, seems to label disk activity at a very low level (for example, showing the Volume Shadow Storage, which is flushing information obviously written by something ELSE other than VSS itself, and writes to Pagefile.sys, which are obviously due to virtual-memory faults in some application). What I would like to know is whether a utility exists that can add up all direct disk input and output by user-level process, or find the process or service that caused VM or VSS activity. In that way, I hope, you could establish a real idea of how much of your computer's precious disk-subsystem bandwidth is attributable to a particular application. Here's a scenario: MyApp.exe writes 100k/s and reads 100k/s directly. VSS ends up writing another 100k/s. Page faults caused inside MyApp.exe cause another 100k/s of writes. So the total "cost" of MyApp.exe running, during a period of time (let's say 1 second), is 400k/s, whereas you can only directly observe half of that in Resource Monitor. Is there a smarter disk-IO-watching piece of software I can use?

    Read the article

  • How to set up bindings for a development IIS 7.5 server with lots of sites

    - by Antonio Bakula
    I am a programmer in a small ASP.NET shop with very little experience in server administration, and I have to set up IIS 7.5 to host a lot of sites on a newly installed Windows Server 2008 R2. These sites are test "clones" of sites on the "real" web server and they should be accessible only on the local network (domain). Developers should add new sites for our new customers. Project managers use this server to check progress and test new sites and new features, and QA people have to have access to these sites to test before we copy them to the "real" web server. Developers only have access to the IIS console; in fact they can RDP to the test server with their developer domain credentials and permissions, and developers are also local admins on that machine (tester). On our previous server I used different port numbers for each site. That worked, but I don't like this solution; I would prefer to use subdomains. But here are the problems: manually adding DNS records is not an option because we do not want developers to have to administer the domain DNS server, and currently this has to be done with domain administrator credentials. Is there some way to add DNS records automatically? I tried to add a DNS record for subdomains on the test server with a wildcard (*.tester) and that seemed to work for some time, but that change caused some bad problems in our domain network and the admin forbade me to mess with DNS. He said that I have to add a DNS record for every subdomain manually and that I can not use wildcards, and there is nothing I can do about it, mainly for "political" reasons :( Obviously our admin is pretty uncooperative, outsourced from a different organization, and I can't do anything about that. Can I add another DNS server on that machine? What must be set up on client machines to "tell" them to use both the domain DNS server and the tester domain server? So please, I need someone to give me some advice. What should I do? Are different port numbers the only option left? Thanks!

    Read the article

  • Push, parse & import "selected" data, text, info blobs from Webpages/ Emails as Event/ Appointment to standard Calendar directly or as .ics file?

    - by Alex S
    Is there any tool, plugin, extension, or script/code to push "selected" data, text, or information blobs from web pages, emails etc., then have them parsed and imported as a structured event/appointment (e.g. .ics) on a standard calendar like Outlook, Google, or iCal? If not, what and how could I use some scripting, coding or existing tools/extensions to build on top and do this? I come across a lot of unstructured information on webpages, emails, FB events etc. where I just want to add that information to my calendar. Instead of entering all the information by hand all the time, there should be an easy enough way to have the information parsed, organized and imported to a calendar, either directly to a calendar from the source, or translated to a standard format such as .ics that can be imported and saved easily. I would love to see some suggestions for this incorporating one or more of the following: on Windows with Chrome & Outlook; on iPhone/iPad to its Calendar. PS: I'll come back and see if I can add more information to this question, and to answer it as well. I have not found a solution yet.
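
    For the "translate to a standard format" half, the target is small: a single-event .ics file only needs a handful of lines, so whatever script or extension does the parsing just has to fill in the fields. A minimal example (all values are made-up placeholders):
        BEGIN:VCALENDAR
        VERSION:2.0
        PRODID:-//clip-to-calendar sketch//EN
        BEGIN:VEVENT
        UID:20131201T1900-example@local
        DTSTAMP:20131125T120000Z
        DTSTART:20131201T190000Z
        DTEND:20131201T210000Z
        SUMMARY:Event title parsed from the page
        LOCATION:Address parsed from the page
        DESCRIPTION:Any remaining text from the selection
        END:VEVENT
        END:VCALENDAR
    Outlook, Google Calendar and iCal can all import a file like this directly.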

    Read the article

  • OpenOffice Vs Microsoft Office 2007/2010

    - by Moody Tech
    I have been asked to summarise the pros and cons of the choice between Microsoft Office and OpenOffice. I have a broad idea of what needs to be said, however I would like to open a discussion here and have a single place to go to when the time comes to give the summary to management. There are obvious points of contention: for me the lack of compliance with Group Policy is a major concern [default save location/visibility of C:/visibility of files and folders on the HDD]. However, I am sure that functionality and compatibility will be the prime mover. We are looking at making major savings by reducing our commitment to Microsoft licensing. So what are your experiences? What happens when there are no direct equivalents? [Word has a close match in OpenOffice, but a database solution match is not as close, and neither is an Outlook replacement; Exchange Server will still exist after the move to open-source solutions, so connecting to it and downloading all calendars, shared calendars and scheduled events still matters.] In summary then, what do you see as the benefits of this plan? How do you see the problems being manifest? Discuss.... Many thanks.

    Read the article

  • How to set an executable white list?

    - by izabera
    Under Linux, is it possible to set a whitelist of executables for a certain group of users? I need them to be unable to use, for example, make, gcc and executables on removable disks. How can this be done? Edit, let me explain better. I'm dealing with a high school IT system: young geeks that (during the lessons) want to play, surf the net, and damage those computers however they can. The major step towards achieving this goal was to remove the system they're familiar with and install Ubuntu on all the computers. This actually works quite well, but recent events proved that it is not enough. I want to allow them to execute certain safe programs, like OpenOffice, and to deny any other program, whether it is preinstalled software, something they carry on USB drives, a downloaded program or a script they write on site. It's possible to remove the 'x' permission from any file on the PC, but of course that would be impractical. Furthermore, they would still be able to run anything they download. I thought the best solution would be to make a whitelist of safe programs and deny anything else, but I don't really know how to do it. Any idea is helpful.
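
    Not a full whitelist, but one low-effort layer that blocks anything pupils download or carry in on USB sticks is mounting every user-writable area noexec; a sketch, assuming /home and /tmp are separate filesystems (test with a remount, then persist the options in /etc/fstab):
        sudo mount -o remount,noexec,nosuid,nodev /home
        sudo mount -o remount,noexec,nosuid,nodev /tmp
        # removable media: Ubuntu automounts under /media, so udisks needs to be configured to add noexec there (or automounting disabled)
        # caveat: noexec does not stop "sh script.sh" or "python script.py"; a real whitelist also needs interpreter restrictions, e.g. AppArmor profiles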

    Read the article

  • Does Google sometimes ignore "special" characters, possibly depending on your location or font type settings? [closed]

    - by RLH
    TLDR: Google tends to ignore special characters in my search strings. Is there anything that I can do about it, and is it possibly happening because Google makes certain assumptions based on my default text-encoding settings and my location? I just posted this question over at StackOverflow. I had found a C preprocessor operator that I'd never seen before. As I should have done, I Googled it and tried to find out further information. I attempted various search terms, which were all variations of "C Operator ##" (sometimes with and sometimes without the double quotes). Google didn't bring back anything of use, so I posted my question on SO. As you can see from the comments, someone mentioned a search string (ironically one which I did try to search) and stated that I could have even hit the "I'm Feeling Lucky" button and gotten my answer. The problem is I did search that, and the results that I received were far more basic; even after following the top results and searching the resulting pages, I could find nothing referencing the string "##". I'm not posting this question to complain, but it does provide an empirical example of something I've seen before that really bugs me: Google often ignores special characters in my search strings and the results are often useless. As a developer I often need to search for string values containing non-alphanumeric characters. Some characters (like the underscore or hyphen) can be used without trouble. However, other characters (such as the ampersand, caret, tilde and pound sign) are often ignored in my query strings. Is there a way to prevent this from happening so that I can get meaningful results from Google? NOTE: I stay logged into Google and I live in the US. I wonder if Google detects some form of text-encoding setting or derives my results based on certain localized, text-based assumptions. Regardless, I would like Google to search for what I give it. Is there anything that I can do to improve my results?

    Read the article

  • Developing a high-performance, scalable Zend Framework website

    - by Daniel
    We are going to develop an ads website like http://www.gumtree.com/ (it will not be like this one, but just to give you an idea) and we are having some issues regarding performance and scalability. We are planning on using Zend Framework for this project, but that is all I'm sure of at this point. I don't think a classic approach like Zend Framework (PHP) + MySQL + Memcache + jQuery (and I would throw Doctrine 2 in there too) will result in a high-performance application. I was thinking of making this a RESTful application (with Zend Framework) + NGINX (or maybe MongoDB) + Memcache (or eAccelerator; I understand this will create problems with scalability on multiple servers) + jQuery, with a CDN for static content, a server for images, and a scalable server for the requests and the rest. My questions are:
    - What do you think about my approach?
    - What solutions would you recommend in terms of the server approach (MySQL, NGINX, MongoDB or pgsql) for a scalable application expected to have a lot of traffic using PHP? I would be interested in your approach.
    Note: I'm a Zend Framework developer and don't have too much experience with the server side (to determine what would be the best solution for my scalable application).

    Read the article

  • How to track things that SHOULD happen, but might not have

    - by Kamiel Wanrooij
    I am running into a couple of issues with some applications we've deployed and maintain. I have the feeling we have approached this with some anti-patterns up to now, but I would like to see how to make this more flexible and stable. In one situation, we have a server at a client which pushes data to us to parse every night (yes, Windows Task Scheduler). This is highly unstable however, so about once a month this doesn't happen, for reasons out of our control. This heavily impacts our business, since we run with stale data in that situation. In another scenario we have a lot of background job processes that should be running. We already keep them up using bluepill (http://www.github.com/arya/bluepill), but obviously restarts happen, both automatically and manually, and people forget things or systems mess up. What I would like to track is events that should occur or things that should exist, like the existence of a process, the execution of a program, or the creation/age of a file, and be alerted when they don't happen or don't exist. We develop most things in Ruby on Rails, use NewRelic, Bluepill and Munin, and run on Ubuntu. I've been toying around with counting ps aux | grep processname | wc -l in Munin scripts, or capturing the age of a file and raising alerts over 24-26 hours, stuff like that. Is there better tooling to track things that should happen, and raise alerts if they don't? P.S. I know some things are suboptimal, like manually having to define bluepill for applications and then forgetting to do so. The same goes for the push-based approach of the first application: a dedicated daemon on the client side that we control, managing the push and letting us track its connection to us, might be a much better solution.
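
    Monitoring suites cover this (Nagios/Icinga passive checks with freshness thresholds, or monit's file-age and process tests), but as a stopgap the whole "did the thing that should have happened actually happen" check fits in a small cron script; a sketch with placeholder names and paths:
        #!/bin/sh
        # alert when an expected process is missing
        if ! pgrep -f my_background_worker >/dev/null; then
            echo "my_background_worker is not running" | mail -s "missing process" ops@example.com
        fi
        # alert when the nightly push is missing or older than 26 hours (1560 minutes)
        dump=/data/incoming/nightly.csv
        if [ ! -e "$dump" ] || [ -n "$(find "$dump" -mmin +1560)" ]; then
            echo "$dump is missing or stale" | mail -s "stale nightly push" ops@example.com
        fi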

    Read the article

  • What is the risk of introducing non standard image machines to a corporate environment

    - by Troy Hunt
    I’m after some feedback from those in the managed desktop or network security space on the risks of introducing machines that are not built on a standard desktop image into a large corporate environment. This particular context relates to the standard corporate image (32 bit Win XP) in a large multi-national not being suitable for a particular segment of users. In short, I’m looking at what hurdles we might come across by proposing the introduction of machines which are built and maintained by a handful of software developers and not based on the corporate desktop image (proposing 64 bit Win 7). I suspect the barriers are primarily around virus definition updates, the rollout of service packs and patches and the compatibility of existing applications with the newer OS. In terms of viruses and software updates, if machines were using common virus protection software with automated updates and using Windows Update for service packs and patches, is there still a viable risk to the corporate environment? For that matter, are large corporate environments normally vulnerable to the introduction of a machine not based on a standard image? I’m trying to get my head around how real the risk of infection and other adverse events are from machines being plugged into the network. There are multiple scenarios outside of just the example above where this might happen (i.e. a vendor plugging in a machine for internet access during a presentation). Would a large corporate network normally be sufficiently hardened against such innocuous activity? I appreciate the theory as to why policies such as standard desktop images exist, I’m just interested in the actual, practical risk and how much a network should be protected by means other than what is managed on individual PCs.

    Read the article

  • What needs to be considered when setting up for Linux Development? [closed]

    - by user123586
    I want to set up a box for Linux development. I have a working Linux install with the usual toolchain and an IDE. I'm looking for advice on how to approach structuring accounts and folders for development. As the Perl folks say, "There's always more than one way to do it." Left to my own devices, I'll come up with several unproductive ways of doing it before figuring out what an experienced Linux programmer would think obvious. I'm not looking for instructions to follow for a specific set of tools or a specific software package. Instead, I'm looking for insight into what decisions need to be made and how to make them, with an understanding of the advantages and disadvantages of each individual choice. These are some of the questions that come up:
    - Where to put sources
    - Where to put built object files and libraries
    - Where to install
    - What to set in environment variables
    - What compiler flags matter and how to manage them across several types of builds
    - What configuration entries to make in an IDE
    - How to manage libraries to support multiple environments
    - How to handle different build versions such as debug vs release, or cross-platform builds
    If you are an experienced Linux developer, the answers to these questions may seem trivial and obvious. I'd like to learn to make decisions about these questions that result in as little manual configuration as possible, given some existing sources, a particular IDE or no IDE at all, a particular set of development libraries, etc. At this point you're probably thinking: can you be more specific? Sure. But remember that I'm trying to learn how to think about this stuff, not just follow a recipe for a specific set of results. Example: set up a project that uses CMake for some of its components, autogen.sh followed by configure for others, and just configure for a few more, with:
    - debug builds without an IDE
    - debug builds in NetBeans
    - debug builds in Eclipse
    - debug builds in Visual Studio
    - all of the above with release builds for Linux, Mac and Windows
    What are your thoughts on an approach that works for all four? Do you have any advice on what to read?
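
    One pattern that keeps most of those decisions consistent is a single source tree plus out-of-source build directories, one per configuration, installing into a per-user prefix; both NetBeans and Eclipse CDT can be pointed at such build directories. A rough sketch, assuming a CMake-based project checked out at ~/src/myproj:
        mkdir -p ~/build/myproj/debug ~/build/myproj/release
        cd ~/build/myproj/debug   && cmake -DCMAKE_BUILD_TYPE=Debug   -DCMAKE_INSTALL_PREFIX=$HOME/local ~/src/myproj && make
        cd ~/build/myproj/release && cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=$HOME/local ~/src/myproj && make
        # autotools components follow the same shape: run /path/to/source/configure --prefix=$HOME/local from an empty build directory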

    Read the article

  • CentOS server. What does it mean when the total used RAM does not equal the sum of RES?

    - by Michael Green
    I'm having a problem with a virtual hosted server running CentOS. In the past month a process (Java-based) that had been running fine started having problems getting memory when the JVM was started. One strange thing I've noticed is that when I start the process, the PID says it is using 470mb of RAM while the 'used' memory immediately drops by over 1GB. If I run 'top', the total RES used across all processes falls short of the 'used' listed at the top by almost 700mb. The support person says this means I have a memory leak in my process. I don't know what to believe, because I would expect a memory leak to simply waste the memory the process is allocated, not to consume additional memory that doesn't show up using 'top'. I'm a developer and not a server guy, so I'm appealing to the experts. To me, if the total RES memory doesn't add up to the total 'used', it indicates that something is wrong with my virtual server set-up. Would you also suspect a memory-leaking Java process in this case? free before starting the process:
                     total       used       free     shared    buffers     cached
        Mem:       2097152     149264    1947888          0          0          0
        -/+ buffers/cache:      149264    1947888
        Swap:            0          0          0
    free after:
                     total       used       free     shared    buffers     cached
        Mem:       2097152    1094116    1003036          0          0          0
        -/+ buffers/cache:     1094116    1003036
        Swap:            0          0          0
    So it looks as though the process is using (or causing to be used) nearly 1GB of RAM. Since the process (based on top) is only using 452mb, does that mean that the kernel is all of a sudden using an additional 500mb?
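
    Two quick checks make the discrepancy concrete: sum the per-process RSS yourself and compare it with what the kernel reports holding, since top's per-process RES never shows kernel-side allocations. A sketch (field availability varies with kernel version):
        ps -eo rss= | awk '{sum+=$1} END {printf "total RSS: %.0f MB\n", sum/1024}'         # shared pages are counted once per process, so this overstates if anything
        grep -E 'MemTotal|MemFree|Buffers|^Cached|Slab|Shmem|Committed_AS' /proc/meminfo    # kernel-side users of RAM
        cat /proc/user_beancounters   # if this is an OpenVZ/Virtuozzo container (buffers/cached stuck at 0 hints at that), the real accounting and limits live here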

    Read the article

  • Which components should I invest in for a backup machine?

    - by Senthil
    I am a freelance developer. I have a PC, a laptop and an old testing and file server machine, and I might add one or two more in future. I want to have an on-site backup machine that can handle backups of ALL these machines: file backups, MySQL backups, backup of the Subversion repository, etc. When building the machine, which components should I invest more in? Examples: the cabinet should have lots of room for expansion; hard disk size should be large, but I guess hard disk speed need not be high (?). For other components like RAM, PSU, processor, network card, cooling, etc., how much relative importance do these have in a backup machine? Which of these components should be high-end or large, and which ones need not be? Some idea of the load: there will be TBs of data. File backups and Subversion repository backups will be done at least daily; MySQL backups weekly. Assume 3 machines at the moment and somewhere around 10 machines in the future.
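
    For sizing, it helps to look at what the box would actually run each night; the jobs are mostly sequential reads and writes of large files, which is why disk capacity tends to matter far more than CPU or disk speed. A sketch of the kind of jobs involved (hosts, paths and credentials are placeholders):
        rsync -a --delete me@devbox:/srv/projects/ /backup/devbox/projects/                                                                   # nightly file sync
        mysqldump --single-transaction --all-databases -h devbox -u backup -p'***' | gzip > /backup/devbox/mysql-$(date +%F).sql.gz           # weekly SQL dump
        svnadmin hotcopy /srv/svn/repo /backup/svn/repo-$(date +%F)   # run where the repository lives, or mirror it to the backup box with svnsync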

    Read the article

  • IIS Web Farm Framework servers are automatically set to "unavailable" even when they are healthy... And they never return to the available state!

    - by JohannesH
    I have 2 web farm configurations, one with 2 member servers and one with 3 member servers. I have health monitoring set up on both farms and the monitoring tool reports all servers as being healthy. However, after a while all the servers are marked as being "Unavailable" and "Healthy" in the "Monitoring and Management" screen (in the "Servers" screen they are all listed with "Yes" in the "Ready for Load Balancing" column). Viewing the event log on either the web farm controller or any of the farm servers doesn't reveal anything interesting; there are no warnings or errors in the period when the servers became unavailable. There are a couple of informational events about the worker process getting shut down due to inactivity, but I hope this isn't the cause, since that would mean the farms would die during the night when the load is low. Am I missing something? EDIT: By the way, I think it's very odd that the application pool shuts down on the servers, since the health monitoring system is polling an aspx page on each server. Shouldn't that keep them going? EDIT2: Now I've also experienced this problem with the RTW version of Web Farm Framework 2.

    Read the article

  • Is NFS capable of preserving order of operations?

    - by JustJeff
    I have a diskless host 'A' that has a directory NFS-mounted on server 'B'. A process on A writes to two files F1 and F2 in that directory, and a process on B monitors these files for changes. Assume that B polls for changes faster than A is expected to make them. Process A seeks to the head of the files, writes data, and flushes. Process B seeks to the head of the files and does reads. Are there any guarantees about how the order of the changes performed by A will be detected at B? Specifically, if A alternately writes to one file and then the other, is it reasonable to expect that B will notice alternating changes to F1 and F2? Or could B conceivably detect a series of changes on F1 and then a series on F2? I know there are a lot of assumptions embedded in the question. For instance, I am virtually certain that, even operating on just one file, if A performs 100 operations on the file, B may see a smaller number of changes that give the same result, due to NFS caching some of the actions on A before they are communicated to B. And of course there would be issues with concurrent file access even if NFS weren't involved and both the reading and the writing process were running on the same real file system. The reason I'm even putting the question up here is that it seems like most of the time, the setup described above does detect the changes at B in the same order they are made at A, but occasionally some events come through in transposed order. So, is it worth trying to make this work? Is there some way to tune NFS to make it work, perhaps cache settings or something? Or is fine-grained behavior like this just too much to expect from NFS?

    Read the article

  • "merging" multiple internet connections

    - by Spencer R
    I've seen this question asked several times here on SF, but I'm looking for some updated information, specifically concerning Server 2012. I'm in the process of buying a home, so I'm trying to get some plans together on how I want to structure my network. Internet speeds aren't the greatest and connections can be unreliable where the house is, so I was thinking of having two DSL lines installed. My question is: how could I leverage those two connections to create the best network I can, in terms of speed and reliability? My parents will be moving in with me; they consume a lot of bandwidth as it is, and with my internet traffic added, I'm headed for a lot of frustration. I think I remember reading somewhere that Server 2012 has some new functionality to utilize multiple connections on multiple NICs in a way that wasn't possible in earlier versions of Server. I'm not sure if Windows will work for this, but I'm an application developer and spend the majority of my time in Windows environments. However, I've only recently returned to the Windows world, so I'd like my main server at home to run Windows Server 2012 so that I can become more familiar with it.

    Read the article

  • How to delete removed devices from a mdadm RAID1?

    - by Kabuto
    I had to replace two hard drives in my RAID1. After adding the two new partitions, the old ones are still showing up as removed while the new ones are only added as spares. I've had no luck removing the partitions marked as removed. Here's the RAID in question. Note the two devices (0 and 1) with state removed.
        $ mdadm --detail /dev/md1
        mdadm: metadata format 00.90 unknown, ignored.
        mdadm: metadata format 00.90 unknown, ignored.
        /dev/md1:
                Version : 00.90
          Creation Time : Thu May 20 12:32:25 2010
             Raid Level : raid1
             Array Size : 1454645504 (1387.26 GiB 1489.56 GB)
          Used Dev Size : 1454645504 (1387.26 GiB 1489.56 GB)
           Raid Devices : 3
          Total Devices : 3
        Preferred Minor : 1
            Persistence : Superblock is persistent
            Update Time : Tue Nov 12 21:30:39 2013
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 3
         Failed Devices : 0
          Spare Devices : 2
                   UUID : 10d7d9be:a8a50b8e:788182fa:2238f1e4
                 Events : 0.8717546
            Number   Major   Minor   RaidDevice State
               0       0        0        0      removed
               1       0        0        1      removed
               2       8       34        2      active sync   /dev/sdc2
               3       8       18        -      spare   /dev/sdb2
               4       8        2        -      spare   /dev/sda2
    How do I get rid of these devices and add the new partitions as active RAID devices? Update: I seem to have gotten rid of them. My RAID is resyncing, but the two drives are still marked as spares and are number 3 and 4, which looks wrong. I'll have to wait for the resync to finish. All I did was to fix the metadata error by editing my mdadm.conf and rebooting. I tried rebooting before, but this time it worked for whatever reason.
            Number   Major   Minor   RaidDevice State
               3       8        2        0      spare rebuilding   /dev/sda2
               4       8       18        1      spare rebuilding   /dev/sdb2
               2       8       34        2      active sync   /dev/sdc2
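
    A sketch of the usual cleanup, using the device names from the question (take a backup first); mdadm accepts the keywords "failed" and "detached" to clear exactly these ghost slots:
        cat /proc/mdstat                    # confirm what the kernel currently sees
        mdadm /dev/md1 --remove failed      # drop any members still marked as failed
        mdadm /dev/md1 --remove detached    # drop members whose underlying device has disappeared (the "removed" slots)
        mdadm --detail /dev/md1             # the two spares should then rebuild into the empty slots of the degraded array
        # the "metadata format 00.90 unknown, ignored" warnings usually come from a stray "metadata=00.90" entry in mdadm.conf; writing it as 0.90 (or removing it) silences them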

    Read the article

  • Relevance and Necessity of SNMP

    - by Adam Tannon
    Edit: I am in the process of designing a Java-based monitoring tool that will send back periodic "health checks" of a Java app deployed to a cluster of GlassFish servers. I am trying to figure out the best protocol for this monitoring tool to send information back to the monitoring server on. After an initial research effort on my part, it seems like SNMP is just a protocol for monitor-type applications to communicate the "health status" of something (a part of a network, a server, a cluster, an application, etc.) to the rest of the network. If the above is incorrect, please correct me!!! Assuming the generalization is more or less accurate, my next question is: why is this a protocol!?!? In the age of REST/SOAP/TCP protocols, why is there the need for a standardized protocol that only fits one type of application (monitoring)? In other words, if I'm a developer assigned to building a new monitoring tool that periodically polls a server and reports on its CPU and available memory, what advantages does SNMP give me over just POSTing to a RESTful API via plain 'ole HTTP? I'm sure I'm missing something here - I just need someone to help connect the dots! Thanks in advance!

    Read the article

  • Scaling a node.js application, nginx as a base server, but varnish or redis for caching?

    - by AntelopeSalad
    I'm not close to being well versed in using nginx or varnish, but this is my setup at the moment: I have a node.js server running which serves either JSON, HTML templates, or socket.io events. Then I have nginx running in front of node, serving all static content (css, js, etc.). At this point I would like to cache both static content and dynamic content in memory. It's my understanding that varnish can cache static content quite well, and it wouldn't require touching my application code. I also think it's capable of caching dynamic content too, but there cannot be any cookie headers? I do use redis at the moment for holding session data and planned to use it for other things in the future, like keeping track of non-crucial but fun stats. I just have no idea how I should handle caching everything on the site. I think it comes down to these options, but there might be more:
    - Throw varnish in front of nginx and let varnish cache static pages; no app code changes. Redis would cache dynamic db calls, which would require modifying my app code.
    - Ignore varnish completely and let redis handle caching everything, then use one of the nginx-redis modules. I'm not sure if this would require a lot of app code changes (for the static files).
    I'm not having any luck finding benchmarks that compare nginx+varnish vs nginx+redis, and I'm too inexperienced to bench it myself (high chances of my configs being awful). I'm basically looking for the solution that would be the most efficient in terms of req/sec and scalable in the future (throw new hardware at the problem + maybe adjust some values in a config = new servers up and running semi-painlessly).

    Read the article
