Search Results

Search found 30414 results on 1217 pages for 'project failure'.


  • Monitoring System for the cloud?

    - by Maxim Veksler
    I need a monitoring system, much like ganglia / nagios, that is built for the cloud. I need it to support: adding / removing nodes dynamically (a node shutting down does not imply node failure...); dynamic node-based categorization, meaning nodes can identify themselves as being part of group X (ganglia gets this almost right, but lacks the dynamic part...); and no requirement for multicast support (generally not allowed in cloud-based setups). Plugins for recent cool stuff such as Hadoop, Cassandra and Mongo would be cool. More features include: external API, web interface and co. I've looked at Ganglia and munin and they both seem to be almost there (but not exactly). I would also go for a reasonably priced Software as a Service solution. I'm currently doing research, so suggestions are highly appreciated. Thank you, Maxim

    Read the article

  • Altering policies in policy based management to look at events which happened only in the last 24 hours

    - by Manjot
    Hi, I am using SQL Server 2008 Standard edition. I am using policy based management with the policies which come with SQL Server during installation. I want the policies to only look at events that happened in the last 24 hours. For example, for the "Windows Event Log System Failure Error" policy, if the system restarted unexpectedly 5 days ago, I don't want to be alerted daily. Is there any way by which I can restrict a policy to look at events which happened in the last 24 hours and no older? Any help please? Thanks in advance.

    Read the article

  • Getting rsync to move files from source to destination?

    - by fabien-barbier
    Is rsync a good choice for my project? I have to: copy files from a source to a destination folder via SSH; be sure all files are copied; delete the source files after the copy; and, if there is a name conflict, rename files. It looks like I can use the option --remove-source-files (to delete source files), but how does rsync manage conflicts, and can I add rules? Use case on my project: I run scientific calculations on server A and the results are placed in the folder "process"; for each calculation I get a directory like this: /process/calc1. Now I would like to transfer the directory "calc1" to server B (so I get /process/calc1 there), and delete "calc1" from server A. ...During another calculation I get "/process/calc2" on server A; the idea is also to move "calc2" into the "/process/" directory on server B, so that I then have on server B: /process/calc1 and /process/calc2 (and /process/ on server A is empty). How will rsync manage the conflict (on server B) if I have another folder like "/process/calc1" on server A after a new calculation (and "/process/calc1" already exists on server B)? Is it possible to add rules with rsync, and rename "/process/calc1" to "process/calc1R2" on server B? And so on (e.g. calc1R3)? Thanks.
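
    A minimal sketch of how such a move could look, assuming the /process/ paths above and SSH access from server A to server B (--remove-source-files deletes each source file only after it has transferred successfully):

        # run on server A: copy /process/ to server B, then delete the
        # transferred source files (directories are left behind and can be
        # pruned separately)
        rsync -av --remove-source-files /process/ user@serverB:/process/
        # remove now-empty calc directories on server A
        find /process/ -mindepth 1 -type d -empty -delete

    Note that rsync itself does not rename on conflict; by default an existing /process/calc1 on server B would simply be updated in place, so a renaming scheme like calc1R2 would have to be handled by a wrapper script before calling rsync.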

    Read the article

  • How to back up a remote VPS machine?

    - by morpheous
    I am considering opting for a VPS solution, with the server running Ubuntu Server. I am pretty new to this, and I need to come up with a backup policy for my server data. The initial data is likely to be about 80 MB, and I expect it to grow at approximately 5 MB to 10 MB a day. Can anyone recommend a backup/restore policy (best practices for a small startup), and which tools to use for backups? Another thing that is not clear to me is where the files are normally backed up to (in the case of remote servers). If the files are backed up to the same machine (or even to another machine but with the same host), there is potentially a single point of failure. How do people normally back up their server data, and is the probability of machine meltdown or the host company's server farm "catching fire" so remote as not to be worth worrying about - especially for a small (read: one man) startup like me?
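
    One commonly used pattern, sketched here only as an assumption about the setup (hostnames and paths below are placeholders), is to push nightly copies to a machine owned by a different provider, so the VPS host itself is not a single point of failure:

        # run from cron on the VPS: archive the data directories and copy
        # them off-site over SSH (offsite.example.com is a placeholder)
        tar czf /tmp/backup-$(date +%F).tar.gz /var/www /etc
        scp /tmp/backup-$(date +%F).tar.gz user@offsite.example.com:backups/
        rm /tmp/backup-$(date +%F).tar.gz

    Tools such as rsync, rdiff-backup or duplicity can do the same thing incrementally, which matters more as the data grows.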

    Read the article

  • Uploading file > 1 MB on Django admin gives 400 Bad Request response.

    - by ayaz
    I have a small Django (1.2.x) project deployed on Apache (2.x) via mod_wsgi (2.x). In the admin, if I upload a file < 1MB, I can get it through; however, for a file, say, 1.2MB in size, I get a 400 response from the server with "Error 400" in the body only. I am wondering why this is happening. As far as I can see, there is no LimitRequestBody set in Apache configuration. I have tried uploading with several browsers including: Firefox, Chrome, and Safari. In the log file for Apache, there is apparently no entry for requests that gave the 400 error response. This is strange. I should point out that the scenario where this is happening is thus: The project in question is deployed on two identical Apache servers (completely identical setup) that are behind a load balancer. On my development setup, of course, the problem does not surface. Any help with this will be very much appreciated.
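
    Since the project works in development and only fails behind the load balancer, one first step is to rule out request-size limits along the path. A rough check, assuming a Debian-style Apache layout (adjust paths to your distribution):

        # look for body-size limits in the Apache configuration on both servers
        grep -Ri "LimitRequestBody" /etc/apache2/
        # mod_security, if present, has its own limit
        grep -Ri "SecRequestBodyLimit" /etc/apache2/

    If nothing turns up there, the load balancer itself is worth checking, since many balancers enforce their own maximum request body size and would reject the upload before it ever reaches Apache, which would also explain the missing entries in the Apache logs.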

    Read the article

  • Microsoft Visual C++ Runtime error when viewing JPEG files in Explorer

    - by Prince Kitts
    I am getting the following error each time I view JPG files in Windows Explorer in Details view:

        Microsoft Visual C++ Runtime Library
        Assertion failed!
        Program: C:\Windows\Explorer.EXE
        File: multimedia\photos\metadatahandler\util.cpp
        Line: 4706
        Expression: MinutesFraction < 1.0
        For information on how your program can cause an assertion failure,
        see the Visual C++ documentation on asserts
        (Press Retry to debug the application - JIT must be enabled)

    These pictures were taken with a Nikon Coolpix AW110 camera. I think it is related to some EXIF date/time data. I have tried reinstalling the Visual C++ 2013 and 2008 runtime libraries and restarting, and the problem is still there. I uploaded a sample file here: https://anonfiles.com/file/e346e174708714a88d372e295265a03f (click the topmost download button and not the ad below it, or just save the opened image)

    Read the article

  • Why can't I renew my dynamic IP address?

    - by qwerty
    So, I'm going to explain this from the start. I've started a project with a friend of mine which includes a web spider that crawls through all pages on a site and stores them in a DB. Since I've never done this before, I didn't think about the amount of requests I was actually sending to the site, and after a day or two I finally got my IP blocked. I need to be able to visit that site as it's very important to me, not only for my project but for other reasons too (and if I'm able to renew my IP I'm going to set a delay on the crawler so I don't get blocked & DDoS the site). I have a dynamic IP address, at least that's what my router settings say. I've tried ipconfig /flushdns, ipconfig /release, and restarting the computer. No result. I end up with the same IP address. I've also tried renewing it from the router; however, I think it uses the same method, which isn't working. Is it possible that the site has blocked my MAC address? Can a site even see my MAC address?

    Read the article

  • Is it safe to delete "Account Unknown" entries from Windows ACLs in a domain environment?

    - by Graeme Donaldson
    It's not uncommon to see entries in Windows ACLs (NTFS files/folders, registry, AD objects, etc.) with the name "Account Unknown (SID)". Obviously these are because of old AD users or groups which at some point had permissions manually configured on the relevant object and have since been deleted. Does anyone know if it is safe to remove these "Account Unknown" ACEs? My gut feeling is that it should be just fine, but I'm wondering if anyone has any past experiences where doing this has caused trouble? Normally I just ignore these, but the company I'm working at now seems to have an abnormal number of these, most likely due to past admins' inexperience with AD/Windows and assigning permissions to user accounts rather than groups in all sorts of weird places. FWIW, our environment is not complex, a single domain forest, 4 DCs in 3 sites, with all network connectivity and replication healthy, so I'm certain that these "Account Unknown" entries are really old accounts, and not just because of some failure to resolve the SID to a human-readable name.
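
    For the filesystem side, one way to gauge how widespread a particular orphaned SID is before touching anything is icacls, which can search a tree for ACLs that explicitly mention a given SID. A sketch, where both the path and the SID are placeholders to replace with your own:

        rem list every file/folder under D:\Data whose ACL explicitly
        rem references the given SID (use the SID shown as "Account Unknown")
        icacls D:\Data /findsid S-1-5-21-1111111111-2222222222-3333333333-1001 /T

    That at least turns the cleanup into an informed decision per SID rather than a blind delete.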

    Read the article

  • How to back up a network volume to my Time Capsule?

    - by Mike
    I have a Time Capsule that I'm using for my backups. I have a network volume (coincidentally on the same time capsule) that I'd like to back up as well. How can I tell Time Machine to back up network volumes in addition to my main laptop hard drive? PS: yes, I know this setup isn't ideal. It'll incur 2x network overhead when backing up the network volume, plus my data won't be safe in the event of a drive failure since both copies will be on the same disk. However, it will give me some small amount of safety in the event I accidentally delete files on the network volume, among other things.

    Read the article

  • gitweb- fatal: not a git repository

    - by Robert Mason
    So I have set up a simple server running debian stable (squeeze), and have configured git. Using gitolite, I have all functionality (at least the basic clone/push/pull/commit) working. Installation of gitweb went without any issues. However, when I access gitweb, I get a gitweb screen without any repos listed.

        # tail -n 1 /var/log/apache2/error.log
        [DATE] [error] [client IP_ADDRESS] fatal: Not a git repository: '/var/lib/gitolite/repositories/testrepo.git'
        # cd /var/lib/gitolite/repositories/testrepo.git
        # ls
        branches  config  HEAD  hooks  info  objects  refs

    Here is what I see in /var/lib/gitolite/projects.list:

        testrepo.git

    And in /etc/gitweb.conf:

        # path to git projects (<project>.git)
        $projectroot = "/var/lib/gitolite/repositories";
        # directory to use for temp files
        $git_temp = "/tmp";
        # target of the home link on top of all pages
        #$home_link = $my_uri || "/";
        # html text to include at home page
        $home_text = "indextext.html";
        # file with project list; by default, simply scan the projectroot dir.
        $projects_list = "/var/lib/gitolite/projects.list";
        # stylesheet to use
        $stylesheet = "gitweb.css";
        # javascript code for gitweb
        $javascript = "gitweb.js";
        # logo to use
        $logo = "git-logo.png";
        # the 'favicon'
        $favicon = "git-favicon.png";

    What is missing?
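
    The "fatal: Not a git repository" in Apache's error log, while the same path looks fine from a root shell, is often simply the web server user not being able to read the gitolite repositories. A quick way to test that theory (user and group names below are the Debian defaults and may differ on your box):

        # check who owns the repositories and what the permissions are
        ls -ld /var/lib/gitolite /var/lib/gitolite/repositories
        # try reading the repo as the Apache user
        sudo -u www-data git --git-dir=/var/lib/gitolite/repositories/testrepo.git rev-parse HEAD
        # one common fix: let www-data into the gitolite group
        sudo usermod -a -G gitolite www-data

    Whether adding www-data to the gitolite group is acceptable (and whether gitolite's UMASK then needs adjusting so new repos stay group-readable) depends on your security requirements, so treat this as a diagnostic sketch rather than the definitive fix.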

    Read the article

  • Where are task scheduler event files stored on Windows Server 2008?

    - by MacGyver
    I tried setting up triggers in Task Scheduler on Windows Server 2008, but Microsoft needs to fix a bug that I documented on Stack Overflow, so I can't use triggers until they fix it. Basically, Task Scheduler doesn't trigger on an event that has a Result Code of 2147942402. http://stackoverflow.com/questions/10933033/windows-server-2008-task-scheduler-trigger-xml-syntax-for-email-notification-ta In the meantime, I'd like to write a task that runs .NET code that queries the log/event files programmatically every 15 minutes and sends success/failure emails based on the events that occurred for the given tasks in Task Scheduler. Here's where the trigger XML for each task is stored (which I can't rely on): C:\Windows\System32\Tasks\ I cannot find where the events (or log files containing the event IDs) of each task are stored. Does anyone know where to find these log files? The log name is "Microsoft-Windows-TaskScheduler/Operational".
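
    The operational log named at the end is not a flat file you can read directly; it lives under C:\Windows\System32\winevt\Logs\ as an .evtx file and is normally queried through the event log API. As a quick sanity check before writing the .NET poller, something along these lines (a sketch, not tied to your task names) dumps recent entries from that channel:

        wevtutil qe Microsoft-Windows-TaskScheduler/Operational /c:20 /rd:true /f:text

    The same channel can be read from .NET via System.Diagnostics.Eventing.Reader (EventLogQuery/EventLogReader), which is probably the cleanest basis for the 15-minute polling task.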

    Read the article

  • Windows 7 Laptops "unable to find logon server"

    - by tombull89
    In the school I work in we have a large number (500+) of Windows 7 laptops on a Meru wireless system and Server 2008 R2 servers. We have an appallingly large failure rate where a student or staff member tries to log on (to a laptop joined to the domain) and is met by "There are currently no logon servers available to service the logon request". I've seen this question and this one, and while removing and re-joining to the domain works, it's highly time consuming, irritating, and we (I) have more important things to concentrate on. The only MS KB article I can find is regarding a read-only domain controller (which ours isn't) and I'm at my wits' end. Ultimately, is there a definitive cause and answer to this, or am I stuck removing and re-joining the laptops?

    Read the article

  • Unable to force Debian to do unattended install... libc6 wants interactive confirm

    - by JD Long
    I'm trying to create a script that forces a Debian Lenny install to install the latest version of CRAN R. During the install it appears libc6 is upgraded, and the install wants interactive confirmation that it's OK to restart three services (mysql, exim4, cron). This process HAS to be unattended as it runs on Amazon's Elastic Map Reduce (EMR) machines, but I'm running out of options. Here are a few things I've tried. This previous question appears to be exactly what I'm looking for, so I set up my install script as follows:

        # set my CRAN repos... yes, I know there's a new convention where to put these.
        echo "deb http://cran.r-project.org/bin/linux/debian lenny-cran/" | sudo tee -a /etc/apt/sources.list
        echo "deb-src http://cran.r-project.org/bin/linux/debian lenny-cran/" | sudo tee -a /etc/apt/sources.list
        # set the dpkg.cfg options per the previous SuperUser question
        echo "force-confold" | sudo tee -a /etc/dpkg/dpkg.cfg
        echo "force-confdef" | sudo tee -a /etc/dpkg/dpkg.cfg
        export DEBIAN_FRONTEND=noninteractive
        # add key to keyring so it doesn't complain
        gpg --keyserver pgp.mit.edu --recv-key 381BA480
        gpg -a --export 381BA480 > jranke_cran.asc
        sudo apt-key add jranke_cran.asc
        sudo apt-get update
        # install the latest R
        sudo apt-get install --yes --force-yes r-base

    But this script still hangs at the interactive restart prompt. OK, so I tried stopping the services using the following script:

        sudo /etc/init.d/mysql stop
        sudo /etc/init.d/exim4 stop
        sudo /etc/init.d/cron stop
        sudo apt-get install --yes --force-yes libc6

    This does not work and the interactive screen comes back, but this time with only cron listed as the service that must be restarted. So is there a way to make libc6 just restart these services with no user input? Or is there a way to stop cron so it does not cause an interactive prompt? Maybe a creative option I've never thought of? Keep in mind that this system is brought up, some Hadoop code is run, and then it's torn down, so I can put up with side effects and bad behavior that we might not want on a production desktop machine or web server.
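
    Two things in the script above may be worth double-checking, offered here as a sketch rather than a verified fix. First, the exported DEBIAN_FRONTEND does not survive sudo unless sudo is told to keep it, so the prompt may simply be coming from a non-noninteractive frontend. Second, the restart question for libc6 has its own debconf switch that can be preseeded:

        # tell debconf that services may be restarted without asking
        echo 'libc6 libraries/restart-without-asking boolean true' | sudo debconf-set-selections
        # pass the frontend and the dpkg conffile options on the apt-get call itself
        sudo DEBIAN_FRONTEND=noninteractive apt-get -y \
            -o Dpkg::Options::="--force-confdef" \
            -o Dpkg::Options::="--force-confold" \
            install r-base

    The debconf template name (libraries/restart-without-asking) is the one used by libc6 on Debian-based systems; if it differs on Lenny, running debconf-show libc6 after a failed run will list the actual keys.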

    Read the article

  • Bad disks in ancient server

    - by Joel Coel
    I have a 1998-era Netware 3.12 server that runs everything on our campus: general ledger, purchasing, payroll, student information, grades, you name it. The server has an Adaptec RAID controller with two volumes: RAID 1, two 17GB SCSI disks (Seagate ST318417W); and RAID 5, three 4GB SCSI disks (two Seagate ST34573W and one ST34572W). We are currently in the early stages of a project to replace this system, but you don't just jump into a new system like that, and so I need to keep this server running until at least November 2011. This week we had not one but two hard drives fail. Thankfully they are from different volumes and we're able to keep running for the moment, but given the close timing of these failures I have serious doubts that I'll be able to avoid catastrophic failure of this server through the November target as is, without restoring the RAID redundancy; it'll only take one more drive failure anywhere and I'm completely hosed. We are fortunate enough to have exact-match "spares" lying around for both drives, but the spares are in unknown condition. I tried swapping just them in, but the RAID controller isn't smart enough to handle this and it renders the system unbootable. As for the RAID controller itself, there is a utility I can get into during POST via a Ctrl-A shortcut, but I can't do much useful from there. To actually manage volumes I must first boot into Netware, at which point I can use CI/O Array Management Software Version 2.0 to look at volume information. I suspect that the normal way to manage things is to boot from a special floppy with the controller software on it, but that floppy is long gone. Going through the options in the RAID software, I think the only supported way to replace a disk in an existing RAID volume is to physically add the disk, boot up and configure it as a "spare" for a volume, force the volume to use the spare to replace an existing down disk (and at this point I'm only guessing) so that the down disk becomes the spare, repair the volume, remove the spare from the volume, and then shut down and remove the disk. Then start all over for the other failed disk. All this amounts to a lot of downtime, assuming I can even make it work and that my spares are any good. As for finding reliable spares, I have no clue where to even begin looking for a new 4GB SCSI drive, or even which exact SCSI system I'm looking for, as it's gone through a few different iterations over time. Another option is to migrate this to a virtual machine (Hyper-V), but all previous attempts we've made in this area have failed to get very far. When this machine was installed I was just graduating from high school, and so it requires lower-level knowledge of Netware and DOS than I ever developed, or have since forgotten if I did (I'm not exactly a DOS neophyte, either). Part of my problem is this is a high-use server, and taking it down for a few days to figure things out isn't gonna fly very well. As for the question, I'm looking for anything that might be helpful in this situation: a recommendation on a place to find good spares from this era, personal experience repairing RAID volumes using a similar controller or building a Hyper-V VM from an old Netware server, a line on a floppy with better software for the RAID controller, a recommendation for a good Novell consultant in Nebraska who would be able to put things right, a whole other option I haven't considered yet, etc.
    Update: For backups, we have good (recently verified via restore) backups of the data only -- nothing for the software that actually runs things.

    Update 2: Just a progress report that I currently have a working Netware 3.12 install in VMWare Virtual Server 2.0, thanks largely to the guide I found here: http://cerbulescubogdan.blogspot.com/2010/11/novell-netware-312-on-vmware.html The next steps are preparing empty Netware volumes to match the additional volumes on my existing server, taking a dump of everything on the C:\ drive and Netware volumes on my existing server, figuring out from that information what modules need to be added to Netware, installing my licenses (we do still have that disk, if it's any good), and moving data over. I have approval to bring the server down for a week after the first of the year (sadly not before), so, aside from creating empty volumes, the rest of the work will have to wait until then.

    Final Update (Jan 5, 2011): I was able to get spares working in both RAID arrays without data loss this week. Both are now listed by the controller as "FAULT TOLLERANT" (yay!). I was also able to build on the progress from my last update and now have a functional "spare" server in VMWare Server 2.0. The spare can run and use our ERP software, but I can't put it into production because I can't (yet) print from that box (and I have no idea why). Even so, this VM will do in a pinch if I have no other choice, and between it and the repaired RAID arrays I'm comfortable pushing on until I can junk the machine in November.

    Read the article

  • How can I find out which driver/file is being loaded when the system hangs during the Windows 7 boot

    - by user24247
    My desktop computer (1 OS, 1 drive, 1 partition) hangs during the Windows 7 boot process. When pressing F8 I can select Safe Boot, which allows me to see the files processed during the boot process. I know that the last line displayed is the last file that was SUCCESSFULLY loaded. How do I find out what the next line, and therefore the candidate driver/file/program, would have been? The unusual thing, at least in my experience, is that the freezing of the system also happens when I boot from the Windows 7 install disks, which keeps me from any repair options. With both failing, I cannot restore Windows 7 to a previous date or uninstall drivers/programs that may be the cause of the hanging. Thanks for your responses. Marc
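
    One possible way to get at the "next" file, assuming Safe Mode itself still works, is to turn on boot logging so Windows writes the driver list to a file that survives the hang; the last entries in that file after the next failed boot point at the area where loading stops:

        rem from an elevated prompt in Safe Mode: log drivers on the next boot
        bcdedit /set {current} bootlog Yes
        rem after the next (failed) normal boot, return to Safe Mode and inspect
        notepad C:\Windows\ntbtlog.txt

    This is only a sketch; the on-screen list in Safe Mode and ntbtlog.txt do not always agree line for line, and a hang that also affects the install DVD may point at hardware rather than a specific driver.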

    Read the article

  • What's the best way to run Drupal and Django sites behind the same Varnish server?

    - by Alexis Bellido
    I have a high-traffic website running with Drupal and Apache, five web servers behind a Varnish server load balancing. Let's say this site is example.com. I'm using five backends and a director like this in my default.vcl:

        director balancer round-robin {
            { .backend = web1; }
            { .backend = web2; }
            { .backend = web3; }
            { .backend = web4; }
            { .backend = web5; }
        }

    Now I'm working on a new Django project that will be a new section of this site running on example.com/new-section. After checking the documentation I found I can do something like this:

        sub vcl_recv {
            if (req.url ~ "^/new-section/") {
                set req.backend = newbackend;
            } else {
                set req.backend = default;
            }
        }

    That is, using a different backend for a subdirectory /new-section under the same domain. My question is, how do I make something like this work with my director and load balancing setup? I'm probably going to run two or more web servers (backends) for my new Django project, each one with a mix of Gunicorn, Nginx, and a few Python packages, and would like to put all of those in their own Varnish director to load balance. Is it possible to use the above approach to decide which director to use, like this:

        sub vcl_recv {
            if (req.url ~ "^/new-section/") {
                set req.director = newdirector;
            } else {
                set req.director = balancer;
            }
        }

    All suggestions welcome. Thanks!

    Read the article

  • How reliable is HDD SMART data?

    - by andahlst
    Based on SMART data, you can judge the health of a disk, at least that is the idea. If I, for instance, run sudo smartctl -H /dev/sda on my Arch Linux laptop, it says that the hard drive passed the self tests and that it should be "healthy" based on this. My question is how reliable this information is, or more specifically: If according to the SMART data this disk is healthy, what are the odds of the disk suddenly failing despite this? This assumes the failure is not due to some catastrophic event that could not possibly have been predicted, such as the laptop falling on the floor and causing the drive heads to hit the disk. If the SMART data does not say the disk is in good shape, what are the odds of the disk failing within some amount of time? Are false positives possible, and how common are they? Of course, I keep backups no matter what. I am mostly curious.
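
    The overall health flag is a fairly blunt instrument; the individual attributes usually tell you more, since studies of large disk populations have found attributes such as reallocated or pending sectors to correlate with failure, while many drives still die with a clean SMART record. A quick look, assuming smartmontools is installed as on the laptop above:

        # full attribute table rather than just the pass/fail verdict
        sudo smartctl -A /dev/sda
        # run a long self-test and check the result later
        sudo smartctl -t long /dev/sda
        sudo smartctl -l selftest /dev/sda

    Attributes worth watching are Reallocated_Sector_Ct, Current_Pending_Sector and Offline_Uncorrectable; a non-zero, growing value in any of these is a much stronger warning than the summary "PASSED".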

    Read the article

  • Prevent Sending Spam with my Yahoo Account?

    - by Mohammad
    This morning I found that my Yahoo Mail account had sent several emails with the subject "re:" and this body: "this is crazy you should give it a look http://www.wanews15.net/biz/?news=9080205" to all my contact emails!!! I have ESET NOD32 (up to date) antivirus, but it couldn't find anything. I installed Microsoft Security Essentials and it did find a virus. I removed it and changed my mail password. What happened to my Yahoo Mail account and what should I do? Edit: My email password was 12 characters and I couldn't remember it, so I'm using RoboForm. Is it possible that my RoboForm password was divulged?!! Edit 2: I've changed my Yahoo Mail password, but this morning I received failure notices for sending mail to some invalid mail addresses again, but there is a difference this time: the sent mails don't exist in my Sent box. Is that a good thing or do I have to do something else?

    Read the article

  • ZFS SAS/SATA controller recommendations

    - by ewwhite
    I've been working with OpenSolaris and ZFS for 6 months, primarily on a Sun Fire x4540 and standard Dell and HP hardware. One downside to standard Perc and HP Smart Array controllers is that they do not have a true "passthrough" JBOD mode to present individual disks to ZFS. One can configure multiple RAID 0 arrays and get them working in ZFS, but it impacts hotswap capabilities (thus requiring a reboot upon disk failure/replacement). I'm curious as to what SAS/SATA controllers are recommended for home-brewed ZFS JBODs. In addition, how does battery-backed write cache (BBWC) play into the solution?

    Read the article

  • Moving a VM from MS Virtual Server R2 to VirtualBox

    - by AngryHacker
    I have a Windows 2000 Professional VM happily running under MS Virtual Server R2 on WindowsXP. I need to move it to a different PC and the only one available is a Mac with VirtualBox on it. Theoretically, VirtualBox accepts .VHD files out of the box, but my attempts to run my VHD on VirtualBox resulted in failure. The Guest OS just keeps rebooting. I've attempted to get help on the VirtualBox forums, and they said that the VM image must first be prepped for the move, but could not give me any specifics. Anyone here know what is specifically involved in moving a Win2k .VHD to VirtualBox on the Mac?
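
    The usual suspect with a Windows guest that boot-loops after a move is the storage driver: the "prepping" referred to on the forums generally means making sure the guest has a generic IDE driver active before the move (the MergeIDE approach) and then attaching the VHD to an IDE controller in VirtualBox. Purely as a sketch of the VirtualBox side, with the VM name and path as placeholders:

        # the ostype identifier can vary by VirtualBox version;
        # "VBoxManage list ostypes" shows the valid names
        VBoxManage createvm --name "Win2000" --ostype Windows2000 --register
        VBoxManage storagectl "Win2000" --name "IDE" --add ide
        VBoxManage storageattach "Win2000" --storagectl "IDE" \
            --port 0 --device 0 --type hdd --medium /path/to/win2k.vhd

    If the guest still reboots immediately, booting it once in Safe Mode, or re-running the IDE-driver preparation inside the original Virtual Server VM before copying the VHD again, would be the next things to try.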

    Read the article

  • Too many processes?

    - by Mophily
    VMware Fusion v3.0.1 One vm for Windows XP, converted and recently expanded the disk size. Mac OS X 10.5.8, 2 x 3 GHz dual-core intel xeon machine. Immediately after booting, and before VMware Fusion is launched, the Activity Monitor shows eight processes associated with "vm". What caught my eye is the duplicates: netifup and dhcpd. I noticed this while trying to re-establish network connectivity after the upgrade to 3.0.1. I am not sure when the network connection was clobbered, so I cannot say it happend during the upgrade. Is eight processes typcial? I expect about six, as listed in other notes and documents on the web site. Could this be related to the failure to connect to the network?

    Read the article

  • Shared FC LVM VG with LVs for each KVM VM. Clvm required?

    - by Cocoabean
    I have 2 virtual machine hosts running Ubuntu 12.04 and KVM managed with libvirt. They are both connected to the same VG which is a LUN on my SAN over FC. I provision LVs on this shared VG for each VM. I don't think I need HA or failover, but I do want live migration between the hosts. Do I need clvm in this case? As long as I don't try to start the same VM on each host should this work? Clvm requires lots of overhead with clustering tools that I don't think I need. I can deal with manually restarting VMs on other hosts in the event of a hardware failure.
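
    As a sketch of why this mostly works without clvm, but also where it bites: LVM metadata changes made on one host are not seen by the other until it rescans, so the danger is less about running a VM twice and more about one host caching stale metadata while LVs are created or resized on the other. A conservative routine under that assumption (volume and VM names are placeholders):

        # create the LV for a new VM on host1 only
        lvcreate -L 20G -n vm-newguest shared_vg
        # make host2 aware of the change before using or migrating the VM
        ssh host2 'vgscan && lvchange -ay shared_vg/vm-newguest'
        # live migration itself goes through libvirt as usual
        virsh migrate --live newguest qemu+ssh://host2/system

    With only two hosts and manual provisioning this is manageable; clvm (or, on newer stacks, lvmlockd) is what removes the need for that manual discipline.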

    Read the article

  • Problem setting up DL360G5 with scsi RAID

    - by ernelli
    I have a problem with reinstalling the OS on a DL360G5. The BIOS [F9] does not detect any disk controllers, and HP SmartSetup did not find any compatible controllers. Inside the server, the two SCSI disks are connected to a RAID controller using a BCM8603 chipset. How is the disk controller supposed to be set up? I have tried a full BIOS reset. EDIT At the moment we suspect that the Smart Array controller E200i/412205-001 is broken. Are there any status LEDs that indicate failure or success during start-up? At the moment all LEDs are off.

    Read the article

  • Get the "source network address" in Event ID 529 audit entries on Windows XP

    - by Make it useful Keep it simple
    In Windows Server 2003, when an Event 529 (logon failure) occurs with a logon type of 10 (remote logon), the source network IP address is recorded in the event log. On a Windows XP machine, this (and some other details) is omitted. If a bot is trying to brute force over RDP (some of my XP machines are, and need to be, exposed with a public IP address), I cannot see the originating IP address, so I don't know what to block (with a script I run every few minutes). The DC does not log this detail either when the logon attempt is to the client XP machine and the DC is only asked to authenticate the credentials. Any help getting this detail into the log would be appreciated.
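
    One possible workaround on the XP side, assuming the built-in Windows Firewall is in use on those machines, is to let the firewall log connections so the blocking script has a source address to work from even though Event 529 does not record one:

        rem enable connection logging for the XP firewall (log path is an example)
        netsh firewall set logging filelocation=C:\pfirewall.log connections=ENABLE
        rem inbound connections on TCP 3389 then appear in the log with their
        rem source IP, which the blocking script can correlate with the 529 timestamps

    This is a sketch of the idea rather than a tested recipe; a third-party firewall or an upstream router that can log connections would serve the same purpose.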

    Read the article

  • How to prepare WiFi for an on-stage demo?

    - by Jeremy White
    Today at WWDC, Steve Jobs gave his keynote and ended up having a failure on-stage when connecting to WiFi. Google had a similar issue a few weeks ago in the same conference center. Please reference the following article for more information. http://news.cnet.com/8301-31021_3-20007009-260.html I am looking for information on how to best prepare a demo which uses a closed wireless network in front of a large audience. Note that the network will be closed, and will not require internet access. What steps can I take to prevent interference from existing WiFi, Bluetooth, etc? How can I best prevent curious/malicious people from trying to intrude on my WiFi network? I am open to recommendations on specific models of routers.

    Read the article
