Search Results

Search found 6524 results on 261 pages for 'the ever kid'.

  • Windows file locks allowing multiple users to write to open file over network

    - by JPbuntu
    I have six Windows computers (XP, Vista, 7) that need to access a Samba share (Ubuntu 12.04). I am trying to make it so only one client can open a file at a given time. I thought this was pretty standard behavior of file locks, but I can't get it to work. The way it is right now, a file can be open by two users, and changed and saved by either one of them; the last file saved overwrites whatever changes the other user made. At first I thought this was a Samba configuration problem, but I get this behavior even between two Windows machines. So far I have only tested Windows XP << Windows Vista and Samba << Windows Vista, and both give the same behavior.

    When I tested the Samba configuration, I had set strict locking = yes, and got errors logged like this:

        close_remove_share_mode: Could not get share mode lock for file _prod/part_number_list_COPY.xlsx

    Eventually all of the files are going to be moved onto the Samba share, so that is the configuration I am most concerned about fixing. Any ideas? Thanks in advance.

    EDIT: I tested an Excel file again, and it is now working properly in both of the above-mentioned cases; I am also no longer getting the error above. I don't know what happened - perhaps a restart fixed it? (It also works with strict locking = no.) I still need a solution for the CAD/CAM files we use, though: the software is Vector, and it does not seem to use file locks. Is there any software that I can use to manage these files, so two people can't open/edit them at a time? Maybe a Windows application that forces file locks? Or a dirt-simple version control system? (The only ones I have seen are too complicated for our needs.)
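    A minimal sketch of the smb.conf locking knobs involved (the share name and path here are placeholders, not from the post). Note that Samba can only enforce locks that clients actually request, so an application like Vector that never asks for one may still need an external check-in/check-out tool:

        [prod]
            path = /srv/samba/prod      ; hypothetical path
            locking = yes               ; honour byte-range locks (default)
            strict locking = yes        ; check locks on every read/write
            oplocks = no                ; stop clients caching the file locally
            level2 oplocks = no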

  • Verifying SMTP “MAIL FROM:” Matches “From:” Header in DATA

    - by dkovacevic
    Is there ever a legitimate reason for the SMTP “MAIL FROM:” field to not match the “From:” field in the DATA section of a message, besides mailing lists?

    From http://stackoverflow.com/questions/1750194/smtp-why-does-email-needs-envelope-and-what-does-the-envelope-mean: “But, to continue your snail mail metaphor, most professional letters will contain the sender's and recipient's addresses printed on the letter itself. Those addresses are not necessary for the postman, but are instead a courtesy to the recipient. So it's sensible that email would work the same way.” The problem with this line of logic lies in “courtesy to the recipient”: including the “From:” address in an email via SMTP is not a courtesy; it is required if the recipient is to be able to send a reply.

    From “How to limit the From header to match MAIL FROM in postfix?”: “But if you really want to ensure From: and MAIL FROM then you have to apply header_checks so that Return-Path: matches From:”. What are the implications of doing this? Mailing lists would obviously be a problem. Are there any other legitimate uses of differing “MAIL FROM:” and “From:” header information?
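    For reference, a sketch of the header_checks approach quoted above. header_checks cannot compare the two fields against each other dynamically; it matches fixed patterns only, so the usual compromise is to reject From: headers that fall outside your own domain (example.com is a placeholder):

        # /etc/postfix/main.cf (excerpt)
        header_checks = pcre:/etc/postfix/header_checks

        # /etc/postfix/header_checks
        /^From:(?!.*@example\.com)/ REJECT From: domain does not match our sender policy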

  • How does enterprise failover, such as with google.com, actually work?

    - by Alex Regan
    We have a few Fedora systems that are configured for web, FTP, and email services. We'd like to mirror these services so that we can provide near-100% reliability for our users. I'm a fairly experienced Linux administrator, but I don't have much experience with redundant systems. What is the best way to do this? How do Google and Amazon do it?

    Google.com resolves to multiple IP addresses, but if my local desktop caches one of the IPs and that one is unreachable, I'm going to get a failed-connection message. How do they prevent that from happening? If one of their servers goes down, how is traffic automatically redirected to another system, without the end user ever knowing it? I understand there are failover devices, but they're only for failing over the system itself, not a complete network.

    Let's say we have the worst-case scenario: my primary system becomes inaccessible. What are the fundamental components that are used on Linux systems to provide this capability? I'm looking for concepts or approaches, not answers like "check out OpenStack". What are the actual pieces that make up the solution? What has to be done to implement this capability?

    Hopefully my question is clear. I'd like to know what the pieces are that make up a failover system, and what approach is taken by successful organizations that implement it. Thanks again, Alex
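    As one concrete illustration of the LAN-level piece (an illustration only, not the whole answer): two Linux boxes can share a floating service IP with VRRP, e.g. via keepalived, so clients keep using one address while the standby takes over within seconds. A minimal sketch with placeholder addresses:

        vrrp_instance VI_1 {
            state MASTER             # the standby box says BACKUP here
            interface eth0
            virtual_router_id 51
            priority 100             # standby uses a lower priority, e.g. 50
            advert_int 1
            virtual_ipaddress {
                192.0.2.10           # the address your A records point at
            }
        }

    Cross-datacenter failover (the google.com case) layers health-checked DNS with short TTLs and anycast routing on top of building blocks like this, which is why a cached IP rarely stays dead for long.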

  • Is TrueCrypt truly safe?

    - by Alfred
    Hi. I have been using TrueCrypt for a long time now. However, someone pointed me to a link that described the problems with its license. IANAL, so it really didn't make much sense to me; still, I want my encryption software to be open source - not because I could hack into it, but because I can trust it.

    Some of the issues I have noticed:

    - There is no VCS for the source code. Is this OK?
    - There are no changelogs.
    - The forums are a bad place to be: they ban you even if you ask a genuine question.
    - Who really owns TrueCrypt?
    - There were some reports of tinkering with the MD5 checksums.

    To be honest, the only reason I used TrueCrypt was that it was open source. But some things are just not right. Has anyone ever validated the security of TrueCrypt? Should I really be worried? Yes, I am paranoid; if I use encryption software, I trust it with my life. If my concerns are genuine, is there any other open source alternative to TrueCrypt?

  • Server 2003 crashing intermittently, want to transfer function to other DC

    - by user1305332
    I have a Win2003R2 server that has been crashing intermittently since some viruses were introduced. I'm sure all the viruses have been cleaned, thanks to Malwarebytes (we were using McAfee - useless). When it crashes you can't log in (locally or remotely) but can still access files remotely and ping it. After a while even file sharing stops, and I have to kill the power to restart it (no BSOD).

    I need to either fix it (I tried to reinstall SP2, and I tried to reinstall Windows in repair mode, but the repair option was not available when I booted from the installation disks) or move its functionality to another DC (another 2003R2 server). The server that's crashing is old, with SCSI drives, while the new server uses SATA drives and is faster, so it seems like a good idea to just transfer the roles and ditch the old box. Replacement SCSI drives look expensive if the current ones ever fail.

    What would I need to do to transfer the roles? If I just move the five FSMO roles and copy over the file shares, would the new server have enough to run without the old one? I've never done something like this; I just want some tips. Thanks.
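    For the role-transfer part, a sketch of the standard commands, run while both DCs are up (NEWDC is a placeholder for the new server's name; on 2003 the naming-master subcommand is spelled "transfer domain naming master"):

        :: show which DC currently holds each role
        netdom query fsmo

        :: transfer all five roles gracefully from within ntdsutil
        ntdsutil
        ntdsutil: roles
        fsmo maintenance: connections
        server connections: connect to server NEWDC
        server connections: quit
        fsmo maintenance: transfer pdc
        fsmo maintenance: transfer rid master
        fsmo maintenance: transfer infrastructure master
        fsmo maintenance: transfer domain naming master
        fsmo maintenance: transfer schema master

    After the roles move you would still need to repoint DNS, copy the file shares, and finally demote the old box with dcpromo.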

  • Intermittent PHP error: Undefined function <core function>

    - by Daniel
    In the last week I've been coming across an incredibly annoying error on one of our Slicehost slices. Every now and then PHP fails with a fatal error saying a certain function is undefined. The function changes, but it is always a core PHP function, e.g. defined(), version_compare(), etc. The problem has occurred in several different PHP applications - phpMyAdmin, my own custom-built apps, etc. - leading me to believe that it is not specific to the running code.

    Here are some details:

    - Debian Lenny
    - Apache 2.2.9
    - PHP 5.2.6-1+lenny4 with Suhosin-Patch (running eAccelerator 0.9.6)

    Apache and PHP are installed from Debian packages. Error logs show nothing out of the ordinary. I thought memory might be an issue, but free -m reports upwards of 100MB free almost all the time. Another thing I'm trying to investigate is whether the problem might be related to eAccelerator, but testing this theory is incredibly hard because the issue doesn't appear very often, and I've been using eAccelerator for months on this install without any problems until now. Has anyone ever come across anything like this? Why would PHP report undefined core functions?
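    One way to test the eAccelerator theory deliberately rather than waiting for the error to recur (a sketch; paths are Debian defaults and may differ on your slice): disable the opcode cache, or flush what may be a corrupted cache, and watch whether the fatal errors stop:

        ; /etc/php5/conf.d/eaccelerator.ini - disable for testing
        eaccelerator.enable = "0"

        # or flush a possibly corrupted on-disk cache and restart Apache
        rm -rf /var/cache/eaccelerator/*
        /etc/init.d/apache2 restart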

  • puppet master --compile logs errors to stdout

    - by danny
    I see a bug about this that was accepted and then closed a year ago: http://projects.puppetlabs.com/issues/3670 - but I'm using puppet 2.7.14 and am getting the same issue.

    I'm trying to use "puppet solo" (i.e. just running puppet apply on each server to be configured), as I only have 2 or 3 servers in this project and adding another server as a puppetmaster would be complete overkill. Unless I'm mistaken, the best way to apply a node manually to a server is to do:

        puppet master --compile=mynode > catalog.json
        puppet apply --catalog catalog.json

    But the puppet master command outputs a couple of warnings and notices to stdout, mixed in with the desired JSON content. And it uses colored output, so I can't just pipe it through egrep -v '^warning:'.

    EDIT: I guess it's not too big of a deal to use grep - since puppet 2.7 pretty-prints the actual content and the warnings don't ever start with spaces, piping the output through egrep '^( |{|})' works.

    So my questions are basically: Is there a better way than this to apply a puppet node without using a puppetmaster? I can't really find any good references online to using puppet without a puppetmaster, even though that seems like a perfectly reasonable thing to do for a small project. And is there a setting or flag that I'm missing that will get puppet master to stop being an asshole and send its errors to stderr instead of stdout? Or do I really have to turn off color logging, then grep to exclude warning: and notice: lines?
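    For completeness, a sketch of both workarounds (flag spellings are from puppet 2.7; double-check with puppet help): skip the compile step entirely with puppet apply against a manifest, or disable colour so the noise greps away cleanly:

        # masterless: apply the node's manifest directly
        puppet apply --modulepath=/etc/puppet/modules /etc/puppet/manifests/site.pp

        # or keep the compile step, minus colour codes and warnings
        puppet master --compile=mynode --color=false | egrep -v '^(warning|notice):' > catalog.json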

  • Bacula & Multiple Tape Devices, and so on

    - by Tom O'Connor
    Bacula won't make use of 2 tape devices simultaneously. (Search for #-#-# for the TL;DR.)

    A little background, perhaps. In the process of trying to get a decent working backup solution (backing up 20TB ain't cheap, or easy) at $dayjob, we bought a bunch of things to make it work. Firstly, there's a Spectra Logic T50e autochanger, 40 slots of LTO5 goodness, and that robot's got a pair of IBM HH5 Ultrium LTO5 drives, connected via FibreChannel Arbitrated Loop to our backup server. Then there's the backup server: a Dell R715 with 2x 16-core AMD 62xx CPUs and 32GB of RAM. Yummy. That server's got 2 Emulex FCe-12000E cards and an Intel X520-SR dual-port 10GE NIC. We were also sold Commvault Backup (non-NDMP).

    Here's where it gets really complicated. Spectra Logic and Commvault both sent respective engineers, who set up the library and the software. Commvault was running fine, in so far as the controller was working fine. The Dell server has Ubuntu 12.04 Server, runs the MediaAgent for CommVault, and mounts our BlueArc NAS via NFS on a few mountpoints, like /home and some stuff in /mnt. When backing up from the NFS mountpoints, we were seeing ~290GB/hr throughput. That's CRAP, considering we've got 20-odd TB to get through in a <48 hour backup window. The rated maximum on the BlueArc is 700MB/s (2460GB/hr); the rated maximum write speed on the tape devices is 140MB/s per drive, so that's 492GB/hr (or double it, for the total throughput).

    So, the next step was to benchmark NFS performance with IOzone, and it turns out that we get epic write performance (across 20 threads) - like 1.5-2.5TB/hr write - but read performance is fecking hopeless. I couldn't ever get higher than 343GB/hr maximum. So let's assume that 343GB/hr is a theoretical maximum for read performance on the NAS; then we should in theory be able to get that performance out of a) CommVault, and b) any other backup agent. Not the case. Commvault seems to only ever give me 200-250GB/hr throughput, so out of experimentation, I installed Bacula to see what the state of play there is. If, for example, Bacula gave consistently better performance and speeds than Commvault, then we'd be able to say "**$.$ Refunds Plz $.$**".

    #-#-#

    Alas, I found a different problem with Bacula. Commvault seems pretty happy to read from one part of the mountpoint with one thread and stream that to a tape device, whilst reading from some other directory with the other thread and writing to the 2nd drive in the autochanger. I can't for the life of me get Bacula to mount and write to two tape drives simultaneously. Things I've tried:

    - Setting Maximum Concurrent Jobs = 20 in the Director, File and Storage Daemons
    - Setting Prefer Mounted Volumes = no in the Job Definition
    - Setting multiple devices in the Autochanger resource

    The documentation seems to be very single-drive-centric, and we feel a little like we've strapped a rocket to a hamster with this one. The majority of example Bacula configurations are for DDS4 drives, manual tape swapping, and FreeBSD or IRIX systems. I should probably add that I'm not too bothered if this isn't possible, but I'd be surprised. I basically want to use Bacula as proof to stick it to the software vendors that they're overpriced ;) I read somewhere that @KyleBrandt has done something similar with a modern tape solution.
    Configuration Files:

    bacula-dir.conf:

        #
        # Default Bacula Director Configuration file
        #
        Director {                            # define myself
          Name = backuphost-1-dir
          DIRport = 9101                      # where we listen for UA connections
          QueryFile = "/etc/bacula/scripts/query.sql"
          WorkingDirectory = "/var/lib/bacula"
          PidDirectory = "/var/run/bacula"
          Maximum Concurrent Jobs = 20
          Password = "yourekiddingright"      # Console password
          Messages = Daemon
          DirAddress = 0.0.0.0
          #DirAddress = 127.0.0.1
        }

        JobDefs {
          Name = "DefaultFileJob"
          Type = Backup
          Level = Incremental
          Client = backuphost-1-fd
          FileSet = "Full Set"
          Schedule = "WeeklyCycle"
          Storage = File
          Messages = Standard
          Pool = File
          Priority = 10
          Write Bootstrap = "/var/lib/bacula/%c.bsr"
        }

        JobDefs {
          Name = "DefaultTapeJob"
          Type = Backup
          Level = Incremental
          Client = backuphost-1-fd
          FileSet = "Full Set"
          Schedule = "WeeklyCycle"
          Storage = "SpectraLogic"
          Messages = Standard
          Pool = AllTapes
          Priority = 10
          Write Bootstrap = "/var/lib/bacula/%c.bsr"
          Prefer Mounted Volumes = no
        }

        #
        # Define the main nightly save backup job
        #   By default, this job will back up to disk in /nonexistant/path/to/file/archive/dir
        Job {
          Name = "BackupClient1"
          JobDefs = "DefaultFileJob"
        }

        Job {
          Name = "BackupThisVolume"
          JobDefs = "DefaultTapeJob"
          FileSet = "SpecialVolume"
        }

        #Job {
        #  Name = "BackupClient2"
        #  Client = backuphost-12-fd
        #  JobDefs = "DefaultJob"
        #}

        # Backup the catalog database (after the nightly save)
        Job {
          Name = "BackupCatalog"
          JobDefs = "DefaultFileJob"
          Level = Full
          FileSet = "Catalog"
          Schedule = "WeeklyCycleAfterBackup"
          # This creates an ASCII copy of the catalog
          # Arguments to make_catalog_backup.pl are:
          #   make_catalog_backup.pl <catalog-name>
          RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog"
          # This deletes the copy of the catalog
          RunAfterJob = "/etc/bacula/scripts/delete_catalog_backup"
          Write Bootstrap = "/var/lib/bacula/%n.bsr"
          Priority = 11                       # run after main backup
        }

        #
        # Standard Restore template, to be changed by Console program
        #   Only one such job is needed for all Jobs/Clients/Storage ...
        #
        Job {
          Name = "RestoreFiles"
          Type = Restore
          Client = backuphost-1-fd
          FileSet = "Full Set"
          Storage = File
          Pool = Default
          Messages = Standard
          Where = /srv/bacula/restore
        }

        FileSet {
          Name = "SpecialVolume"
          Include {
            Options {
              signature = MD5
            }
            File = /mnt/SpecialVolume
          }
          Exclude {
            File = /var/lib/bacula
            File = /nonexistant/path/to/file/archive/dir
            File = /proc
            File = /tmp
            File = /.journal
            File = /.fsck
          }
        }

        # List of files to be backed up
        FileSet {
          Name = "Full Set"
          Include {
            Options {
              signature = MD5
            }
            File = /usr/sbin
          }
          Exclude {
            File = /var/lib/bacula
            File = /nonexistant/path/to/file/archive/dir
            File = /proc
            File = /tmp
            File = /.journal
            File = /.fsck
          }
        }

        Schedule {
          Name = "WeeklyCycle"
          Run = Full 1st sun at 23:05
          Run = Differential 2nd-5th sun at 23:05
          Run = Incremental mon-sat at 23:05
        }

        # This schedule does the catalog. It starts after the WeeklyCycle
        Schedule {
          Name = "WeeklyCycleAfterBackup"
          Run = Full sun-sat at 23:10
        }

        # This is the backup of the catalog
        FileSet {
          Name = "Catalog"
          Include {
            Options {
              signature = MD5
            }
            File = "/var/lib/bacula/bacula.sql"
          }
        }

        # Client (File Services) to backup
        Client {
          Name = backuphost-1-fd
          Address = localhost
          FDPort = 9102
          Catalog = MyCatalog
          Password = "surelyyourejoking"      # password for FileDaemon
          File Retention = 30 days            # 30 days
          Job Retention = 6 months            # six months
          AutoPrune = yes                     # Prune expired Jobs/Files
        }

        #
        # Second Client (File Services) to backup
        #   You should change Name, Address, and Password before using
        #
        #Client {
        #  Name = backuphost-12-fd
        #  Address = localhost2
        #  FDPort = 9102
        #  Catalog = MyCatalog
        #  Password = "i'mnotjokinganddontcallmeshirley"   # password for FileDaemon 2
        #  File Retention = 30 days            # 30 days
        #  Job Retention = 6 months            # six months
        #  AutoPrune = yes                     # Prune expired Jobs/Files
        #}

        # Definition of file storage device
        Storage {
          Name = File
          # Do not use "localhost" here
          Address = localhost                 # N.B. Use a fully qualified name here
          SDPort = 9103
          Password = "lalalalala"
          Device = FileStorage
          Media Type = File
        }

        Storage {
          Name = "SpectraLogic"
          Address = localhost
          SDPort = 9103
          Password = "linkedinmakethebestpasswords"
          Device = Drive-1
          Device = Drive-2
          Media Type = LTO5
          Autochanger = yes
        }

        # Generic catalog service
        Catalog {
          Name = MyCatalog
          # Uncomment the following line if you want the dbi driver
          # dbdriver = "dbi:sqlite3"; dbaddress = 127.0.0.1; dbport =
          dbname = "bacula"; DB Address = ""; dbuser = "bacula"; dbpassword = "bbmaster63"
        }

        # Reasonable message delivery -- send most everything to email address
        #   and to the console
        Messages {
          Name = Standard
          mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: %t %e of %c %l\" %r"
          operatorcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: Intervention needed for %j\" %r"
          mail = root@localhost = all, !skipped
          operator = root@localhost = mount
          console = all, !skipped, !saved
          #
          # WARNING! the following will create a file that you must cycle from
          #   time to time as it will grow indefinitely. However, it will
          #   also keep all your messages if they scroll off the console.
          #
          append = "/var/lib/bacula/log" = all, !skipped
          catalog = all
        }

        #
        # Message delivery for daemon messages (no job).
        Messages {
          Name = Daemon
          mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula daemon message\" %r"
          mail = root@localhost = all, !skipped
          console = all, !skipped, !saved
          append = "/var/lib/bacula/log" = all, !skipped
        }

        # Default pool definition
        Pool {
          Name = Default
          Pool Type = Backup
          Recycle = yes                       # Bacula can automatically recycle Volumes
          AutoPrune = yes                     # Prune expired volumes
          Volume Retention = 365 days         # one year
        }

        # File Pool definition
        Pool {
          Name = File
          Pool Type = Backup
          Recycle = yes                       # Bacula can automatically recycle Volumes
          AutoPrune = yes                     # Prune expired volumes
          Volume Retention = 365 days         # one year
          Maximum Volume Bytes = 50G          # Limit Volume size to something reasonable
          Maximum Volumes = 100               # Limit number of Volumes in Pool
        }

        Pool {
          Name = AllTapes
          Pool Type = Backup
          Recycle = yes
          AutoPrune = yes                     # Prune expired volumes
          Volume Retention = 31 days          # one month
        }

        # Scratch pool definition
        Pool {
          Name = Scratch
          Pool Type = Backup
        }

        #
        # Restricted console used by tray-monitor to get the status of the director
        #
        Console {
          Name = backuphost-1-mon
          Password = "LastFMalsostorePasswordsLikeThis"
          CommandACL = status, .status
        }

    bacula-sd.conf:

        #
        # Default Bacula Storage Daemon Configuration file
        #
        Storage {                             # definition of myself
          Name = backuphost-1-sd
          SDPort = 9103                       # Director's port
          WorkingDirectory = "/var/lib/bacula"
          Pid Directory = "/var/run/bacula"
          Maximum Concurrent Jobs = 20
          SDAddress = 0.0.0.0
          # SDAddress = 127.0.0.1
        }

        #
        # List Directors who are permitted to contact Storage daemon
        #
        Director {
          Name = backuphost-1-dir
          Password = "passwordslinplaintext"
        }

        #
        # Restricted Director, used by tray-monitor to get the
        #   status of the storage daemon
        #
        Director {
          Name = backuphost-1-mon
          Password = "totalinsecurityabound"
          Monitor = yes
        }

        Device {
          Name = FileStorage
          Media Type = File
          Archive Device = /srv/bacula/archive
          LabelMedia = yes;                   # lets Bacula label unlabeled media
          Random Access = Yes;
          AutomaticMount = yes;               # when device opened, read it
          RemovableMedia = no;
          AlwaysOpen = no;
        }

        Autochanger {
          Name = SpectraLogic
          Device = Drive-1
          Device = Drive-2
          Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d"
          Changer Device = /dev/sg4
        }

        Device {
          Name = Drive-1
          Drive Index = 0
          Archive Device = /dev/nst0
          Changer Device = /dev/sg4
          Media Type = LTO5
          AutoChanger = yes
          RemovableMedia = yes;
          AutomaticMount = yes;
          AlwaysOpen = yes;
          RandomAccess = no;
          LabelMedia = yes
        }

        Device {
          Name = Drive-2
          Drive Index = 1
          Archive Device = /dev/nst1
          Changer Device = /dev/sg4
          Media Type = LTO5
          AutoChanger = yes
          RemovableMedia = yes;
          AutomaticMount = yes;
          AlwaysOpen = yes;
          RandomAccess = no;
          LabelMedia = yes
        }

        #
        # Send all messages to the Director,
        #   mount messages also are sent to the email address
        #
        Messages {
          Name = Standard
          director = backuphost-1-dir = all
        }

    bacula-fd.conf:

        #
        # Default Bacula File Daemon Configuration file
        #

        #
        # List Directors who are permitted to contact this File daemon
        #
        Director {
          Name = backuphost-1-dir
          Password = "hahahahahaha"
        }

        #
        # Restricted Director, used by tray-monitor to get the
        #   status of the file daemon
        #
        Director {
          Name = backuphost-1-mon
          Password = "hohohohohho"
          Monitor = yes
        }

        #
        # "Global" File daemon configuration specifications
        #
        FileDaemon {                          # this is me
          Name = backuphost-1-fd
          FDport = 9102                       # where we listen for the director
          WorkingDirectory = /var/lib/bacula
          Pid Directory = /var/run/bacula
          Maximum Concurrent Jobs = 20
          #FDAddress = 127.0.0.1
          FDAddress = 0.0.0.0
        }

        # Send all messages except skipped files back to Director
        Messages {
          Name = Standard
          director = backuphost-1-dir = all, !skipped, !restored
        }

  • Does a Mountain Lion overheating issue have to do with launchd/Python?

    - by Christopher Jones
    Ever since I installed ML, my MacBook Air has been running SUPER hot. I opened up Activity Monitor, and everything seemed to be pretty normal - until I had it refresh every 0.5 seconds, and then I started seeing some interesting things. A 'Python' process appears and is terminated several times a second, and uses TONS of CPU (70-110%). Its parent process is 'launchd', and when I sample the process, there is a lot going on with Python: http://db.tt/ovuX3hZM

    These appear and disappear too quickly to get a good capture; this one only happened to be using 70-ish percent of CPU, but they consistently hit 100-110%: http://db.tt/ovuX3hZMg

    The sample of the parent process, launchd, shows lots of context switches and UNIX system calls. What is the deal here? ANY help could be of use, not only to me but possibly to many others experiencing decreased battery life and warmer laps these days because of this Mountain Lion weirdness. PLEASE HELP!

    PS - I'd put the screen grabs inline, but I don't have enough street cred yet.
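    Two stock commands that usually identify which launchd job keeps respawning a process (a sketch; nothing Mountain-Lion-specific is assumed):

        # list launchd jobs with last exit status and PID; repeat to catch the culprit
        sudo launchctl list | grep -i python

        # snapshot the short-lived process's full command line and parent PID
        ps -axo pid,ppid,%cpu,command | grep -i '[p]ython'

    The command column should reveal which .plist under /System/Library/LaunchDaemons or ~/Library/LaunchAgents is responsible.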

  • Bash wonkiness on Ubuntu versus RHEL

    - by d34dh0r53
    Fellow faulters, I'm playing around with a one-liner that I've developed on a RHEL 5.4 box, and I have it working perfectly:

        TOTAL_RAM=`free | grep Mem: | awk '{ print $2 }'`; \
        ps axo rss,comm,pid | awk -v total_ram=$TOTAL_RAM \
        '{ proc_list[$2] += $1; } END { for (proc in proc_list) \
        { proc_pct = (proc_list[proc]/total_ram)*100; printf("%d\t%s\t%0.2f%\n", proc_list[proc],proc,proc_pct); }}' \
        | sort -n | tail -n 10

    which outputs something like the following on my RHEL box:

        3736    logmon          0.01%
        4156    EvMgrC          0.01%
        4692    hald            0.01%
        5020    ntpd            0.02%
        6252    sshd            0.02%
        7784    cvd             0.02%
        9224    snmpd           0.03%
        13068   dsm_sa_datamgr3 0.04%
        23320   dsm_om_connsvc3 0.07%
        4249864 mysqld          12.90%

    However, on my Ubuntu 9.04 slice I get this:

        awk: run time error: not enough arguments passed to printf("%d %s %0.2f% ")
        FILENAME="-" FNR=104 NR=104
        33248 console-kit-dae 3.17

    I think it has to be bash that is borking something, but I'm really not doing anything that should be that bash-specific. The RHEL box is running:

        # yum info bash | grep -e Version -e Release
        Version : 3.2
        Release : 24.el5

    And the Ubuntu box:

        # apt-cache show bash | grep -e Version
        Version: 3.2-5ubuntu1

    I haven't dug into this super deeply, and thought I'd ping my fellow johnnys to see if you've ever run across this before. /bow
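    A likely explanation (a diagnosis offered here, not part of the original post): bash is probably innocent. Ubuntu's default awk is mawk, while RHEL ships gawk, and mawk rejects the unescaped literal % at the end of the printf format string where gawk quietly passes it through. Escaping it as %% works in both:

        printf("%d\t%s\t%0.2f%%\n", proc_list[proc], proc, proc_pct);

    Installing gawk on the Ubuntu box (apt-get install gawk) would also make the original one-liner run unchanged.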

  • AD server within another network - DNS issues

    - by Harry Muscle
    Here's a quick summary of the environment I support: we have a domain (domain A) that has about 20 client computers. The domain server for this domain and all the clients sit within the network infrastructure of a larger domain (domain B). All the computers get their network settings via DHCP from domain B's servers. I have no control over domain B and am unable to make changes to anything to do with it.

    The problem I have is that currently, in order for my domain's (domain A) clients to be able to resolve the domain server and the shares on it, they have their DNS server IP address set to domain A's domain server (via the default GPO). Unfortunately, when a laptop (Windows or Mac) gets taken home, it is still looking for the domain server as its DNS server and obviously can't access the internet correctly outside of our environment.

    Ideally I need a solution where the machines use domain A's domain server as their DNS when inside the office, and use whatever DNS server DHCP gives them when they are outside the office. However, since I have no control over the office DHCP server, I'm not sure how this can be accomplished. Any help and advice that anyone can offer is highly appreciated. Thanks, Harry

    P.S. The solution I'm trying to find needs to require no involvement from the user.
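    Failing something cleaner, one crude client-side workaround (a sketch; the interface name and DC address are placeholders): a pair of scripts that a user, or a network-change scheduled task, runs to flip DNS between the office DC and DHCP:

        :: in the office - point DNS at domain A's DC
        netsh interface ip set dns name="Local Area Connection" static 192.168.1.5

        :: outside the office - fall back to whatever DHCP hands out
        netsh interface ip set dns name="Local Area Connection" dhcp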

  • IIS FTP error: 426 Connection closed; transfer aborted.

    - by Jiaoziren
    Hi, I have an IIS FTP site set up on Windows 2003 SP2 (S1). Every day in the early morning, a script on another server (S2) runs and initiates an FTP transfer, pulling log files from S1 to S2. The FTP client we're using is the built-in FTP.exe of Windows 2000 on S2. Recently we replaced S1 with a new server, but we kept the IP address. There are multiple IP addresses on the new S1. Ever since the new S1 was put in place, '426 Connection closed; transfer aborted.' errors have been occurring randomly. The log indicates that the transfer starts OK but the file cannot be transferred completely, as per the log below:

        mget access*.log
        200 Type set to A.
        200 PORT command successful.
        150 Opening ASCII mode data connection for access02232010.log(205777167 bytes).
        426 Connection closed; transfer aborted.
        ftp: 20454832 bytes received in 283.95Seconds 72.04Kbytes/sec.

    The firewall monitor suggested that the connection was set up in passive mode; however, I've been told that MS FTP.exe doesn't support passive mode, though I can see the response 'entering passive mode' from the server when typing in 'quote pasv'. My network admin has told me to try the transfer in active mode, but I don't know how to force active mode on the client side. It's getting really frustrating. I hope someone here with the right knowledge/experience can shed some light. Cheers.

  • RRAS NAT not working on a certain computer

    - by legenden
    This is driving me crazy. I have a virtualized W2K8 server running RRAS. Every other computer or server on the network can access the internet through the NAT except one. On that one server, it just won't work. I can ping the IP address of the NAT gateway just fine, and everything else works (SMB, etc.). DNS, which is hosted by the same server, also works just fine. I have even reinstalled the OS on the problem server and it still doesn't work.

    Recap of the steps I tried:

    - There are 3 network cards in the server; I tried every one, and different switch ports. Not a hardware problem.
    - Reinstalled W2K8 R2 on the server with the problem; didn't help.
    - Tried the IP of the internet gateway directly - this did work (!). But I need NAT to work.
    - All firewalls are disabled.
    - Removed the computer from the domain, deleted the computer membership in Active Directory Users and Computers, and added it back.
    - Disabled all other network adapters, set a static IP and specified the gateway IP manually.

    When I tracert a public IP, the first hop (or any other hop) comes up as:

        C:\>tracert www.google.com
        Tracing route to www.l.google.com [209.85.225.106]
        over a maximum of 30 hops:
        1 * * * Request timed out.
        2 * * * Request timed out.

    From a different computer, on which NAT works, the first hop comes up as:

        tracert www.google.com
        Tracing route to www.l.google.com [209.85.225.105]
        over a maximum of 30 hops:
        1 <1 ms * <1 ms xxxx [10.5.1.1]

    This is the most bizarre problem I have ever come across, and I realize that it's a long shot asking it here given all the details, but I'm pulling my hair out. Maybe someone has an idea...

  • Large mailbox in Outlook 2007 takes ages to index

    - by Reado
    In our company each user has a single mailbox and all email they have ever sent/received is in that mailbox. We don't do archiving to PST and we thought that was the way forward. The problem we now have is if someone switches to another PC for the day and opens Outlook, it has to download all emails first to that PC (cached mode) but even then when they try to search for something, Outlook says items are still being indexed. One user has over 100,000 items to be indexed and it's been saying that for about a week! As a temporary workaround I have turned off instant searching which allows them to search for anything, but it takes time to filter through, and Outlook doesn't exactly indicate if it's still searching for something, so in most cases the user thinks the search isn't working when really it is and it's just taking time to populate the results. I need a solution that allows the mailbox to be indexed really quickly if the user has to login to another PC. Are we best using Online Mode instead of Cached Mode or is there another way around this? Thanks in advance.

  • Alternate way to connect a VPN through a MiFi

    - by questor
    This has become a major problem at our company, and depending on who I ask, the problem either does not really exist (manufacturer and vendor) or is insoluble (according to most users, including techs who know how to prove their point). The problem involves getting a normal Windows 7 system to connect to a normal Server 2008 R2 server over a cellular router (usually called a MiFi). A very few brands/models appear to work, but the majority cannot make the connection. Since it is a cellular device, there are many variables that come into play, and I wondered if anyone had ever found a consistent way either to make one work or to prove to the providers that their equipment is at fault. They all specifically state "VPN use" in the sales brochures, but few if any work, and those that do are not reliable.

    From a standpoint of pure knowledge, I just wondered if anyone knows the real reason why they fail. PPTP, L2TP, IPsec - it doesn't matter. I have not tried Shrew or OpenVPN and am using strictly MS Windows protocols. Plenty of Google searches back up my complaints, but none seem to be any closer to knowing why they fail, just that they do. This is a "quest for knowledge" question. I don't expect a solution, just a reason for the problem if anyone has any ideas.

  • Server hang - data loss on reboot, post mortem analysis

    - by rovangju
    A development server I'm responsible for (ext3 on RAID 5 with Debian Squeeze) froze up over the weekend and I was forced to reset it - as in, unresponsive from KVM/physical keyboard access, no eth devices responding, etc. Not even the backup process ran (figures, the one time I don't check for confirmation).

    After the reset, it turns out that every trace of disk I/O activity that should have happened for a period of ~24h is completely gone. The log files have a big gap in the dates and times, as if the writes were never committed to disk and no processes had run. Luckily it was a weekend, nothing of value would have been lost, and I don't suspect a hack.

    What can I do in post mortem on this event - to prevent it from ever happening again? I've seen this happen before on a completely different machine running FreeBSD. I am rounding up the disk-checking tools right now - but there must be more going on!

    Mount options: /dev/sda1 on / type ext3 (rw,errors=remount-ro)
    Kernel: Linux dev 2.6.32-5-686-bigmem
    Disk/Inodes: 13%/3%
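    One thing worth checking alongside the disk tools (a suggestion, not from the post): ext3 runs with write barriers off by default, so "committed" writes can sit in volatile caches and evaporate on a hard reset. A sketch of the fstab change:

        # /etc/fstab - add barrier=1 so journal ordering is enforced on the disk
        /dev/sda1  /  ext3  rw,errors=remount-ro,barrier=1  0  1

    If the RAID controller has a write cache with no battery backup, disabling that cache (or adding a BBU) matters even more than the mount option.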

  • How does a signal physically travel along a wire?

    - by smwikipedia
    Hi, this may be more a question of physics, so pardon me if there's any inconvenience. When I study computer networks, I often read something like this: in order to represent a signal, we place some voltage on one end of the wire, and the other end will detect the voltage and thus the signal. So I am wondering how exactly a signal passes through a wire.

    Here's my current understanding, based on my formal knowledge of electronics: First we need a closed circuit to constrain/hold the electric field. When we place a voltage at some point A of the circuit, an electric field starts to build up within the circuit medium; this process should propagate at nearly the speed of light. As the electric field builds up, the electrons within the circuit medium are moved, and thus an electric current occurs. Once the current is strong enough to be detected at some other point B on the circuit, B knows about what has happened at A, and thus communication between A and B is achieved.

    The above only describes sending a single voltage through the wire. If there's a bitstream and we need to send a series of voltages, I am not sure which of the following is true:

    - The 2nd voltage should only be sent from A after the 1st voltage has been detected at B; the time interval is the time needed to stimulate the electric field in the medium and form a detectable current at B.
    - Several different voltages could be sent on the wire one by one; different current values would exist along the wire simultaneously and arrive at B successively.

    I hope I made myself clear and that someone else has pondered this question too. (I tagged this question with network because I don't know if there's a better option.) Thanks, Sam

  • Windows 7 BSOD Crashes

    - by Shane Andrade
    I recently upgraded to Windows 7 64-bit, doing a clean install from Vista 64, and ever since I keep getting random blue-screen crashes. I have the feeling it's caused by my video card, but everything has the most up-to-date drivers for Windows 7 64-bit. Here is the memory dump from my most recent crash:

        MEMORY_MANAGEMENT (1a)
        # Any other values for parameter 1 must be individually examined.
        Arguments:
        Arg1: 0000000000041790, The subtype of the bugcheck.
        Arg2: fffffa8001990b90
        Arg3: 000000000000ffff
        Arg4: 0000000000000000

        Debugging Details:
        ------------------
        PEB is paged out (Peb.Ldr = 000007ff`fffd9018). Type ".hh dbgerr001" for details
        PEB is paged out (Peb.Ldr = 000007ff`fffd9018). Type ".hh dbgerr001" for details
        BUGCHECK_STR: 0x1a_41790
        DEFAULT_BUCKET_ID: VISTA_DRIVER_FAULT
        PROCESS_NAME: csrss.exe
        CURRENT_IRQL: 0
        LAST_CONTROL_TRANSFER: from fffff80002cff26e to fffff80002c8cf00

        STACK_TEXT:
        fffff880`0299ae38 fffff800`02cff26e : 00000000`0000001a 00000000`00041790 fffffa80`01990b90 00000000`0000ffff : nt!KeBugCheckEx
        fffff880`0299ae40 fffff800`02cc05d9 : fffffa80`00000000 00000000`01e73fff 00000000`00000000 fffff960`0023653f : nt! ?? ::FNODOBFM::`string'+0x339d6
        fffff880`0299b000 fffff800`02fa2e50 : fffffa80`09140c90 0007ffff`00000000 00000000`00000000 00000000`00000000 : nt!MiRemoveMappedView+0xd9
        fffff880`0299b120 fffff960`002e381b : fffff900`00000000 fffffa80`07c85d10 00000000`00000001 fffff900`c1e56cd0 : nt!MiUnmapViewOfSection+0x1b0
        fffff880`0299b1e0 fffff960`002b4fc1 : 00000000`00000000 fffff900`00000000 fffff900`c1e56cd0 00000000`00000000 : win32k!SURFACE::bUnMapImmediate+0x5b
        fffff880`0299b210 fffff960`002b527b : fffff900`c07fdd10 fffff8a0`00000000 00000000`00000000 00000000`00000000 : win32k!bMigrateSurfaceForConversion+0x5ad
        fffff880`0299b340 fffff960`002dc3e3 : fffff900`00000000 fffff900`c1e5c010 00000000`00000000 00000000`00000000 : win32k!pConvertDfbSurfaceToDibInternal+0x1cb
        fffff880`0299b420 fffff960`002b5319 : fffffa80`07c7f470 00000000`00000001 00000000`00000000 00000000`00000282 : win32k!MulConvertChildRedirectionDfbSurfaceToDib+0x53
        fffff880`0299b460 fffff960`002b1267 : fffff900`c0132010 fffff900`c0132010 00000000`00000000 00000000`00000000 : win32k!pConvertDfbSurfaceToDib+0x41
        fffff880`0299b490 fffff960`002b1b1f : fffff900`c0132010 00000000`00000001 fffff900`c24cc280 fffff900`c0132010 : win32k!bDynamicRemoveAllDriverRealizations+0x4f
        fffff880`0299b4c0 fffff960`00273bb9 : 00000000`00000000 fffff900`00000000 fffff900`00000000 00000000`00000000 : win32k!bDynamicModeChange+0x1d7
        fffff880`0299b5a0 fffff960`000baa2d : 00000000`00000000 00000000`00000000 00000000`00000000 07cd8220`00000003 : win32k!DrvInternalChangeDisplaySettings+0xc7d
        fffff880`0299b7e0 fffff960`001a2c41 : 00000000`00000040 fffff900`c00bf010 00000000`00000000 07cd8220`00000003 : win32k!DrvChangeDisplaySettings+0x62d
        fffff880`0299b9c0 fffff960`001a2e9e : fffffa80`07cd8220 00000000`00000000 00000000`00000000 fffff800`02f6fec3 : win32k!xxxInternalUserChangeDisplaySettings+0x329
        fffff880`0299ba80 fffff960`001a033a : 00000000`00000000 00000000`00000000 00998b21`81a100b6 00000000`00000040 : win32k!xxxUserChangeDisplaySettings+0x92
        fffff880`0299bb70 fffff960`001a053a : 00000000`00000000 00000000`00000001 00000000`00000000 00000000`00000000 : win32k!xxxRemoteSetDisconnectDisplayMode+0x42
        fffff880`0299bbb0 fffff960`00183ea6 : 00000000`00000000 fffffa80`06efeb60 fffff880`0299bca0 00000000`00000005 : win32k!xxxRemoteDisconnect+0x1c2
        fffff880`0299bbf0 fffff800`02c8c153 : fffffa80`06efeb60 00000000`00000005 00000000`00000020 00000000`00000000 : win32k!NtUserCallNoParam+0x36
        fffff880`0299bc20 000007fe`fd6b3d3a : 00000000`00000000 00000000`00000000 00000000`00000000 00000000`00000000 : nt!KiSystemServiceCopyEnd+0x13
        00000000`027cf798 00000000`00000000 : 00000000`00000000 00000000`00000000 00000000`00000000 00000000`00000000 : 0x7fe`fd6b3d3a

        STACK_COMMAND: kb

        FOLLOWUP_IP:
        win32k!SURFACE::bUnMapImmediate+5b
        fffff960`002e381b f6477401 test byte ptr [rdi+74h],1

        SYMBOL_STACK_INDEX: 4
        SYMBOL_NAME: win32k!SURFACE::bUnMapImmediate+5b
        FOLLOWUP_NAME: MachineOwner
        MODULE_NAME: win32k
        IMAGE_NAME: win32k.sys
        DEBUG_FLR_IMAGE_TIMESTAMP: 4a5bc5e0
        FAILURE_BUCKET_ID: X64_0x1a_41790_win32k!SURFACE::bUnMapImmediate+5b
        BUCKET_ID: X64_0x1a_41790_win32k!SURFACE::bUnMapImmediate+5b
        Followup: MachineOwner

  • Safe mode boot with no change on screen but ongoing hard disk activity - why?

    - by omatai
    I have a machine with a dying hard drive - bad sectors are starting to multiply :-( The first sign (24 hours ago) was an unmountable boot volume. At that point I booted to Safe Mode with Command Prompt, which worked, after which I rebooted normally and ran a chkdsk. It has since been working as well as I could expect, but it is slowly getting less reliable.

    So I scheduled another chkdsk on both partitions (C: - boot, D: - data), having freed up a lot of space on both partitions to give Windows a little more scope for repairs (hopefully?). I then rebooted. On reboot, it protested about the unmountable boot volume again, so I booted to Safe Mode. I got the same list of drivers loaded as yesterday, and then no change to the screen for the past 2 hours. However, I see a flickering hard-drive indicator light - not always on, but seldom ever off.

    What is happening? Does the chkdsk that runs in Safe Mode produce nothing on the screen, meaning chkdsk could be quietly doing its thing... or is Windows still trying (but failing) to boot into Safe Mode?

  • Is it possible to detect nearby Wi-Fi enabled devices, not necessarily on the same network? [closed]

    - by Sky
    First question on StackExchange ever - I hope I got the right board. I'm trying to create a device (either from a standard AP or some other unconventional means) that will be able to detect nearby Wi-Fi enabled devices. For example, if a cellular phone (an iPhone, for instance) were carried into the secured area, its MAC address would be logged. A cellular phone is a good example because it's the most common threat that should be detected.

    Some important points:

    - The detection can be either active or passive; it doesn't matter.
    - The detected device might be connected to a different network, or might not be connected to anything at all. I assume most cellular phones actively probe when not connected, but I'm not sure.
    - It is important not only to identify the breach, but also to identify the device (MAC address).
    - Conventional hardware is only optional.
    - Distance of detection is at least 6 meters (20 feet).
    - Handling one device at a time is good enough.
    - Speed of detection is important; under 5 seconds is ideal.

    So my question is, is this even possible? If so, what can I use in order to make this a reality? Thank you for reading!
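    Passive capture of 802.11 probe requests is the textbook approach to exactly these requirements. A sketch, assuming a Linux box with a wireless card that supports monitor mode and the aircrack-ng tools installed:

        # put the card into monitor mode (typically creates mon0)
        airmon-ng start wlan0

        # log the source MAC of every probe request heard
        tcpdump -i mon0 -e -s 256 type mgt subtype probe-req

    Phones that are associated to some other network probe less often than idle, unassociated ones, so the 5-second target is optimistic for those; hopping across channels also helps coverage.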

  • Dell Inspiron 1564 overheating but fan not switching on, how to diagnose?

    - by Smugrik
    I've got a Dell Inspiron 1564 laptop that is about one and a half years old. About a week ago, the laptop started to overheat, causing it to switch off unexpectedly. The CPU fan is working erratically: it can start to spin for a while, doing its job and cooling down the CPU, before it stops; but then the temperature goes up and the fan doesn't react, and once the temperature reaches a critical point (over 85°C, checked with SpeedFan), the laptop switches off.

    I already cleaned the vents and fan of dust, to no avail - they were actually quite clean anyway. Drivers and BIOS are up to date, and no crapware was ever installed on this machine. I don't know how to diagnose the problem. Could it be that the temperature sensor sends wrong information, so the fan doesn't react? But then I believe the computer wouldn't detect the overheating and shut down either. Is there a way I can pinpoint the problem? Maybe some low-level diagnostic tools to check the functionality of the sensors and fans? The warranty is already over, so any suggestion would be welcome. Thanks!!

  • How do I give a user permission to view scheduled task history on Server 2008?

    - by pplrppl
    I've set up a scheduled task on Server 2008 and want to run it as a user other than the local administrator. So I chose a domain account created specifically for this task, and once I'd closed the scheduled task and entered a valid password, I wanted to run it and look at the History tab for this task. On the History tab I see:

        The user account does not have permission to view task history on this computer.

    What permission must I grant to allow this user to view history, and/or how can I view the history as a local admin/domain admin instead of the user the job runs under?

    Steps to hopefully reproduce: I'm starting from Server Manager - Configuration - Task Scheduler - Task Scheduler Library. In the top middle pane I have tasks that have been running for several months as the local administrator. In the process of troubleshooting another issue, I changed the task to run as Domain\ABCuser. Later in the process of troubleshooting, I tried unchecking "run with highest privileges". I have since changed the job back to SERVERNAME\Administrator, but the History tab still showed the permissions message.

    I may have had multiple Server Manager windows open. After closing Server Manager and making sure no other management consoles were open, I was able to reopen Server Manager and see the History tab without the error. At this point the task works properly, but should I ever need to run a task under a task-specific account, I'd like to know how to make the history viewable. It may be something as simple as closing all Server Manager windows to allow cached permissions to be refreshed the next time you open the Manager, but at this point I don't know exactly what the solution is.
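    As a related workaround while logged on as an admin: the History tab is just a view over the TaskScheduler operational event log, which can be read directly (a sketch; confirm the channel name on your build):

        :: make sure history is enabled, then dump the 20 newest events as text
        wevtutil sl Microsoft-Windows-TaskScheduler/Operational /e:true
        wevtutil qe Microsoft-Windows-TaskScheduler/Operational /c:20 /rd:true /f:text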

  • Are relative-path symlinks reliable on Rackspace Cloud Sites?

    - by Jakobud
    Rackspace's Cloud Sites have a lot of stupid limitations. For example, no SSH (in or out), no shell, no rsync, etc. (even through cron). Recently I learned that you can't reliably use symlinks in Cloud Sites either. Apparently this is because the absolute path of your sites could change at any moment, since it's a shared-host environment split up between many disks/servers. I guess different accounts' sites get moved from disk to disk whenever Rackspace decides to, supposedly to increase efficiency across the board.

    So after talking with a Rackspace tech, he said they cannot guarantee that symlinks will always work. Obviously this is because if you have a symlink that uses an absolute path like this:

        //mnt/disk-34566/home/user34566/files/sites/www.mysite.com/mydir

    and your files get moved to a different disk (or whatever they do), the absolute path would be different and the link would now be broken. That makes sense. So next, I asked the Rackspace tech if relative-path symlinks were reliable. Say I have the following link:

        files/sites/www.mysite.com/mylink --> ../www.myothersite.com/anotherdir

    You can see that the symlink simply points to a nearby directory's sub-directory. He said they cannot guarantee that even those will always work. Since it uses a relative path to another nearby directory, I'm not sure how it could ever break from something Rackspace would do. Do relative symlinks somehow rely on absolute paths underneath? Or is Rackspace using some weird custom filesystem where they would break from absolute-path changes?

    It seems like a relative-path symlink would be fine and would only break if the user did something to mess up the directories involved. But when the techs say they "don't officially support symlinks of any kind", that makes me hesitant to use them for large commercial websites in Cloud Sites. Can anyone with Rackspace experience give input on this topic?
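    For what it's worth, a relative symlink stores the literal relative text and resolves it against the link's own directory at access time, which is easy to see (placeholder names from the example above):

        cd files/sites/www.mysite.com
        ln -s ../www.myothersite.com/anotherdir mylink
        readlink mylink    # prints exactly: ../www.myothersite.com/anotherdir

    So moving the whole tree together cannot break it; only moving the two directories relative to each other can - which is presumably why Rackspace hedges about what their migrations preserve.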

  • Setting up SSL for phpMyAdmin

    - by Ubuntu User
    I would like to run phpMyAdmin using my SSL certificate. I read that if I placed the following line in /etc/phpmyadmin/config.inc.php, it would force phpMyAdmin to use SSL - and now it does:

        $cfg['ForceSSL'] = true;

    However, when I did this, I started getting an error stating "cannot connect to server". A port scan shows my port 443 is closed, for one; but I can connect via https:// to my secure web-based email admin panel, which tells me that may not be the issue.

    Second, I have an SSL certificate I purchased, but I am not sure how to apply it. mydomain.com.crt is sitting on my desktop - how should I be utilizing it? I remember creating a self-signed cert for my web email access. Do I have to do this for phpMyAdmin as well? At least this way, since I am the only one who will ever access the DB, it will never expire. Also, phpMyAdmin used to come up at http://mydomain/phpmyadmin/, and of course I am now trying to get to https://mydomain.com/phpmyadmin/.

    I don't currently have any pages on my website that require https://; in the future I may add some. For now, I only want to access phpMyAdmin via SSL. I could use a self-signed cert, but if that would cause problems with future ecommerce apps under mydomain.com, I would rather use the SSL cert I already purchased. Thank you!
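    Applying the purchased certificate is an Apache-level job rather than a phpMyAdmin one. A minimal sketch (Debian/Ubuntu-style paths are assumptions; the key file must be the one generated with your CSR, and your CA may also require a chain file):

        <VirtualHost *:443>
            ServerName mydomain.com
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/mydomain.com.crt
            SSLCertificateKeyFile /etc/ssl/private/mydomain.com.key
            # SSLCertificateChainFile /etc/ssl/certs/ca-chain.crt
        </VirtualHost>

        a2enmod ssl
        /etc/init.d/apache2 restart

    The closed port 443 in the scan fits this: until the ssl module is enabled and a vhost listens on 443, nothing answers there.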

  • Can't get an IBM xSeries 345 server to load Windows Server 2003 using ServerGuide utility

    - by Kyle Noland
    I have a client that has an IBM xSeries 345 eServer. Per the IBM support website, I have downloaded the ServerGuide Setup 7.4.17 installation ISO and burned a bootable CD. The CD boots fine and loads the utility. I walk through the following screens without any issue:

    - Set the date and time
    - Detect the IBM ServeRAID card and install the latest firmware
    - Clear the hard disks
    - Set up the RAID array

    The next step is to format the NOS partition. I select my partition size and the utility goes through the following steps:

    - Creating NOS partition
    - Formatting NOS partition (NTFS)
    - Copying W32 files

    The 'Copying W32 files' step takes about 10 minutes; I see the CD drive and disks working hard. When the copying is complete, I'm taken to a blank page with just 'NOS Partitioning' at the top. At the bottom of the screen are the familiar Back and Exit buttons. I see the place where the Next button should be, and if I click on it I can tell there is something there, but the space is empty: no button is displayed, and clicking the empty spot doesn't ever take me to the next screen. I can't load the OS until I get past this part.

    I have already tried:

    - Burning multiple copies and versions of the ServerGuide CD
    - Letting the final screen just sit there over the weekend, thinking it might advance after syncing the drives or something

    Has anybody else seen this? I'm really at a loss here.
