Search Results

Search found 304 results on 13 pages for 'allan anderson'.

Page 5/13

  • Windows Installer using USB drive for temp purposes

    - by Douglas Anderson
    When installing apps built around Windows Installer, it appears that the installer often uses my external USB hard disk (when it's connected) as the temp location while it expands and installs the application (it creates a folder off the root with a GUID name). Is there any way to change this so it always defaults to a specific drive? This appears to be the case on Windows Vista and 7; I'm not sure about previous releases.

    EDIT: My current environment variables look like this:

        TEMP=C:\Users\<me>\AppData\Local\Temp
        TMP=C:\Users\<me>\AppData\Local\Temp

    EDIT: I have a funny suspicion that it's using the drive with the largest available free space.
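
    One hedged workaround sketch (not from the question): override TEMP and TMP just for the console session that launches the installer, so that session's temp location points at a drive you choose. D:\Temp and the package name are placeholders, and whether a given installer honours these variables varies.

        rem Hedged sketch: redirect TEMP/TMP for this console session only,
        rem then launch the install from the same window (paths are placeholders).
        mkdir D:\Temp
        set TEMP=D:\Temp
        set TMP=D:\Temp
        msiexec /i "C:\Downloads\example.msi"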

    Read the article

  • Exim - send certain "local" user mail to smtp

    - by Ryan Anderson
    I'm using Exim 3 and would like to know how to route some local addresses to the SMTP server instead of having Exim handle them with the localuser director. They are local addresses in the sense that they have the same domain as listed in 'local_domains' in exim.conf. I tried using the "require_files" option on the localuser director in exim.conf, but with no luck. Any help appreciated. Thanks, Ryan

    Read the article

  • Automatically Applying Security Updates for AWS Elastic Beanstalk

    - by Eric Anderson
    I've been a fan of Heroku since its earliest days, but I like the fact that AWS Elastic Beanstalk gives you more control over the characteristics of the instances. One thing I love about Heroku is that I can deploy an app and not worry about managing it; I am assuming Heroku ensures all OS security updates are applied in a timely fashion, so I just need to make sure my app is secure. My initial research on Beanstalk shows that although it builds and configures the instances for you, after that it moves to a more manual management process: security updates won't automatically be applied to the instances. It seems there are two areas of concern:

    1. New AMI releases - as new AMI releases come out, we presumably want to run the latest (and most secure). But my research seems to indicate you need to manually launch a new setup to see the latest AMI version and then create a new environment to use it. Is there a better, automated way of rotating your instances onto new AMI releases?

    2. In between releases there will be security updates released for packages, and it seems we want to apply those as well. My research indicates people install commands to occasionally run a yum update. But since new instances are created and destroyed based on usage, new instances would not always have the updates (i.e. in the window between instance creation and the first yum update), so occasionally you will have instances that aren't patched, and you will also have instances constantly patching themselves until the new AMI release is applied. My other concern is that these security updates haven't gone through Amazon's own review (as the AMI releases do), and applying them automatically might break my app. I know Dreamhost once had a 12-hour outage because they were applying Debian updates completely automatically without any review; I want to make sure the same thing doesn't happen to me.

    So my question is: does Amazon provide a way to offer a fully managed PaaS like Heroku? Or is AWS Elastic Beanstalk really just an install script, after which you are on your own (other than the monitoring and deployment tools they provide)?
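
    On the second concern, one hedged sketch (not from the question) is an .ebextensions config file that runs a security-only yum update whenever Elastic Beanstalk provisions an instance; the file name is arbitrary and this assumes the yum security plugin is present on the AMI.

        # .ebextensions/01-security-updates.config  (hypothetical file name)
        commands:
          01_yum_security_updates:
            command: yum update -y --security
            ignoreErrors: true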

    Read the article

  • Running Safari from the command line adds current directory to the URL

    - by Charles Anderson
    I am trying to run the Safari browser (on Mac OS X 10.4) from the command line, as follows:

        /Applications/Safari.app/Contents/MacOS/Safari http://localhost/dev/myfile.html

    However, Safari starts up and tries to access file:///Users/charlesanderson/scripts/http://localhost/dev/myfile.html, and /Users/charlesanderson/scripts happens to be my current directory. Can someone explain why Safari does this? Firefox is much better behaved.
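
    One hedged workaround: launch through LaunchServices with open rather than invoking the app binary directly, so the argument is handed to Safari as a URL instead of being treated like a relative file path.

        # Hedged sketch: let LaunchServices pass the URL to Safari
        open -a Safari "http://localhost/dev/myfile.html"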

    Read the article

  • How can you make a Windows USB HDD Modify All for All Users?

    - by David Allan Finch
    Hi, I use a USB HDD a lot between lots of different Windows boxes. What I find after a while is that the files accumulate lots of different permissions, in some cases stopping me from looking at files or removing them. They want admin rights, or sometimes you even need to put the disk back into the original machine with the original user. This is a right pain. Is there a way of making the disk grant Modify to all users, and of making this the default for all files on the disk? Thanks
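
    One possible sketch (not from the question), assuming Vista or later where icacls is available; X: is a placeholder drive letter.

        rem Hedged sketch: take ownership if needed, then grant Everyone Modify,
        rem inherited by files (OI) and subfolders (CI), recursively (/T).
        takeown /f X:\ /r /d y
        icacls X:\ /grant Everyone:(OI)(CI)M /T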

    Read the article

  • What configuration changes can I make to speed up extremely slow Windows VMs in ESXi 4.0?

    - by Shawn Anderson
    I've recently moved from VMware Server to ESXi 4.0, running on a Dell T310. My VMs have been restored, but they are running dog slow compared to VMware Server. I loaded ESXi 4.0 using only default values. What are some areas where I can tweak the performance? Even logging onto the VMs can be extremely sluggish, and trying to install software on any of them is a new experience in pain.

    Host: Dell PowerEdge T310, Xeon X3460 2.80 GHz, 32 GB RAM, 1 HD (2 TB)

    I have 16 VMs on this server, but only six or so will be running during my testing. I keep an eye on the Resource Allocation and Performance tabs for the host and I never see CPU or RAM getting anywhere close to pegged. The Events tab does show some notices for video RAM issues and some hints on Windows activation issues, but nothing that would point to the sort of sluggishness I'm experiencing.

    Guests running during testing:

        1 Windows Server 2008 R2 (64-bit) - 4 GB RAM
        1 Windows 7 (32-bit) - 2 GB RAM
        1 Vista (32-bit) - 1 GB RAM
        3 XP (32-bit) - 1 GB RAM

    Over to you! Thanks - Shawn

    Read the article

  • How well will ntpd work when the latency is highly variable?

    - by JP Anderson
    I have an application where we are using some non-standard networking equipment (cannot be changed) that goes into a dormant state between traffic bursts. The network latency is very high for the first packet since it's essentially waking the system, waiting for it to reconnect, and then making the first round trip. Subsequent messages (provided they are within the next minute or so) are much faster, but still high-latency. A typical set of pings will look like 2500ms, 900ms, 880ms, 885ms, 900ms, 890ms, etc. Given that NTP uses several round trips before computing the offset, how well can I expect ntpd to work over this kind of link? Will the initially slow first round trip be ignored based on the much different (and faster) following messages to/from the NTP server? Thanks and Regards.
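
    A hedged ntp.conf sketch (the hostname is a placeholder): iburst sends a burst of packets at association time so the initial estimate isn't dominated by a single slow exchange, and pinning minpoll/maxpoll keeps polls frequent enough that the link rarely goes fully dormant between them. ntpd's huff-n'-puff filter is also aimed at links with occasional long delays and may be worth reading up on.

        # Hedged sketch for a high-latency, bursty link (hostname is a placeholder)
        server ntp.example.com iburst minpoll 6 maxpoll 8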

    Read the article

  • nginx - Redirect specific page paths to https while keeping everything else on http (in a single server call)?

    - by Kris Anderson
    From what I've gathered so far, it's clear that if statements in nginx should be avoided at all costs. Most of the examples I've found so far regarding specific page redirects involve multiple server blocks, but isn't that a bit wasteful? I'm not sure, but I would think multiple server blocks would be somewhat slower than a single one under heavy load. My current server block is this:

        server {
            listen 10.0.0.60:80;
            listen 10.0.0.60:443 default ssl;
            #other code
        }

    What I want to do is redirect certain http requests to https. For example, I want /login/ and /my-account/ to always be forced to use SSL; if you're on /help/, though, I want that served over plain http. Is there a way to accomplish this within a single server block, or is there no downside to using two server blocks to get this working? nginx seems to be under pretty active development, and a lot of the older guides I've followed were from times when you couldn't listen on port 80 and 443 within the same server block. Now that nginx supports that (I'm running 1.2.4), I'm wondering if there's a "best practice" way of handling this today. Any help would be greatly appreciated.

    EDIT: I did find this guide: http://redant.com.au/blog/manage-ssl-redirection-in-nginx-using-maps-and-save-the-universe/ and I updated my code as follows:

        map $uri $my_preferred_proto {
            default "http";
            ~^/#/user/login "https";
        }

        server {
            listen 10.0.0.60:80;  ## listen for ipv4; this line is default and implied
            listen 10.0.0.60:443 default ssl;

            if ($my_preferred_proto = "none") {
                set $my_preferred_proto $scheme;
            }
            if ($my_preferred_proto != $scheme) {
                return 301 $my_preferred_proto://mysite.com$request_uri;
            }
        }

    It's not working, though. When I change the default to https, everything is redirected to SSL, so it does somewhat work, but the redirect of /#/user/login is not redirecting to HTTPS. Any ideas? Also, is this a good way to go about this?
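
    One hedged sketch of the single-server map approach, using server-visible paths: note that anything after a # in a browser URL is a fragment and is never sent to the server, so a path like /#/user/login cannot be matched in nginx at all. The paths below are examples, not from the original question.

        # http-level map: choose a preferred scheme per request URI (example paths)
        map $uri $preferred_proto {
            default          "http";
            ~^/login/        "https";
            ~^/my-account/   "https";
        }

        server {
            listen 10.0.0.60:80;
            listen 10.0.0.60:443 default ssl;

            # one comparison; redirect only when the schemes differ
            if ($preferred_proto != $scheme) {
                return 301 $preferred_proto://mysite.com$request_uri;
            }
        }

    Using if only to return a redirect is generally regarded as one of the safe uses of if in nginx.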

    Read the article

  • How to create a read-only root Linux that can be mounted writable for persistent changes?

    - by Mr Anderson
    I'd like a read-only file system that runs almost entirely in RAM, but where the compact flash or hard drive can be mounted and made writable to make persistent changes. How do I do this on Linux? I've looked at several tutorials but none really explain how to create such a system with the option of mounting the storage device and making persistent changes. I've looked at this so far: http://chschneider.eu/linux/thin_client/ and I also looked on the old Gentoo wiki, but that article was very specific to Gentoo. I'll be using a Debian-based Linux, but it would be nice if someone could explain how to do this in fairly generic instructions that would work on any Linux distro. Thanks.
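
    A hedged sketch of one common pattern (device names and mount points are placeholders; on kernels older than 3.18, aufs plays the role of overlayfs): keep the media mounted read-only and overlay a tmpfs on top, so runtime writes live in RAM while the underlying media stays untouched.

        # Hedged sketch: read-only media + tmpfs overlay (placeholders throughout)
        mount -o ro /dev/sda1 /mnt/ro
        mount -t tmpfs tmpfs /mnt/rw
        mkdir -p /mnt/rw/upper /mnt/rw/work
        mount -t overlay overlay \
              -o lowerdir=/mnt/ro,upperdir=/mnt/rw/upper,workdir=/mnt/rw/work /mnt/root
        # For a persistent change, remount the media read-write (ideally with the
        # overlay torn down or from a maintenance boot), edit it, and remount read-only.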

    Read the article

  • Green System Administrator looking for helpful tips

    - by Joshua Anderson
    I have just been promoted to Systems Administrator for our product. We are designing an application that communicates with the cloud (Amazon EC2), and I will be in charge of maintaining all instances and their underlying components. So far this involves a set of load-balanced service instances that connect to a central DB in a multi-tenant design. I'm interested in what other sysadmins have found to be invaluable tools or practices. Any resources provided will be greatly appreciated.

    Read the article

  • Can I cancel a resize operation in GParted without causing data loss?

    - by Anderson Green
    I'm currently waiting for GParted to finish resizing a partition, but the progress bar is currently at 0, and it's been taking much longer than usual (perhaps an hour). Is it safe to cancel the resize operation? I don't want to wait days for the resize operation to complete, but I don't want to lose all of my files either. (Is there any way that I can simply pause the resize operation, attempt to recover files, and then resume the resize operation?) (An update: the operation has finally completed, and my files are still intact!)

    Read the article

  • Identifying and Resolving Oracle ITL Deadlock

    - by Allan
    I have an Oracle DB package that is routinely causing what I believe is an ITL (Interested Transaction List) deadlock. The relevant portion of a trace file is below.

        Deadlock graph:
                              ---------Blocker(s)--------  ---------Waiter(s)---------
        Resource Name         process session holds waits  process session holds waits
        TM-0000cb52-00000000       22     131   S               23     143          SS
        TM-0000ceec-00000000       23     143   SX              32     138   SX    SSX
        TM-0000cb52-00000000       30     138   SX              22     131           S

        session 131: DID 0001-0016-00000D1C   session 143: DID 0001-0017-000055D5
        session 143: DID 0001-0017-000055D5   session 138: DID 0001-001E-000067A0
        session 138: DID 0001-001E-000067A0   session 131: DID 0001-0016-00000D1C

        Rows waited on:
        Session 143: no row
        Session 138: no row
        Session 131: no row

    There are no bitmap indexes on this table, so that's not the cause. As far as I can tell, the lack of "Rows waited on" plus the "S" in the waiter's waits column likely indicates that this is an ITL deadlock. Also, the table is written to quite often (roughly 8 inserts or updates concurrently, as often as 240 times a minute), so an ITL deadlock seems like a strong possibility. I've increased the INITRANS parameter of the table and its indexes to 100 and increased PCTFREE on the table from 10 to 20 (then rebuilt the indexes), but the deadlocks are still occurring. The deadlock seems to happen most often during an update, but that could just be a coincidence, as I've only traced it a couple of times. My questions are twofold:

    1) Is this actually an ITL deadlock?
    2) If it is an ITL deadlock, what else can be done to avoid it?

    Cross-posted from Stack Overflow. Deadlocks are normally a programming problem, but ITL deadlocks relate directly to how Oracle writes to disk, so this may be an area where DBAs have more experience.
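
    One hedged note on the second question (object names below are placeholders): INITRANS only takes effect in newly formatted blocks, so raising it on a table that already contains data generally calls for rebuilding the table segment as well as the indexes, e.g.:

        -- Hedged sketch: rebuild existing blocks so the new INITRANS actually applies
        ALTER TABLE busy_table INITRANS 100;
        ALTER TABLE busy_table MOVE;                      -- re-creates the table's blocks
        ALTER INDEX busy_table_pk REBUILD INITRANS 100;   -- indexes go UNUSABLE after a MOVE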

    Read the article

  • How to troubleshoot Hyper-V VSS writer causing backup failure on Server 2008 R2

    - by Tim Anderson
    I have a Windows Server 2008 R2 machine running Hyper-V. Backups using Windows Server Backup fail with the error:

        The backup operation that started at '2011-01-02T10:37:01.230000000Z' has failed because the Volume Shadow Copy Service operation to create a shadow copy of the volumes being backed up failed with following error code '2155348129'. Please review the event details for a solution, and then rerun the backup operation once the issue is resolved.

    I have traced this to a problem with the Hyper-V VSS writer. vssadmin list writers reports:

        Writer name: 'Microsoft Hyper-V VSS Writer'
        Writer Id: {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
        Writer Instance Id: {fcf0dd79-d282-4465-88ae-7b6857e055c2}
        State: [8] Failed
        Last error: Inconsistent shadow copy

    However, I can't get any further. A few relevant facts:

    - I get the error even if all the VMs are shut down.
    - If I disable the Hyper-V VSS Writer by stopping the Hyper-V Management Service, backup completes OK.
    - There are no errors in the Hyper-V-VMMS application log.
    - I tried to set tracing for VSS but can't get any output for some reason; I set the correct registry entries but no trace log is generated.

    Tim

    Read the article

  • Setup a new domain controller over a temporary VPN, but now Windows delays startup?

    - by Kris Anderson
    I'm migrating servers from colo locations to Amazon's VPC EC2 instances. If anyone hasn't worked with Amazon VPC before, VPN is a pain in the arse! Anyway, I set up a new server that acts as the domain controller for our Amazon VPC. In order to migrate all the user accounts from our existing domain controllers, I manually connected to our colo VPN from the new Amazon EC2 machine using my user account. I was able to join the domain, and the new Amazon server became another domain controller on our network. So far so good.

    The problem I'm having is that when booting the EC2 domain controller (which is no longer connected to the VPN, so it can't communicate with the existing controllers), it takes a good 6-8 minutes before I can remote into the server (instead of the 1-2 minutes it should take). During this time, most of the services we also run (like IIS) give 404 errors until the 6-8 minutes have passed. It's almost as if the domain controller is attempting to reach the other domain controllers first and only falls back to the one on the local machine after 6-8 minutes. I don't think that's what's happening, though, because Server 2008 R2 doesn't have primary and backup domain controllers; they're all equal as far as Windows is concerned. For my network adapter I have only one DNS server listed, 127.0.0.1, so it should be looking up the local domain controller and not the other domain controllers it connected to over VPN while VPN was enabled. In the server logs I'm seeing these warnings pop up during a reboot:

        The winlogon notification subscriber is taking long time to handle the notification event (CreateSession).
        The winlogon notification subscriber took 409 second(s) to handle the notification event (CreateSession).

    Any ideas on what's happening here? I would try removing the existing domain controllers from the new Amazon EC2 machine, but I still need to connect over VPN a few times to migrate some data between the servers, and I don't want that change being reflected back to the other domain controllers in our colo locations.

    Read the article

  • How to backup/restore OSX Parental Controls before/after complete reimage?

    - by Jim Anderson
    We typically "nuke and pave" users' Mac OS X laptops if they have software issues. Prior to doing so, we back up the primary (non-admin) user's home folder. Our standard image has four accounts:

    - Admin (uber admin user)
    - Parent (admin account for the parents of students)
    - Loaner (so our standard image will also work for our loaner laptop pool)
    - Student (the primary, non-admin user of the laptop)

    Our standard image has only minimal parental controls on the Loaner and Student accounts. Some parents choose to tighten the parental controls, and we never know when parents have made changes or what those changes are. Once we have reimaged the machine with our standard image (minimal parental controls), we would like to be able to restore any custom parental controls parents may have placed on their student's account. Any help with this would be appreciated. Thanks.

    Read the article

  • MSSQL Auditing Recommendations

    - by Josh Anderson
    As an aspiring DBA, I have recently been assigned the task of implementing tracking of all data changes in the database for a piece of software we are developing. After playing with Microsoft's change data capture methods, I'm looking into some other solutions. We are planning to distribute our product as a hosted solution, and unlimited installations would be desired for maximum scalability. I've looked at IBM's Guardium as well as DB Audit by SoftTree. I'm curious whether anyone has solutions they have used in the past, or any suggestions or methods to achieve complete, and of course cost-effective, auditing of data changes.

    Read the article

  • How do I configure an ordinary TV remote control to work with lirc on Linux?

    - by Allan Lewis
    I am running MythTV on Ubuntu 9.10 and I would like to use a TV remote to control it. I know that lirc needs a configuration file for the remote, but none of my remotes is in the official database. If I point a remote at the receiver on my TV card (a Pinnacle PCTV "Solo", model 72e) and press a button, dmesg logs the code generated by the remote, so I assume I just have to make a config file with a list of commands assigned to these codes. I've read a few how-tos but I still don't understand exactly how to create the config file. Some of the guides I've read refer to IR receivers on TV cards working at a "higher level of abstraction", which I take to mean that they decode the signal and provide a code, like the ones I can see in dmesg, rather than just giving raw data, but none of them explain where to go from there! Any help would be greatly appreciated!
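
    A hedged sketch of the usual workflow with lirc's irrecord tool (the device path and file locations are assumptions for Ubuntu 9.10): record each button interactively, then install the generated config and restart lircd. If the card's receiver hands already-decoded keycodes to the kernel input layer, the devinput driver (--driver=devinput against the matching /dev/input/event* node) is the usual route instead.

        # Hedged sketch: record a config for an unlisted remote (device path assumed)
        irrecord --device=/dev/lirc0 ~/my-remote.conf
        sudo cp ~/my-remote.conf /etc/lirc/lircd.conf
        sudo /etc/init.d/lirc restart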

    Read the article

  • Exchange DiskShadow/Robocopy backup does not purge log files

    - by Robert Allan Hennigan Leahy
    I have a series of scripts set up to back up my Exchange. The following command is executed to start the process:

        diskshadow /s C:\Backup_Scripts\exchangeserverbackupscript1.dsh

    This is exchangeserverbackupscript1.dsh:

        #DiskShadow script file
        set verbose on
        #delete shadows all
        set context persistent
        writer verify {76fe1ac4-15f7-4bcd-987e-8e1acb462fb7}
        set metadata C:\Backup_Scripts\shadowmetadata.cab
        begin backup
        add volume C: alias SH1
        create
        expose %SH1% P:
        exec C:\Backup_Scripts\exchangeserverbackupscript1.cmd
        end backup
        delete shadows exposed P:
        exit
        #End of script

    And this is exchangeserverbackupscript1.cmd:

        robocopy "P:\Program Files\Microsoft\Exchange Server\Mailbox\First Storage Group" "\\leahyfs\J$\E-Mail Backups\Day 1" /MIR /R:0 /W:0 /COPY:DT /B

    This is not causing Exchange to purge its log files. The edb file is 4.7 gigabytes, but the First Storage Group folder itself is 50+ gigabytes due to many, many log files for each day going back to 2009. Is there any way -- I've Googled and haven't found anything -- to notify Exchange when I've completed a full backup, and have it purge its log files? According to this and this, end backup should cause Exchange to "flush the transaction logs for that storage group" but only "if a successful backup of a storage group occurred", which leaves my question as: what constitutes a "successful backup", and why is what I'm doing not it?

    Read the article

  • What is the harm in giving developers read access to application server application event logs?

    - by Jim Anderson
    I am a developer working on an ASP.NET application. The application writes logging messages to the Windows event log - a custom application log just for this application. However, I do not have any access to the testing or staging web/application servers. I thought an admin could just give me read access to this event log to help in debugging problems (currently a service that works in dev is not working in the test environment and I have no idea why), but that is against my client's policy (I'm a consultant). I feel silly asking an admin over and over to look at the event log for me. What is the harm in giving developers read access to application server event logs? Is there a different method of application logging that sysadmins prefer programmers use? Surely admins don't want to be fetching logging messages for developers all the time.

    Read the article

  • How to make a local drive available in Apache on localhost

    - by Ronald Allan
    How can I make my "Drive D:" and "Drive E:" available on localhost? I'm running Apache on my BackTrack machine. My default is /var/www/, and every directory I create inside /var/www/ is available and working fine. Let's say I created /var/www/PENTEST/; the contents of that PENTEST directory can be accessed through localhost/PENTEST/. How can I make this work: localhost/media/DATA/ ? The /media/DATA/ is my Drive D:. I edited this:

        ServerAdmin webmaster@localhost
        DocumentRoot /media/DATA/
        <Directory />
            Options FollowSymLinks
            AllowOverride None
        </Directory>
        <Directory /media/DATA/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>
        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        <Directory "/usr/lib/cgi-bin">
            AllowOverride None
            Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
            Order allow,deny
            Allow from all
        </Directory>
        ErrorLog /var/log/apache2/error.log
        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel warn
        CustomLog /var/log/apache2/access.log combined
        Alias /doc/ "/usr/share/doc/"
        <Directory "/usr/share/doc/">
            Options Indexes MultiViews FollowSymLinks
            AllowOverride None
            Order deny,allow
            Deny from all
            Allow from 127.0.0.0/255.0.0.0 ::1/128
        </Directory>

    It's still not working; I'm getting a 404.

    EDIT: I figured it out, thanks to the post by "RiggsFolly" which can be found here: http://forum.wampserver.com/read.php?2,89163. I just had to change this:

        ServerAdmin webmaster@localhost
        DocumentRoot /media/DATA/
        <Directory />
            Options FollowSymLinks
            AllowOverride None
        </Directory>
        <Directory /media/DATA/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>

    into this:

        ServerAdmin webmaster@localhost
        DocumentRoot D:/media/DATA/
        <Directory />
            Options FollowSymLinks
            AllowOverride None
        </Directory>
        <Directory D:/media/DATA/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>
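
    A hedged side note (not from the question): on a Linux Apache, an Alias can map a URL prefix onto the drive's mount point without moving DocumentRoot at all. Paths are taken from the question, and the directives assume Apache 2.2-style access control, matching the config above.

        Alias /DATA/ /media/DATA/
        <Directory /media/DATA/>
            Options Indexes FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>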

    Read the article

  • Fax in Small Business Server 2003 fails to hang up

    - by Tim Anderson
    We have a problem with fax in SBS 2003. It's an external modem, a Courier V.Everything, which is a recommended model. It receives a fax OK, but then sometimes (quite often) fails to hang up, and attempts to send faxes thereafter get an engaged tone. The only fix when this happens is to reset the fax modem; even restarting the server is not enough. We've tried a different model of modem, with the same problem. Any ideas? Tim

    Read the article

  • Can a device (WAP or switch) be configured as an 802.1x supplicant?

    - by Allan Ross
    We are looking at implementing 802.1x on a wired/wireless network. What I am looking for is a device that can act as a supplicant and, once authenticated on the network, can pass traffic from any downstream connected device. The point of doing this would be to allow a properly pre-configured device to be provided to a client user, who could then connect any device on its downstream side. We would be able to manage the aggregate traffic on the device without concern for what is connected on the far side. Am I dreaming? Does every device out there support this and I just don't know it, or does reality fall somewhere in the middle?

    Read the article

  • IPv6 autoconfiguration not working in Ubuntu Natty

    - by allan ruin
    In Windows 7, my computer automatically gets a global IPv6 address and can use the IPv6 network, but in Ubuntu Natty I can't work out how to get stateless autoconfiguration working. My network is a university campus network, so I don't need tunnels. I figure that if something happens silently and successfully in Windows, it shouldn't be impossible in Linux. I can manually edit /etc/network/interfaces and use a static IPv6 address, and IPv6 works that way, but I just want to use autoconfiguration. I found this post: How to disable autoconfiguration on IPv6 in Linux? and tried:

        sudo sysctl -w net.ipv6.conf.all.autoconf=1
        sudo sysctl -w net.ipv6.conf.all.accept_ra=1

    but no luck.
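
    A hedged sketch of one thing to check (eth0 is an assumption): router advertisements are controlled per interface as well as by the "all" knob, and the settings need to go into /etc/sysctl.conf to survive a reboot.

        # Hedged sketch: per-interface RA/autoconf, applied now and persisted (eth0 assumed)
        sudo sysctl -w net.ipv6.conf.eth0.accept_ra=1
        sudo sysctl -w net.ipv6.conf.eth0.autoconf=1
        echo "net.ipv6.conf.eth0.accept_ra=1" | sudo tee -a /etc/sysctl.conf
        echo "net.ipv6.conf.eth0.autoconf=1"  | sudo tee -a /etc/sysctl.conf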

    Read the article
