Search Results

Search found 5908 results on 237 pages for 'cody short'.

Page 162/237

  • Is there a historical computer peripherals or accessories museum or even just a current list?

    - by zimmer62
    I've been thinking about all the unique and different peripherals I've owned over the years, from ISA capture cards to parallel-port-controlled shutter glasses for 3D games. I've seen many accessories and computer peripherals come and go, and the nostalgia of these things is a lot of fun. I tried to find some sort of historical timeline or list, but what mostly turned up is computers themselves. I'm more interested in the mice, scanners, weird adapters that shouldn't exist, short-run very rare products, and strange devices from computer shows in the '80s and '90s... hardware you might find in a geek's basement that would be completely useless now, but was the coolest thing around when it was new. An example would be a drawing tablet I had for my TI-99 computer, the audio tape player accessory for a C64 that let you save files to audio tapes, or an ISA card that did the same for PCs hooked up to a VCR. Remember the IBM PCjr upgrade kit that added a floppy drive, more memory, and the AT switch in the back? I'd love to find either a wiki or a list that has already been assembled which contains many of these weird (or common) accessories. I've had so many over the years that I suppose I could start a wiki here if such a list doesn't already exist.

    Read the article

  • Is there a better way to do bonded vlan tagged interfaces with XEN

    - by AJ01
    We have a number of XEN servers all running CentOS or RHEL. The VMs that they run are all required to be on their own VLAN for no other reason than the customer expects them to be. Long story short, however, I can't change this right now. We are also required to have bonding enabled on the interfaces. So to accommodate this we enslave eth1 and eth2 to bond0. We then create a separate interface called bond0.VLANID, where VLANID corresponds to the correct VLAN, e.g. ifcfg-bond0.204:

        DEVICE=bond0.204
        BOOTPROTO=static
        ONBOOT=yes
        VLAN=yes
        BRIDGE=xenvlan204

    Bridge to XEN: As you will see, we eventually have to bridge this out to XEN, and we do this by adding another interface called xenvlan204 (in this instance), ifcfg-xenvlan204, which contains:

        DEVICE=xenvlan204
        BOOTPROTO=none
        ONBOOT=yes
        TYPE=bridge

    XEN VM config: Finally, in our XEN config for each VM, we add:

        vif = [ "bridge=xenvlan204" ]

    This then allows the VM host to access that particular VLAN.

    The problem: We've noticed a few problems with this setup. One is that we currently create the interfaces manually, which means that if we add more VLAN-enabled interfaces and bridges we usually have to restart xend, which is something I'm not so hot about. Also, lower-level staff have their heads melted by the number of interfaces, and the risk of a mistake occurring is high. Secondly, it can take some time for a host to come up if it has a number of VLAN-tagged interfaces. Thirdly, it's just not scaling well on the management side.

    The question: Is there a better, more flexible way to do this (in particular with the Xen that ships with CentOS 5.3, 5.4 and 5.5, as we have to support all three) that leverages either scripting or other solutions to allow an arbitrary number of interfaces to be created when a VM is instantiated? Your advice and expertise are more than welcome.
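
    One way to cut down on the hand-maintained ifcfg files is a small helper that builds the tagged interface and bridge on demand (for example, called from a wrapper around the guest start-up). The sketch below is illustrative only: it assumes vconfig and brctl are installed (they normally are on CentOS 5) and that the interface/bridge names follow the scheme above.

        #!/bin/sh
        # mkxenvlan.sh <vlanid> - hypothetical helper, names are examples
        VLANID="$1"
        BOND="bond0"
        BRIDGE="xenvlan${VLANID}"

        vconfig add ${BOND} ${VLANID}            # creates bond0.<vlanid>
        brctl addbr ${BRIDGE}                    # create the bridge
        brctl addif ${BRIDGE} ${BOND}.${VLANID}  # enslave the tagged interface
        ip link set ${BOND}.${VLANID} up
        ip link set ${BRIDGE} up

    Run as, e.g., ./mkxenvlan.sh 204 before starting a guest whose config names bridge=xenvlan204; this avoids restarting xend for each new VLAN, though persistent ifcfg files would still be needed if the bridges must survive a reboot.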

    Read the article

  • Where to put the SPF TXT record?

    - by YellowSquirrel
    I've set up Google Apps for my domain: I've registered the domain with Google by adding the CNAME Google asked for, and I've apparently successfully set up the MX records for Google's mail servers. So far I don't have a dedicated server; I just have a domain at a registrar. Now I want to activate SPF and I'm confused. On the following short web page: http://www.google.com/support/a/bin/answer.py?answer=178723 it is written that I must add a TXT record containing:

        v=spf1 include:_spf.google.com ~all

    Where should I enter this? Should this go in the zone (?) file, like I did for the CNAME and the MX records? So far I have something like this:

        @ 10800 IN A 217.42.42.42
        @ 10800 IN MX 5 ASPMX3.GOOGLEMAIL.COM.
        @ 10800 IN MX 5 ASPMX2.GOOGLEMAIL.COM.
        @ 10800 IN MX 3 ALT2.ASPMX.L.GOOGLE.COM.
        @ 10800 IN MX 3 ALT1.ASPMX.L.GOOGLE.COM.
        @ 10800 IN MX 1 ASPMX.L.GOOGLE.COM.
        google8a70835987f31e34 10800 IN CNAME google.com.

    Does adding the SPF TXT record mean I should literally have something like this:

        @ 10800 IN A 217.42.42.42
        @ 10800 IN MX 5 ASPMX3.GOOGLEMAIL.COM.
        @ 10800 IN MX 5 ASPMX2.GOOGLEMAIL.COM.
        @ 3600 IN TXT "v=spf1 include:_spf.google.com ~all"
        @ 10800 IN MX 3 ALT2.ASPMX.L.GOOGLE.COM.
        @ 10800 IN MX 3 ALT1.ASPMX.L.GOOGLE.COM.
        @ 10800 IN MX 1 ASPMX.L.GOOGLE.COM.
        google8a70835987f31e34 10800 IN CNAME google.com.

    I made that one up and stuck it right in the middle to show how confused I am. What I'd like to know is the exact syntax and where/how I should put this TXT record.
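
    For what it's worth, a minimal sketch of the usual answer (assuming the registrar's DNS editor accepts raw zone records): the TXT record goes in the same zone, at the apex (@), alongside the A and MX records, and its position within the file doesn't matter. Something like:

        ; SPF TXT record at the zone apex, next to the existing records
        @ 3600 IN TXT "v=spf1 include:_spf.google.com ~all"

    Once the registrar has published it, dig +short TXT yourdomain.example (hostname is a placeholder) should echo the spf1 string back.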

    Read the article

  • Block SMTP sessions whose sender domain doesn't itself accept SMTP connections

    - by bignose
    I'm administrating a mail service for a small business. Their mail host's internet connection is an ADSL service with a permanent IP address. Unfortunately, many misconfigured mail systems will happily deliver to this host, but, when the host attempts to send mail back (e.g. a bounce notice, or a normal response from someone), the declared sender's domain has an MX which refuses to receive connections from this host. That misconfiguration makes their system a one-way mail sender, which is a problem. How can I configure Postfix on this customer's mail host to refuse SMTP sessions that declare a sender domain which itself refuses SMTP from this host? That is, if the SMTP client declares a domain that we can't make SMTP connections back to, then there's not much point accepting the incoming connection in the first place. Note that I'm not, as some commenters have assumed, talking about checking whether the SMTP client will receive messages. The check I want is whether the declared sender's domain (regardless of who the current SMTP client is) will accept SMTP connections from here. In other words: when we get around to sending a message back, we'll need the sender's domain to accept SMTP connections; I want to do that check before accepting the incoming session. I'm imagining a late check (after the low-cost checks to winnow most of the rubbish connections) that keeps the client on the other end while it attempts an SMTP client connection back to the declared domain of the sender. If that connection is rejected, the incoming one is also rejected. I'm also open to other suggestions for how this problem might be addressed (short of not using this mail host at all, which isn't an option).
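
    The closest stock Postfix feature is probably sender address verification, which probes the declared sender's MX before accepting the message; note that it checks the full sender address rather than only the domain's willingness to take SMTP from this host, so treat this main.cf fragment as an approximation to tune rather than the exact check described:

        # Sketch: probe the declared sender before accepting mail; results are cached.
        smtpd_sender_restrictions =
            permit_mynetworks,
            reject_unknown_sender_domain,
            reject_unverified_sender
        unverified_sender_reject_code = 550
        address_verify_map = btree:$data_directory/verify_cache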

    Read the article

  • Windows Server 2008 - inexplicable system time jumps/glitches/inaccuracies

    - by Nathan Ridley
    I'm running a production web server on Windows Server 2008. On this server I have a database which logs certain user actions, but every now and again I inexplicably get database entries which, according to the record ID and the records immediately before and after, have the wrong time logged against them (7+ days too old). For example, record ID 1001 will be for Dec 7, 11pm, 1002 will be for Dec 7, 11:01pm, then 1003 will be for Nov 28, 1:38am, then the next will be back on track again. The problem occurs in random records (or 2-3 records in a row) and crops up once every few days. This is absolutely baffling because there is only one place in the application that assigns this date/time value, and it's simply the system UTC date. I have been synchronizing the system time to time-a.nist.gov (which I read in another article is a bit more reliable than the default time.windows.com), and it still seems to drift occasionally (3-4 minutes). I'm speculating that occasionally the time server has a temporary glitch where the date changes to a drastically wrong value for a short space of time, then changes back. Either that, or the motherboard clock battery is screwed, and the reason the time momentarily changes is that the motherboard loses the time and then the time synchronization puts it back again. Could either of my suspicions be right? Should I turn off time synchronization for a production server? Assigning dates to an event log that are up to 2 weeks prior to the actual date is a severe problem I can't have when the next version of my application is released. Any suggestions or advice would be appreciated.
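
    A hedged starting point for ruling the time service in or out (standard w32tm commands on Server 2008; the NTP host is just the one already in use):

        rem Current source and sync status of the Windows Time service
        w32tm /query /status
        w32tm /query /source

        rem Compare the local clock against the NTP server without changing it
        w32tm /stripchart /computer:time-a.nist.gov /samples:5 /dataonly

    Logging that output on a schedule around the time the bad records appear would show whether the system clock itself jumps, or whether the stale values come from somewhere else (e.g. a cached value in the application or a second web server).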

    Read the article

  • Get Safari to use different autocompletion on different URLs on same hostname

    - by Luke404
    I have a web server publishing different services over the same SSL VirtualHost, the two most commonly used being phpMyAdmin and Cacti. These (and others) use 'cookie'-style authentication, asking for user and password in an HTML form (thus not using HTTP authentication). Being on the same hostname, the Safari browser didn't manage stored passwords too well: if I log in to one app with user foo and then go to the second app, it would propose user foo and its password in the login form. Changing just the username to bar used to be sufficient to let Safari autocomplete the correct password in its form field. Annoying, but I could live with it; usernames are short and easy to remember compared to the passwords we use. After the update to Safari 5 this no longer seems to be true: if I store in Safari (actually the user keychain on OS X) credentials for https://www.foobarbaz.com/app1 AND credentials for https://www.foobarbaz.com/app2, there seems to be no way for it to autocomplete both based on the URL. Even editing the keychain to add the path (it stores only the hostname by default) does not help. Is there anything I can do to let it work the way I want while still keeping everything on one hostname? Modifying anything server side is of course possible, but I can't switch the apps to HTTP authentication (and not every one would support it anyway) to use different 'realms'.

    Read the article

  • Are there any critical reasons why one could not use ubuntu as a server platform?

    - by Chiggsy
    We were using Lenny (well, Sid, really). We had to do that for development. I upgraded my server to Ubuntu 10.04 for a different project and noticed the packages. Wearing my developer hat, it's a no-brainer: everything we need is there. I'm the admin as well. We might need more than one "box" (running on a VPS for now). I do not want to build things that apt would put on for me. It's not hard, but I'm going to need that time. The Debian "box" has a bunch of stuff on it that'll have to be integrated properly, but I think we are going live in a distressingly short time. (Just found out.) I am aware of the reflexive answers to this question. What I would like to ask is: are there critical bugs or critical instabilities that would make one shy away from the Ubuntu server path? I could not find any bugs that would stop me, but perhaps there is something?

    Read the article

  • Barriers to IPv6 deployment: addressing

    - by sysadmin1138
    There are several things that are keeping IPv6 deployment from being a topic of active discussion here at my work. There are the usual technical issues, but one non-technical one appears to be a major stumbling block on the path to actually getting a deployment project going. Addresses, memorizing of. Specifically, IPv4 addresses are comprehensible, and IPv6 addresses just look like a big long string of hex. The human mind has real trouble memorizing lists of more than 7-8 items, and an IPv4 address (192.168.231.148) has four items in it which makes it easy for us to memorize. A fully populated IPv6 address has not only 8 sections, but each section has 4 hex digits in it. IPv6 addresses were not designed for memorization. To the technician who knows that the DNS server is at 192.168.42.42 (or more likely "42.42", since the company prefix is likely memorized), the idea of memorizing an IPv6 address fills them with dread. Which in turn makes them much less enthusiastic about participating in an IPv6 deployment project. Because of how our network works we're not fully dynamic in terms of v4 addressing. We have several to many subnets that are entirely statically assigned for a variety of reasons, chief among them being that the overhead of static DHCP assignments is perceived as being too great. Also, some devices still aren't smart enough to pull DNS addresses out of DHCP while also having a static assignment, and therefore require manually configured DNS settings. Therefore, some v6 address memorization will have to be done. We're not under any mandate to get v6 out the door, so we don't have pressure from the top. However, it is time to start prepping our infrastructure to handle IPv6 even if we don't convert wholesale. For those of you who have been in IPv6-land for a while, what short-cut methods do you use to discuss or keep track of subnets and specific/critical IP addresses? If I can help reduce some of the dread surrounding IPv6 we might get the project going.
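
    For the memorization problem specifically, one widely used trick is to design the subnet plan so the IPv6 address embeds the numbers people already know; a purely illustrative sketch (2001:db8::/32 is the documentation prefix, the real site prefix would be your allocation):

        IPv4 today:   192.168.42.42           (subnet 42, host 42)
        IPv6 plan:    2001:db8:0:42::42       (site prefix : subnet 42 :: host 42)
        Day to day:   ping6 dns1.example.com  (let DNS carry the long form)

    With a plan like that, "the DNS server is ::42 on subnet 42" stays roughly as memorable as "42.42", and everything else lives in DNS rather than in anyone's head.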

    Read the article

  • Users removing Administrator from files/folders permissions

    - by user64204
    We're running Windows Server 2003 R2 with Active Directory and are having an issue with network shares whereby users, in an attempt to secure their documents, remove everybody (including the Administrator account) from the permissions on their files and folders. Since the Administrator no longer has read permission on them, we can't even back the files up manually, as we get permission errors. One solution that we've found is to change the owner of the files and directories to the Administrator account; we can then change the permissions as we wish. The problem is that this has to be done manually, so it can't really be applied to an entire share. Another solution that we've tried is to use cacls as follows:

        cacls d:\path\to\share /C /T /E /G Administrator:F

    The problem with this is that we still get an ACCESS DENIED error on files/folders from which Administrator was removed. Q1: Is there a way to restore at least read access to all files/folders for the Administrator account in a recursive fashion? That would be for the short term. For the long term we're looking for a way to prevent users from removing Administrator from file/folder permissions. Since we're going to migrate to Windows Server 2008 R2 soon, we could wait until we've migrated to implement such a solution if need be. Q2: Is there a way to prevent users from removing Administrator from file/folder permissions on Windows Server 2003/2008?
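
    For Q1, a hedged sketch of the usual recursive fix: take ownership first, then re-grant. takeown and icacls ship with Vista/Server 2008 and later; on Server 2003 itself you may need the Resource Kit or subinacl equivalents. The path is illustrative.

        rem Retake ownership of everything under the share, then restore full
        rem control for the local Administrators group
        takeown /F D:\path\to\share /R /D Y
        icacls D:\path\to\share /grant Administrators:(OI)(CI)F /T /C

    For Q2, the common approach is to stop granting users Full Control in the first place: Modify is enough for day-to-day work and does not include the Change Permissions right, so the fix lives in the share/NTFS ACL templates rather than in any single command.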

    Read the article

  • Sane patch schedule for Windows 2003 cluster

    - by sixlettervariables
    We've got 75 Win2k3 nodes at work in a coarse-grained compute cluster. The cluster is behind a mountain of firewalls and resides in its own VLAN. Jobs of all sizes and types run on the cluster, and all of the executables running are custom-made. (ed: additional notes on our executables) The jobs range from 30 seconds to 7 days in duration and may contain one executable or 2000 sub-jobs (of short duration). Obviously we are trying to avoid the situation where IT schedules a reboot during a 7-day production job. We have scheduling software which accommodates all of the normal tasks for a coarse-grained cluster, and we can control which machines are active for submission, etc. If WSUS were in some way scriptable (or the client could state its availability for shutdown) we could coordinate the two systems and help out. Currently, the patch schedule is the Sunday after Patch Tuesday, regardless of what is running on the cluster. We have to ask for an exemption every time we want to delay patching a machine for a long-running production job. Basically, while our group is responsible for the machines, we have little control over IT's patch schedule. Is patching monthly on Microsoft's schedule sane for a production Windows cluster? Are there software hooks in WSUS where we could say, "please don't reboot just yet"?
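
    WSUS itself has no job-aware hook that we know of, but the client honours a couple of policy settings and can be paused from a job-scheduler prolog/epilog; a hedged sketch (the registry path and value are the standard Automatic Updates policy keys, the rest is illustrative):

        rem Don't force a reboot while anyone (or any job session) is logged on
        reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" ^
            /v NoAutoRebootWithLoggedOnUsers /t REG_DWORD /d 1 /f

        rem Crude pause/resume of the update agent around a long-running job
        net stop wuauserv
        rem ... dispatch the 7-day job, then on completion ...
        net start wuauserv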

    Read the article

  • AMD Phenom II X2 555 BE, core unlock suddenly not working?

    - by user328271
    I've had a Phenom II X2 555 BE for around 2-3 years. When I got it, I immediately core-unlocked it with my ECS A880GM-A3 mobo, which turns it into a Phenom II X4 B55. A few days ago, I installed Windows 7 64-bit to make use of my 4 GB of RAM. When I start my system with its cores unlocked, it restarts after the BIOS screen. If I disable the core unlock, it boots to the OS just fine. My questions are: does a 64-bit OS make a difference to core unlocking? Did my 3rd and 4th cores burn out? Extra info: I tried core unlocking while keeping the 3rd and 4th cores disabled, and it still won't boot into the OS. Could it be a motherboard problem? Sorry for bad English; I will try to give additional information if needed. Thanks! It is also worth mentioning that I'm no computer expert, but I tried to make my explanation as short as possible. I also asked my question on Tom's Hardware, but I've had no answer so far, so I decided to ask here too. Anyone...?

    Read the article

  • Hadoop init script asks for a password

    - by Ramesh
    I have installed Hadoop on my Ubuntu 12.04 single node. I am trying to execute an init script to make Hadoop run on startup, but it asks for a password every time I execute it.

        #!/bin/sh
        ### BEGIN INIT INFO
        # Provides:          hadoop services
        # Required-Start:    $network
        # Required-Stop:     $network
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Description:       Hadoop services
        # Short-Description: Enable Hadoop services including hdfs
        ### END INIT INFO

        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
        HADOOP_BIN=/home/naveen/softwares/hadoop-1.0.3/bin
        NAME=hadoop
        DESC=hadoop
        USER=naveen
        ROTATE_SUFFIX=

        test -x $HADOOP_BIN || exit 0

        RETVAL=0
        set -e
        cd /

        start_hadoop () {
            set +e
            su $USER -s /bin/sh -c $HADOOP_BIN/start-all.sh > /var/log/hadoop/startup_log
            case "$?" in
                0)
                    echo SUCCESS
                    RETVAL=0
                    ;;
                1)
                    echo TIMEOUT - check /var/log/hadoop/startup_log
                    RETVAL=1
                    ;;
                *)
                    echo FAILED - check /var/log/hadoop/startup_log
                    RETVAL=1
                    ;;
            esac
            set -e
        }

        stop_hadoop () {
            set +e
            if [ $RETVAL = 0 ] ; then
                su $USER -s /bin/sh -c $HADOOP_BIN/stop-all.sh > /var/log/hadoop/shutdown_log
                RETVAL=$?
                if [ $RETVAL != 0 ] ; then
                    echo FAILED - check /var/log/hadoop/shutdown_log
                fi
            else
                echo No nodes running
                RETVAL=0
            fi
            set -e
        }

        restart_hadoop() {
            stop_hadoop
            start_hadoop
        }

        case "$1" in
            start)
                echo -n "Starting $DESC: "
                start_hadoop
                echo "$NAME."
                ;;
            stop)
                echo -n "Stopping $DESC: "
                stop_hadoop
                echo "$NAME."
                ;;
            force-reload|restart)
                echo -n "Restarting $DESC: "
                restart_hadoop
                echo "$NAME."
                ;;
            *)
                echo "Usage: $0 {start|stop|restart|force-reload}" >&2
                RETVAL=1
                ;;
        esac

        exit $RETVAL

    Please tell me how to run Hadoop without entering a password.
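
    A guess at what's happening, plus a hedged workaround: when the script runs from init at boot it runs as root, so su to naveen needs no password; run by hand as a normal user, su prompts for one. If the script is installed as, say, /etc/init.d/hadoop (path assumed), a sudoers entry added with visudo lets the hadoop user start it by hand without a prompt:

        # /etc/sudoers fragment (edit with visudo); username and path are the
        # ones from the script above, adjust to taste
        naveen ALL=(root) NOPASSWD: /etc/init.d/hadoop

        # then start it as:  sudo /etc/init.d/hadoop start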

    Read the article

  • HTC Diamond Touch sync problem

    - by Anders
    I have an HTC Diamond Touch with all my contacts etc. on it. However, I did not use it for six months while abroad. When I started the phone again, I realized that the touch screen has stopped working. I have tried restarting, soft-resetting, shutting it off, etc., but the touch screen just won't follow commands. However, I can operate the phone with the buttons, so it's not frozen; I can get into the phone and view contacts, but not use it to call, etc. The problem is, how do I get my 300 contacts out of the thing!? When I plug in the phone, it lets me choose between "Sync with Outlook" and "Use as storage device", and it automatically selects "Use as storage device". I cannot choose to sync it with the buttons, and I cannot change this option afterwards either. In short, I have a phone with all of my contact data and am completely unable to get it out. Any tips/help/suggestions? If possible, preferably one that does not include sending the phone to a hardware workshop for three weeks to get it fixed :)

    Read the article

  • Best way to troubleshoot intermittent network outages?

    - by Ben Scheirman
    We have a Comcast 50/10 line into our office. We keep seeing very short but sometimes frequent drops in our internet service. It's enough to kick you off of Skype and stop any websites from loading, which is obviously affecting our productivity. We've tried 4 different routers, and we've tried moving everyone off of wireless and onto wired via a switch, and so far nothing has helped. Right now we're on a Cisco SB WRP400-G1 router. Attached to the router is a 16-port switch going to the ports in all of the offices. We've moved to OpenDNS in case it was the Comcast DNS servers going down. Today we tried putting the modem, router, and switch on a UPS to make sure it wasn't power fluctuations that were causing it. Every time we call Comcast, by the time they are here the internet is working fine. I'd like to somehow prove that the problem is with Comcast, so if that means plugging a machine directly into their router and collecting data all day, I'm up for that. I just want to hear ideas on what tools to run and how to collect this data. I could just continuously ping google.com all day long, but I'm not sure how valuable that data would be. Thoughts?
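
    One low-effort way to turn "it drops sometimes" into data is a loop that pings several points at once with timestamps, so each failure can be localized to the LAN, the Comcast gateway, or the wider internet; a rough sketch (the first two addresses are placeholders for your router and the first Comcast hop from a traceroute):

        #!/bin/sh
        # Log timestamped reachability failures for three hops at once
        while true; do
            for host in 192.168.1.1 96.120.0.1 8.8.8.8; do
                ping -c 1 -W 2 "$host" > /dev/null 2>&1 || \
                    echo "$(date '+%F %T') unreachable: $host" >> netdrops.log
            done
            sleep 5
        done

    If the log shows the external hops failing while the local router stays reachable, that is the kind of evidence worth handing to Comcast; tools like mtr or SmokePing give the same picture with nicer graphs.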

    Read the article

  • Remote mouse pointer not visible in VNC

    - by aef
    I have used VNC desktops as a kind of collaboration server, as a shared planning and pair-programming environment, for a long time. My latest iteration uses a KVM guest running Fedora 17 "Beefy Miracle", the Cinnamon desktop environment, and an x11vnc server. The x11vnc server is automatically started with the desktop environment using the following command:

        x11vnc -localhost -many -shared -display :0 -bg

    My problem is that, depending on the VNC client, the mouse pointer of the remote system which is shown through VNC is not synchronized to my client. I really need this, so I can see what my partner is doing on the desktop. When using Vinagre 3.2.1 on Ubuntu Oneiric Ocelot (11.10) or Vinagre 2.3.0.3 on Debian Squeeze (6.0), and I don't have my local mouse pointer inside the VNC view, I cannot see the mouse pointer of the remote system, nor its movement. When using TightVNC on Windows 7, I can recognize a mouse pointer trace for very short amounts of time after moving the mouse, but it is not clearly visible. Using UltraVNC on Windows 7 the mouse pointer is clearly visible all the time. With GNOME 2 I never had any problems with remote pointer synchronization, using exactly the same clients. I suspect this could have something to do with Cinnamon's dependency on 3D acceleration. On the other hand, starting Cinnamon's fallback environment, Cinnamon 2D, doesn't change anything. Update: Same effect when I use GNOME 3.
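
    A hedged thing to try: x11vnc can paint the cursor into the framebuffer itself, so viewers that never render server-side cursor updates (which appears to be what the affected Vinagre builds do) still see it. Check x11vnc -help for the exact option names on your build, but something along these lines:

        # add cursor drawing to the existing startup command
        x11vnc -localhost -many -shared -display :0 -bg -cursor most -nocursorshape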

    Read the article

  • Which AMI to use for Java/Tomcat/MySQL in Amazon EC2?

    - by Justin
    I originally posted this on stackoverflow.com and it was suggested that serverfault.com might be a better place to ask this question. So here goes: I'm trying to determine which Amazon Machine Image (AMI) to use as my virtual server in Amazon's EC2. For now, I'll need to choose an AMI that complies with the AWS Free Usage Tier. I want to deploy a Java app that I've been developing using Eclipse on Windows XP, Tomcat 7 and MySQL 5.5. I'm aware that I can choose the Basic 32-bit Amazon Linux AMI and then manually install Tomcat and MySQL (does MySQL get installed on the image or separately on an Elastic Block Store (EBS) volume?). Here's the rub: I'm a bit of a Linux noob. I can start Tomcat and tail the logs and such on Linux, but I'm not familiar with the install process for Tomcat and MySQL on Linux, or with commands like sudo and chmod. I'm happy to get more hands-on with Linux, but I'm short on time right now. Are there AMIs that already have Tomcat and MySQL bundled? The Request Instance Wizard shows 805 community AMIs that are Free Tier eligible; 51 of them have "Tomcat" in their name. I'm willing to consider using Elastic Beanstalk, but my research thus far hasn't found any discussion of using MySQL with Beanstalk. The discussions all seem to use Amazon's SimpleDB. Any advice is greatly appreciated.
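
    For scale: on the stock Amazon Linux AMI the manual route is only a few commands, though the package names below are assumptions that may vary with the repo version:

        # install, start, and enable Tomcat and MySQL on the Amazon Linux AMI
        sudo yum -y install tomcat6 mysql-server
        sudo service mysqld start && sudo service tomcat6 start
        sudo chkconfig mysqld on && sudo chkconfig tomcat6 on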

    Read the article

  • Join multiple filesystems (on multiple computers) into one big volume

    - by jm666
    Scenario: I have 10 computers, each (currently) with 12x2TB HDDs in a raidz2 (10+2) configuration, so in each computer I have one volume of approximately 20 TB. Now I need to join those 10 separate computers (separate RAID groups) into one big volume. What is the recommended solution? I'm thinking about FCoE (10 Gb Ethernet): buying an FCoE card for each computer, and then what more is needed on the hardware side? (Probably another computer and an FCoE switch, like a Cisco Nexus?) The main question is: what do I need to install and configure on each computer? Currently they run FreeBSD with raidz2, but it is possible to change to Linux/Solaris if needed. Any helpful resource that talks about how to build big volumes from smaller RAID groups (on the software side) is very welcome: what OS, what filesystem, what software, etc. In short: I want to get approximately 200 TB of storage (in one filesystem) from already existing computers/storage. I don't need fast writes, but I do need good read performance (it's a big fileserver), and it should work transparently, so that when storing data I don't have to care about which computer the data goes to (i.e. not 10 mountpoints, but one big logical filesystem). Thanks.
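
    One commonly suggested software-side answer is a distributed filesystem over plain Ethernet rather than FCoE; as a rough sketch only (GlusterFS, with illustrative hostnames and brick paths, and assuming Gluster packages exist for the chosen OS):

        # on node01, after peering with the other nine servers
        gluster peer probe node02        # ... repeat through node10
        gluster volume create bigvol transport tcp \
            node01:/export/brick node02:/export/brick node03:/export/brick
        gluster volume start bigvol

        # on any client: one mountpoint, one logical namespace
        mount -t glusterfs node01:/bigvol /mnt/bigvol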

    Read the article

  • Intermittently uncommunicative subnets

    - by mhd
    Last week proved me a veritable Cassandra: I've always said that it's a bad idea to have only one firewall/router, without a backup or failover. And thus our Cisco PIX went haywire, refusing to route properly. Of course, the only one available here on short notice is me, and while I'm quite grounded in Linux, I'm really a developer, not a sysadmin (the fact that this hit me on sysadmin appreciation day is a bit ironic). Anyway, this weekend I tried to hack up a temporary solution: I used an old server with enough NICs (two built in, four on a card) to serve as a gateway and firewall. Due to some problems with the RAID controller, I got only two router distros running, and between Untangle and eBox I decided on the latter. Now everything is mostly okay. I've got all the different subnets we have here (all with separate switches) talking to each other and even to the internet (Cisco 2800 router, T1 lines). But from time to time (at 20-60 minute intervals), I get a total routing failure: our main office subnet can't talk to our server subnet and can't connect to the internet. This is not the end of a gradual slowdown; either everything's working perfectly or I get a total lack of communication, for about two minutes each time. Now I'm a bit at wits' end about what to check. At least with the default eBox setup, nothing in /var/log shows anything weird, and it doesn't exactly have lots of built-in monitoring tools. So I'm hoping someone here could give me some pointers about what to look out for. I did change the Ethernet cable from the office switch to the firewall, with no results. I might change switches, although within the switch it seems to work well enough. Edit: I'm not sure whether this is the sole cause of the problem, but after I noticed a few DHCP entries just before the last drop of connectivity, I tried to reproduce that. And alas, whenever I renew a DHCP lease, I can't access other subnets anymore. We're running ISC DHCPD 3.0.6.
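
    Since a DHCP renewal reproduces it, a hedged way to catch the culprit in the act is to watch the office-facing interface of the eBox gateway during a renewal and snapshot the routing/ARP state before and after (interface name is illustrative):

        # watch ARP and DHCP traffic while a client renews its lease
        tcpdump -ni eth1 'arp or udp port 67 or udp port 68'

        # snapshot routes and neighbours before/after the renewal, then diff
        ip route show > /tmp/routes.before ; ip neigh show > /tmp/arp.before
        ip route show > /tmp/routes.after  ; ip neigh show > /tmp/arp.after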

    Read the article

  • How to get data from a borked Windows Home Server

    - by harhoo
    Yesterday we had a power surge, followed by a power outage. This left my WHS borked: powering on just gives a flashing blue light (the LED on the power supply also flashes green), with no fan or boot activity, nothing. I urgently needed some files off there in the short term (and the 500 GB of photos, music, personal video, etc. in the long term), so I took the hard drive out and put it in my computer. The files and folders showed up, but I couldn't access them: clicking on an image gave an invalid-image error in Picasa, I couldn't play MP3s, etc. I changed the ownership and permissions of the files, still nothing. I booted with a LiveCD; same thing: files appear, but won't open. Is there anything else I can do? I'm now wondering if it was just the power cable that's broken, but if so, why can't I access my files from the hard drive? If it is the power cable, and I replace that and the hard drive, will I have done any harm messing around with ownership and file permissions?

    Read the article

  • I deployed Flash Player via a Software Installation policy. How to upgrade?

    - by eleven81
    I have a Windows Server 2008 machine as my DC. Earlier this year I created a Software Installation GPO to deploy the Adobe Flash Player plugin MSI. I assigned the policy to the computers; about half run Windows XP x86 and the other half Windows 7 x64. That all works like clockwork. When I created the Software Installation policy, I disabled the Flash Player plugin's automatic update feature by editing the MSI in Orca. I did this because I wanted all of my machines to run the exact same version of the plugin. Now some time has passed and a newer version of the Flash Player plugin has been released, so it is time for me to push out the updated version. I already have the new MSI, but I am lost on what to do next. I see the Upgrades tab in the Software Installation GPO, but everything there reads as though it would be used for add-ons to a larger master program and not for updates that are released over time. I have read that it is best to create a new Software Installation policy with the new MSI, revoke the old GPO, and assign the new GPO. I feel as though, over time, I will wind up with more revoked policies than active ones. I have also read that some people have had success by replacing the old MSI with the new MSI and simply telling the GPO to redeploy. This seems like a backdoor method that will only get me into trouble. In short, what is the correct, best-practice, or preferred way to roll out the new version via Group Policy?

    Read the article

  • How do you optimize your Outlook Exchange + IMAP setup?

    - by Mike
    My company provides an Outlook/Exchange account we must use for mail/calendar. Like many companies, they unfortunately also provide a ridiculously small mail quota. I got tired of managing and backing up .pst files (since I'm always in my e-mail there is never a good time to back it up), so I started storing my archived mail "in the cloud", using an IMAP server I set up on my Linux box. This has a few drawbacks for me: (1) IMAP (at least the implementation in Outlook) is *very slow*; furthermore, if I move a large number of messages to the IMAP server, it sometimes blocks the entire Outlook client for hours, which is quite annoying. (2) I can't use Exchange over HTTP to do mail without launching a VPN session, because the client-side rules I have that organize my mail fail, and the rule gets disabled, if the IMAP server can't be reached. (3) If I reply to a message from my IMAP store, I have to specify an SMTP server willing to relay for me in order to send e-mail, unless I always remember to select my Exchange account while composing. The main advantage, though, is that it is very easy to back up, with a couple of cron jobs that essentially do an rsync. Short of moving the IMAP server to my local host (which seems like it might have the same file-locking problems as using a .pst), my options seem limited for solving (1). I'd like to come up with a solution for (2) and (3), though. For problem (2), would it be possible to somehow tell Outlook that the IMAP server is "offline", and have it synchronize my changes during a periodic "send and receive"? If so, I wonder whether it would block the Outlook client, as it does in problem (1), and whether it would be compatible with the client-only rules I use to sort my mail into folders. I've looked all over the options menu and have not found a way to tell Outlook not to use a certain account for sending mail, which would solve (3). Is anyone else crazy enough to be doing something like this? Any ideas?

    Read the article

  • Switch to IPv6 and get rid of NAT? Are you kidding?

    - by Ernie
    So our ISP has set up IPv6 recently, and I've been studying what the transition should entail before jumping into the fray. I've noticed three very important issues: (1) Our office NAT router (an old Linksys BEFSR41) does not support IPv6, nor does any newer router, AFAICT. The book I'm reading about IPv6 tells me that it makes NAT "unnecessary" anyway, but if we're supposed to just get rid of this router and plug everything directly into the Internet, I start to panic. There's no way in hell I'll put our billing database (with lots of credit card information!) on the internet for everyone to see. Even if I were to propose setting up Windows' firewall on it to allow only six addresses to have any access to it at all, I still break out in a cold sweat. I don't trust Windows, Windows' firewall, or the network at large enough to even be remotely comfortable with that. (2) There are a few old hardware devices (i.e. printers) that have absolutely no IPv6 capability at all, and likely a laundry list of security issues that date back to around 1998, likely with no way to actually patch them in any way, and no funding for new printers. (3) I hear that IPv6 and IPsec are supposed to make all this secure somehow, but without physically separated networks that make these devices invisible to the Internet, I really can't see how. I can likewise see how any defences I create would be overrun in short order. I've been running servers on the Internet for years now and I'm quite familiar with the sort of things necessary to secure those, but putting something private on the network like our billing database has always been completely out of the question. What should I be replacing NAT with, if we don't have physically separate networks?
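
    The short answer usually given, sketched here with illustrative interface names and not as a complete policy, is that the "protection" NAT provided was really its stateful filter, and that filter survives the move: a default-deny forwarding policy on the IPv6 router leaves inside hosts globally addressable but not reachable from outside:

        # default-deny stateful IPv6 forwarding (Linux router example)
        ip6tables -P FORWARD DROP
        ip6tables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
        ip6tables -A FORWARD -i eth_lan -o eth_wan -j ACCEPT    # outbound only
        ip6tables -A FORWARD -p icmpv6 -j ACCEPT                # keep PMTUD working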

    Read the article

  • Patch management on multiple systems

    - by Pierre
    I'm in charge of auditing the security configuration of an important farm of Unix servers. So far, I have come up with a way to assess the basic configuration, but not the installed updates. The problem here is that I just can't trust the package management tools on those machines: some of them have not synced with a repository for a long time (so I can't do a "yum check-update" on Red Hat, for example), and some of those servers are not even connected to the internet and use a company repository. Another problem is that I have multiple target systems: AIX, Debian, CentOS/Red Hat, etc., so the versions can differ (AIX) and the available tools will differ. And, last but not least, I can't install anything on the target systems. So I need to use a script to retrieve the information and either process it directly or save it to be processed later on a server (which may happen to run a different distribution than the one from which the information was retrieved). The best ideas I could come up with were: either retrieve the list of installed packages on the machine (dpkg -l, for example, on Debian) and process it on a dedicated server (directly parsing the "Packages" file of the Debian repositories), though the problem remains the same for AIX and Red Hat; or use Nessus' scripts to assess vulnerabilities in the installed packages, but I find this a bit dirty. Does anyone know a better or more efficient way of doing this? P.S.: I already took time to review some answers to similar problems. Unfortunately Chef, Puppet, ... don't meet the requirements I have to meet. Edit: Long story short, I need the list of missing updates on a Unix system, just like MBSA on Windows. I'm not authorized to install anything on these systems, as they're not mine. All I have are scripting languages. Thanks.
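
    A minimal sketch of the collect-only half (nothing installed on the targets, output shipped off for offline comparison on the central box); the commands are the stock inventory tools on each platform, and the output path is illustrative:

        #!/bin/sh
        # dump the installed-package inventory in whatever format this host knows
        OUT=/tmp/pkg_inventory_`hostname`.txt
        if   [ -x /usr/bin/dpkg ]; then dpkg -l
        elif [ -x /bin/rpm ] || [ -x /usr/bin/rpm ]; then rpm -qa --qf '%{NAME} %{VERSION}-%{RELEASE}\n'
        elif [ -x /usr/bin/lslpp ]; then lslpp -Lc    # AIX filesets
        fi > "$OUT"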

    Read the article

  • Spotlight has stopped indexing/returning anything in /Applications

    - by pra
    After a recent kernel panic & restart, Spotlight no longer seems to know anything about the files under my /Applications folder. I used to launch Safari.app, Opera.app, Textedit.app, etc. via Spotlight as a matter of routine. Now I get "No results found" for all of them (except Textedit.app, which launches a demo text editor from a Qt installation). The programs are still there & still launch directly from Finder. I've already run Disk Utility & verified the disk; no issues. I repaired disk permissions, which made several changes, but to no effect. Is there anything else I can do, short of reinstalling Mac OS X? Update: I had already verified that "Applications" was still checked in my Spotlight preferences, and it was still returning applications located elsewhere (the Qt textedit sample app), so that shouldn't have been the issue. A few hours later it resolved itself; I guess there's a background process running on some interval.
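
    For the record, the usual manual fix when this doesn't resolve itself is to check the index state and force a rebuild with mdutil (ships with OS X; -E erases and rebuilds the index for the given volume):

        # show indexing status for the boot volume, then rebuild its index
        mdutil -s /
        sudo mdutil -E /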

    Read the article

  • Windows VPN always disconnects after < 3 minutes, only from my network

    - by hemp
    First, this problem has existed for almost two years. Until serverfault was born, I had pretty much given up on solving it - but now, hope is reborn! I've set up a Windows 2003 server as a domain controller and VPN server at a remote office. I am able to connect to and work over the VPN from every Windows client I've tried, including XP, Vista, and Windows 7, without issue, from at least five different networks (corporate and home, domain and non-domain). It works fine from all of them. However, whenever I connect from clients on my home network, the connection drops (silently) after 3 minutes or less. After a short while, it will eventually tell me the connection has dropped and attempt to redial/reconnect (if I've configured the client that way). If I reconnect, the connection will re-establish and appear to work correctly, but again it will silently drop, this time after a seemingly shorter period. These are not intermittent drops; it happens every single time, in exactly the same way. The only variable is how long the connection survives. It doesn't matter what type of traffic I send: I can sit idle, send continuous pings, RDP, transfer files, all of that at once - it makes no difference. The result is always the same: connected for a few minutes, then silent death. Since I doubt anyone has experienced this exact situation, what steps can I take to troubleshoot my evanescing VPN?
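
    One hedged place to start on the server side is RRAS component tracing, which writes per-component PPP/RAS logs you can read after reproducing a drop from the home network:

        rem on the Windows 2003 RRAS/VPN server
        netsh ras set tracing * enabled
        rem ... reproduce the disconnect, then inspect %windir%\tracing\PPP.LOG etc. ...
        netsh ras set tracing * disabled

    Comparing the timestamps there against the home router's own logs (a drop that always lands inside the first few minutes smells like an idle/NAT timeout or MTU issue on that one network) helps show which side gives up first.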

    Read the article
