Search Results

Search found 5560 results on 223 pages for 'brute force attacks'.


  • Change default profile directory per group

    - by Joel Coel
    Is it possible to force Windows to create profiles for members of one Active Directory group in a different folder from members of another Active Directory group? The school here uses DeepFreeze to protect public computers. In a nutshell, DeepFreeze prevents all changes to a hard drive, so that every time you restart the machine the disk is identical to what it was at the time you froze it. This is a bit different from restoring an image, in that changes are never really written to disk in a permanent way in the first place. This has a few advantages over images: faster recovery times, and it's easy to thaw the machine for a few minutes to perform maintenance such as Windows updates (which can even be automated). DeepFreeze also allows you to configure a "thawspace" partition, where changes are persistent across reboots.

    One of the weaknesses of DeepFreeze is that you end up needing to create a new profile every time you log in, unless your profile existed at the time the machine was frozen. And even then, any changes you make to your profile while working on a frozen machine are lost. As students have frequent legitimate needs to log in to our classroom machines, there is currently a lot of cleanup involved from time to time in removing their old profiles and changes, so I want to extend DeepFreeze to protect our classroom computers as well as the public ones. The problem is that faculty have a real need to keep a stateful profile locally on these classroom computers. The solution I would like to use is to configure Windows via Group Policy (or even manually, if that's what it takes) to place profile folders on the thawspace partition, but only for members of the faculty security group. Is this possible?

    Read the article

  • Find slow network nodes between two data centers

    - by 2called-chaos
    I've got a problem syncing a large amount of data between two data centers. Both machines have a gigabit connection and are not fully utilized, but the fastest I am able to get is something between 6 and 10 Mbit - not acceptable! Yesterday I ran some traceroutes, which indicated huge load on a LEVEL3 router, but the problem has existed for weeks now and the high response times are gone (20ms instead of 300ms). How can I trace this to find the actual slow node? I thought about a traceroute with bigger packets, but will that work? In addition, this problem might not be related to one of our servers, as there are much higher transmission rates to other servers and clients. Actually, office -> server is faster than server -> server! Any idea is appreciated ;)

    Update: We actually use rsync over ssh to copy the files. As encryption tends to add bottlenecks, I tried a plain HTTP request, but unfortunately it is just as slow. We have an SLA with one of the data centers. They said they have already tried to change the routing, because they say the problem is related to a cheap network that the traffic gets routed through. It is true that it routes through a "cheapnet", but only in the other direction. Our direction goes through LEVEL3 and the other way goes through Lambdanet (which they say is not a good network). If I got it right (my networking knowledge is intermediate), they simulated a longer path to force routing through LEVEL3, and they announce LEVEL3 in the AS path. I basically want to know whether they're right or are just trying to dodge their responsibility. The thing is that the problem exists in both directions (over different routes), so I think it is the responsibility of our hoster. And honestly, I don't believe there is a DC-to-DC connection which can only handle 600 kB/s - 1.5 MB/s for weeks! The question is how to detect WHERE the bottleneck is.
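
    For what it's worth, two hedged measurement ideas (hostnames are placeholders): mtr reports per-hop loss and latency over many cycles and can use larger-than-default packets, and iperf3, if it can be run on both ends, measures raw TCP throughput in each direction with no disk or ssh overhead in the way:

      # per-hop loss/latency with larger packets (host is a placeholder)
      mtr --report --report-cycles 200 --psize 1400 target.example.com

      # raw TCP throughput, both directions; run "iperf3 -s" on the far end first
      iperf3 -c target.example.com -t 30
      iperf3 -c target.example.com -t 30 -R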

    Read the article

  • ACER ASPIRE V3-571G-9435 Fan not kicking in leading to overheating

    - by brythespy
    This laptop has always had this problem. The temperatures climb to the thermal ceiling of 99C for the CPU (i7-3610QM) and 94C for the GPU (GT 640M). Problem is, the FAN doesn't give a damn. It's actually QUIETER when the temperatures are that high than when it's at 60C or so. I figured it was a problem with the BIOS, so I updated that: no change. So maybe it was a problem with Windows? Nope, same result when gaming on Ubuntu. The major consequence is that after gaming for ten minutes the CPU throttles itself to 1197MHz (as opposed to 3193MHz), and the GPU goes down to 135MHz (as opposed to 843MHz). The point is that the fan won't kick in like I know it can, because when the laptop is in POST, like at BIOS setup, the fan is as loud as a vacuum cleaner! I don't really care about noise, so I'd love to have the fan like that all the time as long as the temperatures don't go through the roof. Things I've tried so far, to head off duplicate answers:

    - Checked for dust: it's been this way since the laptop was new, and I've since taken it apart. No dust buildup.
    - Background stuff running? No; the problem persists across OSes, and it happens while gaming anyway.
    - Manually underclocking the CPU/GPU: using Windows, I can force the CPU to stay at 1.1GHz, but the temperature STILL easily hits 99C after 5 minutes of gaming.
    - Contacted Acer support? No help at all. They told me to update and reset the BIOS, which I have done multiple times. There are only about 6 changeable settings anyway, none of which should affect the fan control.
    - Third-party fan control programs? None detect the fan.

    So I'm stuck until I can afford to replace this laptop, but I am very satisfied with performance in games... whenever the CPU/GPU aren't being throttled. Anyone who can offer advice to solve this problem would be greatly appreciated. Hell, if you solved my problem I'd send you some monies through PayPal.
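
    Since the problem reproduces under Ubuntu, one hedged avenue is checking whether the OS can see a controllable fan at all; note that many laptops expose no PWM outputs to the OS (the fan stays under BIOS/EC control), in which case pwmconfig will find nothing usable:

      # a sketch for the Ubuntu side; lm-sensors and fancontrol are standard packages
      sudo apt-get install lm-sensors fancontrol
      sudo sensors-detect   # probe for hardware monitoring chips
      sensors               # check whether fan RPM and temperatures show up
      sudo pwmconfig        # search for fan outputs the OS can actually drive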

    Read the article

  • Windows 7 64-bit installation freezes after a while during the setup

    - by vinz243
    I have Windows 7 32-bit on my computer. Because I have 5 GB of RAM (Kingston) on my ASUS M2N motherboard and only 3 GB of it was usable, I bought W7 x64 and installed it. It loads the wizard, but after a while it freezes and I have to force a reboot. It first crashed while unzipping the W7 files, but if I wait a while on the terms page, for example, it can crash there too, which makes me think it is a matter of time rather than a particular step. I remember I had the same issue while booting Ubuntu x64: it crashed randomly and never loaded completely. No beep or other messages.

    Configuration:

    Software:
    - OS (before): W7 x86 Pro
    - New OS: W7 x64 Pro
    - Antivirus: Avast (BIOS verification?)
    - BIOS: 03/27/2008 - v08.00.12

    Hardware:
    - Motherboard: ASUS M2N
    - Processor: AMD Athlon 64 dual-core @ 2.6 GHz
    - Memory: 5120 MB ((2 + 2) + (1))

    NOTES: I ran a memory test using an openSUSE CD; I have not let it finish, but it ran.

    EDIT: I tried not running the setup but just waiting, and I got the BSOD: "A problem..." (TL;DW) IRQL_NOT_LESS_OR_EQUAL "If it is..." (TL;DW)

      ***STOP: 0x0000000A (0x0000000000000000, 0x0000000000000002, 0x0000000000000001, 0xFFFFF8001A49ED1F)

    Read the article

  • Change Windows Authentication user for Sql Server Management Studio

    - by Asmor
    We're using SQL Server 2005 with Windows Authentication set up. So normally, when you log in using e.g. SQL Server Management Studio, it logs you in as MACHINE_NAME\Username. Anyways, on this one particular computer, the person said she had to make a new account called User01 to do something, and showed me where she'd created it under Security in the "master" system database. So now when she logs in, it's listed as MACHINE_NAME\User01 (not her actual Windows user name). It's still set to Windows Authentication, though, and I'm unable to change the login name.

    Now here's where the real problem comes in: I didn't realize that she was being logged in under this user name at the time, and I disabled it to see what would happen. Now I can't log into the server under her account. I created a new account in Windows called test, and as expected SSMS had the username as MACHINE_NAME\test, and I was able to log in fine. However, the area where the User01 account was listed is not visible to me as far as I can tell, so I can't re-enable it. I also tried running the following query:

      alter login User01 ENABLE

    and got this error:

      Msg 15151, Level 16, State 1, Line 1
      Cannot alter the login 'User01', because it does not exist or you do not have permission.

    So in a nutshell, ideally I'd like to re-enable User01 somehow, just to get things back to where they used to be. Failing that, how can I force SSMS to log in using the Windows account name as it should, rather than trying to use User01?
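
    For reference, a hedged sketch of the re-enable step from the command line: ALTER LOGIN has to run as a login that holds ALTER ANY LOGIN (on a default SQL Server 2005 install, members of the machine's local Administrators group are sysadmin), and the dedicated admin connection (-A) can be used if normal connections are blocked; the server name is a placeholder:

      sqlcmd -S MACHINE_NAME -E -A -Q "ALTER LOGIN [User01] ENABLE;"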

    Read the article

  • Terratec Cinergy Hybrid T USB XS is not recognized anymore on Mac OS X

    - by Gabble
    I have used the Terratec Cinergy Hybrid T USB XS for years now, alongside Elgato's EyeTV software. I am a happy and completely satisfied user! A couple of weeks ago the USB stick stopped working on my MacPro1,1 (OS version 10.6.3): EyeTV does not see any device attached, and indeed the green LED on the stick stays off. It is not a USB port fault: I have unplugged every other USB/FireWire device and tried different USB ports, to no avail (other USB devices work as expected on any port). I have completely uninstalled the EyeTV software, including preferences and system daemons/extensions, rebooted, and reinstalled the latest EyeTV. No way. Reset the PRAM. Nope. Checked the Apple System Profiler - USB: no device attached; the MacPro does not see it at all.

    I need to say that: a) the device worked like a charm even with the latest OS 10.6.x (so an OS upgrade is not the cause); b) I have plugged the Terratec Cinergy Hybrid T USB XS into my MacBook5,1, where EyeTV is not and never was installed: the green LED on the stick turns on, the Growl bubble pops up, and the device is perfectly recognized by the system. Apple System Profiler says (labels translated from Italian):

      Cinergy Hybrid T USB XS (2882):
      Product ID: 0x005e
      Vendor ID: 0x0ccd
      Version: 1.10
      Serial Number: 061102005755
      Speed: Up to 12 Mb/s
      Manufacturer: TerraTec Electronic GmbH
      Location ID: 0x04100000
      Current Available (mA): 500
      Current Required (mA): 500

    At this point I am pretty sure the Terratec stick is not damaged and there is something wrong with my MacPro. I kindly ask you: is there a way to force my MacPro to recognize the USB device? What can I check? Is there something that caches USB connections that can be reset? An OS reinstall would be the very last resort for me. Thanks in advance for any help you will offer!

    Read the article

  • How can I remotely display images on my computer?

    - by Jakob
    What I Have: a laptop booted into Ubuntu and a stationary computer dual-booted with Ubuntu and Vista, both connected through a wireless ad-hoc network.

    What I Want: a way to display images fullscreen on my stationary computer, using my laptop as a "remote control". I want to be able to choose another picture at any time and have the stationary computer remain in fullscreen mode at all times. Preferably, I should also be able to display just an empty (black) screen. How can I arrange this?

    What I Have Tried: I have tried simply SSHing into my stationary computer and opening the image files in an image viewer, but all of the ones I have tried (Eye of GNOME, Mirage, Gwenview, and others) open a new window for every new image, and I don't know how to force them into using a single instance. I have tried the VLC remote control command-line interface, but apart from seeming somewhat unreliable (exiting with segmentation faults at one point), it also displays some images with a green border and forces me to pause playback in order for the image to remain on screen.

    Bonus Question: in my final setup, I also need to play music through my stationary computer's speakers and have the ability to switch to another track at any point, like with the images. Preferably, I would like to control the images and the audio through the same interface. How can I best achieve this?
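
    One hedged approach, assuming X runs on the stationary computer as display :0, feh is installed there, and the image paths exist on that machine (the hostname is a placeholder): kill the previous viewer and start a fresh fullscreen instance over SSH each time a picture is chosen.

      #!/bin/sh
      # show.sh IMAGE - run from the laptop; "stationary" is a placeholder hostname
      ssh stationary "pkill feh; DISPLAY=:0 nohup feh --fullscreen --hide-pointer '$1' >/dev/null 2>&1 &"

    A plain black image shown the same way covers the empty-screen case, and something like mpd with the mpc client could cover the audio side from the same shell, though that part is untested here.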

    Read the article

  • csvde doesn't import users

    - by The Eighth Ero
    I have a small problem (I'm a beginner at server management): I installed a Domain Controller on my Windows Server 2008 box and created three OUs. Now I'm trying to add users to each OU via the csvde command, but this is the result of the operation, with no errors mentioned:

    > C:\csvde>csvde -i -f List.csv
    > Connecting to "(null)"
    > Logging in as current user using SSPI
    > Importing directory from file "List.csv"
    > Loading entries.
    > 0 entries modified successfully.

    Below is the CSV file I'm using to add 2 users to the "Offshoring1" OU; the domain name is "iado.lan".

      DN                                       objectClass  sAMAccountName  sn  givenName  userPrincipalNAme
      cn=BB NN,ou=Offshoring1,dc=iado,dc=lan   user         BB              NN  BB         [email protected]
      cn=II YY,ou=Offshoring1,dc=iado,dc=lan   user         II              YY  II         [email protected]

    And this is the CSV data as generated by Word 2011 on my Mac:

      DN;objectClass;sAMAccountName;sn;givenName;userPrincipalNAme
      cn=BB NN,ou=Offshoring1,dc=iado,dc=lan;user;BB;NN;BB;[email protected]
      cn=II YY,ou=Offshoring1,dc=iado,dc=lan;user;II;YY;II;[email protected]

    I did try the -k option to force the import, but still no success.

    Read the article

  • root folder php scripts not running in nginx

    - by Thermionix
    nginx with php-fpm on an Ubuntu 12.04 server. Attempting to access /var/www/test.php (via https://example.net/test.php) downloads the script instead of executing it. If I place test.php in a subdirectory, i.e. /var/www/test/test.php, it executes.

    root.conf:

      root /var/www;
      include php-fpm.conf;

      location ~ /\. {
          access_log off;
          log_not_found off;
          deny all;
      }

    php-fpm.conf:

      location ~ \.php$ {
          try_files $uri =404;
          fastcgi_pass unix:/var/run/php5-fpm.socket;
          include fastcgi_params;
      }

    fastcgi_params:

      fastcgi_param QUERY_STRING $query_string;
      fastcgi_param REQUEST_METHOD $request_method;
      fastcgi_param CONTENT_TYPE $content_type;
      fastcgi_param CONTENT_LENGTH $content_length;
      fastcgi_index index.php;
      fastcgi_param HTTPS on;
      fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
      #fastcgi_param SCRIPT_FILENAME $request_filename;
      fastcgi_param SCRIPT_NAME $fastcgi_script_name;
      fastcgi_param REQUEST_URI $request_uri;
      fastcgi_param DOCUMENT_URI $document_uri;
      fastcgi_param DOCUMENT_ROOT $document_root;
      fastcgi_param SERVER_PROTOCOL $server_protocol;
      fastcgi_param GATEWAY_INTERFACE CGI/1.1;
      fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
      fastcgi_param REMOTE_ADDR $remote_addr;
      fastcgi_param REMOTE_PORT $remote_port;
      fastcgi_param SERVER_ADDR $server_addr;
      fastcgi_param SERVER_PORT $server_port;
      fastcgi_param SERVER_NAME $server_name;

      # PHP only, required if PHP was built with --enable-force-cgi-redirect
      fastcgi_param REDIRECT_STATUS 200;

    Read the article

  • Forcing programs to be installed to another drive

    - by zyboxenterprises
    I have an SSD as my main Windows drive, with a 640GB 2.5" HDD partitioned to store programs and user settings, and also to act as backup (it's the only thing I had lying around at the time of building my PC). The goal was to make the PC as fast as possible while having extra storage capacity available for normal user data and to assist in my small data recovery business. The problem is that whenever I install a program, it installs to C:\Program Files\ (or C:\Program Files (x86)\ for 32-bit programs), although I have changed the environment variables. This wouldn't normally be an issue, except that every installer points its shortcuts at my 640GB HDD. The root layout of both drives (screenshot omitted). To clarify:

    - Program files get installed to C:\
    - Program shortcuts always point to Z:\, my 640GB HDD

    Modifying the relevant environment variables doesn't do anything. I looked at this, but it only talks about modifying the registry and environment variables, which I have already done. I install to the Z:\ drive when the installer lets me change the installation path, but some installers don't let me change it. Is there a way I can force every program to install to the relevant location on Z:\? Perhaps I'm missing something here?

    Edit: Found this program; would it be appropriate to use in my case? I would be able to move the entire Program Files (and its x86 version) to Z:\ without impacting performance.
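
    Regarding the Edit: tools of that kind generally work by moving a program's folder and leaving an NTFS junction behind, which can also be done by hand from an elevated prompt; a hedged sketch with a hypothetical app folder (close the app first, and note that some installers and updaters dislike junctions):

      robocopy "C:\Program Files\ExampleApp" "Z:\Program Files\ExampleApp" /E /MOVE
      mklink /J "C:\Program Files\ExampleApp" "Z:\Program Files\ExampleApp"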

    Read the article

  • Hibernate between OS X and Bootcamp Win 7

    - by Willem
    Wouldn't it be great if someone wrote a guide or an app that let you switch instantly between OS X and Windows using hibernation in both OSes? Windows 7 already has a "Hibernate" option which allows you to boot back into your OS X partition, but OS X does not offer quite the same. However, there are possibilities here. It seems that recent Macs have 3 different kinds of sleep mode:

    1. Sleep: low power consumption, RAM still powered.
    2. Legacy Safe Sleep: no power consumption(?), writes RAM to disk and shuts down (is this the same as Hibernate?)
    3. Safe Sleep: writes RAM to disk and enters sleep mode. If the battery level drops too low, it goes into Hibernate (is this Hibernate the same as #2 in this list? This is the Hibernate I will be referring to in the rest of this post).

    It seems that I am unable to force my MacBook Pro (Late 2011, OS X 10.7.3) into a true hibernate using either the command line or the apps that are supposed to do this. I believe the Mac should show the white loading bar while waking up if it was truly hibernated (which it does not). But I can get this white bar to show by letting my battery level drop to 0%, so there is obviously a system function for it (obviously, duh! :).

    When Win 7 goes into hibernate it shuts down completely, and you can then boot into OS X on startup. On OS X, however, hibernation forces you to wake up into OS X. Can you hack this so that you're allowed to select the boot partition after OS X hibernates? Would it be possible to use the true hibernate functionality of Win 7 and OS X to create a kind of instant switching between the two? Imagine this on a quick SATA-3 SSD like my 180GB Intel 520. Thanks / Willem
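
    On the OS X side, true hibernation (the state that produces the white progress bar on wake) can reportedly be forced through pmset; a hedged sketch, noting the prior value first so it can be restored:

      pmset -g | grep hibernatemode    # note the current value (typically 3)
      sudo pmset -a hibernatemode 25   # always write RAM to disk and power off
      sudo pmset sleepnow              # the next "sleep" is now a real hibernate

    Whether the boot picker is reachable afterwards is the untested part: holding Option at power-on should still offer the Windows partition, and OS X would resume from its hibernate image the next time it is selected.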

    Read the article

  • Autounmounting USB keys with FAT filesystem on Linux (RHEL5)

    - by niXar
    For security reasons, I have two workstations in front of me, and I can only transfer data between them through a USB key. As you can imagine, this quickly gets tiresome, but the most annoying part is having to unmount the keys before removing them. Not unmounting them results in missing files most of the time, even if I remove them a while after having last written to them. Now, since they're only used for transferring smallish files, and each is basically written once and read once, I don't need the fancy caching infrastructure that makes clean unmounting a necessary step. And since the data is always a copy of something I have at hand, I don't care if the filesystem croaks from time to time. But anyway, the system doesn't need to force that on me; it could simply make sure everything is committed within a second and work synchronously. Then when I remove the key, nothing is lost. Is there a way to do this? I would appreciate any other tips on handling this situation.

    Edit: it appears the situation has changed between RHEL5 and Fedora up to F11 on one hand, and F12 on the other. The latter uses DeviceKit-disks, and I haven't quite figured out how to do this there. The method provided below using gconf does not work anymore.
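
    FAT has a mount option aimed at exactly this: flush, which pushes writes out almost immediately so the window where an unannounced removal loses data stays small (sync also exists, but it rewrites the FAT so often that it wears out flash). A hedged fstab sketch; the device name and mount point are assumptions:

      # /etc/fstab - device and mount point are assumptions
      /dev/sdb1  /media/usbkey  vfat  noauto,user,flush  0  0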

    Read the article

  • How to organize deployment process in Chef-controlled environment?

    - by Alex
    I have a Linux-based web infrastructure which consists of 15 virtual machines and over 50 various services. It is fully controlled by Chef. Most of the services are developed internally. Basically, the current deployment process is triggered by a shell script. A build system (a mix of Python and shell scripts) packages the services as .deb files and puts these packages into a repo. It then runs apt-get update on all 15 nodes, because the standard Chef apt cookbook only runs apt-get once per day and we definitely do not want to run apt-get update unconditionally on every chef-client wake-up. Finally, the build system restarts the chef-client daemons on all 15 nodes (we need this step because of Chef's pull nature).

    The current process has a number of drawbacks we want to address. First off, it is asynchronous: the deployment script does not check the chef-client logs after the restart, so we don't even know if the deployment was successful, and it does not even wait for the Chef clients to complete their run. Second, we definitely do not want to force chef-client restarts on all nodes, because we usually deploy only a small number of packages. And third, I am not quite sure using chef-client for deployment is legitimate at all; perhaps we are just doing it wrong from the start. Please share your thoughts/experience.
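
    One hedged alternative to restarting the daemons: trigger a synchronous run on just the affected nodes with knife ssh, which also surfaces failures immediately because each node's output and exit status come back to the caller (the search query and ssh user are assumptions):

      # run chef-client now on matching nodes and watch the output
      knife ssh 'role:myservice' 'sudo chef-client' -x deploy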

    Read the article

  • Apache Redirect is redirecting all HTTP instead of just one subdomain

    - by David Kaczynski
    All HTTP requests, such as http://example.com, are getting redirected to https://redmine.example.com, but I only want http://redmine.example.com to be redirected. I have the following in my 000-default configuration:

      <VirtualHost *:80>
          ServerName redmine.example.com
          DocumentRoot /usr/share/redmine/public
          Redirect permanent / https://redmine.example.com
      </VirtualHost>

      <VirtualHost *:80>
          ServerAdmin webmaster@localhost
          DocumentRoot /var/www
          <Directory />
              Options FollowSymLinks
              AllowOverride None
          </Directory>
          <Directory /var/www/>
              Options Indexes FollowSymLinks MultiViews
              AllowOverride None
              Order allow,deny
              allow from all
          </Directory>
          . . .
      </VirtualHost>

    Here is my default-ssl configuration:

      <VirtualHost *:443>
          ServerName redmine.example.com
          DocumentRoot /usr/share/redmine/public
          SSLEngine on
          SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
          SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
          BrowserMatch "MSIE [2-6]" \
              nokeepalive ssl-unclean-shutdown \
              downgrade-1.0 force-response-1.0
          BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown
          <Directory /usr/share/redmine/public>
              Options FollowSymLinks
              AllowOverride None
              Order allow,deny
              Allow from all
          </Directory>
          LogLevel info
          ErrorLog /var/log/apache2/redmine-error.log
          CustomLog /var/log/apache2/redmine-access.log combined
      </VirtualHost>

      <VirtualHost *:443>
          ServerAdmin webmaster@localhost
          DocumentRoot /var/www
          <Directory />
              Options FollowSymLinks
              AllowOverride None
          </Directory>
          <Directory /var/www/>
              Options Indexes FollowSymLinks MultiViews
              AllowOverride None
              Order allow,deny
              allow from all
          </Directory>
          . . .
      </VirtualHost>

    Is there anything here that is causing all HTTP requests to be redirected to https://redmine.example.com?
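
    One hedged diagnostic: on Debian/Ubuntu, apache2ctl -S prints the parsed vhost list, including which vhost is the default for each address/port; stray catch-all redirects usually come from the first-listed (default) vhost.

      apache2ctl -S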

    Read the article

  • Three server processes consume no more than 50% of Dual Core CPU

    - by thor
    I have three processes running on an Intel Core 2 Duo CPU. From watching the output of top and graphs of CPU load (drawn by MRTG, data collected via SNMP), I can see that CPU load is never more than 50%; for most of the day, when those processes are busy, CPU load hits a ceiling at 50%. I mean, CPU load grows to 50% in the morning and stays there until late evening. My first thought was that only one core was being used at 100%, thus giving 50% across both CPUs. But, as there are three processes running and top shows that both cores are being loaded, this is not the case. schedtool shows that CPU affinity for those three processes is at the default, 0x03, allowing them to use both cores. If I force one process onto one core (schedtool -a 0x01) and the two others onto the second (schedtool -a 0x02), cumulative usage grows beyond 50%. Why do three processes seem to consume only 50% of two cores? Why does forcing them onto different CPUs allow usage to grow higher? Any hints? P.S. The processes in question are Counter-Strike servers.

    Read the article

  • Known USB 2.0 devices don't install their drivers automatically, but must be manually forced

    - by Darragh
    When a known USB 2.0 device is plugged in and detected, the driver doesn't install correctly; instead Windows shows a Code 28 error and lists the device under "Other Devices" in Device Manager. Viewing the properties of such a device shows the following status:

      The drivers for this device are not installed. (Code 28)
      There is no driver selected for the device information set or element.
      To find a driver for this device, click Update Driver.

    When updating the driver manually and selecting the appropriate driver, Windows doesn't believe it's the correct driver, but you can force the installation and it works! The other condition under which the driver will auto-install is when the same USB device is plugged into a USB 3.0 port. Power-related issues are not the cause either, as I have tried via a docking station, a USB hub, etc. Devices tried:

    - Jabra headset
    - USB mass storage devices (flash disk and external HD)
    - MS wireless keyboard & mouse
    - USB Ethernet controller (USB-MAC controller)

    This is on a laptop joined to a domain, running Windows 7 Enterprise 7601; I am logged in as a local administrator. There aren't any Group Policies on the domain blocking unsigned drivers or whitelisting devices. Any suggestions are welcome.
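
    Since a manual force works, one hedged thing to try is pre-staging the exact driver packages in the driver store so Plug and Play can rank and pick them (elevated prompt; the .inf path is a placeholder):

      rem list third-party driver packages already in the driver store
      pnputil -e
      rem stage and install a specific driver package
      pnputil -i -a C:\Drivers\device.inf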

    Read the article

  • Mass-migrating from POP3 to Exchange 2010, how do I copy mailboxes?

    - by Erik P. Skaalerud
    I'm in the process of planning our migration from an internally hosted POP3 server (Dovecot) to Exchange 2010. We're using Outlook 2003 for the moment, but will soon upgrade to Outlook 2010. The big problem is that we have about 50 computers here in our HQ, plus ~30 clients in branch offices (which will get their Exchange migration sometime later). I'm the only IT person, and going around manually setting up Outlook and copying over the contents of everyone's PST is not an option I'm looking for. Some users have set Outlook to keep messages on the POP3 server for X number of days, others have not, so using a POP3 connector to transfer over the mail is not viable. Here is what I've done so far:

    - Created a transform for the Office 2003 administrative installation point
    - Created a .PRF file to modify any existing e-mail account to switch over to Exchange (including the RPC-encrypt hotfix described in MSKB 2006508)
    - Tested both the transform and the PRF; both work
    - Created a test OU and GPO containing the Office 2003 installation with the transform applied; also works

    My big question is: how can I force Outlook to import any existing .PST into the new Exchange mailbox when the user starts Outlook for the first time after the MST/PRF have been applied? Is this possible?
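
    If the PSTs can be gathered onto a file share, Exchange 2010 SP1 can import them server-side; a hedged Management Shell sketch (requires SP1, the Mailbox Import Export role, and a UNC path the Exchange server can read - all names below are placeholders):

      # one-time: grant the admin account the import/export role
      New-ManagementRoleAssignment -Role "Mailbox Import Export" -User AdminUser
      # per user: import the collected PST into the new mailbox
      New-MailboxImportRequest -Mailbox jsmith -FilePath \\fileserver\psts\jsmith.pst
      # monitor progress
      Get-MailboxImportRequest | Get-MailboxImportRequestStatistics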

    Read the article

  • How can I install iTunes in such a way that it can't put any "hooks" or helper programs on my computer?

    - by Joshua Carmody
    I'm buying a new iPad, which means I must once again install iTunes. I've not used iTunes in more than 6 months, since I bought a new computer. I don't like iTunes, but I can live with using it to buy/manage media and sync my Apple devices while the program is open. What I would like, though, is a way to install iTunes such that it has absolutely no effect on my system when it is closed. iTunes normally installs several helper programs, such as iTunesHelper.exe and the Bonjour service, which run in the background even when iTunes is closed. You can force-close them, or remove them from your startup entries, but iTunes will often put them right back when you run it. I know these programs are mostly harmless, but they have at times caused issues, such as iTunes spending system resources trying to catalog media files or drives connected via VPN; at best they're just one more background process eating up a small piece of my CPU time and RAM. How can I run iTunes without letting it get its "hooks" into my system?

    One thought I had is that I could create a Windows user account just for iTunes and deny it admin privileges; if I installed iTunes under that account, maybe anything it installed wouldn't affect the "main" account on my PC? But I'm not sure that would work... Failing that, maybe some kind of virtualization software or sandbox I could install it in? I'm open to any suggestions. My system is an Intel-based PC running Windows 7 Professional 64-bit. Thanks!
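
    Part of this can be done with plain service and startup controls after installing: Bonjour is an ordinary Windows service and iTunesHelper is a normal Run-key startup entry, so a hedged sketch from an elevated prompt (the Run value name reflects a typical install, and iTunes may re-create it after updates):

      sc stop "Bonjour Service"
      sc config "Bonjour Service" start= disabled
      rem remove the iTunesHelper autostart entry
      reg delete "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run" /v "iTunesHelper" /f

    The Apple Mobile Device service, by contrast, should be left enabled (or at least startable), since device sync depends on it.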

    Read the article

  • How can I make WSUS less invasive for our users?

    - by Cypher
    We have WSUS pushing updates out to our users' workstations, and things are going relatively well, with one annoying caveat: a pop-up is sometimes displayed informing the user that their machine will be rebooted in 15 minutes, and they get no say in the matter (screenshot omitted). This may be because they did not log out the prior night. Nevertheless, this is a bit much and is very counter-productive for our users. A bit about our environment: our users run Windows XP Pro and are part of an Active Directory domain, and WSUS is applied via Group Policy (snapshot of the enforcing GPO omitted).

    Here is how I want WSUS to work (ideally - I'll take whatever gets me close):

    - Updates automatically download and install every night.
    - If no user is logged in, the machine reboots.
    - If a user is logged in, their machine does not reboot, but instead waits for the next "installation period", where it performs any other needed installations and reboots then (provided a user account is still not logged in).
    - If a user is prompted to reboot, it should happen at most once per day (if possible), and every time they are prompted they must have a way to postpone the reboot.

    I do not want users to be forced to restart their computer whenever the computer thinks it should happen (unless it's after an update installation and there are no logged-in users); forcing a system restart in the midst of a person's workday isn't productive. Is there something I can do with the GPO that would make WSUS less intrusive? Even giving the user an option to Restart Later would be better than what is happening now.
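
    The behaviour described maps to a couple of registry-backed Windows Update policy settings (normally set through the same GPO rather than written directly); a hedged sketch of the values involved:

      rem never force a reboot while someone is logged on; re-prompt instead
      reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" /v NoAutoRebootWithLoggedOnUsers /t REG_DWORD /d 1 /f
      rem re-prompt for a postponed reboot every 1440 minutes (once a day)
      reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" /v RebootRelaunchTimeoutEnabled /t REG_DWORD /d 1 /f
      reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" /v RebootRelaunchTimeout /t REG_DWORD /d 1440 /f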

    Read the article

  • Backup files from Linux client to Windows Server

    - by Andrew
    I'm trying to back up files from my Linux box to my Windows Server 2008 machine as a push, such that when I delete them from my Linux box, they remain on the Windows Server. I've found lots of similar sources, but most results were about going from Windows to Linux. I managed to find slightly more similar cases, like "Using rsync and cygwin to Sync Files from a Linux Server to a Windows Notebook PC" and "rsync from Windows PC to remote Linux server", with the most similar being a backup from Linux to Windows Server, but done as a pull from the Windows Server.

    Initially I used Unison, because I thought the 2-way capability would come in handy and I would just have to set some configuration to make it 1-way. Unfortunately, I couldn't find the right configuration and only managed to synchronize using the command unison "profile" -ui text -auto -silent. When I deleted the files on my Linux box, the files on the server got deleted too, which of course isn't what I want. When I looked for options in Unison, I only discovered the -force option, which didn't help, since what I wanted was an incremental update to the server.

    I found out I could achieve this using rsync with the -a (archive) option, which keeps files on the destination even after I delete them from my Linux box. I installed Cygwin on my Windows Server and configured an SSH daemon, but I can't seem to get it working. I've also already configured Windows Firewall to open port 22 (both inbound and outbound). I used the following command from my Linux box:

      rsync -avrzn /folder/to/be/backed/up/ [email protected]:/cygdrive/c/place/to/store/backed/up/files

    (-a archive, -v verbose, -r recurse into subdirectories, -z compress, -n dry run) but it just won't work. Can anyone help me out? I don't mind using either Unison or rsync, as long as it achieves what I want.
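
    Two hedged checks: the -n in the flags above is the dry-run switch, so even a working setup transfers nothing while it is present, and running the underlying ssh verbosely usually shows where the connection to the Cygwin sshd dies (the host below is a placeholder):

      # verbose ssh output shows the failure point (auth, firewall, sshd config)
      ssh -v user@winserver
      # the real transfer, without -n, once the ssh login works
      rsync -avz /folder/to/be/backed/up/ user@winserver:/cygdrive/c/place/to/store/backed/up/files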

    Read the article

  • Deleted printers keep coming back - and multiplying

    - by MojoDK
    My users are on 2012 R2 RDS session host servers. I used "Deploy Printers" (from Print Management) to deploy 4 printers. For the last week I've had a lot of problems with users not being able to print. If I deleted a printer and added it again, they could print just fine. Now I've removed all printer deployment from the GPO, and I have no printers in any login scripts. I did a gpupdate /force, but all 4 printers are now listed 3 times... If I delete the printers and log off and back on, all the printers pop up again. Sigh! This is driving me nuts. This script doesn't show any of the "SVFREJA" printers:

      Set objWMIService = GetObject("winmgmts:\\.\root\cimv2")
      Set colPrinters = objWMIService.ExecQuery("Select * From Win32_Printer")
      If colPrinters.Count <> 0 Then ' If there are some network printers
          Dim s
          s = ""
          For Each objPrinterInstalled In colPrinters ' For each network printer
              s = s + objPrinterInstalled.Name + chr(13)
          Next
          msgbox s
      End If

    It gives me this result (screenshot omitted; sorry for the big picture). My problem is not with the "redirected" printers; my problem is that I have several printers with the same name (on SVFREJA) and I can't get rid of them. Any idea why I can't get rid of the "orphaned" printers?
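
    For cleaning up the duplicates on 2012 R2, a hedged PowerShell sketch using the PrintManagement cmdlets; it would need to run on each session host (and per user for per-user connections), and the name filter is an assumption:

      # list everything, duplicates included, then remove the SVFREJA-named ones
      Get-Printer | Select-Object Name, Type, ComputerName
      Get-Printer | Where-Object Name -like '*SVFREJA*' | Remove-Printer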

    Read the article

  • Is it possible for the Subversion Apache module to serve html files with an html content-type without using the svn:mime-type property?

    - by Martin Pain
    I am aware that if you set the svn:mime-type Subversion property on a .html file to text/html then when viewing the file in a browser through the Subversion module in Apache httpd it will be served with a Content-Type: text/html header, enabling the browser to render it as HTML rather than plain text. However, I am looking for a way to do this without using the svn:mime-type property. I'm aware that you can configure your svn client to automatically add the property - this is not what I want, as I do not want to ensure all users have these settings. I'm also aware that I could create a pre-commit hook that rejects the commit if the properties are not set, in order to force users to set the property - I might fall back to that, but I'm looking for something less intrusive. I'm also aware that I could use a post-commit hook to add the properties automatically on the server-side. I'd rather not do that (as users then have to update immediately after their commit, and it's not trivial to write) - I'm looking for a better alternative. Perhaps something with rewrite rules in the Apache server?
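
    Untested, but two Apache-side directives look relevant: ModMimeUsePathInfo lets mod_mime derive a type from the request path even for DAV-served resources, and ForceType can blanket-type HTML paths; whether mod_dav_svn honours either for repository files is exactly the thing to verify (paths below are placeholders):

      <Location /svn>
          DAV svn
          SVNParentPath /srv/svn
          # let mod_mime map Content-Type from the path (untested with mod_dav_svn)
          ModMimeUsePathInfo On
      </Location>

      <LocationMatch "\.html$">
          ForceType text/html
      </LocationMatch>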

    Read the article

  • Flickering dual screens in Virtual Box Ubuntu 13.10 Guest

    - by alexleonard
    I have Ubuntu 13.10 x64 installed as a guest in VirtualBox (under a Windows 8.1 host), with the virtual machine set to a monitor count of 2, 128MB video memory, and 3D acceleration enabled. In the guest I have the VirtualBox guest additions installed (which allowed me to have two 1920x1080 screens). My laptop is an Asus N550JV, which has both Intel's HD Graphics 4600 GPU and Nvidia's GeForce GT 750M; by default I believe the Intel graphics are being used to render the VM. When I boot the VM it loads perfectly on dual screens, but whenever I move the mouse from one screen to the other (a Dell S2340L running over an HDMI connection is the second screen), the screen flickers. I've tried a variety of settings changes in both Ubuntu and the VM, but cannot seem to stop this flicker.

    I also used the NVidia control panel in Windows to force the dedicated graphics card to always be used, but found that the display driver sometimes crashed while working in the VM, destroying my VM session, so I figured it's better to stick with the Intel graphics, which appear to be more stable. I also tried without 3D acceleration, but that was much worse, and if I ran the VM with a low amount of graphics memory it really struggled. Here's my dmesg output: http://pastebin.com/1LJuYWMj (not sure if this is helpful here). I read some posts suggesting changes to /etc/X11/xorg.conf, but I don't appear to have an xorg.conf file. There were also a few posts (though related to Synergy) suggesting running xset -dpms, but that command doesn't appear to have had any effect for me.

    As an additional note, I'm finding that window drawing in the guest is a little laggy/glitchy. For example, quickly scrolling through a web page may leave parts of the viewport showing the previous content. I notice the drawing issues most in the web browser, but it also affects other software, with parts of the window not being drawn when, say, switching between accounts in Thunderbird. Any suggestions greatly appreciated!

    Read the article

  • Gigabit LAN not working on ASUS M2N-MX

    - by chmod
    Today I replaced my Fast Ethernet switch with a newly bought gigabit switch (a D-Link DGS-1008A). All computers in my house now report a connection speed of 1 Gbps except for one: an ASUS M2N-MX, which has an onboard gigabit NIC. See the ASUS page for confirmation: http://www.asus.com/Motherboards/AMD_AM2/M2NMX/

    Some info on the machine:

    - OS: Windows 7 Ultimate SP1 64-bit
    - BIOS version: 1004 (latest)
    - Driver: installed via Windows Update (latest from Windows Update)
    - Windows Update: fully updated
    - The machine was reformatted 3 days ago, so it's pretty clean; no junk, no viruses, etc.
    - Cable: AMP Cat 5e, 5 meters
    - In Device Manager, the NIC appears as "NVIDIA nForce 10/100/1000 Mbps Ethernet"

    What I have tried:

    - Installing the driver provided on the ASUS website; there isn't one for Windows 7 64-bit or Vista 64-bit.
    - Installing the latest nForce340/6100 package downloaded from the Nvidia website; the LAN driver refuses to install, complaining that I already have the best driver installed.
    - Looking in the driver properties, Advanced tab, Speed/Duplex setting, in an attempt to force it to run at 1000Mbps; there is no 1000Mbps choice, only 10 and 100Mbps.
    - Changing the Cat 5e cable (using one from another computer that runs gigabit without problems).

    Does anyone have this issue or know how to solve it? Thanks.

    Read the article

  • mod_rewrite not working for subdomain in Apache2

    - by Matt
    Hi, I'm having some trouble with mod_rewrite. I'm implementing it through .htaccess, and I can get it working on my main vhost, domain.com - what I want it to do is rewrite http:// domain.com to force it to https:// domain.com, which it does well. I want name-based vhosts on the one IP with the following redirects (I'm breaking up domain names with a space because otherwise serverfault recognises them as links):

    - http:// domain.com -> https:// domain.com
    - http:// staging.domain.com -> https:// staging.domain.com
    - http:// test.domain.com -> https:// test.domain.com
    - http:// beta.domain.com -> https:// beta.domain.com

    domain.com redirects to https:// domain.com, but staging.domain.com doesn't, although I can access https:// staging.domain.com. The .htaccess is identical for both, just with the domain name different. It doesn't seem to do any rewriting at all for staging.domain.com; I've tested this by trying to get it to rewrite to www.google.com. I have a wildcard DNS record, *.domain.com, which points to the domain IP. Is there a particular way the virtual hosts should be configured to allow this? I keep reading in the Apache documentation that it doesn't support multiple SSL name-based vhosts, but I can access both https:// domain.com and https:// staging.domain.com just fine. Any thoughts? Thanks to everyone for your help with this.
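
    For what it's worth, a single host-agnostic rule avoids keeping per-vhost copies of the .htaccess in sync; a minimal sketch of the usual pattern:

      RewriteEngine On
      RewriteCond %{HTTPS} off
      RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

    If staging.domain.com does no rewriting at all, even to www.google.com, it is also worth confirming that the staging vhost's DocumentRoot actually contains the .htaccess and that AllowOverride permits it there.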

    Read the article
