Search Results

Search found 40159 results on 1607 pages for 'multiple users'.

  • How do you prevent spam being sent by your users?

    - by Ernie
    So, for the third time in about two weeks (maybe less), one of our customers has had their password compromised, and a spammer has been sending mail through our webmail with that account. As a result, our outgoing mail server has been listed at Spamhaus, and a lot of our outgoing mail is being rejected. I can't think of any way to prevent this from happening (our webmail server now submits mail through the local Sendmail binary instead of over SMTP, but that only limits the scope of the problem), yet the big ISPs never seem to have a problem like this.
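
    The best mitigation I can think of is outbound rate limiting, so one stolen password can only do so much damage. If the outgoing relay were Postfix, a sketch with example numbers would be the built-in anvil limits below (per-SASL-user limits would need a policy daemon such as policyd or postfwd, and Sendmail has comparable ratecontrol features):

        postconf -e 'smtpd_client_message_rate_limit = 100'   # max messages per client per time unit
        postconf -e 'anvil_rate_time_unit = 3600s'            # time unit = one hour
        postfix reload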

  • Where do I define a group policy that will set a user's desktop background color to green the first time they log in?

    - by Tyler
    Servers: W2k8 R2 x64. Desktops: Win7 Pro x64. Our current group policy uses a custom ADM file to define certain properties of the desktop (background image (centered), background color of green (00 74 00)). This policy works for us, but the downside is that settings defined in our custom ADM are only applied after a GPUpdate /Force. We would like these desktop theme settings to be applied the first time the user logs onto the computer. I've been working on a new policy that forces the computer to wait for the network when the user logs on, to handle folder redirection. The reason for writing the new policy was to remove the need for users to run GPUpdate /Force the first time they log in, so it doesn't make sense to implement it if something still requires GPUpdate /Force to get the user into the state we want. I've moved the background image setting out into Administrative Templates > Desktop > Desktop > "Desktop Wallpaper", so that is now applied when the user first logs in. Now I'm left with a black background until I force a group policy update. I have tried playing around with setting a default "Theme", with limited success; it was not reliable enough to call a solution. I suppose I could set the background color with a script? Any thoughts? It feels like I'm missing something obvious, or that this should be much easier than it is.
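
    If a script is the answer, what I have in mind is a user logon script (User Configuration > Windows Settings > Scripts) that writes the standard desktop color value; 00 74 00 in hex is 0 116 0 in decimal, and the value is read at logon:

        reg add "HKCU\Control Panel\Colors" /v Background /t REG_SZ /d "0 116 0" /f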

  • A Firefox "master password" feature that's friendly to guest users?

    - by Josh
    I use the "master password" feature of Firefox and like it for a number of reasons. It does have its drawbacks, though: any time I hand my laptop over to my girlfriend so she can check her email on it, she's continually confronted with the prompt to enter my master password. I have since disabled the feature and am back to square one. Is there an addon or tweak that will help?
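
    The closest workaround I've found is a separate Firefox profile for guests, since each profile has its own (empty) password store and so never triggers the master-password prompt; the profile name here is just an example:

        firefox -CreateProfile guest
        firefox -no-remote -P guest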

  • Why can't all users log in to a network computer using the <network computer name>\<user name> form?

    - by Haris
    There is a file server in my office that is not connected to my office LAN. On this server there are shared folders. The server is connected to a switch and a wireless router, and everyone who wants to access it uses a wireless network connection. They log in by providing a user name and password registered on the server. Some people can log in by giving the user name in (server name)\(user name) format, while others must use (server IP address)\(user name) format. Why is that? I need everyone to be able to access the file server using the (server name)\(user name) format. I have tried changing the %SystemRoot%\system32\drivers\etc\hosts file, as some suggested, but it didn't work. Any other suggestions?
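
    For reference, a hosts entry maps the server's name to its IP, one mapping per line (the address and name below are examples); after editing, the resolver cache needs flushing before testing:

        # %SystemRoot%\system32\drivers\etc\hosts
        192.168.0.50    fileserver

        ipconfig /flushdns    # then flush the DNS cache and test
        ping fileserver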

  • Windows Server 2012 Essentials - Trying to setup "Anywhere Access" but the "Computer Access" list for users is blank

    - by tetranz
    I have a new installation of Windows Server 2012 Essentials and I'm trying to set up "Anywhere Access" for both VPN and remote desktop. The basic setup is all working: shared folders work, but remote desktop has no computers available. On the server, if I edit a user with the Essentials Dashboard and go to "Computer access", the list is empty. The desktop computers have been joined to the domain, and I can see them in AD under "Computers". I think our mistake was that we didn't use the connector tool to join the domain: we moved from a previous domain by going to Computer / Properties, changing the domain, and starting with a new profile. Is there something I can do now to make these desktops available for remote desktop? I can reach a desktop directly with the RDP client on port 3389, which I do from the outside world through an SSH tunnel.
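
    One thing I've read (unverified) is that the "Computer access" list is populated by the Essentials connector software rather than by plain domain membership, so the fix would be to unjoin each PC and re-join it by running the connector from the server's connect page (server name is a placeholder):

        http://<servername>/connect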

  • Puppet Decentralized Setup

    - by paul.tw
    I want to migrate my existing Puppet setup from master/client to a decentralized solution. I know other solutions, such as Ansible, are easier to set up for that purpose, but I really want to stay with Puppet. I found supply_drop (https://github.com/pitluga/supply_drop) on GitHub, so I followed the instructions and did the following:

        rvm gemset create testing
        rvm use 1.9.3@testing
        gem install supply_drop

    The output is the following:

        [m@ms-MacBook-Pro:~ $ irb
        1.9.3-p547 :001 > require 'supply_drop'
        NameError: uninitialized constant Capistrano
            from /Users/m/.rvm/gems/ruby-1.9.3-p547@testing/gems/supply_drop-0.17.0/lib/supply_drop/tasks.rb:1:in `'
            from /Users/m/.rvm/rubies/ruby-1.9.3-p547/lib/ruby/site_ruby/1.9.1/rubygems/core_ext/kernel_require.rb:55:in `require'
            from /Users/m/.rvm/rubies/ruby-1.9.3-p547/lib/ruby/site_ruby/1.9.1/rubygems/core_ext/kernel_require.rb:55:in `require'
            from /Users/m/.rvm/gems/ruby-1.9.3-p547@testing/gems/supply_drop-0.17.0/lib/supply_drop.rb:10:in `'
            from /Users/m/.rvm/rubies/ruby-1.9.3-p547/lib/ruby/site_ruby/1.9.1/rubygems/core_ext/kernel_require.rb:135:in `require'
            from /Users/m/.rvm/rubies/ruby-1.9.3-p547/lib/ruby/site_ruby/1.9.1/rubygems/core_ext/kernel_require.rb:135:in `rescue in require'
            from /Users/m/.rvm/rubies/ruby-1.9.3-p547/lib/ruby/site_ruby/1.9.1/rubygems/core_ext/kernel_require.rb:144:in `require'
            from (irb):1
            from /Users/m/.rvm/rubies/ruby-1.9.3-p547/bin/irb:12:in `'

    Since that doesn't work without problems, I was wondering what alternatives are available to do the same. Do you have any suggestions?
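
    From the error, supply_drop appears to be a Capistrano plugin that expects Capistrano to be loaded first, so perhaps all that's missing is the following (a sketch for the Capistrano 2.x era; the task names are as I recall them from the README):

        gem install capistrano -v '~> 2.15'
        # then, in the project's Capfile:
        #   require 'supply_drop'
        cap -T        # should now list the puppet:* tasks instead of raising NameError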

  • Can I mount a volume at /Users on OS X?

    - by bshacklett
    I'm considering purchasing an SSD for my MacBook Pro. Unfortunately, I can't afford a large one, so I'm thinking of moving the current hard drive to the optical bay. On a Linux system I would mount the second drive at /home; given that OS X is Unix, I'm thinking it should be possible to do the same thing. Has anyone done this before? Edit: I should mention I'm running Snow Leopard.
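
    It looks like it should be, via /etc/fstab edited with vifs; a sketch, assuming the home data has already been copied to the second drive, with a placeholder UUID and an example device name:

        diskutil info /dev/disk1s2 | grep UUID     # find the volume's UUID
        sudo vifs                                  # then add a line like:
        # UUID=12345678-ABCD-4321-ABCD-1234567890AB /Users hfs rw 1 2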

  • Can two users both control a third machine simultaneously using Synergy?

    - by Reason
    I've been a Synergy user for some time now, as I use a PC to the left of my Mac. My girlfriend's desk and mine are on either side of each other, and we'd like to know if it's possible for both of us to control the PC in the middle with our own separate mice and keyboards. Here's a crude drawing of our setup: (1) her PC, (2) my PC, (3) my Mac. Currently, 3 is running a Synergy server and 2 is running the client. But like I said, I'm wondering if there's a way for 1 & 3 to both control 2 with their own mouse and keyboard. I'd ~love~ to have it set up where we could go even further and have both of our mice & keyboards able to control all 3 computers at the same time, for moments when we need to click or press keys for each other. But that seems a little too much to ask! Any thoughts?
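
    For reference, here's roughly what the current one-server synergy.conf on the Mac looks like (screen names are examples); the question is whether machine 1 could run a second server pointed at the same client:

        section: screens
            mymac:
            mypc:
        end
        section: links
            mymac:
                left = mypc
            mypc:
                right = mymac
        end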

  • What are some techniques to monitor multiple instances of a piece of software?

    - by Geo Ego
    It was recommended by a member of Stack Overflow that I ask this question here. I have a piece of self-serve kiosk software that will be running at multiple sites, and I'd like to monitor their status remotely. The kiosk application itself is pretty much finished. I am now in the process of creating a piece of software that will monitor all of the kiosks from a central location, so that the customer can view particular details remotely (for instance, how many bills are in the acceptor's cash cartridge, which customer is currently logged in, etc.). Because I am at such an early stage of development, my options are quite open. I understand that I'm not giving many qualifications, but I'd like a good variety of potential solutions. Some details:

        - Kiosk software is a VB6 app running on Windows Embedded
        - Monitoring software will run on a modern desktop version of Windows (XP, Vista, or 7)
        - Database is SQL Server 2008

    My initial idea was to develop a .NET app that would simply report the last database transaction for each kiosk at a set interval (say every second or so), but I'd really like the kiosk software to report its status in real time. I'm not exactly sure where to begin in terms of what modifications may need to be made to the kiosk software, and what the monitoring software will require. Links to articles on these topics would be most welcome.
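
    The polling variant is easy to sketch against the shared database; the table and column names below are invented for illustration, assuming each kiosk writes a transaction or heartbeat row that the monitor aggregates:

        sqlcmd -S CENTRALSQL -d kioskdb -Q "SELECT kiosk_id, MAX(occurred_at) AS last_seen FROM transactions GROUP BY kiosk_id"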

  • How do I get my server to recognise a change in users?

    - by Gareth
    I'm new in post, running a small charity, and as a result I've inherited a system from the previous manager that was poorly run. In my enthusiasm to change login details etc., I appear to have killed my Microsoft Outlook account. When trying to access Outlook, it prompts me for my user name, domain name and password, which I'm obviously not filling in correctly because it's not accepting my answers. Is there a way to determine that information from the server side? I have access to the server and the staff computers.
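
    From the server itself, the domain accounts can at least be listed and inspected (passwords can't be read back, only reset, e.g. with net user <name> <newpassword> /domain); the account name below is an example:

        net user /domain                # list the domain's accounts
        net user gareth /domain         # inspect one account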

  • Why would Windows Task Scheduler spawn multiple instances of the same task that run into each other?

    - by swagner88
    Overview: I use Windows Task Scheduler to run automated tasks. Occasionally I see that a task has randomly failed to perform its duties. When I check the Task Scheduler history log, I see that for some reason, when the tasks are triggered on their schedules, they spawn several instances of themselves simultaneously, which turns into a train wreck for the task: either it kills the other instances and tries to run the "first" one, or it doesn't run at all because it believes another instance of itself is already running. Sometimes this happens to the same tasks, and occasionally it happens to others. The fix is just to end all instances and start the task manually. Question: why would one single task with one single schedule decide to spawn multiple instances of itself simultaneously? Note: I've got a separate user account set to run the tasks instead of myself. That user is indeed an admin on the machine that runs the tasks, and the tasks are set to run whether or not the user is logged on. Also, the machine is Windows Server 2008 R2.
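
    In the meantime, the behavior can at least be made deterministic with the task's multiple-instances policy ("Do not start a new instance" in the GUI); as a sketch with an example task name, export the task XML, set the element below inside <Settings>, and re-register:

        schtasks /query /tn "MyTask" /xml > MyTask.xml
        rem  in MyTask.xml, inside <Settings>, set:
        rem    <MultipleInstancesPolicy>IgnoreNew</MultipleInstancesPolicy>
        schtasks /delete /tn "MyTask" /f
        schtasks /create /tn "MyTask" /xml MyTask.xml /ru DOMAIN\taskuser /rp *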

  • knife on Windows inconsistently reads ~\.chef\knife.rb on the management workstation

    - by gWaldo
    I am implementing a new instance of (open-source v10.12) Chef in an existing environment. Currently the environment is mostly Windows, but more Linux is being introduced. I have used Chef in a previous gig, but that was a *nix-only environment. Because this is a primarily-Windows environment, my main workstation is Windows 7 (x64) and I use PowerShell as my main terminal. I created a ~\.chef directory, populated with a knife.rb and my client.pem file. When I run knife client list from ~, I get the expected results. I keep my work in Dropbox just in case my laptop should fail or be stolen. When I run knife client list from the repo directory (C:\Users\waldo\Dropbox\_company\projects\chef), I get:

        ERROR: Your private key could not be loaded from C:/home/waldo/.chef/waldog.pem
        Check your configuration file and ensure that your private key is readable

    (Note that the path is incorrect.) This is the progression as I walk up the tree towards ~, running knife client list:

        C:\Users\waldo\Dropbox\_company\projects\   => above error
        C:\Users\waldo\Dropbox\_company\            => above error
        C:\Users\waldo\Dropbox\                     => it works! (expected results)
        C:\Users\waldo\                             => expected results
        C:\Users\waldo\Documents\                   => expected results
        C:\Users\waldo\Documents\GitHub\            => expected results
        C:\Users\waldo\Documents\GitHub\aProject\   => expected results

    What. The. Eff! Now, I know that I can add -c path\to\knife.rb, but that's a HUGE PITA. Question is: why is knife inconsistently reading my ~\.chef\knife.rb, and how can I get around that without incurring carpal tunnel?
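
    My working theory: knife searches for a .chef directory in the current working directory and each parent before falling back to ~\.chef, so a stray .chef\knife.rb somewhere under Dropbox\_company pointing at C:/home/waldo/.chef/waldog.pem would explain exactly this pattern (error at _company and below, expected results at Dropbox and above). A quick PowerShell hunt:

        Get-ChildItem C:\Users\waldo\Dropbox -Recurse -Force -Include knife.rb | Select-Object FullName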

  • What is the impact of Windows 8 with UEFI on normal users?

    - by Sam
    I am a normal man-in-the-street computer user and so do not really understand what this is about, but I want to. Can someone please explain to me if:

        1. The Windows 8/UEFI secure boot thing will make it impossible to run normal/legacy applications in Windows 8 (as they will be unsigned)?
        2. It will turn Windows into an Apple-like system where only Microsoft-approved applications can be run?

    As I say, I'm a normal user, and that is the overall impression I have from reading all the blogs, etc. about it. If, on the other hand, all it does is make sure the system boots a signed OS, how does this prevent malware (which is what at least two Microsoft blogs that I read seemed to be saying), given that most malware is not part of the boot process? The only way I can see this making sense is if it ensures that all OS components are signed. Is that it? Like I say, I'm a mortal, so please don't get technical on me; rather, explain how it will affect me, the user.

  • How does one make sure, or even guarantee, that server time is synced correctly between dozens of servers across multiple data centers in different locations?

    - by forestclown
    Currently our web applications contain logic to check whether the data sent to the web server has expired, by comparing the timestamp of the data with the date/time of the server. Everything went well until some dude from a data center accidentally modified one of the web servers' date/time and caused disruptions in our web services. My managers are of course not happy with this, and said we shouldn't use timestamps to check expiry in the first place... anyway... Network Time Protocol is implemented, and because our data centers are spread across different continents we have one NTP server in each data center. The servers within a data center have cron jobs that check the time against the NTP server in the same data center; if the time is out of sync, they auto-update the server date/time. But my managers are still not happy and think this could easily cause the same problem: what if someone accidentally modifies an NTP server's date/time? What if all the NTP servers are out of sync with each other? Which NTP servers can we really trust? And blah blah. So my questions are:

        1. What is the current practice for syncing date/time between servers across multiple data centers or locations?
        2. How does one manage timestamps between web apps? (e.g. Server A sends data containing Server A's timestamp to Server B, which compares it against its own time to see if the data has expired; this is to avoid HTTP replay.)
        3. Should we really not use timestamp checks at all?

    Thanks & Best Regards
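
    One pattern I've seen recommended, sketched here with assumed hostnames: run ntpd continuously on every machine instead of stepping the clock from cron, give each data center's NTP servers several independent upstream sources, and peer them with one another so no single bad clock wins; then monitor the offsets.

        # /etc/ntp.conf on each data-center NTP server
        server 0.pool.ntp.org iburst
        server 1.pool.ntp.org iburst
        server 2.pool.ntp.org iburst
        peer ntp.dc2.example.com    # sister NTP servers in the other data centers
        peer ntp.dc3.example.com

        ntpq -p    # on any box: verify peers and offsets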

  • What is the best way/software to manage multiple short-lived instances of virtual machines?

    - by Newtopian
    Hi, we have a QA department that has to test our software on multiple combinations of OS and DBMS. With Windows spewing out many different versions, the combinatorial math of all this can be daunting. So we decided on virtualizing our setups, but so far it only displaces the problem. Hardware is expensive, and we need many different combinations, far exceeding our servers' capacity to deliver them all at once. Also, these instances are throwaway: once a test is complete we no longer need the instance, and to ensure proper test isolation each test should start fresh from a new one. Lastly, we only need a small subset of these systems online at any given time. What I am looking for is a way to manage inventory so that our QA staff can order instances to be put online as required and discarded once used. Instances are spawned from a pool of freshly installed systems with the appropriate combinations, ready to accept our software. It should also be possible for two or more people to start the same instance at the same time, though we could manage without this if it proves too complex to put in place. Finally, our budget is pretty thin; we can probably make some purchases, but ideally expenditures should be kept to a minimum. To summarize, we should be able to:

        - Bring instances online on demand (ideally with queue and scheduling management)
        - Destroy instances on demand
        - Keep masters in inventory but not online
        - Manage a large inventory of VMs (30-100, maybe more) with a small staff of users (5-10)
        - Add, delete and change instances in inventory (bring online, make changes and check back in, or create new and check in)
        - Allow a few long-lived instances for support tools (normal VM server usage)

    Thanks for your answers

  • Options for remote desktop software for helping remote users?

    - by Nick G
    I need an easy way to jump on someone else's machine to help them solve a problem. It needs to be really easy for them to install (preferably not requiring an actual "install", just running an exe?). It must punch through any firewalls automatically, using a relay server or P2P (so Remote Desktop itself is no use to me). I've found commercial products like MeetMeNow, but they're really expensive. I want something where you can either buy a cheap pack of sessions or minutes, or preferably something free. I'm not in the business of commercial support and would only use it once every couple of months.

  • Can I use multiple URLs in the URL field of KeePass?

    - by Sammy
    I am using KeePass version 2.19. What I would like to do is have more than one URL address associated with a given user name and password. The entry for a given website might look something like this:

        Title:      google
        User Name:  email
        Password:   pass
        URL:        https://accounts.google.com/ServiceLogin?hl=en&continue=https://www.google.com/
                    https://accounts.google.com/ServiceLogin?hl=sv&continue=https://www.google.com/
                    https://accounts.google.com/ServiceLogin?hl=de&continue=https://www.google.com/

    As you can see, the ?hl=en changes to ?hl=sv and then to ?hl=de for the three languages in which I wish to view the Google log-in page. But this could of course be something completely different, like different web services from the same provider, such as YouTube and Gmail by Google. Very much like SE, where you have several websites but only one user name and password. I can imagine having multiple entries for one and the same website, where KeePass would prompt you to choose which one to use, so that you have several user names and passwords for the same URL. But is it possible to have several URLs using the same user name and password, so that KeePass asks me "which of the following three URLs do you want to auto-log into with this password?"
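
    The nearest thing I've found in KeePass 2.x is one entry per URL whose credentials are field references back to a single master entry, so the user name and password stay in sync; the UUID below is a placeholder for the master entry's UUID (shown on its Properties tab):

        User name: {REF:U@I:46C9B1FF0E0A4C6FB7F4D92B0D3B2C11}
        Password:  {REF:P@I:46C9B1FF0E0A4C6FB7F4D92B0D3B2C11}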

  • What is the ideal way to set up multiple FTP-enabled web accounts on Fedora?

    - by Nicholas Flynt
    I'm setting up a test server for use as a web development platform, and I'd like to mimic as closely as I can a typical shared hosting setup. That is, I'd like my server to have multiple user FTP accounts, each of which maps to a directory containing the webroot of a site, and I'd like Apache to be able to easily see and manipulate these files. I'll admit I'm not as familiar with Fedora as I'd like; I run Ubuntu on my home box, and SELinux is giving me some grief. My initial plan was to have each user FTP into their home directory and put the web directory there as well, but SELinux throws a hissy fit when Apache tries to access anything outside of its web directory, so that plan was a no-go. Would it be wise to continue down this route, perhaps mounting web directories in user home folders so that FTP could still be used to access them, even though Apache sees them in /var/www like it expects? Would it make more sense to set up custom FTP accounts and use a single FTP user on the server box? What's the general course of action on something like this? I'm using vsftpd right now to host web directories, which is why I like the home directory approach (it's simple and secure), but of course there's bound to be a better way to go about it. Thanks. (I'll leave other things, like restricted DB access, to another post. Right now I'm interested in just getting FTP and Apache to play nice in a multi-user environment.) PS: For the record, an issue I ran into while doing all of this was that if Apache isn't running as the same user the FTP account saves as, there are permission errors when FTP creates files, requiring the remote user to chmod the files to fix it. A logical fix would be to run Apache in a special group, put all web users in this group, and have FTP default to giving this group read/write access to everything, as Apache would expect, but I never could figure out how to accomplish this. Bonus points and cake if you know a solution.
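
    The group scheme from the PS would look something like the sketch below (the user name, path, and SELinux type are examples, and the type name can vary by policy version):

        # put the FTP user in Apache's group and make the webroot group-writable
        usermod -aG apache alice
        chgrp -R apache /var/www/site
        chmod -R g+w /var/www/site
        find /var/www/site -type d -exec chmod g+s {} \;   # new files inherit the group

        # have vsftpd create files group-writable: in /etc/vsftpd/vsftpd.conf
        #   local_umask=002

        # label the tree as read/write web content so SELinux stops complaining
        semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/site(/.*)?'
        restorecon -R /var/www/site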

  • How do I setup JBoss 5.1.0.GA to run multiple instances?

    - by djangofan
    Does anyone have any experience or advice in setting up multiple JBoss 5.1.x instances on the same machine with one network card? Here is what I did:

        1.   Installed JBoss 5.1.0.GA into c:\myjboss
        1.5. Copied the server/default directory to server/ports-01 and server/ports-02 so they have their own config (did I assume correctly?)
        2.   Ran .\run.bat -c ports-01
        3.   Ran .\run.bat -c ports-02

    At this point there are two instances, but the second one doesn't load correctly because of what are probably a few port conflicts. For example, the HTTP port ends up being 8080 for both instances, which it gets from line #49 in the C:\myjboss\server\all\conf\bindingservice.beans\META-INF\bindings-jboss-beans.xml file. Earlier in the server load it clearly gets the value from line #63 in that same file. I don't know why it gets part of the port config from line #49 and the other part from line #63. Confused. I also tried .\run.bat -Djboss.service.binding.set=ports-01 -c ports-01, and it made little difference. Any ideas on what I am doing wrong?
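
    For what it's worth, my understanding is that the stock binding sets in JBoss AS 5.1 each offset every port (ports-01 adds 100, making HTTP 8180; ports-02 adds 200) and that each instance needs its own set passed explicitly, roughly:

        .\run.bat -c ports-01 -Djboss.service.binding.set=ports-01
        .\run.bat -c ports-02 -Djboss.service.binding.set=ports-02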

  • Which group memberships are necessary for simple users in Ubuntu 12.04?

    - by Joey Carson
    I'm configuring Ubuntu 12.04 for my sister. I'd like to give her a system that she really can't screw up but can still use for normal things, like installing software. I don't want to just add her user to /etc/sudoers so that she can become root, because she could mess something up. I know I should be able to get around this by adding her to the necessary groups, but I'm not sure which ones those should be. Could anyone suggest them, or point me in the direction of some kind of list of the group memberships that commonly used software in Ubuntu requires?
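
    For comparison, on a stock 12.04 desktop the first (administrator) account typically belongs to adm, cdrom, dip, plugdev, lpadmin, sambashare and sudo; a non-admin user can safely be given the hardware and printing ones. A sketch, username assumed (note that system-wide software installation will still prompt for an administrator's credentials):

        sudo adduser sister cdrom
        sudo adduser sister plugdev
        sudo adduser sister lpadmin
        groups sister    # verify the memberships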

  • In an environment with multiple WiFi access points, do wireless clients sometimes connect to both at the same time?

    - by Bobby Burgess
    This is more of a curiosity than a problem, but in this new office I have two D-Link DAP-2553s connected in a master/slave array (this just means the master keeps certain configuration options aligned with the slave). The network is set to 802.11n-only, and each AP has the same SSID and WPA2 key. The only difference is that they are on different channels (1 and 11). The WiFi network itself is working well: users can roam around, and the signal/speed is fairly consistent. However, when I look at the 802.11 client list in the web admin page for each of the two APs, I see that certain clients are connected to both, for extended periods of time, though I assume they are only passing data through one of them. Not every client is seen on each AP, but at any given time the same MAC address of a WiFi adapter can be associated (and remain associated) with both APs. The client list auto-refreshes every few seconds, so I believe I'm looking at current rather than stale information. One of the WiFi adapters that consistently associates with both APs is an Intel Centrino Wireless-N 1030 (a laptop chip). Is it part of the WiFi standard that more than one association per WiFi card can be established concurrently on separate APs?

  • IIS6 Multiple SSL websites to a single HTTP website?

    - by docflabby
    Running an IIS6 server on Windows 2003; all the websites use ASP.NET. I have a number of separate HTTP websites (www.domain1.com, www.domain2.com, www.domain3.com) and a separate HTTPS website, www.secure.com, all running on the same server. I now wish to integrate the content of www.secure.com into each of the domains in a transparent way, such that each website, despite having its own SSL connection, displays the same site. The complication is that www.secure.com needs to know which website the connection came from, to apply the appropriate branding. The idea behind this is to have only one website and location while keeping each core website's brand. https://domain1.com looks a lot better from a marketing point of view (and avoids users getting confused about what our secure website is).

        SSL www.domain1.com/secure - displays www.secure.com (branded domain1)
        SSL www.domain2.com/secure - displays www.secure.com (branded domain2)
        SSL www.domain3.com/secure - displays www.secure.com (branded domain3)

    What would be the best way of achieving this? I'm open to using additional software if necessary. Would a reverse proxy be suitable for this situation?

  • How can I track all emails sent from my users?

    - by schnapple
    My client runs a small business with a small number of employees. For various reasons, my client would like a copy of all of the emails sent by their employees BCC'd to them. The net effect would be similar to the access they would have if they hosted their email through Exchange, but the business is too small to make that a feasible option. They are currently hosted through GoDaddy. I have not investigated it personally, but apparently GoDaddy can do something along these lines for all incoming email, yet not for outgoing email. Is there a way to set up email accounts for a particular domain so that a specified admin user is copied on all outgoing email?
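
    Not at GoDaddy as far as I know, but if outgoing mail were routed through a relay the client controls, Postfix makes it a one-liner (the address is an example; note that always_bcc copies every message passing through, incoming and outgoing):

        postconf -e 'always_bcc = archive@example.com'
        postfix reload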
