Search Results

Search found 10683 results on 428 pages for 'the rowland group'.


  • Extended a partition on Linux with GParted, but no more space in the VM

    - by Asken
    I have a VM test installation of Linux running a build server. Unfortunately I just pressed OK when adding the disk and ended up with an 8 GB drive to play with. Well into the test, the builds are consuming more and more space, of course. The VM drive was resized to 21 GB, and using GParted I expanded the drive partitions and that all worked fine, but when I go back into the console and run df there's still only 8 GB available. How can I claim the other 13 GB I added?

    fdisk -l

        Disk /dev/sda: 21.0 GB, 20971520000 bytes
        255 heads, 63 sectors/track, 2549 cylinders, total 40960000 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0006d284

           Device Boot    Start       End    Blocks  Id  System
        /dev/sda1   *       2048    499711    248832  83  Linux
        /dev/sda2         501758  40959999  20229121   5  Extended
        /dev/sda5         501760  40959999  20229120  8e  Linux LVM

    vgdisplay

        --- Volume group ---
        VG Name               ct
        System ID
        Format                lvm2
        Metadata Areas        1
        Metadata Sequence No  4
        VG Access             read/write
        VG Status             resizable
        MAX LV                0
        Cur LV                2
        Open LV               2
        Max PV                0
        Cur PV                1
        Act PV                1
        VG Size               19.29 GiB
        PE Size               4.00 MiB
        Total PE              4938
        Alloc PE / Size       1977 / 7.72 GiB
        Free  PE / Size       2961 / 11.57 GiB
        VG UUID               MwiMAz-52e1-iGVf-eL4f-P5lq-FvRA-L73Sl3

    lvdisplay

        --- Logical volume ---
        LV Name                /dev/ct/root
        VG Name                ct
        LV UUID                Rfk9fh-kqdM-q7t5-ml6i-EjE8-nMtU-usBF0m
        LV Write Access        read/write
        LV Status              available
        # open                 1
        LV Size                5.73 GiB
        Current LE             1466
        Segments               1
        Allocation             inherit
        Read ahead sectors     auto
        - currently set to     256
        Block device           252:0

        --- Logical volume ---
        LV Name                /dev/ct/swap_1
        VG Name                ct
        LV UUID                BLFaa6-1f5T-4MM0-5goV-1aur-nzl9-sNLXIs
        LV Write Access        read/write
        LV Status              available
        # open                 2
        LV Size                2.00 GiB
        Current LE             511
        Segments               1
        Allocation             inherit
        Read ahead sectors     auto
        - currently set to     256
        Block device           252:1
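
    Note that the vgdisplay output above already shows the extra space at the volume-group level (Free PE / Size: 2961 / 11.57 GiB), so what remains is growing the logical volume and then the filesystem inside it. A minimal sketch, assuming the root filesystem is ext3/ext4 (if the PV had not already grown, a pvresize /dev/sda5 would come first):

        # grow the root LV into all remaining free extents
        sudo lvextend -l +100%FREE /dev/ct/root
        # then grow the filesystem to match (ext3/ext4 supports online resize here)
        sudo resize2fs /dev/ct/root
        df -h    # should now report the full size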

  • Basic connectivity issues between Win 7 and XP mixed wired/wireless network. [Solved]

    - by Pulse
    Setup:

      - Windows 7 x64 Ultimate desktop, hard-wired to an Asus WL500gp router (WL500gpv2-1.9.2.7-d-r1445 firmware)
      - Several bridged VirtualBox VMs running XP, 7, Ubuntu Server 10.04, Mint 9 and SuSE 11.2
      - Win XP Pro SP3 notebook with a D-Link AirPlus wireless network card
      - No firewall or other security software currently running on either platform (at least for the duration of the test)

    Situation:

      - The router is acting as DHCP server
      - Clients are receiving correct addresses and additional parameters
      - Internet connectivity is available from all clients
      - Windows 7 sharing is set to network type = Work (not HomeGroup)
      - NetBT is disabled on all clients; using SMB over TCP

    What I can do:

      - I can ping the router and Internet addresses from the wireless XP notebook
      - I can ping the Win 7 desktop and any VM from the XP wireless notebook
      - I can ping all devices from the router
      - All VMs and the Win 7 desktop can ping each other and the router, as well as Internet addresses

    What I can't do:

      - I cannot ping the XP wireless notebook from either the Win 7 desktop or the VMs; it always returns a "destination host unreachable" error.
      - Tracert resolves the name of the XP notebook but also returns "destination host unreachable".

    From the above it would seem that something is blocking connectivity in a single direction (from the Win 7 box to the Win XP notebook) only, yet the router can ping the XP notebook. Some fresh input would be most welcome, as this is beginning to drive me batty. Thanks

  • Provider claiming "all web servers in the cloud are automatically kept in sync" - should I be skeptical?

    - by RobMasters
    I'm no expert in cloud computing - I've spent a fair bit of time researching it and various providers but am yet to get any hands-on experience with it. From what I've read about AWS and auto-scaling EC2 instances though, it seems as though each instance should be completely decoupled from all other instances. i.e. if content is uploaded to the web server's local filesystem from a custom CMS backend, then that content won't be available if subsequently requested from a different web server in the auto-scaling group. Is that right?

    I met with a representative of our existing hosting provider recently and he claimed that it isn't a problem that our legacy CMS system is highly dependent on having a local filesystem. He said that all web servers, regardless of how many, would be kept as exact duplicates, so I shouldn't notice any difference compared to our existing setup of a single dedicated server. This smells a little too much like bull fecal-matter to me... should I be skeptical about this?

    I'm a little worried because my (non-technical) boss, who ultimately makes the decisions, is all for signing up to this cloud solution because it won't require any extra work. I'm sure they must at least be able to provide this, otherwise they wouldn't be attempting to sell it to us. But at what cost? It sounds as though each web server would always need to be checking the other web server(s) for new static content, which to me sounds like unwanted overhead that would slow things down.

    I'd really appreciate it if somebody could clear this up for me. I'm all for switching to AWS and using S3 + CloudFront for all static content, but that isn't looking very likely to happen at the moment.
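
    For reference, the usual way to decouple instances from local storage is to push uploads to shared object storage at write time rather than syncing servers to each other. A minimal sketch with the AWS CLI (the bucket name is a placeholder, not part of the original question):

        # push CMS uploads to S3 so every instance in the auto-scaling group
        # serves identical content (bucket name is hypothetical)
        aws s3 sync /var/www/uploads s3://example-cms-assets/uploads --delete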

  • Sending same email through two different accounts on different domains using Outlook 2010

    - by bot
    I am a programmer and don't have experience with Outlook configuration. Our company has two email domains, namely xyz.com and xyz.biz. Each employee has an email ID on one of these domains, but not both, depending on the project they are working on. The problem we are facing is that when a communication email is sent from the Accounts, HR, Admin, etc. departments, they need to send the email twice: once through the xyz.com account to all employees with an email address on xyz.com, and once through xyz.biz to all employees with an email address on xyz.biz. I am not sure why they have to send two separate emails, but the IT team has directed all departments to do so, as there is no other solution according to them. Even though two different groups have been created, sending an email to employees in an xyz.biz group from xyz.com does not seem to work. I want to know if Outlook provides a feature such that we can configure some kind of rule to send an email through an ID on xyz.com to all users on xyz.com and have the same email sent automatically to users on xyz.biz through an ID on xyz.biz. The only technical detail I know is that we are using Exchange 2003, and the IT team claims that this is a limitation causing the problem.

    Edit: Our company is split into two main divisions depending on the type of projects. I am pretty sure I use domain XYZ whereas the employees in the other division use the domain ABC to log into the Windows machine or Outlook itself. Also, employees in domain XYZ can access the machines on the network in domain ABC, but not the other way around.

  • HAProxy, health checking multiple servers with different host names

    - by Marco Bettiolo
    I need to load balance between multiple running servers with different host names. I cannot set up the same virtual host on each one. Is it possible to have only one listen configuration with multiple servers, and make the health checks apply the http-send-name-header Host directive? I am using HAProxy 1.5. I came up with this working haproxy.cfg; as you can see, I had to set a different hostname for each health check, as the health check ignores http-send-name-header Host. I would have preferred to use variables or other methods and keep things more concise.

        global
            log 127.0.0.1 local0 notice
            maxconn 2000
            user haproxy
            group haproxy

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            option redispatch
            timeout connect 5000
            timeout client 10000
            timeout server 10000
            stats enable
            stats uri /haproxy?stats
            stats refresh 5s
            balance roundrobin
            option httpclose

        listen inbound :80
            option httpchk HEAD / HTTP/1.1\r\n
            server instance1 127.0.0.101 check inter 3000 fall 1 rise 1
            server instance2 127.0.0.102 check inter 3000 fall 1 rise 1

        listen instance1 127.0.0.101:80
            option forwardfor
            http-send-name-header Host
            option httpchk HEAD / HTTP/1.1\r\nHost:\ www.example.com
            server www.example.com www.example.com:80 check inter 5000 fall 3 rise 2

        listen instance2 127.0.0.102:80
            option forwardfor
            http-send-name-header Host
            option httpchk HEAD / HTTP/1.1\r\nHost:\ www.bing.com
            server www.bing.com www.bing.com:80 check inter 5000 fall 3 rise 2
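
    For reference, newer HAProxy releases (2.2 and later; not available in the 1.5 used here) let a health check carry its own request line and Host header, which removes the need for one listen block per instance. A sketch of that approach, keeping the same example host:

        backend instances
            option httpchk
            http-check send meth HEAD uri / ver HTTP/1.1 hdr Host www.example.com
            server www.example.com www.example.com:80 check inter 5000 fall 3 rise 2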

  • How to export and import a user profile from one Quassel core to another?

    - by Zertrin
    I have been using Quassel as my IRC bouncer for quite a long time now. We (a group of administrators of a small network) have set up a shared Quassel core with many users on the same core. Now I would like to export everything related to my user account from the Quassel database on this core, in order to re-import it later into another Quassel core on my own server.

    Unfortunately, while a feature for adding users has been implemented in Quassel, nothing is provided so far for either exporting or deleting a user. (If a delete-user feature were available, I could have made a copy of the current database, deleted all the other users leaving only mine, and used the resulting database on my own server, while leaving the original untouched on the shared server.)

    Despite extensive research on the Internet, I've found no solution so far. I should mention that the backend database for the core was migrated from the default SQLite backend to a PostgreSQL backend as the database grew considerably (over 1.5 GB by now). However, I'd be glad to hear of any working solution (SQLite or PostgreSQL backend) describing a way to export the data related to a specific user profile and then re-import it into a new Quassel core database.
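
    Since the core is on PostgreSQL, a manual export with psql may be a starting point. The sketch below is assumption-heavy: the table and column names (quasseluser, network, buffer, backlog) are based on Quassel's PostgreSQL schema and must be verified against the actual database, and related tables (identities, settings) would need the same treatment:

        -- look up the numeric user id (assumed schema; verify with \d quasseluser)
        SELECT userid FROM quasseluser WHERE username = 'myuser';

        -- export the user-scoped rows, e.g. for userid 42:
        \copy (SELECT * FROM network WHERE userid = 42) TO 'network.csv' CSV HEADER
        \copy (SELECT * FROM buffer  WHERE userid = 42) TO 'buffer.csv'  CSV HEADER
        \copy (SELECT b.* FROM backlog b JOIN buffer USING (bufferid)
               WHERE buffer.userid = 42) TO 'backlog.csv' CSV HEADER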

  • FTP account ownership on vhost directory makes Apache not run website correctly

    - by CodeShining
    I've purchased a virtual server, where I'm given a non-root, sudo-enabled user. I need to create an FTP account that is not that sudo-able account, so I created a no-login account just for that purpose. I've set up vsftpd correctly, also enabling the "userlist" feature to specify which users are permitted to use FTP. Then I created an empty directory under my sudo-able account and gave ownership of it to the second account. To make it easier to understand: the main account (the one I use to manage my VPS) is called ubuntu and the FTP user is named ftpuser, and I created a directory /home/ubuntu/mywebsite, giving ownership to ftpuser:ftpuser. Then I uploaded a WordPress website, whose default permissions are 755 and 644. The issue is that Apache is not given any privilege to run the website. How can I make the website run properly, and in the most secure way? Should I run that virtual host as another user (if that's possible)? Should I force the FTP user to use the www-data group (if that's possible) and run with permissions like 775 and 664? How can I solve this issue? Any help is appreciated. I'd like to run it using the default permissions, so any update won't break anything (because of a permissions reset).
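
    One common arrangement, sketched below, keeps ftpuser as owner and gives Apache read access without changing the default 755/644 modes; it assumes Apache runs as www-data (the Ubuntu default):

        # let Apache's user read the site without owning it
        sudo usermod -a -G ftpuser www-data   # only needed if you later tighten modes to 750/640
        # Apache must also be able to traverse the parent directory of the docroot
        chmod o+x /home/ubuntu
        # default WordPress modes (dirs 755, files 644) then work as-is,
        # since "group" and "other" already have read access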

  • Linux shutdown hangs with wifi CIFS mounts

    - by Sirex
    Since Fedora 15 (and now with 16) it seems that wireless clients take a long while to shut down when they have network filesystems mounted at shutdown time. I've pushed out a CIFS mount via Puppet, and all clients have it, including those on wireless. If, say, a laptop is on a wired connection, it shuts down just fine, but if it's on the wifi at the time (and no wired connection) it'll hang at the Fedora "f" logo. I'm not sure if it's indefinite or just a really long while, but I'll give it a test when I shut this machine down in a second. Needless to say it's pretty annoying, so is there a way of causing the machine to shut down even if network connectivity has been lost at unmount time, or an official way to reorder events so the wireless card is kept up until after the unmount happens during the shutdown process (short of writing a custom script for shutdowns, which is a bit of a kludge)?

    It does this on multiple machines, and they all started doing it when we went from Fedora 14 to 15. It was such an obvious issue I'd kind of assumed someone must have reported it or there was an easy fix, but I've not discovered anything yet.

    Additional info: I can confirm that manually unmounting the mounts and then shutting down (sudo shutdown or the Xfce shutdown button) works just fine; it only hangs if the mounts are still mounted.

    The Puppet config that sets the mount looks like this (now with the _netdev entry, which is indeed pushed to clients successfully but makes no difference):

        file { "/mnt/share":
            ensure => directory,
        }

        mount { "/mnt/share":
            atboot   => true,
            ensure   => mounted,
            remounts => false,
            fstype   => cifs,
            device   => "//srv/share",
            options  => "user,gid=shareusers,uid=${user},file_mode=0700,dir_mode=0700,credentials=/root/.smbcreds,_netdev",
            require  => [ File["/mnt/share"], Group["shareusers"] ],
        }

  • Unable to remove invalid (orphaned?) SPNs

    - by Brent
    tl;dr version: Renamed domain from internal.domain.com to domain.com; have 4 SPNs that I am unable to remove from the DC.

    So my domain was internal.domain-name.com and I renamed it to domain-name.com, and I thought everything was good. Several days later, I started setting up my RD Gateway and noticed issues surrounding Group Policy. I ran dcdiag and the SystemLog part fails:

        Starting test: SystemLog
           A warning event occurred.  EventID: 0x00001796
              Time Generated: 08/25/2014   02:48:30
              Event String: Microsoft Windows Server has detected that NTLM
              authentication is presently being used between clients and this
              server. This event occurs once per boot of the server on the
              first time a client uses NTLM with this server.
           An error event occurred.  EventID: 0xC0001B70
              Time Generated: 08/25/2014   02:49:18
              Event String: The SQL Server (MSSQLSERVER) service terminated
              with the following service-specific error:
           An error event occurred.  EventID: 0xC0001B70
              Time Generated: 08/25/2014   02:49:48
              Event String: The SQL Server (MSSQLSERVER) service terminated
              with the following service-specific error:
           An error event occurred.  EventID: 0xC0001B70
              Time Generated: 08/25/2014   02:52:47
              Event String: The SQL Server (MSSQLSERVER) service terminated
              with the following service-specific error:

    This made me check my AD for possible connections to the .internal domain. I found four, which I removed with:

        setspn -D E3514235-4B06-11D1-AB04-00C04FC2DCD2/d79fa59c-74ad-4610-a5e6-b71866c7a157/internal.domain-name.com ServerName
        setspn -D HOST/ServerName.domain-name.com/internal.domain-name.com ServerName
        setspn -D GC/ServerName.domain-name.com/internal.domain-name.com ServerName
        setspn -D ldap/ServerName.domain-name.com/internal.domain-name.com ServerName

    Also, checking my DNS records, there's an internal subdomain that I can delete, but it comes back as well. I've tried removing the SPNs to no avail. Is there something I'm missing?
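
    A sketch of how to hunt down where those SPNs keep coming from: setspn's query mode accepts wildcards, and any object it reports can then be inspected in adsiedit.msc via its servicePrincipalName attribute:

        rem search the forest for any SPN still mentioning the old name
        setspn -Q */internal.domain-name.com
        rem a broader wildcard match, in case of other SPN shapes
        setspn -Q *internal.domain-name.com*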

  • Why are SIP calls via my server silent?

    - by Archcode
    I have a FreeSWITCH SIP server up and running. It has a public IP and sits behind 1-to-1 NAT (it's an Amazon EC2 instance, actually). I can connect to it and make a call to another endpoint (namely, from my Android device to my PC and vice versa), and signals are sent with no problems (call, answer, hangup, etc). Unfortunately, and what drives me crazy, that's all: no audio gets through, and no video either. The server does not throw errors; it reports many retransmissions though, which look like this:

        switch_rtp.c:915 [ zrtp engine]: WARNING! HELLO Max retransmissions count reached (20 retries). ID=15

    Codecs are set up correctly (the same config worked locally on my LAN). A NAT/firewall on the client side may be a problem; signals do get through (perhaps due to a fixed port, while media streaming runs on a random one; that is currently my best bet). STUN/TURN/ICE settings on the client seem to have no effect. The endpoints sit behind symmetric NAT. On the server there are no iptables rules, and the security group is set as suggested here: http://wiki.freeswitch.org/wiki/Firewall

    Help, please. How can I make it work, or at least diagnose what's wrong?
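
    Two things commonly fix exactly this symptom on EC2 (signalling works, media doesn't): advertising the public address in the SDP, and opening the RTP port range in the security group. A sketch for conf/vars.xml; these variables exist in the stock config, the address is a placeholder, and the SIP profiles must reference them via their ext-rtp-ip/ext-sip-ip parameters:

        <!-- tell FreeSWITCH its public 1-to-1 NAT address (placeholder IP) -->
        <X-PRE-PROCESS cmd="set" data="external_rtp_ip=203.0.113.10"/>
        <X-PRE-PROCESS cmd="set" data="external_sip_ip=203.0.113.10"/>

    The default RTP range (UDP 16384-32768, per the firewall page linked above) then needs to be open in the EC2 security group, not just the SIP signalling port.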

  • After RAID failure SBS 2008 issues logging in and Exchange store does not mount

    - by Josh R
    Today has been one of those days. Yesterday a hard drive in our Dell PowerEdge 2900 server failed and the RAID array didn't degrade gracefully, so I called Dell (the server is still under warranty) and got an engineer to work through the RAID issues with me. He was a nice guy but didn't do too much. We tried to put the RAID in a state where it was bootable, and even though we only lost one disk there are still issues with the server. Once we got the server to boot, there was an error message saying that logonui.exe was corrupted and we needed to run chkdsk. I clicked through the error messages and the login screen never came up. So I power-cycled the server; it ran chkdsk automatically, but the login screen still didn't appear. I tried safe mode; no difference there either. So the issues I am currently having are:

    1) The server boots up, the loading Windows screen comes up, then it dumps me into a black screen where I can only see my mouse cursor. Ctrl+Esc doesn't work; Ctrl+Alt+Del doesn't work.
    2) Some of the services come up: DHCP, DNS, DFS, and Print.
    3) The Exchange information store and transport services don't start. I tried using MMC to connect to services.msc on the computer and start them, but they throw an error message of "Can't start because group or dependency failed".

    Has anyone had a problem like this? Can anyone offer some guidance? Thanks a bunch!
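
    Given the corrupted logonui.exe, an offline system-file repair from the installation/recovery media is a reasonable first step; a sketch (drive letters depend on how the recovery environment maps the system volume):

        rem from the recovery environment's command prompt
        chkdsk c: /f
        sfc /scannow /offbootdir=c:\ /offwindir=c:\windows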

  • 403 Forbidden on Apache (CentOS) Server

    - by pouya
    This is my VM setup:

      - Host: Windows 7 Ultimate 32-bit
      - Guest: CentOS 6.3 i386
      - Virtualization software: Oracle VirtualBox 4.1.22
      - Networking: NAT (port forward: host:8080 => guest:80)
      - Shared folder: centos

    All the project files go into the shared folder, and for each project a virtual host conf file is created in /etc/httpd/conf.d/, like /etc/httpd/conf.d/$domain. I wasn't able to see anything in my browser before disabling both the Windows firewall and iptables in CentOS. After that, if I type for example http://www.$domain:8080/ all I see is:

        Forbidden
        You don't have permission to access / on this server.
        Apache/2.2.15 (CentOS) Server at www.$domain.com Port 8080

    A sample virtual host conf file:

        <VirtualHost *:80>
            # General
            DocumentRoot /media/sf_centos/path/to/public_html
            ServerAdmin webmaster@$domain
            ServerName www.$domain
            ServerAlias $domain *.$domain

            # Logging
            ErrorLog /var/log/httpd/$domain-error.log
            CustomLog /var/log/httpd/$domain-access.log combined

            # mod_rewrite
            RewriteEngine On
            RewriteLog /var/log/httpd/$domain-rewrite.log
            RewriteLogLevel 0
        </VirtualHost>

    The centos shared folder is available to the guest at /media/sf_centos. These are the permissions for sf_centos:

        drwxrwx--- root vboxsf

    The vboxsf group includes apache and root. So these are my questions:

    1. How do I solve the Forbidden problem?
    2. How should I set up both host and guest firewalls?
    3. How can I improve this development environment to simulate a production environment as much as possible, especially regarding security?
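
    On CentOS 6 the most likely blocker, once classic permissions check out, is SELinux: it ships enforcing and denies httpd access to content outside the standard labeled locations, and vboxsf mounts generally cannot be relabeled. A quick diagnostic sketch:

        # is SELinux enforcing?
        getenforce
        # temporarily switch to permissive, for testing only
        sudo setenforce 0
        # if the 403 disappears, the fix is labeling the content, or moving
        # the docroot off the vboxsf mount (that filesystem ignores labels);
        # inspect the denials:
        sudo grep httpd /var/log/audit/audit.log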

  • Logging Remote Server Access via Remote Desktop

    - by Nate Bross
    The objective here is to start a simple .NET application I've written which captures some environment variables (time, username, computer name, etc.) upon login. This .NET application subscribes to the Windows "user logout" event. Upon launch, the application captures the above variables and creates a record in my database; upon logout (which I'm capturing) I update another field in the same record with the logout time.

    The above is working exactly as I would like: when I launch the binary, it makes its initial log entry, then waits for the logout event and updates the same record.

    Restrictions: the .NET binary should be able to live on a share (\\server\share\myapp\v1) so I can update the application to (\\server\share\myapp\v2) and simply update the GPO/logon script. My initial thought was to use the \\domaincontroller\sysvol\ directory to store the binary and then update all user accounts to include a call to my application. Can you see any flaws in this approach?

    My question is this: first, is there anything wrong with my idea above? Second, if so, what is the best way (through Group Policy or otherwise) to ensure this application launches whenever a session is started on a server?
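
    For the launch side, a user logon script assigned through Group Policy (User Configuration > Policies > Windows Settings > Scripts > Logon) would cover every session start. A sketch of the batch wrapper; the executable name is hypothetical:

        @echo off
        rem launch the session logger from the versioned share path
        rem (SessionLogger.exe is a placeholder name)
        start "" "\\server\share\myapp\v1\SessionLogger.exe"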

  • Server 2008 R2 boot is at 2 hours and counting. What now?

    - by Jesse
    This morning, we rebooted our Server 2008 R2 box. No problem; it came right back up. Then we shut it down and let it install Windows updates. While it was off, we added some RAM. Then we turned it back on. The system came right back up to the "press Ctrl-Alt-Delete" screen. So far, so good.

    I logged in. The system got as far as "Applying Group Policy", then spent almost an hour applying drive mappings. It finally finished that, and has now spent 30 minutes waiting on the Event Notification Service. I still haven't been able to log in. The Remote Desktop service doesn't appear to be running yet.

    I tried viewing the event log from another machine. I see that the box is writing to the Security log, but there are no events in System or Application in the last 45 minutes. Digging through the System log of events from 45 minutes ago, I see a bunch of timeouts:

        A timeout (30000 milliseconds) was reached while waiting for a transaction response from the ShellHWDetection service. [lots of these]
        A timeout (30000 milliseconds) was reached while waiting for a transaction response from the wuauserv service.
        A timeout (30000 milliseconds) was reached while waiting for a transaction response from the SessionEnv service.
        A timeout (30000 milliseconds) was reached while waiting for a transaction response from the Schedule service.
        A timeout (30000 milliseconds) was reached while waiting for a transaction response from the CertPropSvc service.

    What can I do? Should I try shutting it down remotely, or will that do more damage?

  • Volume is no longer showing in RAID controller BIOS and in Windows

    - by Gordon
    Hi all, I installed some critical Windows updates yesterday and now my external RAID volume no longer shows up in Windows Vista x64. All updates went through successfully. From their descriptions, I cannot see how they could relate to the issue, but this is the only change that happened, so who knows. Anyway, here are the details:

    I have an external eSATA enclosure running a SiI4726 controller. I can connect to the controller with its management utility from the computer the enclosure is connected to. The three drives in the enclosure show up as JBODs. I had those drives configured as one logical RAID 5 drive. RAID management is done through a SiI3132 SoftRaid controller.

    The RAID management utility just shows empty channels where it usually shows the RAID group. In the Windows disk manager, I can see an unknown uninitialized device. This is fine according to the setup manual. What it doesn't show is my RAID drive. It's gone. Also, when booting Windows, the BIOS of the controller used to show the RAID volume before booting the OS. This is not happening anymore.

    Updating drivers and firmware did not help. I have made sure the drivers and firmware are compatible with each other. And like I said, it used to work before. Any clues?

  • User unable to delete folder / files "File in use by another user" Server 2003

    - by Az
    I am administering a standalone Windows 2003 Terminal Server with no domain membership. Occasionally (about once a week or so) a user will attempt to delete a sub-folder in a shared folder and gets denied with "File in use by another user". I tried checking the Shared Folders snap-in, and that folder is not open. She has full control and is the owner of the folder as well. I even checked "Effective Permissions" for some of the folders/files she can't delete, and she truly has full control. I am able to delete the folder as Administrator with no problem. Another odd thing: she can delete the files IN the folder most of the time (this issue happens on both folders and files in the share). Sometimes merely waiting a day or two will allow her to delete the folder or files. I am curious as to why she gets the message that it is in use as creator/owner with full control, yet I don't get it simply as a member of the Admin group. If anyone out there has any ideas I'd love to hear them! THANK YOU.
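
    Next time it happens, it may be worth listing open handles on the server before the lock clears; a sketch (the folder name is a placeholder):

        rem list files opened over the share, filtered to the stuck folder
        openfiles /query /fo table | findstr /i "FolderName"
        rem "net file" gives similar output, with an id that can be force-closed
        net file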

  • How best to handle end user notification in the event of system failure incl. email?

    - by BrianLy
    I've been asked to research ways of handling end-user notifications when systems such as email are experiencing problems. Perhaps an example will make this a little clearer.

    We have a number of sites in different countries. Recently email was impacted at one of the sites, but it could have been a complete network outage. Information was provided by phone to local IT managers at the site, but onward communication was slower than some would have liked. It seems like almost everyone at the site has a personal mobile phone which could receive text messages, and perhaps access a remote website with postings on the situation. However, managing and supporting a system to text people on these relatively infrequent occasions would be very costly to do internally.

    What are other people doing to handle situations like this? Some things I've thought of include:

      - A database of phone numbers to text. Seems costly and not very easy to maintain for an already stretched IT group. Is there an external service that would let you do this?
      - Send a voicemail message to all phones on site.
      - Maintain an external website. This would not work in all situations (network failure), and there is a limit on the amount of info that can be posted externally. A site outage could be sensitive information in some situations. How could the site be password protected? Maybe OpenID/Facebook Connect would work.
      - Use a site like Yammer.com, which is publicly accessible but only by people with a company email address. Anyone using this for IT outage notifications?

    To me it looks like there is no clear answer, and there are solutions for some subsets of users. To be comprehensive, a number of solutions would need to be combined. Any additional thoughts or recommendations? What worked or didn't work for your organization?

  • "Master does not appear to be a git repository" error

    - by EmmyS
    I've inherited a position and instructions for creating a new git repository. Unfortunately I've run into problems and no one here knows what to do. Hoping someone can help me out. Here are the instructions I was left:

        Create a new repository:

        1. For these steps you need to be in the gitosis-admin repository; if
           you don't have it, in a suitable parent folder do:
               git clone [email protected]:gitosis-admin.git
        2. Edit the gitosis.conf file: in the gitosis-admin root, under the
           [group base-repo] section, add the name of the new repo to the end
           of the "writable =" line.
        3. Commit the change and push back to gitosis-admin master.

        For the next commands, my_new_project represents the name of your project:

            mkdir my_new_project
            cd my_new_project
            git init
            (copy in any files you want to use to start the repo)
            git commit -a -m "Initializing new repository"
            git remote add origin [email protected]:my_new_project.git
            git push master
            git push master:qa

    So I did 1 and 2 with no problem. It created a local folder on my machine called gitosis-admin, and I edited the gitosis.conf file as indicated. But when I try to do step 3 (which I assume is git push gitosis-admin master), bash tells me:

        fatal: 'master' does not appear to be a git repository
        fatal: The remote end hung up unexpectedly

    What am I doing wrong?
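
    For what it's worth, that error usually means git parsed the first argument as a remote name: the syntax is git push <remote> <branch>, so "git push master" looks for a remote called master. A sketch of the likely fix, assuming the clone created the usual origin remote:

        cd gitosis-admin
        git commit -a -m "Add my_new_project to gitosis.conf"
        git push origin master

    The later "git push master" / "git push master:qa" lines in the inherited notes appear to be missing the remote name for the same reason (git push origin master and git push origin master:qa).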

  • Powershell Script Scheduled Task Stopped Running (Could not Start)

    - by Hatsune Yuki
    I'm running a scheduled task (for a PowerShell script) on Windows 2003 Server. I believe the script works fine. The task is scheduled to run every 10 minutes from 7:00am to 11:50pm every day. However, it never gets to run for more than a day. It always stops some time in the afternoon (between 2pm and 6pm). I'm not sure exactly what happens, but I always get the error:

        The attempt to log on to the account associated with the task failed,
        therefore, the task did not run. The specific error is: 0x80070569:
        Logon failure: the user has not been granted the requested logon type
        at this computer. Verify that the task's Run-as name and password are
        valid and try again.

    It seems like most people with this error are saying that the user needs the "log on as a batch job" right. However, this option is greyed out for me. I've searched for other places where users have similar problems, but the solutions are not written in detail (some of them have something to do with GPO). I've only used the basic features of Windows Server and I have no clue how to get to the place they are referring to. Can someone please confirm whether "log on as a batch job" is indeed the solution, and provide a detailed walkthrough of how to solve my problem? Thanks.

    p.s. Someone suggested the page http://technet.microsoft.com/en-us/library/cc755659(v=ws.10). I tried to follow the method for a web server with a domain, but got stuck on the 6th step, where it mentions a Group Policy Object. I don't know where that is.
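
    A sketch of where that right lives and one way to grant it; if the checkbox is greyed out, a domain-level GPO is overriding the local setting and the grant has to happen in whichever GPO wins:

        rem local path: secpol.msc > Local Policies > User Rights Assignment >
        rem             "Log on as a batch job" -> add the task's run-as account

        rem scriptable alternative (ntrights.exe from the Windows Server 2003
        rem Resource Kit Tools; the account name is a placeholder):
        ntrights +r SeBatchLogonRight -u MYSERVER\taskuser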

  • How to set an executable whitelist?

    - by izabera
    Under Linux, is it possible to set a whitelist of executables for a certain group of users? I need them to be unable to use, for example, make, gcc and executables on removable disks. How can this be done?

    Edit, let me explain better. I'm dealing with a high school IT system: young geeks who (during the lessons) want to play, surf the net, and damage those computers however they can. The major step towards this goal was to remove the system they're familiar with and install Ubuntu on all the computers. This actually works quite well, but recent events proved that it is not enough. I want to allow them to execute certain safe programs, like OpenOffice, and to deny any other program, whether it is preinstalled software, something they carry in USB drives, a downloaded program or a script they write on site. It's possible to remove the 'x' permission on any file on the PC, but of course that would be impractical. Furthermore, they would still be able to run anything they download. I thought the best solution would be to make a whitelist of safe programs and to deny anything else, but I don't really know how to do it. Any idea is helpful.
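
    Not a full whitelist, but two cheap measures cover the examples given; a sketch (paths per Ubuntu, trusted group name is an example, and note that interpreters such as python or perl remain a loophole):

        # deny execute on the compilers to everyone outside a trusted group
        # (dpkg-statoverride survives package upgrades, unlike plain chmod)
        sudo dpkg-statoverride --update --add root adm 0750 /usr/bin/gcc
        sudo dpkg-statoverride --update --add root adm 0750 /usr/bin/make

        # stop binaries running from user-writable areas and removable media:
        # in /etc/fstab, mount those locations with noexec, e.g.
        #   /dev/sda3  /home  ext4  defaults,nosuid,nodev,noexec  0  2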

  • OpenOffice Vs Microsoft Office 2007/2010

    - by Moody Tech
    I have been asked to summarise the pros and cons of the choice between Microsoft Office and OpenOffice. I have a broad idea of what needs to be said, but I would like to open a discussion here and have a single place to go to when the time comes to give the summary to management.

    There are obvious points of contention. For me, the lack of compliance with Group Policy is a major concern (default save location, visibility of C:, visibility of files and folders on the HDD). However, I am sure that functionality and compatibility will be the prime movers. We are looking at making major savings by reducing our commitment to Microsoft licensing.

    So what are your experiences? What happens when there are no direct equivalents? Word has a close match in OpenOffice, but a database solution match is not as close, and neither is one for Outlook (connecting to Exchange Server and downloading all calendars, shared calendars and scheduled events; Exchange will still exist after the move to open-source solutions).

    In summary then, what do you see as the benefits of this plan, and how do you see the problems being manifest? Discuss... Many thanks.

  • ASA access lists and Egress Filtering

    - by Nate
    Hello. I'm trying to learn how to use a Cisco ASA firewall, and I don't really know what I'm doing. I'm trying to set up some egress filtering, with the goal of allowing only the minimal amount of traffic out of the network, even if it originated from within the inside interface. In other words, I'm trying to set up dmz_in and inside_in ACLs as if the inside interface were not too trustworthy. I haven't fully grasped all the concepts yet, so I have a few issues.

    Assume that we're working with three interfaces: inside, outside, and DMZ. Let's say I have a server (X.Y.Z.1) that has to respond to PING, HTTP, SSH, FTP, MySQL, and SMTP. My ACL looks something like this:

        access-list outside_in extended permit icmp any host X.Y.Z.1 echo-reply
        access-list outside_in extended permit tcp any host X.Y.Z.1 eq www
        access-list outside_in extended permit tcp any host X.Y.Z.1 eq ssh
        access-list outside_in extended permit tcp any host X.Y.Z.1 eq ftp
        access-list outside_in extended permit tcp any host X.Y.Z.1 eq ftp-data established
        access-list outside_in extended permit tcp any host X.Y.Z.1 eq 3306
        access-list outside_in extended permit tcp any host X.Y.Z.1 eq smtp

    and I apply it like this:

        access-group outside_in in interface outside

    My question is: what can I do for egress filtering? I want to allow only the minimal amount of traffic out. Do I just "reverse" the rules (i.e. the smtp rule becomes access-list inside_out extended permit tcp host X.Y.Z.1 any eq smtp) and call it a day, or can I further cull my options? What can I safely block?

    Furthermore, when doing egress filtering, is it enough to apply "inverted" rules to the outside interface, or should I also look into making dmz_in and inside_in ACLs? I've heard the term "egress filtering" thrown around a lot, but I don't really know what I'm doing. Any pointers towards good resources and reading would also be helpful; most of the ones I've found presume that I know a lot more than I do.
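
    One point worth knowing before inverting rules: the ASA is stateful, so return traffic for the inbound connections permitted above is allowed automatically, and an egress ACL only needs to cover connections the inside hosts themselves originate. A sketch of a minimal inside_in policy under that assumption (the permitted ports depend entirely on what the server must initiate, e.g. outbound mail and DNS):

        access-list inside_in extended permit tcp host X.Y.Z.1 any eq smtp
        access-list inside_in extended permit udp host X.Y.Z.1 any eq domain
        access-list inside_in extended deny ip any any log
        access-group inside_in in interface inside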

  • Azure VM won't boot after sysprep; integration tools installed

    - by Mark Williams
    I have installed the Azure integration components and used sysprep on a Windows 2012 VM. Now the machine won't start up. I uploaded the VHD to Azure; it failed there too. When I start up the VM I get a PowerShell window that hangs around for a bit; eventually I get the following error, after which the machine restarts:

        New-Object : The dependency service or group failed to start.
        (Exception from HRESULT: 0x8007042C)
        At line:1 char:1
        + New-Object -comobject WaAgent.WindowsSetupComponent | % { $_.HandleSetupError() ...
        + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            + CategoryInfo          : ResourceUnavailable: (:) [New-Object], COMException
            + FullyQualifiedErrorId : NoCOMClassIdentified,Microsoft.PowerShell.Commands.NewObjectCommand

    I have tried renaming unattended.xml and turning on boot logging. Neither of those yielded much help. Is there a way I can disable the Azure components that run during OOBE? That seems to be the source of the problem. Mounting the VHD is easy. 0x8007042C looks like a firewall issue, based on my googling. Unfortunately I can't get the machine to boot so I can figure that issue out. Also, I can't get around this problem by booting into safe mode. Thanks for your help, guys.

  • sudo or acl or setuid/setgid?

    - by Xavier Maillard
    For a reason I do not really understand, everyone wants sudo for all and everything. At work we even have as many sudoers entries as there are ways to read a logfile (head/tail/cat/more, ...). I think sudo is self-defeating here. I'd rather use a mix of setgid/setuid directories and add ACLs here and there, but I really need to know what the best practices are before starting.

    Our servers have %admin, %production, %dba, %users, i.e. many groups and many users. Each service (mysql, apache, ...) has its own way to install privileges, but members of the %production group must be able to consult configuration files and even log files. There is still the option of adding them to the right groups (mysql, ...) and setting the right permissions, but I do not want to usermod all users, and I do not want to modify the standard permissions, since they could change after each upgrade. On the other hand, setting ACLs and/or mixing setuid/setgid on directories is something I could easily do without "defacing" the standard distribution.

    What do you think about this? Taking the mysql example, that would look like this:

        setfacl -R -m d:g:production:rx,d:o::---,g:production:rx,o::--- /var/log/mysql /etc/mysql

    Do you think this is good practice, or should I definitely usermod -G mysql and play with the standard permission system? Thank you.

  • Permissions error when creating desktop shortcut

    - by Ryan M.
    Hey guys, I have a user who's got a weird permissions problem on Windows 7. He's trying to create a shortcut for Outlook on his desktop (he doesn't want it in his Start menu or his taskbar...). If we right-click outlook.exe and do Send to > Desktop, it works just fine. If we do a search for "outlook" in the search bar and then try to drag and drop the Outlook icon to the desktop, we get the error message "You need permission to perform this action. You require permission from SYSTEM to make changes to this file: Microsoft Office Outlook 2007". Dragging and dropping other exes onto the desktop works just fine; they create shortcuts without any problems. But if I try to do ANY of the Office programs (Word, Excel, Outlook, etc.) I get this permission error. Any ideas? He's using an A.D. account and he's in the local Administrators group. He's an executive, so he's not accepting "this isn't a real problem because I found another way to make a shortcut" as an answer. Any help is appreciated.
