Search Results

Search found 37607 results on 1505 pages for 'ms access 97'.


  • How to set up NTFS ACLs with Access-Based Enumeration

    - by Patrick Pellegrino
    We're in the process of migrating from Novell NetWare to a Windows 2K8 R2 infrastructure (AD, file server, print server, etc.). My question is about ACLs. While NetWare and Windows are totally different, I want to be sure my thinking is right before screwing everything up! Here's the scenario:

        F:
        |
        +-- DATA   <= shared as DATA with Access-Based Enumeration
            |
            +-- Folder 1
            +-- Team 1's Folder
            +-- Team 2's Folder
            ...

    In that case, by default, rights are inherited from F: down to the deepest folders. What we want: the Administrators group has full control top-down, and from DATA, ABE lists only the folders a user has access to (e.g., if I'm in group Team 2, I see only Team 2's Folder). From what I understand, at DATA I remove all inherited NTFS ACEs (e.g., the Users group), making sure to keep the Administrators group and the SYSTEM account. After that, I grant Full Control (or whatever rights are needed) on each folder to the groups or users that need access. Am I wrong? Is there anything I should take care of? Any help with my understanding will be very much appreciated. Regards.
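
    A minimal sketch of that plan with icacls (the group names like DOMAIN\Team1 are placeholders, not from the post):

        rem Stop inheriting from F:, removing the inherited ACEs
        icacls F:\DATA /inheritance:r
        rem Re-grant the accounts that must keep top-down full control
        icacls F:\DATA /grant "Administrators:(OI)(CI)F" "SYSTEM:(OI)(CI)F"
        rem Per-team grants; ABE then hides folders a user cannot read
        icacls "F:\DATA\Team 1's Folder" /grant "DOMAIN\Team1:(OI)(CI)M"
        icacls "F:\DATA\Team 2's Folder" /grant "DOMAIN\Team2:(OI)(CI)M"

    Access-Based Enumeration itself is a property of the share, so it still has to be switched on in Share and Storage Management; the NTFS Read right is what it uses to decide what each user sees.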

    Read the article

  • VLAN Tagging Traffic on Cisco Switch

    - by David W
    I have a situation where I'm setting up multiple VLANs on a pfSense firewall on the same physical interface for a client. So in pfSense, I now have VLAN 100 (employees) and VLAN 200 (students - student computer lab). Downstream from pfSense, I have a Cisco SG200 switch, and coming off of the SG200 is the student lab (running on a Catalyst 2950. Yes, that's old, but it works, and this is a poor nonprofit we're talking about). What I'd like to do is tag everything on the network as VLAN 100, except for the student computer lab. Earlier today when I was on-site with the client, I went into the old Catalyst 2950 and assigned all of its ports to access VLAN 200 (switchport access vlan 200) without setting up a trunk on the Catalyst or on the SG200. Looking back on it, I now understand why internet in the lab broke. I reverted the lab back to the default VLAN 1 (we're still running on a different firewall - we haven't deployed pfSense - and the traffic is still separated physically). So my question is, what do I need to do in order to properly deploy this scenario? I believe the correct answer is:

    1. Ensure VLANs 100 and 200 are set up in pfSense, and that DHCP is operating correctly (on separate subnets).
    2. Set up a trunk port that carries both VLAN 100 and VLAN 200 traffic, and plug that port directly into pfSense.
    3. Set up a VLAN 200 trunk port on the SG200 (it's not running IOS, but if it were, the command would be switchport trunk native vlan 200), which will then plug into the Catalyst 2950.
    4. Set up a VLAN 200 trunk port on the Catalyst 2950 (the one plugged into the SG200's VLAN 200 port, with the same command - switchport trunk native vlan 200).
    5. Set up the rest of the ports on the old Catalyst 2950 in the lab as access ports on VLAN 200.

    Is there anything that I'm missing, or do I need to tweak any of these steps, in order to properly segment the network traffic?
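
    For steps 4 and 5, a minimal Catalyst 2950 sketch (interface numbers are assumptions; the uplink is whichever port goes to the SG200):

        ! Uplink to the SG200: trunk with VLAN 200 as native
        interface FastEthernet0/1
         switchport mode trunk
         switchport trunk native vlan 200
        !
        ! Lab machines: plain access ports in VLAN 200
        interface range FastEthernet0/2 - 24
         switchport mode access
         switchport access vlan 200

    Note that if VLAN 200 is native (untagged) on the trunk, the SG200 side must agree, otherwise the lab traffic arrives tagged and gets dropped again.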

    Read the article

  • Secure data from a server to a workstation using jump hosts

    - by apalsson
    Hello. I have a WWW server, and my problem is that the content is sensitive and should not be accessible to people without proper credentials. How can I improve ease of use while still maintaining security in the following scenario? The server is accessed through a "jump host": the client connects to the jump host using a VPN connection and uses Remote Desktop to access it; from the jump host, he uses Remote Desktop again to access the server; finally, on the server, the user can access the content using a web browser. Every step from the VPN client to the web browser requires authentication using a SmartCard token. This seems quite secure to me: content only gets mirrored on the Remote Desktop session between server and jump host, so there are no cached files to worry about, and the connection between jump host and client is protected by the VPN (SSL), so there's no eavesdropping. But it is quite cumbersome for the clients, with many steps and connections to open. :( So, how can I improve the user experience of accessing my server without compromising security? Thanks.

    Read the article

  • Getting Win32_Service security descriptor using VBScript

    - by invictus
    Hi, I am using VBScript to retrieve the security descriptor of a Win32_Service. I am using the following code (the variable names still refer to printers, but the query is against Win32_Service):

        SE_DACL_PRESENT = &h4
        ACCESS_ALLOWED_ACE_TYPE = &h0
        ACCESS_DENIED_ACE_TYPE = &h1

        strComputer = "."
        Set objWMIService = GetObject("winmgmts:" _
            & "{impersonationLevel=impersonate, (Security)}!\\" & strComputer & "\root\cimv2")
        Set colInstalledPrinters = objWMIService.ExecQuery _
            ("Select * from Win32_Service")

        For Each objPrinter in colInstalledPrinters
            Wscript.Echo "Name: " & objPrinter.Name
            ' Get security descriptor for printer
            Return = objPrinter.GetSecurityDescriptor( objSD )
            If ( Return <> 0 ) Then
                WScript.Echo "Could not get security descriptor: " & Return
                WScript.Quit Return
            End If
            ' Extract the security descriptor flags
            intControlFlags = objSD.ControlFlags
            If intControlFlags AND SE_DACL_PRESENT Then
                ' Get the ACE entries from security descriptor
                colACEs = objSD.DACL
                For Each objACE in colACEs
                    ' Get all the trustees and determine which have access to printer
                    WScript.Echo objACE.Trustee.Domain & "\" & objACE.Trustee.Name
                    If objACE.AceType = ACCESS_ALLOWED_ACE_TYPE Then
                        WScript.Echo vbTab & "User has access to printer"
                    ElseIf objACE.AceType = ACCESS_DENIED_ACE_TYPE Then
                        WScript.Echo vbTab & "User does not have access to the printer"
                    End If
                Next
            Else
                WScript.Echo "No DACL found in security descriptor"
            End If
        Next

    However, every time I run it I get a message saying the resulting code is -2165236532 or something, rather than one of the error codes defined in the documentation. Anyone got any ideas? I am using Windows 7 Professional.
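
    One thing that may help (a general WMI/VBScript fact, not specific to this script): WMI methods return unsigned 32-bit codes, and VBScript prints them as signed decimals, so large "negative" numbers are normal. Converting to hex usually makes the value match the documented codes:

        ' Print the return code the way the documentation lists it
        WScript.Echo "Could not get security descriptor: 0x" & Hex(Return)

    A value such as 0x80041003 (WBEM_E_ACCESS_DENIED), for example, would point at missing privileges rather than a scripting error.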

    Read the article

  • OpenLDAP ACLs are not working

    - by Dr I
    First things first: I'm currently working with OpenLDAP slapd 2.4.36 on Fedora release 19 (Schrödinger's Cat). I've just installed OpenLDAP with yum, and my configuration is the following:

        ##### OpenLDAP Default configuration #####
        #
        ##### OpenLDAP CORE CONFIGURATION #####
        include         /etc/openldap/schema/core.schema
        include         /etc/openldap/schema/cosine.schema
        include         /etc/openldap/schema/inetorgperson.schema
        include         /etc/openldap/schema/nis.schema

        pidfile         /var/lib/ldap/slapd.pid
        loglevel        trace

        ##### Default Schema #####
        database        mdb
        directory       /var/lib/ldap/
        maxsize         1073741824

        suffix          "dc=domain,dc=tld"
        rootdn          "cn=root,dc=domain,dc=tld"
        rootpw          {SSHA}SECRETP@SSWORD

        ##### Default ACL #####
        access to attrs=userpassword
            by self write
            by group.exact="cn=administrators,ou=builtin,ou=groups,dc=domain,dc=tld" write
            by anonymous auth
            by * none

    I launch my OpenLDAP service using:

        /usr/sbin/slapd -u ldap -h "ldapi:/// ldap:///" -f /etc/openldap/slapd.conf

    As you can see, it's a pretty simple ACL which aims to give the owner read/write access to the userPassword attribute, a specific group write access, anonymous users auth access, and to refuse access to everyone else. The problem is: even using a valid user with the correct password, my ldapsearch ends with zero entries retrieved from the directory, plus I get a strange response on the result line:

        # search result
        search: 2
        result: 32 No such object

        # numResponses: 1

    Here is the ldapsearch request:

        ldapsearch -H ldap://ldap.domain.tld -W -b dc=domain,dc=tld -s sub \
            -D cn=user,ou=service,ou=employees,ou=users,dc=domain,dc=tld

    I did not specify any filter, as I want to check that ldapsearch correctly prints only the allowed attributes.
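
    A likely culprit (per the slapd.access(5) man page, not something visible in the post): once the ACL list is non-empty, slapd appends an implicit "access to * by * none", so with only the userpassword rule defined, nothing else in the tree is readable - which shows up exactly as "result: 32 No such object" on the search base. A sketch of a catch-all rule to add after the existing one:

        ##### Allow authenticated users to read everything else #####
        access to *
            by users read
            by * none

    With that in place, the bound user should at least get the entries back (minus userPassword, which the first rule still protects).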

    Read the article

  • SFTP, ChrootDirectory and multiple users

    - by mdo
    I need a setup where I can put the contents of several user folders onto a DMZ server from which external clients can download them; protocol SFTP, Linux, OpenSSH. To ease administration, we want to use one single user for the upload. What does work is to define ChrootDirectory /home/sftp/ in sshd_config, set the appropriate ownership and modes, and define a home dir in passwd so that the working directory of each user fits. This is my structure:

        /home/sftp/uploader/user1/file1.txt
        /home/sftp/uploader/user2/file2.txt

    The uploader user can write file1.txt and file2.txt to the corresponding folders, and by giving the user folders (user1, user2) the users' primary group plus setting SETGID on the folders, the users are even able to delete the files (which is necessary). Only problem: because /home/sftp/ is the chroot base dir, the users can change up a directory and see other users' folders, though they are not able to change into them because of the access rights. Requirement: we want to prevent users from changing to /home/sftp/uploader/ and seeing other users' folders. My requirements are to use SFTP, have one upload user, and every user must have write access to his home dir. Obviously it's not an option to use something like ChrootDirectory %h, because every path component of the chroot path needs to have limited access rights, so as far as I understand this does not work.
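
    For what it's worth, a per-user chroot sketch that is often used to work around the ownership rule (the group name "sftponly" and the extra "files" subdirectory are assumptions): the chroot root stays root-owned and read-only, and each user gets a writable subdirectory inside it that the uploader also writes into.

        # sshd_config
        Subsystem sftp internal-sftp

        Match Group sftponly
            ChrootDirectory /home/sftp/uploader/%u
            ForceCommand internal-sftp
            AllowTcpForwarding no

        # layout: /home/sftp/uploader/user1        root:root, 755  (chroot root)
        #         /home/sftp/uploader/user1/files  user1:upload, 2775  (writable)

    The users land in their own chroot and can no longer cd up to see the others; the trade-off is that the writable area is a subdirectory rather than the chroot root itself.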

    Read the article

  • XP box with 3 NICs on Server 2003 Domain

    - by Hannibal
    I have assigned all three NICs IP addresses that are outside the DHCP pool. I have two NICs connected to one switch, and the third connected to a second switch. I want to assign one of the two NICs on the first switch to "normal" network activity (e.g. internet access, RDP, etc.). The other two NICs I want to reserve for mirroring ports on their respective switches. While this machine is connected to the domain, I can access the internet and Remote Desktop. I have no idea which NIC I am using until I start mirroring a port, at which time, if I happen to be connected through one of the NICs I have dedicated to mirroring, I lose my Remote Desktop session. I am aware that I would have more control over the NICs using Linux. I want to explore Windows solutions before I go that route, because reinstalling another OS would be inconvenient (but not impossible). I would likely use a version of BackTrack (3 or 4, not sure). I would also have to learn how to access the machine remotely, but I've done it before, so this would be a minor obstacle. Thank you for your assistance.
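
    One way to pin the "normal" traffic to a specific NIC (a sketch; the gateway 192.168.1.254 and interface index 2 are placeholders): give only that NIC a default gateway and bind the default route to its interface index, leaving the two mirror NICs with no gateway at all, since sniffing doesn't need one.

        route print
        rem note the interface index of the "normal" NIC in the interface list
        route -p add 0.0.0.0 mask 0.0.0.0 192.168.1.254 if 2

    RDP and internet traffic then always leave through that NIC, so putting the other two into mirror mode won't drop the session.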

    Read the article

  • How to Deploy an ASP.NET Web API- and Browser-based Application to a Production Environment [closed]

    - by lmttag
    We have an ASP.NET Web API server that serves up a SQL Server data-driven website. The API uses JSON to transfer data from SQL Server to the front end. We need to move it to an internal production environment (nothing will be exposed on the public Internet) and we're having problems - or just not understanding what needs to be done. There are two domains:

    1. The corporate domain - where all users log in normally.
    2. The process domain - contains the database the Web API needs to access.

    The IT staff wants to put a DMZ between the two domains to house the IIS app and shield the users on the corporate domain from having direct access into the process domain. The ideal configuration is:

        corp domain (end users) <-> firewall (open port 80) <-> DMZ (web server running IIS) <-> firewall (open port 80 or 1433????) <-> process domain (IIS for Web API and SQL Server)

    We don't really understand how to deploy our browser/Web API application in this scenario:

    1. Do we need to break up our application so that all the client code is on the IIS server in the DMZ, while the Web API gets installed on the server in the process domain?
    2. Does the entire app (client code and Web API) stay together on the IIS server in the DMZ, which then somehow accesses the SQL Server instance to get data?
    3. From the IIS server and app in the DMZ, would you simply access the Web API on the server in the process domain by going to http://server/appname/api/getitems?
    4. In the second firewall, between the DMZ and the process domain, would you have to open port 1433, or just port 80, since the Web API is an HTTP endpoint?
    5. Or is there some better way of deploying this (i.e., how are ASP.NET Web API single-page applications, written entirely in HTML5 and JavaScript, supposed to be deployed to production environments)?

    NB: The servers are Win2k8 R2, SQL Server 2k8 R2, and IIS 7.5.
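
    One common pattern that matches option 1 (a sketch, not the only way to do it): host the static HTML5/JS client in the DMZ and reverse-proxy just the /api path to the Web API server in the process domain, so the inner firewall only needs port 80 open, never 1433. On IIS 7.5 this can be done with the URL Rewrite module plus Application Request Routing; the server name below is a placeholder.

        <!-- web.config on the DMZ IIS site -->
        <system.webServer>
          <rewrite>
            <rules>
              <rule name="ProxyApi" stopProcessing="true">
                <match url="^api/(.*)" />
                <action type="Rewrite" url="http://processserver/appname/api/{R:1}" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>

    The browser then only ever talks to the DMZ box, and only the Web API box talks to SQL Server inside the process domain.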

    Read the article

  • Tools required for a Web Development Project

    - by RBA
    Hi, I want to build a project on Linux that could involve several programming languages (C, Perl, PHP, HTML, XML, etc.) - basically a web-based project. The reason I've chosen to build on Linux is that it is open source, and a lot can be automated through scripting languages, which I don't know how to do on Windows. So I have installed Linux in a virtual machine (host: Windows 7, guest: CentOS Linux, command-line interface only). Since I am a beginner, I want to know what tools can be used to facilitate and ease my development process. Some that I know of are listed below; please share your experience with this.

    1. Using PuTTY, so that I can access the Linux machine from anywhere within the network.
    2. Since I want to develop on Linux but use Windows as the development platform, I have downloaded the Eclipse editor (C/PHP) on Windows. But how can I access the Linux files from there?
    3. I have installed Samba and am still trying to figure out how to access the Linux files remotely from Windows.
    4. Please share your experience on how I can ease my development process, and what other tools I can use.

    Please let me know if you need any other clarification.
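
    On point 3, a minimal Samba share sketch for exposing a project tree to Windows (the share name, path, and user are assumptions):

        # /etc/samba/smb.conf
        [project]
            path = /home/dev/project
            valid users = dev
            read only = no
            create mask = 0664

        # give the Unix user a Samba password, then restart the smb service
        smbpasswd -a dev

    After that, the tree should be reachable from Windows as \\centos-vm\project and can be mapped as a network drive for Eclipse to work against. Alternatively, Eclipse's Remote System Explorer plugin can edit files straight over SSH, which skips Samba entirely.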

    Read the article

  • Sharepoint Central Administration stuck / high CPU usage

    - by johnnyb10
    I'm using WSS 3 and I recently added a new web application to my SharePoint server. After adding it, I wasn't able to open the Central Administration site, and I also noticed a w3wp.exe error (Event ID 1000) in the Event Viewer. The situation now is that the w3wp.exe process hovers around 50% CPU usage continuously. I installed a program called IIS Peek, and it shows continuous GET requests on the Central Administration site; this happens even if I stop the Central Administration site in IIS. The IP address identified in the GET requests is my workstation's, which is what I used to attempt to access Central Administration after I created the new web application. Can someone explain what's going on and how I might fix it? It seems as if my computer tried to access Central Administration and then hung, but the page requests that were happening at the time are somehow continuing over and over again. So my two problems are the inability to access Central Administration and the CPU usage of w3wp.exe, which I'm assuming are two symptoms of the same problem. I'd like to know if there's anything I can do besides restarting IIS, because we have clients accessing other sites on this server. Thanks.
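
    Since a full iisreset would hit the client-facing sites, one narrower step (a sketch; assumes IIS 7 and the default WSS 3 pool name) is to recycle just the Central Administration application pool, which restarts only that w3wp.exe worker:

        %windir%\system32\inetsrv\appcmd list apppool
        %windir%\system32\inetsrv\appcmd recycle apppool /apppool.name:"SharePoint Central Administration v3"

    Each application pool gets its own w3wp.exe, so the other sites keep their workers while the runaway one is restarted.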

    Read the article

  • How Does EoR Design Work with Multi-tiered Data Center Topology

    - by S.C.
    I just did a ton of reading about the different multi-tier network topology options as outlined by Cisco, and now that I'm looking at the physical options (End of Row (EoR) vs. Top of Rack (ToR)), I find myself confused about how these fit into the logical constructs. With ToR, the physical maps 1:1 onto the logical: at the top of each rack there are switches that essentially act as the access layer; they connect via fiber to other switches, maybe chassis-based, that act as the aggregation layer, which then connect to the core layer. With EoR, it seems that the servers connect directly to the aggregation layer, skipping the access layer altogether, by plugging directly into what are typically chassis switches. With EoR, then, has the standard 3-tier model become a 2-tier model - the servers go to the chassis switch, which goes straight to the core switch? The reason it matters to me is that my understanding was that the 3-tier model is more desirable due to less complexity: the aggregation switch pair acts as default gateway and does routing, and if you use up all of the ports in your aggregation-layer pair, it's much more complicated to add additional switches there than to simply add more switches at the access layer. Are there other downsides to this layout? Does the 3-tier architecture still apply in some way to EoR? Thanks.

    Read the article

  • Mavericks permission issues with Windows Server deduplicated shares

    - by dmohlmaster
    We have a number of 10.9-10.9.3 (Mavericks) machines installed throughout our facility. Much of the user content is pulled from shares stored on our Windows Server 2012 file servers with deduplication enabled. I have found that files that are newly written or not yet optimized can be accessed without issue - read, written, modified, etc. Once a file gets optimized/deduplicated and Windows adds the P and L attributes - sparse and symlink - the Macs running Mavericks begin to have access issues. Once the files get deduplicated, users begin receiving read-access errors when copying files (see Error 1 below). This happens when copying to folders within the current folder tree or copying somewhere on the local system. If you stop the copy operation and retry a few more times, it may eventually work for the specific instance, but it will fail again later. I am, however, able to copy these files without issue via the terminal. Other systems running 10.7 do not experience the same issues and are able to access file server resources without issue. Many of the systems having issues are newer and thus cannot be downgraded to 10.8 or 10.7. I have tried Finder replacements such as Path Finder, but the results are the same. I know this is at least similar to the issues many Mac users are already experiencing and posting about, but I haven't seen it directly linked to deduplication and the attributes written by Windows Server. Has anyone seen this issue? Have any solutions been found?

    Error 1, when copying files after the P and L attributes have been set by deduplication:

        One or more items can't be copied to "Folder" because you don't have permissions to read them.

    Via system.log, I am also seeing the following error when accessing these deduplicated file shares (the reparse point tag listed below is IO_REPARSE_TAG_DEDUP):

        smbfs_nget: filename.ext - unknown reparse point tag 0x80000013
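
    One workaround often suggested for Mavericks SMB trouble generally (whether it applies to dedup reparse points specifically is an assumption): force the connection over the older SMB1 stack, which the working 10.7 clients use, by connecting with a cifs:// URL instead of smb://.

        Finder > Go > Connect to Server...
        cifs://fileserver/share

    Since the terminal copies work and 10.7 clients are fine, comparing how the two mounts negotiate (smbutil statshares -a, available on 10.9) might confirm whether the newer SMB2 code path is what chokes on IO_REPARSE_TAG_DEDUP.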

    Read the article

  • Storing secure keys on Ubuntu web server

    - by Sencha
    I'm running Ubuntu 12.04 Precise with a DUNG (Django, Unix, Nginx & Gunicorn) environment, and my app (as well as various config files) is stored in a Python virtual environment inside /srv, which the www-data user has access to. The nginx and gunicorn processes all run as www-data. My web app requires secure credentials, which I am storing in an environment.sh file. This file contains various exports and is run using source before the gunicorn processes execute. My concern is the location of the environment.sh file and its permissions. Will it be okay to store this file inside the /srv folder, where www-data has access to it? Or should it be stored somewhere else and owned by root, such as /var/myapp/environment.sh? Also, regarding the www-data user: if any of my web processes (which run as www-data) are compromised and someone gains access to them, does that mean the attacker could potentially read any file on the system, even if they can't write - including my secure keys?
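
    A common middle ground (a sketch; the path is just an example): keep the file owned by root but readable by www-data's group, so a compromised worker can read only the one secret file it genuinely needs and nothing can modify it.

        sudo chown root:www-data /srv/myapp/environment.sh
        sudo chmod 640 /srv/myapp/environment.sh   # root rw, www-data group read, others nothing

    On the second worry: a process running as www-data can read exactly what the filesystem permissions let www-data read, no more - so the real control is making sure nothing sensitive elsewhere on the box is world-readable.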

    Read the article

  • Creating security permissions for a non-domain-member user in Windows Server 2008

    - by Overhed
    Hello everyone, I apologize in advance for incorrect use of terminology, as I'm not an IT person by trade. I'm doing some remote work via a VPN for a client, and I need to add some DCOM service security permissions for my remote user. Even though I'm on the VPN, the request for access to the DCOM service is using my PC's native user (and since I'm running Vista Home Premium, it looks something like PC-NAME\Username). The request for access comes back with "access denied", and I cannot add this user to the security permissions, as it "is not from a domain listed in the Select Location dialog box, and is therefore not valid". I'm pretty stuck and have no clue what steps I need to take here. Any help would be appreciated; thanks in advance. EDIT: I have no control over what credentials my computer passes to the server. This scenario occurs in an installation wizard that has a section requesting that you point it to the machine running the "server" version of the software I'm installing (it then tries to invoke the relevant COM service, but my user does not have "Remote Activation" permission on that service, so I get request denied).

    Read the article

  • Accessing a shared folder in Windows Server 2008 R2

    - by Triztian
    Hello all. It seems my involvement with computers has grown, and I've found myself needing to access a shared folder on a server. I've read some documentation and managed to set the folder up as a share. For this I created a local group and, for now, just one local user that has access to the share; the folder is in the public user folder, and its permissions should be (and I believe they are) read/write. The problem is that I can't connect from a remote machine - I mean, I don't know how it should be accessed. The server has a public IP, and we also use it as the host for our website; I don't know if that affects anything. The folder will be used as the keeper of the QuickBooks company files, and the server has the QuickBooks Database Server Manager installed. The server has a domain name, "http://www.example.com", that redirects to our website; I'm unsure if the share could be accessed that way. The share also has a location displayed when I right-click it and choose Properties. Here's what I've tried:

    1. Setting up a VPN connection (Windows Vista and 7): I got to the point where I was asked for credentials and entered the user I created (which is not an admin), but I got a "connection failed, error 800" message. I suppose this is because I entered the server's workgroup in the domain field.
    2. Right-click > "Add a network location" (Windows 7): I went through the wizard until I reached the point of entering the location, and tried many things - the name in the share's properties (\\SOMETHING\Share), http://www.example.com, the IP address.

    I'm quite unfamiliar with this, so I have my guesses: since the group and user are local, they do not have access to the folder; or the firewall on the server is blocking my connection. Anyway, any help and guidance is truly appreciated.
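
    For a first connectivity test (a sketch; the IP and account names are placeholders), mapping the share directly with the server's local credentials from a command prompt is usually the quickest check:

        net use Z: \\203.0.113.10\Share /user:SERVERNAME\shareuser

    Two caveats worth knowing: the credentials must be the server's own local ones (SERVERNAME\user, not the workstation's), and SMB needs TCP port 445 reachable - many ISPs and firewalls block 445 across the internet, which is one reason a working VPN into the server's network is the usual approach for QuickBooks-style file shares.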

    Read the article

  • Lighttpd mod_accesslog not logging fastcgi requests

    - by zepatou
    I have recently installed lighttpd to serve a Python script via mod_fastcgi. Everything works fine, except that the requests handled by mod_fastcgi are not logged in the access.log file (requests on port 80 are logged, though). My lighttpd version is 1.4.28 on Debian 6.0. I used the same configuration on an Ubuntu Server 10.04 with lighttpd 1.4.26, and it worked. Here is my config:

        # lighttpd.conf
        server.modules = (
            "mod_access",
            "mod_alias",
            "mod_accesslog",
            "mod_compress",
        )

        server.document-root  = "/var/www/"
        server.upload-dirs    = ( "/var/cache/lighttpd/uploads" )
        server.errorlog       = "/home/log/lighttpd/error.log"
        index-file.names      = ( "index.php", "index.html", "index.htm",
                                  "default.htm", "index.lighttpd.html" )
        accesslog.filename    = "/home/log/lighttpd/access.log"
        url.access-deny       = ( "~", ".inc" )
        static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" )
        server.pid-file       = "/var/run/lighttpd.pid"

        include_shell "/usr/share/lighttpd/create-mime.assign.pl"
        include_shell "/usr/share/lighttpd/include-conf-enabled.pl"

        # conf-enabled/10-fastcgi.conf
        server.modules += ( "mod_fastcgi" )

        fastcgi.server = ( "/" =>
            ( (
                "min-procs"   => 1,
                "check-local" => "disable",
                "host"        => "127.0.0.1", # local
                "port"        => 3000
            ), )
        )

    Any idea?
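
    One way to narrow this down (a standard lighttpd 1.4 debug option; whether it reveals the cause here is an assumption): turn on request-handling tracing and watch the errorlog while hitting a FastCGI URL, to see whether the request ever reaches the logging stage.

        # add to lighttpd.conf, then restart lighttpd
        debug.log-request-handling = "enable"

    It's also worth checking whether the Debian package ships an accesslog snippet under conf-enabled that overrides accesslog.filename, since the same config behaving differently on 1.4.26 and 1.4.28 suggests a packaging difference rather than a syntax problem.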

    Read the article

  • Configuring Apache with mod_mono for a .NET app

    - by Mystere Man
    I'm having a huge problem getting mod_mono and Apache configured to work correctly. I've had this working at one time, but I can't seem to figure out where I'm going wrong. I'm using mono-server4, and I'm trying to use a separate port from the main website. So in /etc/apache2/sites-available (with a link from sites-enabled) I have a vhost configuration that looks like this:

        <VirtualHost *:9999>
            ServerName XXX
            ServerAdmin web-admin@XXX
            DocumentRoot /var/xxx
            MonoServerPath XXX "/usr/bin/mod-mono-server4"
            MonoDebug XXX true
            MonoSetEnv XXX MONO_IOMAP=all
            MonoApplications XXX "/:/var/xxx"
            <Location "/">
                Allow from all
                Order allow,deny
                MonoSetServerAlias XXX
                SetHandler mono
                SetOutputFilter DEFLATE
                SetEnvIfNoCase Request_URI "\.(?:gif|jpe?g|png)$" no-gzip dont-vary
            </Location>
            <IfModule mod_deflate.c>
                AddOutputFilterByType DEFLATE text/html text/plain text/xml text/javascript
            </IfModule>
        </VirtualHost>

    I used mono-server4-admin to create the application:

        mono-server4-admin --path=/var/xxx --app=/XXX --port=9999

    When I start Apache, it gives the error:

        Syntax error on line 13 of /etc/apache2/sites-enabled/xxx: Server alias 'XXX, not found.

    This corresponds to the MonoSetServerAlias statement. So I commented it out, and when I do that, Apache starts. However, when I try to access the site, I get a 500 error, and the access log indicates that it's trying to access the app on port 80 rather than 9999. I'm not sure what the problem is here. Can anyone help me figure out where I went wrong? My mono-server4-hosts.conf contains this:

        # start /etc/mono-server4/conf.d/RMRSite/10_XXX
        Alias /XXX "/var/xxx"
        AddMonoApplications default "/XXX:/var/xxx"
        <Directory /var/xxx>
            SetHandler mono
            <IfModule mod_dir.c>
                DirectoryIndex index.aspx
            </IfModule>
        </Directory>
        # end /etc/mono-server4/conf.d/XXX/10_XXX

    Also, my /etc/mono-server4/conf.d/XXX/10_XXX contains this:

        # This is the configuration file for the XXX virtualhost
        path = /var/xxx
        alias = /XXX
        vhost = localhost
        port = 9999

    Read the article

  • .htaccess, mod_rewrite Issue

    - by Shoaibi
    What I want:

    1. Force www [works]
    2. Restrict access to .inc.php [works]
    3. Force redirection of abc.php to /abc/
    4. Removal of the extension from the URL
    5. Add a trailing slash if needed

    Old .htaccess:

        Options +FollowSymLinks
        <IfModule mod_rewrite.c>
            RewriteEngine On
            RewriteBase /
            ### Force www
            RewriteCond %{HTTP_HOST} ^example\.net$
            RewriteRule ^(.*)$ http://www\.example\.net/$1 [L,R=301]
            ### Restrict access
            RewriteCond %{REQUEST_URI} ^/(.*)\.inc\.php$ [NC]
            RewriteRule .* - [F,L]
            #### Remove extension:
            RewriteRule ^(.*)/$ /$1.php [L,R=301]
            ######### Trailing slash:
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_URI} !(.*)/$
            RewriteRule ^(.*)$ http://www.example.net/$1/ [R=301,L]
        </IfModule>

    New .htaccess:

        Options +FollowSymLinks
        <IfModule mod_rewrite.c>
            RewriteEngine On
            RewriteBase /
            ### Force www
            RewriteCond %{HTTP_HOST} ^example\.net$
            RewriteRule ^(.*)$ http://www\.example\.net/$1 [L,R=301]
            ### Restrict access
            RewriteCond %{REQUEST_URI} ^/(.*)\.inc\.php$ [NC]
            RewriteRule .* - [F,L]
            #### Remove extension:
            RewriteCond %{REQUEST_FILENAME} \.php$
            RewriteCond %{REQUEST_FILENAME} -f
            RewriteRule (.*)\.php$ /$1/ [L,R=301]
            #### Map pseudo-directory to PHP file
            RewriteCond %{REQUEST_FILENAME}\.php -f
            RewriteRule (.*) /$1.php [L]
            ######### Trailing slash:
            RewriteCond %{REQUEST_FILENAME} -d
            RewriteCond %{REQUEST_FILENAME} !/$
            RewriteRule (.*) $1/ [L,R=301]
        </IfModule>

    Error log:

        Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace., referer: http://www.example.net/

    Rewrite log: http://pastebin.com/x5PKeJHB
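
    The loop comes from the "remove extension" and "map pseudo-directory" pairs feeding each other: /abc/ is rewritten internally to /abc.php, which still matches the .php-to-slash redirect, which goes back to /abc/, and so on. The usual fix (a sketch) is to issue the external redirect only when the client itself literally asked for a .php URL, by matching %{THE_REQUEST} instead of the rewritten filename:

        #### Redirect direct /abc.php requests to /abc/ (real client requests only)
        RewriteCond %{THE_REQUEST} \s/(.+)\.php[\s?]
        RewriteRule ^ /%1/ [R=301,L]

        #### Internally map /abc/ (or /abc) back to abc.php
        RewriteCond %{DOCUMENT_ROOT}/$1.php -f
        RewriteRule ^(.+?)/?$ /$1.php [L]

    Because %{THE_REQUEST} always holds the original request line and is never touched by internal rewrites, the second rule's output can no longer re-trigger the first.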

    Read the article

  • Which virtualization technology is right for me?

    - by Chris
    I need a little help getting this sorted out. I want to set up a Linux virtualization server that I can use to run both server and desktop systems. I want a Linux host that is minimalist in nature, as all the main OS will be doing is acting as a hypervisor. The system I'm trying to set up will run a file server, Windows 7, Ubuntu 10.04, Windows XP, and a firewall/gateway security system, with all the client OSes accessing and storing files on the file server, and all network traffic routed through the gateway guest OS. The file server will need direct disk access, while the other guests can run on disk images. All of this will be running on the same computer, so I won't be remoting in to access the guests. Also, if possible, I would like to be able to use my triple-head setup in the guest OSes. I've looked at Xen, KVM and VirtualBox, but I don't know which is the best for me. I'm really debating between KVM and VirtualBox, as KVM seems to support direct hardware access.
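
    For reference, KVM does cover the direct-disk requirement: a guest can be handed a whole block device instead of an image file. A sketch with virt-install (device paths, sizes and the ISO name are placeholders):

        # file server guest with a raw physical disk passed through
        virt-install --name fileserver --ram 2048 --vcpus 2 \
            --disk path=/dev/sdb,bus=virtio \
            --cdrom /var/isos/ubuntu-10.04-server.iso

    VirtualBox can also do raw disk access (via VBoxManage internalcommands createrawvmdk), but KVM plus libvirt tends to fit the "minimal host that only runs a hypervisor" goal better.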

    Read the article

  • How to change mount to grant user write permissions?

    - by nals
    I am on TomatoUSB, using its feature to run a NAS. The only way I can write to the Samba share is if I force root:

        [global]
            interfaces = 127.0.0.1, 192.168.1.1/24
            bind interfaces only = no
            workgroup = WORKGROUP
            netbios name = TOMATO
            security = share
            wins support = yes
            name resolve order = wins lmhosts hosts bcast
            guest account = nobody

        [Public]
            path = /mnt/sda2
            read only = no
            public = yes
            only guest = yes
            guest ok = yes
            browseable = yes
            comment = Network share
            force user = root
            writeable = yes

    I don't really like the idea of having to use root to allow write access to my share. I have a Samba account named nobody created already to allow access to the share. However, every time I try to write, I get an access denied error. fstab:

        /dev/sda2 /mnt/sda2 vfat defaults 0 0

    Furthermore, every time I try to chmod 777 /tmp/mnt/sda2, the permissions are not changed and no error is produced; they stay 755:

        drwxr-xr-x    2 root     root         4096 Jun  4 01:49 sda2

    Basically: how can I give the user nobody write permission on my mount? Device name: /dev/sda2; mount point: /tmp/mnt/sda2.
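
    The chmod behaving as a silent no-op is the giveaway: VFAT has no Unix permission bits, so ownership and mode are fixed at mount time and chmod cannot change them. A sketch of the usual fix - mount the filesystem with a permissive umask so the guest account can write, then drop force user = root:

        # fstab
        /dev/sda2 /mnt/sda2 vfat umask=0000 0 0

    umask=0000 makes every file and directory on the volume read/write for everyone, which matches how a guest-accessible NAS share is normally used; for tighter control, the uid=/gid= mount options can pin the apparent owner to the nobody account instead.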

    Read the article

  • Netgear router is not resolving hostnames

    - by Thomas Clayson
    Not sure what the problem is, but I am used to being able to access other clients on my LAN via their hostnames. We got a new router from Plusnet (a Netgear WNR1000v3), and now this has stopped working. For instance, if I were to run a web server on one computer with the hostname TomPC, I could usually go into a browser, type http://TomPC, and get the web server's front page. I can still access it by using the machine's LAN IP, e.g. http://192.168.1.1 works fine. I thought it would be a simple option in the router's admin panel - possibly I had to turn on DNS/DHCP or something. But I can't see anything, and my searches on the internet seem to turn up ridiculous solutions involving setting up a dedicated DNS server on my LAN. This is just a small home network - really I just want to be able to access my Raspberry Pi on the network without having to work out its IP address every time. (I know I could just set a static IP address for it, but using its hostname would be much easier.) DHCP is enabled in the LAN settings section, although RIP is disabled (I don't know what the latter is) - I don't know if this has anything to do with it. Many thanks!

    Read the article

  • SATA DVD drive refuses to read movie DVDs

    - by poke
    Hey, I have a problem with my DVD drive (an ASUS DRW-2014L1T with the most current firmware installed) on Windows 7 x64. When I insert a movie DVD and Windows starts to access the drive (for AutoPlay, or when I manually click on the drive icon), my computer hangs in a particular way while trying to read the disc: Explorer stops responding, and several programs won't run or launch only after a horrible delay (like Device Manager). In the end, I can't access the movie and can't even eject the disc (probably because Windows is still trying to access it). To get the disc out of the drive, I then have to reboot (which sometimes doesn't work either) and eject the disc before Windows boots. The BIOS recognizes the drive just fine, and Windows is also able to read data discs (I tried it with some software discs), but it refuses any movies. I have checked the region code in Device Manager, and it is correct. My notebook reads the discs just fine, by the way. I remember having the same problem with an older drive as well, but I don't remember what I did to make it work again (maybe I didn't even fix the problem back then). I do remember, however, that booting with the disc inserted made Windows recognize it - but that doesn't work in this situation either. Do you have any idea how to fix this problem?

    Read the article

  • OpenVPN on port 53 bypasses access restrictions (find similar ports)

    - by user181216
    The Wi-Fi scenario: I'm using Wi-Fi in a hostel that sits behind a Cyberoam firewall, which all the computers using the access point go through. The access point has the following configuration:

        default gateway    : 192.168.100.1
        primary DNS server : 192.168.100.1

    When I try to open a website, the Cyberoam firewall redirects the page to a login page (with correct login information we can browse the internet, otherwise not), and there are also website-access and bandwidth limitations. I once heard about pd-proxy, which finds an open port and tunnels through it (usually UDP 53). Using pd-proxy with UDP port 53, I can browse the internet without logging in - even the bandwidth limit is bypassed! With another piece of software, OpenVPN, connecting to an OpenVPN server through UDP port 53, I can likewise browse the internet without even logging in to the Cyberoam. Both programs use port 53. Now, I have a VPS on which I can install an OpenVPN server and connect through it to browse the internet. I know why this is happening: pinging some website (e.g. google.com) returns its IP address, which means the firewall allows DNS queries without login. But the problem is that a DNS service is already running on the VPS on port 53, and as far as I can tell, 53 is the only port I can use to bypass the limitations, so I cannot run the OpenVPN service on my VPS on port 53. So: how can I scan the Wi-Fi network for permissive ports like 53, so that I can figure out the magic port and start an OpenVPN service on the VPS on that same port? (I want to scan for ports like 53 through which traffic can be tunneled out of the Cyberoam, not scan services running on ports.) Improvements to the question with retags and edits are always welcome. NOTE: All of this is for educational purposes only; I'm curious about network-related knowledge.
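
    What's being asked for is an egress scan, and it can be improvised with a listener on the VPS plus probes from inside the hostel network (a sketch; the tools and interface name are assumptions about what's installed):

        # on the VPS: watch for any UDP probe that makes it through
        tcpdump -ni eth0 udp

        # on the laptop behind the Cyberoam: probe candidate ports
        for p in 53 67 123 500 1194 4500; do
            echo "probe $p" | nc -u -w1 VPS_IP $p
        done

    Any port whose probe shows up in the tcpdump output is unfiltered outbound, and OpenVPN could be bound to that port instead of 53.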

    Read the article

  • Replacing an HD in a Mac OS X 10.6.8 server caused all shares to fail

    - by Cheesus
    I'm hoping someone might have a helpful suggestion about this problem. We have two Mac OS X servers available for file sharing (quad Xeons, 2 GB RAM, both 10.6.8). No. 1 is an Open Directory master with 50+ user accounts; No. 2 has only two local accounts (/Local/Default) and looks to the OD master for all user accounts (/LDAPv3/10.x.x.20/). Both servers have three internal HDs: the boot volume with only the server OS and minimal apps, a DataShare HD (500 GB), and a backup drive (500 GB). After upgrading the DataShare HD in server No. 2 from the smaller internal drive (500 GB) to a larger-capacity (2 TB) drive, users are unable to connect to shares on server No. 2; they get the error "There are no shares available or you are not allowed to access them on the server". The process I followed was to use Carbon Copy Cloner to create an exact copy of the original data drive (it keeps all ownership data, UIDs, permissions, and last-edit dates and times). Everything booted up OK, with no indication of any issues (the paths to the sharepoints look good). Notes from troubleshooting:

    - Server 1 is operating perfectly; all users can access shares, authenticate, etc.
    - I've checked that the SACL (Service Access Control List) settings are OK.
    - On Server 2, in the Server Admin app, I can see all the shares listed OK. The paths seem valid, and I can disable/re-enable the shares with no errors.
    - On Server 2, Workgroup Manager lists all the accounts from the OD master in the LDAP directory view. All seems fine from here.

    Basically, everything looks normal, but no file shares on Server 2 can be accessed by regular users.
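
    Two things worth checking from the command line on Server 2 (a sketch; the volume name is a placeholder), since a disk swap can silently change what the shares point at:

        sudo sharing -l                       # list share points, their paths and enabled protocols
        ls -le /Volumes/DataShare             # POSIX modes plus any ACLs CCC carried over
        sudo vsdbutil -c /Volumes/DataShare   # is ownership enabled on the new volume?

    If the new 2 TB volume mounted under a different name (e.g. "/Volumes/DataShare 1"), the share paths in Server Admin can still "look good" while pointing at the old, now-empty mount point; and if vsdbutil reports ownership disabled, every file appears owned by the logged-in user and AFP/SMB permission checks fail for network accounts (vsdbutil -a /Volumes/DataShare turns it back on).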

    Read the article
