Search Results

Search found 1021 results on 41 pages for 'has and belongs to many'.


  • Increasing Java's heapspace in Tomcat startup script

    - by Ankur
    I want to increase my heap size when using Tomcat. I was told to add the line export CATALINA_OPTS=-Xms16m -Xmx256m; into the startup.sh script. I did so (at the beginning) but got the error export: 24: -Xmx256m: bad variable name. Where am I supposed to add it? Am I doing something else wrong? startup.sh now begins like this:

    export CATALINA_OPTS=-Xms16m -Xmx256m;
    # Better OS/400 detection: see Bugzilla 31132
    os400=false
    darwin=false
    case "`uname`" in
    CYGWIN*) cygwin=true;;
    OS400*) os400=true;;
    Darwin*) darwin=true;;
    esac
    # resolve links - $0 may be a softlink
    PRG="$0"
    while [ -h "$PRG" ] ; do
      ls=`ls -ld "$PRG"`
      link=`expr "$ls" : '.*-> \(.*\)$'`
      if expr "$link" : '/.*' > /dev/null; then
        PRG="$link"
      else
        PRG=`dirname "$PRG"`/"$link"
      fi
    done
    PRGDIR=`dirname "$PRG"`
    EXECUTABLE=catalina.sh
    # Check that target executable exists
    if $os400; then
      # -x will Only work on the os400 if the files are:
      # 1. owned by the user
      # 2. owned by the PRIMARY group of the user
      # this will not work if the user belongs in secondary groups
      eval
    else
      if [ ! -x "$PRGDIR"/"$EXECUTABLE" ]; then
        echo "Cannot find $PRGDIR/$EXECUTABLE"
        echo "This file is needed to run this program"
        exit 1
      fi
    fi
    exec "$PRGDIR"/"$EXECUTABLE" start "$@"
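
    The "bad variable name" error comes from the unquoted space: the shell treats -Xmx256m as a second argument to export. A minimal sketch of one common fix, assuming a standard Tomcat layout, is to quote the value and put it in bin/setenv.sh (a file catalina.sh sources on startup if it exists; check that your Tomcat version supports it) instead of editing startup.sh:

        # $CATALINA_HOME/bin/setenv.sh -- picked up by catalina.sh at startup
        CATALINA_OPTS="-Xms16m -Xmx256m"
        export CATALINA_OPTS

    Quoting the value the same way also works if you prefer to keep the line in startup.sh.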

    Read the article

  • how does a computer know which IP address will route information to the internet? [closed]

    - by JohnMerlino
    Possible Duplicate: How does IPv4 Subnetting Work? For example, I have a computer with a Network Interface Card (NIC), which is an Ethernet card that is connected by Ethernet cables to a router. There is also another computer connected to another port of the router. This is a Belkin router operating over Ethernet in the LAN. When I connect to serverfault.com, it maps to an IP address. My computer now has the task of connecting to that IP address. But my computer itself cannot connect to the serverfault IP address; only the router can. So the task of my computer is to find the IP address associated with the node that will do the routing to the public internet. How does my computer know that a particular IP address in the local network belongs to the router, and is not another computer connected to the network? Is this information configured manually in the operating system itself? Somehow my computer must know that it must send Ethernet frames to the router with the expectation that the router will then send the packet to a public IP. How does it know to send it to the router? Is the router's IP address stored in my computer like a key/value pair, e.g. "router"="192.168.2.6", so that when I enter a public IP address, my computer first knows to connect to 192.168.2.6?
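
    The missing piece is the default route (default gateway): it is handed to the computer by DHCP or configured manually, and ARP then maps that gateway IP to the router's MAC address so Ethernet frames can be addressed to it. A small sketch of how to inspect both tables (output shown is illustrative, reusing the 192.168.2.6 router from the question):

        # show the routing table; the "default" (or 0.0.0.0) entry is the gateway
        netstat -rn
        ip route show     # Linux: prints something like "default via 192.168.2.6 dev eth0"
        # show the ARP cache that maps the gateway IP to the router's MAC address
        arp -a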

    Read the article

  • Transparent proxying in MacOS X 10.6 Snow Leopard (and maybe FreeBSD)

    - by apenwarr
    I'm trying to create a transparent proxy on my MacOS machine in order to port the sshuttle ssh-based transproxy VPN from Linux. I think I almost have it working, but sadly, almost is not 100%. Short version is this. In one window, start something that listens on port 12300: $ while :; do nc -l 12300; done Now enable proxying: # sysctl -w net.inet.ip.forwarding=1 # sysctl -w net.inet.ip.fw.enable=1 # ipfw add 1000 fwd 127.0.0.1,12300 log tcp from any to any And now test it out: $ telnet localhost 9999 # any port number will do # this works; type stuff and you'll see it in the nc window $ telnet google.com 80 # any host/port will do # this *doesn't* work! After the latter experiment, I see lines like this in netstat: $ netstat -tn | grep ^tcp4 tcp4 0 0 66.249.91.104.80 192.168.1.130.61072 SYN_RCVD tcp4 0 0 192.168.1.130.61072 66.249.91.104.80 SYN_SENT The second socket belongs to my telnet program; the first is more suspicious. SYN_RCVD implies that my SYN packet was correctly captured by the firewall and taken in by the kernel, but apparently the SYNACK was never sent back to telnet, because it's still in SYN_SENT. On the other hand, if I kill the nc server, I get this: $ telnet google.com 80 Trying 66.249.81.104... telnet: connect to address 66.249.81.104: Connection refused telnet: Unable to connect to remote host ...which is as expected: my proxy server isn't running, so ipfw redirects my connection to port 12300, which has nobody listening on it, ie. connection refused. My uname says this: $ uname -a Darwin mean.local 10.2.0 Darwin Kernel Version 10.2.0: Tue Nov 3 10:37:10 PST 2009; root:xnu-1486.2.11~1/RELEASE_I386 i386 Does anybody see any different results? (I'm especially interested in Snow Leopard vs Leopard results, as there seem to be some internet rumours that transproxy is broken in Snow Leopard version) Any advice for how to fix?
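
    A couple of hedged diagnostics (not a fix) that may help narrow this down using standard ipfw facilities: check whether the fwd rule's counters increment for the failing connection, and turn on firewall logging while reproducing the problem.

        # per-rule packet/byte counters -- rule 1000 should tick up on each attempt
        ipfw -a list
        # log rule matches to the system log while testing
        sysctl -w net.inet.ip.fw.verbose=1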

    Read the article

  • How to make new file permission inherit from the parent directory?

    - by Wai Yip Tung
    I have a directory called data. I am running a script under the user id 'robot'; robot writes to the data directory and updates files inside. The idea is that data is open for both me and robot to update. So I set up the permissions and owner group like this: drwxrwxr-x 2 me robot-grp 4096 Jun 11 20:50 data where both me and robot belong to 'robot-grp'. I changed the permissions and the owner group recursively, like the parent directory. I regularly upload new files into the data directory using rsync. Unfortunately, newly uploaded files do not inherit the parent directory's permissions as I hoped. Instead they look like this: -rw-r--r-- 1 me users 6 Jun 11 20:50 new-file.txt When robot tries to update new-file.txt, it fails due to lack of file permissions. I'm not sure if setting umask helps; in any case the new files do not really follow it: $ umask -S u=rwx,g=rx,o=rx I'm often confounded by Unix file permissions. Do I even have the right plan? I'm using Debian lenny.
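
    umask only affects files created by your own processes, and rsync normally copies the source side's group and permission bits, which would explain what's happening. One common pattern (a sketch, not verified against this exact setup) is to set the setgid bit on data so new files inherit robot-grp, and tell rsync to force group-writable modes; the paths below are examples:

        chgrp robot-grp data
        chmod 2775 data                       # leading 2 = setgid: new files inherit the group
        rsync -rltpv --chmod=Dg+s,ug+rw,o-w ./newfiles/ me@server:data/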

    Read the article

  • Why is my global security group being filtered out of my logon token?

    - by Jay Michaud
    While investigating the effects of filtered tokens on my file permissions, I noticed that one of my global security groups is being filtered in addition to the regular system-defined filtered groups. My Active Directory environment is a single-domain forest on the Windows Server 2003 functional level. I'll call the domain "mydomain.example.com". I am logged onto a Windows Server 2008 Enterprise Edition machine (not a domain controller) as a member of the "MYDOMAIN\Domain Admins" group and the "MYDOMAIN\MySecurityGroup" global security group (among others). When I run "whoami /groups" from an elevated command prompt, I see the full list of groups to which my account belongs as expected. When I run "whoami /groups" from a regular, non-elevated command prompt, I see the same list of groups, but the following groups are described as "Group used for deny only". BUILTIN\Administrators MYDOMAIN\Schema Admins MYDOMAIN\Offer Remote Assistance Helpers MYDOMAIN\MySecurityGroup Numbers 1 through 3 above are expected based on Microsoft documentation; number 4 is not. The "MYDOMAIN\MySecurityGroup" global security group is a group that I created. It contains three non-built-in global security groups, and these security groups contain only non-built-in user accounts. (That is, I created all of the accounts and groups that are members of the "MYDOMAIN\MySecurityGroup" global security group.) There are other, similar groups of which my account is a member that are not being filtered out of my logon token, and this group is not granted any specific user rights in the security settings of this computer or in Group Policy. What would cause this one group to be filtered out of my logon token?
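
    When Windows builds the filtered (non-elevated) token it marks groups it regards as administrative as "deny only", so one thing worth ruling out (purely a diagnostic sketch, not a confirmed explanation for this case) is whether MySecurityGroup ends up nested, directly or indirectly, inside one of those privileged groups:

        rem expand the group's own memberships from a machine with the AD admin tools
        dsquery group -name "MySecurityGroup" | dsget group -memberof -expand
        rem compare against the deny-only entries in the non-elevated token
        whoami /groups | findstr /i "deny"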

    Read the article

  • Specify default group and permissions for new files in a certain directory

    - by mislav
    I have a certain directory in which there is a project shared by multiple users. These users use SSH to gain access to this directory and modify/create files. This project should only be writeable by a certain group of users: let's call it "mygroup". During an SSH session, all files/directories created by the current user should by default be owned by group "mygroup" and have group-writeable permissions. I can solve the permissions problem with umask: $ cd project $ umask 002 $ touch test.txt File "test.txt" is now group-writeable, but still belongs to my default group ("mislav", same as my username) and not to "mygroup". I can chgrp recursively to set the desired group, but I wanted to know whether there is a way to set some group implicitly, the way umask changes default permissions during a session. This specific directory is a shared git repo with a working copy, and I want git checkout and git reset operations to set the correct mask and group for new files created in the working copy. The OS is Ubuntu Linux. Update: a colleague suggests I should look into getfacl/setfacl of POSIX ACLs, but the solution below combined with umask 002 in the current session is good enough for me and is much simpler.
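
    A sketch of the usual approach on Linux, which complements umask: the setgid bit on a directory makes files created inside it inherit that directory's group, and git has its own knob for the files it creates in the repository (the group name and path come from the question; the rest is an assumption about the setup):

        chgrp -R mygroup project
        find project -type d -exec chmod g+s {} \;    # setgid on every directory
        cd project && git config core.sharedRepository group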

    Read the article

  • Copying compressed files from Server 2008 R2 network share to XP client via VPN fails

    - by Dejan Janjuševic
    At first sight the question looks similar to this one. I have experienced an odd behavior while trying to copy a certain file from a Windows Server 2008 R2 network share to a Windows XP Professional client via VPN. The VPN was set up using RRAS on the server machine. I will try to provide as much information as possible in order to make the issue more clear. When trying to copy the compressed file, sized ~2.5 MB (via Explorer or CMD, doesn't matter), the process stalls after some 20%, producing an error message after a few seconds: Cannot copy filename: The specified network name is no longer available. If I start the command ping -t 192.168.2.1 (where the IP address specified belongs to the server) side by side with the copy command, I can clearly see that the ping command times out for a few seconds as the copy process stalls. When this happens all network activity is frozen. After a few seconds, the network recovers and ping continues to run normally; however, the copy process stands still before it displays the above error message. Copying other files (I tried 4-5 files), of which some are larger and some are smaller, succeeds. It seems to me that I can copy all uncompressed files. As soon as I try to copy an archive, the process freezes. Even a 707 KB archive can't be copied. I can only reproduce this behavior on 2 machines, both Windows XP Professional, one with SP2 and the other with SP3. Other XP clients don't have this problem, and neither do Windows 7 clients. If I connect to the server using Remote Desktop Connection without using VPN from either of these 2 machines (using the same user account), I can copy anything I want normally, even these "problematic" files. Does anyone have any clue about what could possibly be going on?
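
    Not a confirmed diagnosis, but a frequent culprit for transfers that stall and then fail with "the specified network name is no longer available" over a VPN is MTU/fragmentation inside the tunnel; already-compressed archives cannot be squeezed by link compression, so they are the files most likely to hit full-size packets. A quick hedged test from one of the affected XP clients:

        rem find the largest payload that passes with the Don't Fragment bit set
        ping -f -l 1400 192.168.2.1
        ping -f -l 1300 192.168.2.1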

    Read the article

  • "Delivered-To" Header in Exchange

    - by Kaii
    In some SMTP server implementations (e.g. Postfix) you can enable Delivered-To and X-Original-To (or [X-]Envelope-To) headers that will be added to your email. This is very helpful with distribution lists to determine which e-mail address the mail has been redirected to. So, when the mail has been sent to [email protected], you can see in the Delivered-To or Envelope-To header that it has been redirected (distributed) to [email protected], which is one of many other e-mail addresses that are linked to a single mailbox. How do I find which address was used to deliver this mail to a specific mailbox on Microsoft Exchange 2010? Looking at the plain message (with all headers) I cannot find any information that the mail arrived via the address [email protected] I think I need the Delivered-To header (or a similar one) to be set on Microsoft Exchange when a mail is delivered via distribution lists. Is there any way to enable such a header in Exchange 2010? I need it so that our ticket system (OTRS) correctly recognizes where the ticket belongs. Adding all the e-mail addresses of all distribution lists to the system configuration is not the right solution. And if there is a solution for Exchange 2010, is it possibly also applicable to Exchange 2007?
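
    If Exchange will not stamp such a header by itself, one hedged workaround on Exchange 2010 is a transport rule that adds a custom header to anything addressed to the distribution group, which OTRS can then match on. The cmdlet and parameters below are from the Exchange 2010 transport-rule set as best I recall them (verify with Get-Help New-TransportRule), and the addresses/header name are examples:

        New-TransportRule -Name "Stamp original DL address" `
            -SentToMemberOf "helpdesk-dl@example.com" `
            -SetHeaderName "X-Original-To" `
            -SetHeaderValue "helpdesk-dl@example.com"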

    Read the article

  • how to prevent other computers from seeing our network computers through vpn

    - by Disco
    We have a local office domain consisting of Windows 7 and XP machines that is running on Windows Server 2008 R2. We also have users that connect via VPN into our network. My concern is that when a remote user opens up a folder, the Network section on the left side of the folder shows the remote user all the computer names in our local network. I would like to go about renaming our computers in the local network with more descriptive computer names, but I do not want the users off-site to be able to see these computer names by simply opening up a folder. (Granted, they can already do this, but our current naming scheme does not link computer names to users.) I would like to change our computer names so we can determine which computer belongs to which user more easily IF it can be done securely. How can I ensure that our local computer names are not showing up in the Network folder for remote, VPN-connected users? My online searches have turned up results where people are advised to turn off Network Sharing and Discovery, but that seems to only ensure that the local machine doesn't see other computer names. I want to prevent OUR computer names from showing up on OTHER computers, and I can't go into the VPN-connected computers and turn off THEIR Network Discovery settings. I would think there is a group policy that would control this but I have not found one yet and I don't know how I would apply it to VPN-connected computers. Thanks! EDIT: That's true, a Group Policy wouldn't run on users only connecting via VPN, good point. What about a VPN/router policy, then?

    Read the article

  • Outlook conversation view and categories

    - by Greg Jackson
    At work, I tend to receive a couple of hundred emails a day. To keep from being overwhelmed, I have been using categories to sort and prioritize my mail messages. I auto-assign categories, then group by them: Code Reviews, To, CC, Distribution List/BCC. This means that, for example, a message that's explicitly to me will always show up higher in my inbox than one I get because I'm on a Distribution List. It's a huge time saver and it brings important emails to my attention much more quickly. Recently, the email threads I'm involved in have started to get quite long, and I'd like to be able to use conversation view, or at least sort by subject. Outlook, however, doesn't seem to support any (useful) combination of conversation view and categories. I've tried the following things without success: Grouping by category, then conversation view -- Outlook gives me an error (the grouping/sort combination is too complex). Using a custom view to group by conversation -- category doesn't show up as an option to sort by Grouping by category, then subject -- Getting closer, but the top subject is the first alphabetically, not the most recent Grouping by conversation, then category -- This works, but it doesn't do me much good, because the top conversation is the latest, without regard to what category it belongs to Is there a way for me to retain my category system or something similar while taking advantage of grouping related emails together? I've written Outlook plugins in the past, so even that's not too out there to serve as a proper solution.

    Read the article

  • Sending mail through local MTA while domain MX records point to Google Apps

    - by Assaf
    My domain's email is managed by Google Apps, so that domain users get Gmail and Calendar, etc. But I also want to be able to send applicative notifications to users outside the domain via email (e.g. "someone commented on your post", and so on). However, if I try to send email through code I get blocked by Gmail after a few emails. I send marketing email through MailChimp, to minimize the risk of appearing as spam to my users (one-click unsubscribe, etc.). But I can't send applicative messages this way. I want to install a local MTA (my server runs Ubuntu), but I'm not sure what anti-spam measures I need to implement so that receiving MTAs don't think it's a spam server. What's stopping anyone from setting up a mail server and sending emails using my domain name? AFAIK it's the DNS records that show the MTA's address actually belongs to the domain. But my understanding of this is rather superficial, so someone please correct me if I'm wrong. What sort of DNS configuration do I need to put in place so that I don't get blacklisted (assuming I don't actually spam anyone)? The MX records already point to Google, and I'd like to keep it this way. So do I just need to define an A record for my internal mail server? Should it show email as coming from a subdomain, so as not to conflict with the bare domain being managed by Google? Edit: Does the following SPF record make sense if I want email from my domain name to be sent by either Google's servers or any server with a DNS name ending in mydomain.com? "v=spf1 ptr mx:google.com mx:googlemail.com ~all" How should I set up reverse DNS for my server? If I have an A record that points mailsender.mydomain.com to my MTA's IP address, does it mean that reverse lookup will only allow emails sent from [email protected]?
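
    Google's own guidance is to authorize its outbound servers with include:_spf.google.com rather than ptr/mx mechanisms. A sketch of the DNS side of the setup described, with example names and an example IP standing in for the real ones:

        ; zone file sketch -- example.com / 203.0.113.10 are placeholders
        mailsender.example.com.   IN A     203.0.113.10
        example.com.              IN TXT   "v=spf1 include:_spf.google.com ip4:203.0.113.10 ~all"
        ; reverse DNS: ask whoever delegates the IP block for a PTR record
        ; mapping 203.0.113.10 back to mailsender.example.com

    Reverse DNS only ties the IP to a hostname; it does not restrict which From addresses the server may use.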

    Read the article

  • How to connect to Oracle DB via ODBC

    - by Mat
    I am attempting to connect to a remote Oracle DB via ODBC. I am totally inexperienced and fail to connect.
    What I have installed: the Oracle 'ODBC Driver for RDB', and the program I want to connect from (Altova MapForce, an ETL tool).
    What I do: Under Administrative Tools I open the Windows "ODBC Data Source Administrator", click 'Add...' and select the Oracle ODBC driver. The window 'Oracle RDB Driver Setup' opens. I fill in: Data source name: free choice; Description: I leave blank; Transport: I choose TCP/IP; Server: I input the IP address of the server; Service: I leave 'generic'; UserID: I enter the user name (that belongs to the password I have); Attach Statement: no idea what to do here?? Upon choosing 'OK', the 'Oracle RDB ODBC Driver Connect' dialog opens and I am prompted for the password. I enter the password and the connection fails.
    Questions: Do I need further programs on my computer, e.g. the Oracle client or Instant Client? I am never prompted for the port of the server - isn't this relevant? I am never prompted for the SID - isn't this relevant? I connected from SQL Developer easily - it prompted only for the server IP, port, username, password and SID.
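
    Worth noting as an assumption about the setup rather than a certainty: "ODBC Driver for RDB" targets Oracle Rdb (the former DEC database), not Oracle Database, which would explain Rdb-specific fields like "Attach Statement". The usual route for Oracle Database is Oracle Instant Client plus its ODBC package, with a DSN pointing at a TNS alias such as this sketch (host, port and service name should be the ones that worked in SQL Developer; the values shown are examples):

        # tnsnames.ora sketch
        MYDB =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCP)(HOST = 10.0.0.5)(PORT = 1521))
            (CONNECT_DATA = (SERVICE_NAME = ORCL))
          )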

    Read the article

  • Transparent proxying leaves sockets with SYN_RCVD in MacOS X 10.6 Snow Leopard (and maybe FreeBSD)

    - by apenwarr
    I'm trying to create a transparent proxy on my MacOS machine in order to port the sshuttle ssh-based transproxy VPN from Linux. I think I almost have it working, but sadly, almost is not 100%. Short version is this. In one window, start something that listens on port 12300: $ while :; do nc -l 12300; done Now enable proxying: # sysctl -w net.inet.ip.forwarding=1 # sysctl -w net.inet.ip.fw.enable=1 # ipfw add 1000 fwd 127.0.0.1,12300 log tcp from any to any And now test it out: $ telnet localhost 9999 # any port number will do # this works; type stuff and you'll see it in the nc window $ telnet google.com 80 # any host/port will do # this *doesn't* work! After the latter experiment, I see lines like this in netstat: $ netstat -tn | grep ^tcp4 tcp4 0 0 66.249.91.104.80 192.168.1.130.61072 SYN_RCVD tcp4 0 0 192.168.1.130.61072 66.249.91.104.80 SYN_SENT The second socket belongs to my telnet program; the first is more suspicious. SYN_RCVD implies that my SYN packet was correctly captured by the firewall and taken in by the kernel, but apparently the SYNACK was never sent back to telnet, because it's still in SYN_SENT. On the other hand, if I kill the nc server, I get this: $ telnet google.com 80 Trying 66.249.81.104... telnet: connect to address 66.249.81.104: Connection refused telnet: Unable to connect to remote host ...which is as expected: my proxy server isn't running, so ipfw redirects my connection to port 12300, which has nobody listening on it, ie. connection refused. My uname says this: $ uname -a Darwin mean.local 10.2.0 Darwin Kernel Version 10.2.0: Tue Nov 3 10:37:10 PST 2009; root:xnu-1486.2.11~1/RELEASE_I386 i386 Does anybody see any different results? (I'm especially interested in Snow Leopard vs Leopard results, as there seem to be some internet rumours that transproxy is broken in Snow Leopard version) Any advice for how to fix?

    Read the article

  • Install VirtualBox on Ubuntu 12.04.1 (on [Samsung] Chromebook)

    - by iphonedev7
    I have dual-booted Ubuntu Linux 12.04.1 LTS on my Samsung Series 5 Chromebook, and am trying to run/install Oracle VirtualBox (from the generic .run file downloaded from their website). However, every time I try to run it (as root from the command line), the following error occurs: Please install the build and header files for your current Linux kernel. The current kernel version is 3.4.0 Problems were found which would prevent VirtualBox from installing. I have tried the version from the Software Center, as well as the command-line installation, both of which gave me errors based on my linux-headers/linux-kernel/linux-[kernel]-image. Here's an error I keep getting (on the command line): First Installation: checking all kernels... It is likely that 3.4.0 belongs to a chroot's host Building only for 3.5.0-18-generic Building initial module for 3.5.0-18-generic ERROR (dkms apport): kernel package linux-headers-3.5.0-18-generic is not supported Error! Bad return status for module build on kernel: 3.5.0-18-generic (x86_64) Consult /var/lib/dkms/virtualbox/4.1.12/build/make.log for more information. Setting up virtualbox-qt (4.1.12-dfsg-2ubuntu0.2) ... Processing triggers for libc-bin ... ldconfig deferred processing now taking place ...And one of the more cryptic errors I get when trying to start any Virtual Machine: Result Code: NS_ERROR_FAILURE (0x80004005) Component: Machine Interface: IMachine {5eaa9319-62fc-4b0a-843c-0cb1940f8a91}
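
    Both failures point at missing kernel headers for the running kernel. A hedged sketch of the usual Ubuntu repair, which only works if a headers package matching uname -r actually exists (it may not for a Chromebook kernel built outside the Ubuntu archive):

        sudo apt-get install build-essential dkms linux-headers-$(uname -r)
        sudo dpkg-reconfigure virtualbox-dkms     # rebuild the kernel modules for the repo package
        sudo modprobe vboxdrv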

    Read the article

  • Oracle: Getting ORA-01195 and ORA-01110 when attempting resetlogs

    - by MacAnthony
    I am trying to get our database to start up. When I log in to sqlplus and do a startup, I get the message: Total System Global Area 534462464 bytes Fixed Size 2215064 bytes Variable Size 331350888 bytes Database Buffers 192937984 bytes Redo Buffers 7958528 bytes Database mounted. ORA-01589: must use RESETLOGS or NORESETLOGS option for database open So I do a shutdown, startup mount (which works fine) and then run: SQL> alter database recover using backup controlfile until cancel; alter database recover using backup controlfile until cancel * ERROR at line 1: ORA-00283: recovery session canceled due to errors ORA-19909: datafile 1 belongs to an orphan incarnation ORA-01110: data file 1: '/<path>/system01.dbf' SQL> alter database open resetlogs; alter database open resetlogs * ERROR at line 1: ORA-01195: online backup of file 1 needs more recovery to be consistent ORA-01110: data file 1: '/<path>/system01.dbf' I know I've used instructions to get me past this error before, but I seem to be having trouble tracking them down. A bit of history: We wanted to refresh the data in this instance from another db, so we attempted to do an expdp/impdp into this instance. The impdp did not complete correctly, got an end-of-file error message and hung (I still have the message in a log if it's important). Since the instance would start at this point, we decided to use the hotbackup process we have to restore the db. The hotbackups are from another server/instance. We went through the same process 2 weeks ago. Recreating the control file is where we ran into the issue above.
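
    ORA-19909 generally means the restored control file and the datafiles disagree about which database incarnation they belong to. A hedged place to start looking, as a diagnostic sketch rather than a recovery procedure (the incarnation key shown is only an example; use whatever LIST reports):

        RMAN> LIST INCARNATION OF DATABASE;
        RMAN> RESET DATABASE TO INCARNATION 2;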

    Read the article

  • Difference between CurrentClockSpeed and MaxClockSpeed

    - by Ben
    Rationale this belongs on ServerFault rather than StackOverflow - I already have my program which gets the value, I am querying the value returned and what it means. I have an in-house program which audits our company PCs, and one of the things it checks is the speed of the processor. To do this, it queries the Win32_Processor WMI class and gets the value of CurrentClockSpeed. We were playing with the data today and found an anomaly with some of the speeds being reported incorrectly (for example, CurrentClockSpeed said 1.0GHz, whereas the CPU name said Intel(R) Core(TM)2 CPU T5600 @ 1.83GHz [Confirmed it is in fact 1.83GHz]). I did a bit of digging on the internet and found this blog post which might explain what is going on. My initial thought was that I could change the program to instead get the value for MaxClockSpeed instead of CurrentClockSpeed, but Microsoft's documentation doesn't clearly define what this will return. What I mean by that is will this return a value which is its actual maximum speed (say if it were overclocked) but which it would not normally be running at, or would it return what I expect, which is its maximum speed under normal (not overclocked) conditions?
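
    For a quick side-by-side of both properties against the rated speed embedded in the Name string, something like the following works from a command prompt. In practice MaxClockSpeed tends to report the nominal rated speed while CurrentClockSpeed can be lowered by SpeedStep/power management, but treat that as a rule of thumb rather than a documented guarantee:

        wmic cpu get Name,CurrentClockSpeed,MaxClockSpeed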

    Read the article

  • Tomato/DD-WRT router to act as switch & only NAT some port

    - by fseto
    BACKGROUND: I have a device that must use a real IP address. Currently, my ISP uses DHCP and I can have up to 4 real IP addresses assigned. However, the cable modem only has 1 Ethernet port and it's connected to my router (running Tomato, but it can run DD-WRT or other OpenWrt-based firmware if required). The question stems from how I can connect the additional device that requires a real IP. EASY SOLUTION: get a switch and connect it to the CM, router, and device. But alas, I want to avoid this route, since my wiring cabinet in my home is drawing lots of power and heat already, the device would be unprotected by any firewall, and I would be unable to monitor the traffic to/from the device. Besides, what would be the FUN in that? =) IDEA: So what I want to do is configure the router so that one of the switch ports is removed from the normal br0 bridge. Instead, I want to make it behave like a switch on the WAN port. What's the best way of doing this? Should I create another bridge on the WAN & the device port? Can a single port belong to two bridges? Or would I need to create a subinterface first? Would I need a DHCP relay? Am I expecting too much from my poor cheapie router?

    +------+
    | CM   |
    +--++--+
       ||
    +----WAN---------------+
    |    /        \ Router |
    |  BR1?       BR0      |
    |   |           \      |
    |   |          {NAT}   |
    |   |          /  |  \ |
    +-P0----P1-P2-P3-Wifi--+
       |
    +------+
    |Device|
    +------+
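
    Conceptually the idea is "take one switch port out of br0 and bridge it straight to the WAN side". A very rough sketch of what that looks like with bridge-utils on Tomato/DD-WRT-class firmware; the interface names are pure placeholders, since the real VLAN-to-port mapping is model-specific and usually needs robocfg/nvram changes to survive a reboot:

        brctl addbr br1
        brctl delif br0 vlan3     # placeholder: the VLAN carrying the dedicated LAN port
        brctl addif br1 vlan3
        brctl addif br1 vlan2     # placeholder: the WAN-side VLAN, bridged straight through
        ifconfig br1 up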

    Read the article

  • VLAN ACLs and when to go Layer 3

    - by wuckachucka
    I want to: a) segment several departments into VLANs with the hopes of restricting access between them completely (Sales never needs to talk to Support's workstations or printers and vice-versa) or b) certain IP addresses and TCP/UDP ports across VLANS -- i.e. permitting the Sales VLAN to access the CRM Web Server in the Server VLAN on port 443 only. Port-wise, I'll need a 48-port switch and another 24-port switch to go with the two existing 24-port Layer 2 switches (Linksys); I'm looking at going with D-Links or HP Procurves as Cisco is out of our price range. Question #1: From what I understand (and please correct me if I'm wrong), if the Servers (VLAN10) and Sales (VLAN20) are all on the same 48-port switch (or two stacked 24-port switches), afaik, the switch "knows" what VLANs and ports each device belongs to and will switch packets between them; I can also apply ACLs to restrict access between VLANs at this point. Is this correct? Question #2: Now lets say that Support (VLAN30) is on a different switch (one of the Linksys) switches. I'm assuming I'll need to trunk (tag) switch #2's VLANs across to switch #1, so switch #1 sees switch #2's VLAN30 (and vice-versa). Once Switch #1 can "see" VLAN30, I'm assuming I can then apply ACLs as stated in Question #1. Is this correct? Question #3: Once Switch #1 can see all the VLANs, can I achieve the seemingly "Layer 3" ACL filtering of restricting access to Server VLAN on only certain TCP/UDP ports and IP addresses (say, only permitting 3389 to the Terminal Server, 192.168.10.4/32). I say "seemingly" because some of the Layer 2 switches mention the ability to restrict ports and IP addresses through the ACLs; I (perhaps mistakenly) thought that in order to have Layer 3 ACLs (packet filtering), I'd need to have at least one Layer 3 switch acting as a core router. If my assumptions are incorrect, at which point do you need a Layer 3 switch for inter-VLAN routing vs. inter-VLAN switching? Is it generally only when you need that higher-level packet filtering ability between your departments?
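
    To make the Question #3 part concrete, the desired filtering looks roughly like this in IOS-style syntax (illustrative only: ProCurve/D-Link ACL syntax differs, and the subnets/host address are examples); whichever box applies it must be the one doing the inter-VLAN routing:

        ip access-list extended SALES-TO-SERVERS
         permit tcp 192.168.20.0 0.0.0.255 host 192.168.10.4 eq 443
         permit tcp 192.168.20.0 0.0.0.255 host 192.168.10.4 eq 3389
         deny   ip  192.168.20.0 0.0.0.255 192.168.10.0 0.0.0.255
         permit ip any any
        !
        interface Vlan20
         ip access-group SALES-TO-SERVERS in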

    Read the article

  • ssh many users to one home

    - by filippo
    Hiya, I want to allow some trusted users to scp files into my server (to a specific user), but I do not want to give these users a home of their own, nor an ssh login. I'm having problems understanding the correct settings of users/groups I have to create to allow this to happen. I will put an example. Having: MyUser@MyServer MyUser belongs to the group MyGroup MyUser's home will be, let's say, /home/MyUser SFTPGuy1@OtherBox1 SFTPGuy2@OtherBox2 They give me their id_dsa.pub's and I add them to my authorized_keys I reckon then, I'd do on my server something like useradd -d /home/MyUser -s /bin/false SFTPGuy1 (and the same for the other..) And for the last, useradd -G MyGroup SFTPGuy1 (then again, for the other guy) I'd expect then the SFTPGuys to be able to sftp -o IdentityFile=id_dsa MyServer and to be taken to MyUser's home... Well, this is not the case... SFTP just keeps asking me for a password. Could someone point out what I am missing? Thanks a mil, f. [EDIT: Messa on StackOverflow asked me if the authorized_keys file was readable to the other users (members of MyGroup). It's an interesting point, and this was my answer: Well, it wasn't (it was 700), but then I changed the permissions of the .ssh dir and the auth file to 750, though still no effect. Guess it's worth mentioning that my home dir (/home/MyUser) is also readable for the group; most dirs being 750 and the specific folder where they'd drop files is 770. Nevertheless, about the auth file, I reckon the authentication would be performed by the local user on MyServer, wouldn't it? If so, I don't understand the need for other users to read it... well.. just wondering.]
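
    Two hedged observations: a /bin/false shell blocks scp/sftp as well, and sshd's StrictModes will usually refuse an authorized_keys file sitting in a home directory owned by a different user, either of which could explain the password prompts. One common pattern for "key-only file drop, no shell" with reasonably recent OpenSSH is sketched below; the directives are real sshd_config options, but the paths and group name are examples:

        # /etc/ssh/sshd_config
        Subsystem sftp internal-sftp        # replaces any existing sftp-server Subsystem line
        Match Group MyGroup
            ForceCommand internal-sftp
            ChrootDirectory /srv/dropbox    # must be owned by root and not group-writable
            AllowTcpForwarding no
            X11Forwarding no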

    Read the article

  • How to play individual albums in iTunes?

    - by Herb Caudill
    I know of two ways to play one specific album in iTunes: Do a search that's specific enough to include just that album and no other tracks; press a "Play album" button. (Doesn't work in cover flow or list view.) Go to list view; turn on column browser; in View/Column Browser, make sure "Albums" is showing; double-click an album name. These are fine as far as they go, but: Double-clicking an album in cover flow will play the album, and then keep going (in alphabetical order). That's no good. In playlists like "Purchased" or "Recently Added", you can either view and play whole albums, or sort by date added; you can't do both. In general, there's no straightforward way to get from a track in a playlist to the whole album it belongs to. What I would really, really like, would be to right-click on any song or album cover, anywhere, and choose "Play album". While I'm waiting for Apple to add that, any tips for simple album-centric listening?

    Read the article

  • Xen Bridge only working when IP Assigned

    - by m.sr
    Hey! I just had an (in my view) obscure situation. I have a Xen server with bridged networking. Everything has worked fine for months. A while ago I configured a second bridge. Only some DomUs get a channel on this bridge - my Dom0 doesn't need to / shouldn't use this bridge. So just 5 minutes ago, while rebooting the Xen host (because of another problem with the UPS), I decided to remove the fixed IP from the interface of the Dom0 which belongs to the second bridge. After the reboot I noticed that none of the interfaces on the second bridge was available. I couldn't find a problem. Everything was just like before the reboot, except the interface of the Dom0 had no IP address. After a while I tried to give the Dom0 interface of this bridge an IP again and ... BOOM ... everything is up and running again! WTF? Why is it important to have the interface of a bridge configured in the Dom0? Even when configured 'wrong' (completely different network settings from the network really hanging on the bridge) everything works fine ... I don't get it. Could someone please explain? Thanks a lot!
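
    A hedged guess at what changed: with no address configured, nothing brings the bridge (or its member NIC) up at boot, so assigning any IP "fixes" it simply because ifup then raises the interfaces. On a Debian-style Dom0 the usual way to have an IP-less bridge come up anyway looks roughly like this (interface names are examples):

        # /etc/network/interfaces
        auto xenbr1
        iface xenbr1 inet manual
            bridge_ports eth1
            bridge_stp off
            bridge_fd 0
            up ip link set xenbr1 up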

    Read the article

  • Router reporting failed admin login attempts from home server

    - by jeffora
    I recently noticed in the logs of my home router that it relatively regularly lists the following entry: [admin login failure] from source 192.168.0.160, Monday, June 20, 2011 18:13:25 192.168.0.160 is the internal address of my home server, running Windows Home Server 2011. Is there any way I can find out what specifically is trying to log in to the router? Or is there some explanation for this behaviour? (not sure if this belongs here or on superuser...) [Update] I've run both Wireshark and netmon for a while on my home server. Wireshark captured the traffic, but didn't really show anything useful (or nothing I could make use of). A simple HTTP GET request is sent from the server (192.168.0.160) to the router (192.168.0.1), from a seemingly random port (I've seen examples from 50068, 52883), and it appears to do it twice in quick succession (incrementing the port by 1), about every hour. Running netstat around the time of the failure didn't show anything (probably too long after anyway). I tried using netmon as it categorises by process, so I thought it might show a corresponding process for the port. Unfortunately, this comes in under the 'unknown' category, meaning it's basically just a slower, less useful Wireshark. I know there's not much to go on here, but does this help in any way?
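
    One way to catch the culprit in the act from the server itself is to log the owning process of every connection around the time the failures appear; a sketch, saved as a small .cmd file and run from an elevated prompt (the interval and path are examples):

        rem log-router-connections.cmd -- run elevated on the Home Server
        :loop
        echo %date% %time% >> C:\temp\router-connections.log
        netstat -anob >> C:\temp\router-connections.log
        timeout /t 30 >nul
        goto loop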

    Read the article

  • Microsoft Office 2003 applications crash on 'Save As' to a network mapped drive

    - by Archit Baweja
    Hey guys, so I'm not sure if this belongs on the ServerFault forums, so I figured I'd ask here first because it's a workstation/client-side issue. I have a client where we have Windows Server 2003 set up, with Windows XP Professional set up on all the workstations. We've set up a 'domain' and all workstations log on to the domain (authenticated by the Windows Domain Controller), and in the logon script we map drives onto each workstation. Everything is working peachy except for one workstation, where when I open a file in Excel from a mapped drive, it opens fine, but when I go to hit Save As, the Save As dialog pops up and hangs. I cannot perform any other action in Excel. When I try to cancel the Save As dialog, Excel crashes. The mapped drive opens up fine in Windows Explorer. To further investigate this issue, I created a new blank text document on the network drive in Windows Explorer. I then opened it, hit Save As, and the Save As dialog opened up fine and let me save the document. I repeated the above steps for a Word document. However, this time the Save As dialog hung/froze again. So I'd imagine it's a Microsoft Office issue. Any ideas?

    Read the article

  • Using the same Windows 8 Upgrade installer on multiple PCs

    - by Karan
    As per this article: You may transfer the software to another computer that belongs to you. … You may not transfer the software to share licenses between computers. But what if I have a bunch of PCs with a mix of XP/Vista/Windows 7? Can I purchase either the Windows 8 Pro Upgrade $40 (download only) or $70 (DVD) version (both of which come without a key) only once and use it to upgrade all the PCs? Since I'm not sharing the license and each PC has its own valid genuine license, it should be allowed, right, or is it illegal? Even if they want people to shell out $40/$70 for each PC, how would they enforce the use of the installer/media on only one PC each? EDIT: I have been given to believe by a source that the installer will only check for the previous OS' key, which is what is confusing me (I have never purchased an upgrade version before this, only full retail or pre-installed versions). Is this true or will I need to enter two keys to make the upgrade work, one for the previous version and then one for Windows 8? If the latter is the case, then the issue is solved since obviously the same Windows 8 key will not be valid for multiple PCs.

    Read the article
