Search Results

Search found 17856 results on 715 pages for 'setup py'.


  • Forward Shibboleth Environment Variables to Tomcat via Apache

    - by Deepak Singh Rawat
    I am using Shibboleth v2.3 with the Apache web server and the Tomcat application server. I am using Apache as a reverse proxy via mod_proxy.so. I am not able to forward the Shibboleth environment variables from Apache to Tomcat. I am able to forward the attributes in the headers, but as already mentioned in the wiki, this approach is not safe. I have tried forwarding the environment variables with the following directive: SetEnv AJP_username ${username}. On the Java side I then access the attribute with request.getAttribute("username"); The strange thing here is that I get a different value instead of the one set by Shibboleth: I get the Windows account name as a result. If I use any other attribute name, I get a null value. I have searched a lot and have run out of options. Please guide me towards the right solution. My setup details: Shibboleth version: 2.3, OS: Windows XP SP3, Web server: Apache 2.2, Application server: Tomcat 6, Proxy module: mod_proxy.so
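    For reference, a rough sketch of the usual AJP-based approach, assuming mod_proxy_ajp is doing the proxying: mod_proxy_ajp forwards environment variables whose names start with AJP_ to Tomcat as request attributes (prefix stripped), and the Shibboleth SP can be told to publish its attributes with that prefix, so no SetEnv interpolation is needed. Paths, port and entityID below are placeholders.

        # shibboleth2.xml -- have the SP expose attributes as AJP_* variables
        <ApplicationDefaults entityID="https://sp.example.org/shibboleth"
                             REMOTE_USER="eppn" attributePrefix="AJP_">

        # httpd.conf -- protect the application and proxy it over AJP, not HTTP
        <Location /secure>
            AuthType shibboleth
            ShibRequestSetting requireSession 1
            Require valid-user
        </Location>
        ProxyPass /secure ajp://localhost:8009/secure

        # Java side: the attribute arrives with the AJP_ prefix removed
        # request.getAttribute("username");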

    Read the article

  • How do I set "MaxPermSize" for Atlassian Fisheye/Crucible running as service on Win2k3?

    - by Jeremy
    I have been trying to set up Atlassian Fisheye/Crucible as a service on Win 2K3 R2 for two weeks. I keep getting various "java.lang.OutOfMemoryError: PermGen space" errors, which crash Fisheye and force me to restart the service. I've followed the example on the Atlassian support site to configure MaxPermSize within the service wrapper. However, when I check SysInfo inside the Fisheye admin pages and the debug log, I don't see any confirmation. The Java heap info is in both places, so I'd expect the MaxPermSize setting to show up in both places. The error is persisting and Atlassian support has been little help. I appreciate any help.
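    For reference, a sketch of what the setting usually looks like in a Tanuki-style wrapper.conf (the index just has to not collide with existing wrapper.java.additional.N lines, and the size is an example); once picked up, it should appear among the JVM input arguments shown in SysInfo:

        # FishEye/Crucible service wrapper config -- sketch
        wrapper.java.additional.3=-XX:MaxPermSize=256m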

    Read the article

  • Using Python on Mac

    - by choise
    Hi, I want to learn Python using my Mac. I want to set up Python version 3.1.3, because my learning materials use this version. Typing python into Terminal gives version 2.6.1, and using the dmg installer from python.org (http://docs.python.org/ftp/python/3.1.3/) doesn't have an effect on the Python version in Terminal, but it comes bundled with its own shell under Applications/Python 3.1/Idle.app. My question now is, should I use this shell for learning, or is there a better way, such as updating the Python version bundled with Snow Leopard? I already tried defaults write com.apple.versioner.python Version 3.1.3 and defaults write com.apple.versioner.python Version 3.0 without any result. Thanks!
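    A possible way to use the newly installed interpreter from Terminal rather than IDLE (a sketch; the framework path is where the python.org installer normally puts 3.1 and may differ):

        # the installer leaves Apple's /usr/bin/python (2.6.1) untouched;
        # the 3.1 build lives in its own framework directory
        /Library/Frameworks/Python.framework/Versions/3.1/bin/python3.1

        # optionally put that directory first on PATH for new Terminal sessions
        echo 'export PATH="/Library/Frameworks/Python.framework/Versions/3.1/bin:$PATH"' >> ~/.bash_profile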

    Read the article

  • IPFW not locking people out

    - by Cole
    I've had some brute-forcing of my ssh connection recently, so I got fail2ban to hopefully prevent that. I set it up and started testing it out by giving wrong passwords on my computer. (I have physical access to the server if I need to unblock myself.) However, it never stops me from entering passwords. I see in /var/log/fail2ban.log that fail2ban kicked in and banned me, and there's an ipfw entry for my IP, but I'm not locked out. I've changed the configuration around, and then tried just using the ipfw command myself, but nothing seems to lock me out. I've tried the following blocks: 65300 deny tcp from 10.0.1.30 to any in; 65400 deny ip from 10.0.1.30 to any; 65500 deny tcp from 10.0.1.30 to any. My firewall setup has an "allow ip from any to any" rule after these though, maybe that's the problem? I'm using Mac OS 10.6 (stock ipfw, it doesn't seem to have a --version flag). Thanks in advance.
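    For reference, ipfw evaluates rules in ascending number order and stops at the first match, so a sketch of the first things to check (rule number, address and port are examples):

        # show the rule set in evaluation order; an earlier allow (or check-state)
        # rule will win over the fail2ban entries added at 65300-65500
        sudo ipfw list

        # a deny placed at a low number is evaluated before any later allow rules
        sudo ipfw add 100 deny tcp from 10.0.1.30 to any dst-port 22 in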

    Read the article

  • Linux, fdisk: change order of partitions

    - by osgx
    I have a hard drive with 24 logical partitions on it. Half of them are Linux and half are Windows. The current ordering is: 3 Linux partitions; 12 Windows partitions; 9 Linux partitions. In this setup, Windows can access any of the partitions (no limit on partition number), but Linux can't access sda16, sda17, ... Can I change the numbering of the partitions without moving them on disk? I want all Linux partitions to be numbered below 16 and the Windows partitions to be 16 and above, so Linux will be able to access all Linux partitions. I have fdisk/sfdisk and it sees all partitions.

    Read the article

  • Amavis-new whitelist

    - by pigeon
    Hi to the Linux community. I'm coming from a Windows Server background, so have mercy. I'm attempting to whitelist some domains, and although I know this isn't the best way of doing so, it's just a one-off for a couple of domains so I thought it would be the quickest way. Current setup: Amavis is used to pass emails off to ClamAV and SpamAssassin; currently I make changes in /etc/amavis/conf.d/50-user, as this will override other settings. I have created a whitelist file that looks like this:

        .domaintowhitelist.com
        .domain2towhitelist.com

    In the 50-user config file I have tried variants like this:

        read_hash(\%whitelist_sender, '/etc/amavis/whitelist');
        read_hash(\%virus_lovers, '/etc/amavis/whitelist');

    and restarted amavis after making those changes. Am I going about this the wrong way? Any help is appreciated.
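    For reference, a minimal 50-user sketch along those lines, assuming amavisd-new's standard sender-whitelist lookup maps (the file path is the one from the question):

        use strict;

        # one entry per line; ".domaintowhitelist.com" also matches subdomains
        read_hash(\%whitelist_sender, '/etc/amavis/whitelist');
        @whitelist_sender_maps = ( \%whitelist_sender );

        1;  # ensure a defined return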

    Read the article

  • Investigation: Can different combinations of components affect Dataflow performance?

    - by jamiet
    Introduction

    The Dataflow task is one of the core components (if not the core component) of SQL Server Integration Services (SSIS) and often the most misunderstood. This is not surprising; it's an incredibly complicated beast and we're abstracted away from that complexity via some boxes that go yellow, red or green and that have some lines drawn between them. (Example dataflow screenshot.)

    In this blog post I intend to look under that facade and get into some of the nuts and bolts of the Dataflow Task by investigating how the decisions we make when building our packages can affect performance. I will do this by comparing the performance of three dataflows that all have the same input, all produce the same output, but which all operate slightly differently by way of having different transformation components.

    I also want to use this blog post to challenge a commonly held opinion that I see perpetuated over and over again on the SSIS forum: that people assume adding components to a dataflow will be detrimental to overall performance. It's not surprising that people think this, as it is intuitive to think that more components means more work; however this is not a view that I share. I have always been of the opinion that there are many factors affecting dataflow duration and the number of components is actually one of the less important ones; having said that, I have never proven that assertion and that is one reason for this investigation. I have actually seen evidence that some people think dataflow duration is simply a function of number of rows and number of components. I'll happily call that one out as a myth even without any investigation!

    The Setup

    I have a 2GB datafile which is a list of 4731904 (~4.7 million) customer records with various attributes against them, and it contains 2 columns that I am going to use for categorisation: [YearlyIncome] and [BirthDate]. The data file is an SSIS raw format file, which I chose to use because it is the quickest way of getting data into a dataflow; given that I am testing the transformations, not the source or destination adapters, I want to minimise external influences as much as possible.

    In the test I will split the customers according to month of birth (12 of those) and whether or not their yearly income is above or below 50000 (2 of those); in other words I will be splitting them into 24 discrete categories, and in order to do it I shall be using different combinations of SSIS' Conditional Split and Derived Column transformation components. The 24 datapaths that occur will each input to a rowcount component, again because this is the least resource intensive means of terminating a datapath.

    The test is being carried out on a Dell XPS Studio laptop with a quad core (8 logical procs) Intel Core i7 at 1.73GHz and a Samsung SSD hard drive. It's running SQL Server 2008 R2 on Windows 7.

    The Variables

    Here are the three combinations of components that I am going to test:

    1. One Conditional Split - A single Conditional Split component, CSPL Split by Month of Birth and income category, that will use expressions on [YearlyIncome] & [BirthDate] to send each row to one of 24 outputs. (The next screenshot in the original post displays the expression logic in use.)

    2. Derived Column & Conditional Split - A Derived Column component, DER Income Category, that adds a new column [IncomeCategory] which will contain one of two possible text values {"LessThan50000","GreaterThan50000"} and uses [YearlyIncome] to determine which value each row should get. A Conditional Split component, CSPL Split by Month of Birth and Income Category, then uses that new column in conjunction with [BirthDate] to determine which of the same 24 outputs to send each row to. Put more simply, I am separating the Conditional Split of #1 into a Derived Column and a Conditional Split. (Screenshots of the expression logic: DER Income Category, CSPL Split by Month of Birth and Income Category.)

    3. Three Conditional Splits - A Conditional Split component that produces two outputs based on [YearlyIncome], one for each income category. Each of those outputs will go to a further Conditional Split that splits the input into 12 outputs, one for each month of birth (identical logic in each). In this case then I am separating the single Conditional Split of #1 into three Conditional Split components. (Screenshots of the expression logic: CSPL Split by Income Category, CSPL Split by Month of Birth 1 & 2.)

    Each of these combinations will provide an input to one of the 24 rowcount components, just the same as before. For illustration, the original post includes a screenshot of the dataflow containing three Conditional Split components.

    As you can see, these dataflows have a fair bit of work to do, and remember that they're doing that work for 4.7 million rows. I will execute each dataflow 10 times and use the average for comparison. I foresee three possible outcomes:

    1. The dataflow containing just one Conditional Split (i.e. #1) will be quicker
    2. There is no significant difference between any of them
    3. One of the two dataflows containing multiple transformation components will be quicker

    Regardless of which of those outcomes comes to pass, we will have learnt something, and that makes this an interesting test to carry out. Note that I will be executing the dataflows using dtexec.exe rather than hitting F5 within BIDS.

    The Results and Analysis

    The table below shows all of the executions, 10 for each dataflow. It also shows the average for each along with a standard deviation. All durations are in seconds. I'm pasting a screenshot because I frankly can't be bothered with the faffing about needed to make a presentable HTML table.

    It is plain to see from the average that the dataflow containing three Conditional Splits is significantly faster, the other two taking 43% and 52% longer respectively. This seems strange though, right? Why does the dataflow containing the most components outperform the other two by such a big margin? The answer is actually quite logical when you put some thought into it, and I'll explain that below. Before progressing, a side note: the standard deviation for the "Three Conditional Splits" dataflow is orders of magnitude smaller, indicating that performance for this dataflow can be predicted with much greater confidence too.

    The Explanation

    I refer you to the screenshot above that shows how CSPL Split by Month of Birth and salary category in the first dataflow is set up. Observe that there is a case for each combination of month of birth and income category, 24 in total. These expressions get evaluated in the order that they appear, and hence if we assume that month of birth and income category are uniformly distributed in the dataset, we can deduce that the expected number of expression evaluations for each row is 12.5, i.e. 1 (the minimum) plus 24 (the maximum), divided by 2.

    Now take a look at the screenshots for the second dataflow. We are doing one expression evaluation in DER Income Category and we have the same 24 cases in CSPL Split by Month of Birth and Income Category as we had before, only the expression differs slightly. In this case then we have 1 + 12.5 = 13.5 expected evaluations for each row; that would account for the slightly longer average execution time for this dataflow.

    Now onto the third dataflow, the quick one. CSPL Split by Income Category does a maximum of 2 expression evaluations, thus the expected number of evaluations per row is 1.5. CSPL Split by Month of Birth 1 & CSPL Split by Month of Birth 2 both have less work to do than the previous Conditional Split components because they only have 12 cases to test for, thus the expected number of expression evaluations is 6.5. There are two of them, so the total expected number of expression evaluations for this dataflow is 6.5 + 6.5 + 1.5 = 14.5.

    14.5 is still more than 12.5 & 13.5 though, so why is the third dataflow so much quicker? Simple: the conditional expressions in the first two dataflows have two boolean predicates to evaluate, one for income category and one for month of birth; the expressions in the Conditional Split in the third dataflow however only have one predicate, thus they are doing a lot less work. To sum up, the difference in execution times can be attributed to the difference between:

        MONTH(BirthDate) == 1 && YearlyIncome <= 50000

    and

        MONTH(BirthDate) == 1

    In the first two dataflows YearlyIncome <= 50000 gets evaluated an average of 12.5 times for every row, whereas in the third dataflow it is evaluated once and once only. Multiply those 11.5 extra operations by 4.7 million rows and you get a significant amount of extra CPU cycles; that's where our duration difference comes from.

    The Wrap-up

    The obvious point here is that adding new components to a dataflow isn't necessarily going to make it go any slower; moreover you may be able to achieve significant improvements by splitting logic over multiple components rather than one. Performance tuning is all about reducing the amount of work that needs to be done, and that doesn't necessarily mean using fewer components; indeed sometimes you may be able to reduce workload in ways that aren't immediately obvious, as I think I have proven here.

    Of course there are many variables in play here and your mileage will most definitely vary. I encourage you to download the package and see if you get similar results; let me know in the comments. The package contains all three dataflows plus a fourth dataflow that will create the 2GB raw file for you (you will also need the [AdventureWorksDW2008] sample database from which to source the data); simply disable all dataflows except the one you want to test before executing the package and remember, execute using dtexec, not within BIDS.

    If you want to explore dataflow performance tuning in more detail then here are some links you might want to check out: Inequality joins, Asynchronous transformations and Lookups; Destination Adapter Comparison; Don't turn the dataflow into a cursor; SSIS Dataflow - Designing for performance (webinar).

    Any comments? Let me know! @Jamiet

    Read the article

  • Can't find instructions how to use Windows 7 drivers on Windows Server 2008 R2

    - by Robert Koritnik
    Windows 7 x64 comes with all sorts of signed drivers, so there's a high probability that all drivers for your machine will be installed during system setup. On the other hand Windows Server 2008 R2 doesn't, even though it's practically the same OS when it comes to drivers. I know there's a very good reason for this difference: it's a server product, not a desktop one. But the thing is that many power users and developers use a server OS on their workstations, which are usually desktop machines (a bit more powerful though), and would benefit from the whole driver spectrum that Windows 7 offers... Question: I know I've been reading on the internet about some trick where you first install Windows 7, then do something to get either all Windows 7 drivers or just those installed, and then install Windows Server 2008 R2 and use those Windows 7 drivers. The thing is I can't find these instructions on the internet any more. If anybody knows where they are, please provide the link for the rest of us.

    Read the article

  • Logging on as root without winbind timeouts

    - by Josh Kelley
    How can I set up my Linux box so that, if the Active Directory domain controller is down, I can still log in as root, without any timeouts or delays? Following the example of most of the documentation out there, I've listed pam_winbind.so before pam_unix.so in my /etc/pam.d configurations. I believe that this is the cause of the problem. I remember seeing alternate /etc/pam.d setups that change the order and maybe add either pam_localuser or pam_succeed_if (to see if the uid is less than 500), but I can't find any specifics now (and I'm not enough of an expert in PAM to quickly and easily come up with a robust configuration on my own). What is the recommended setup for PAM with Winbind to avoid timeouts and delays if Active Directory is unavailable?
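    One commonly suggested ordering, sketched for the auth section only (module options and the uid cut-off vary by distribution): local accounts are resolved by pam_unix first, and pam_succeed_if keeps low uids such as root away from winbind entirely, so an unreachable domain controller no longer delays local logins.

        # /etc/pam.d/system-auth (auth section) -- sketch
        auth        required      pam_env.so
        auth        sufficient    pam_unix.so nullok try_first_pass
        auth        requisite     pam_succeed_if.so uid >= 500 quiet
        auth        sufficient    pam_winbind.so use_first_pass
        auth        required      pam_deny.so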

    Read the article

  • WebLogic Server JMS WLST Script – Who is Connected To My Server

    - by james.bayer
    Ever want to know who was connected to your WebLogic Server instance for troubleshooting? An email exchange about this topic and JMS came up this week, and I've heard it come up once or twice before too. Sometimes it's interesting or helpful to know the list of JMS clients (IP addresses, JMS destinations, message counts) that are connected to a particular JMS server. This can be helpful for troubleshooting. Tom Barnes from the WebLogic Server JMS team provided some helpful advice:

    The JMS connection runtime mbean has "getHostAddress", which returns the host address of the connecting client JVM as a string. A connection runtime can contain session runtimes, which in turn can contain consumer runtimes. The consumer runtime, in turn, has a "getDestinationName" and "getMemberDestinationName". I think that this means you could write a WLST script, for example, to dump all consumers, their destinations, plus their parent session's parent connection's host addresses. Note that the client runtime mbeans (connection, session, and consumer) won't necessarily be hosted on the same JVM as a destination that's in the same cluster (client messages route from their connection host to their ultimate destination in the same cluster).

    Writing the Script

    So armed with this information, I decided to take the challenge and see if I could write a WLST script to do this. It's always helpful to have the WebLogic Server MBean Reference handy for activities like this. This one is focused on JMS Consumers and I only took a subset of the information available, but it could be modified easily to do Producers. I haven't tried this on a more complex environment, but it works in my simple sandbox case, so it should give you the general idea.

        # Better to use Secure Config File approach for login as shown here
        # http://buttso.blogspot.com/2011/02/using-secure-config-files-with-weblogic.html
        connect('weblogic','welcome1','t3://localhost:7001')

        # Navigate to the Server Runtime and get the Server Name
        serverRuntime()
        serverName = cmo.getName()

        # Multiple JMS Servers could be hosted by a single WLS server
        cd('JMSRuntime/' + serverName + '.jms' )
        jmsServers=cmo.getJMSServers()

        # Find the list of all JMSServers for this server
        namesOfJMSServers = ''
        for jmsServer in jmsServers:
            namesOfJMSServers = jmsServer.getName() + ' '

        # Count the number of connections
        jmsConnections=cmo.getConnections()
        print str(len(jmsConnections)) + ' JMS Connections found for ' + serverName + ' with JMSServers ' + namesOfJMSServers

        # Recurse the MBean tree for each connection and pull out some information about consumers
        for jmsConnection in jmsConnections:
            try:
                print 'JMS Connection:'
                print '  Host Address = ' + jmsConnection.getHostAddress()
                print '  ClientID = ' + str( jmsConnection.getClientID() )
                print '  Sessions Current = ' + str( jmsConnection.getSessionsCurrentCount() )
                jmsSessions = jmsConnection.getSessions()
                for jmsSession in jmsSessions:
                    jmsConsumers = jmsSession.getConsumers()
                    for jmsConsumer in jmsConsumers:
                        print '  Consumer:'
                        print '    Name = ' + jmsConsumer.getName()
                        print '    Messages Received = ' + str(jmsConsumer.getMessagesReceivedCount())
                        print '    Member Destination Name = ' + jmsConsumer.getMemberDestinationName()
            except:
                print 'Error retrieving JMS Consumer Information'
                dumpStack()

        # Cleanup
        disconnect()
        exit()

    Example Output

    I expect the output to look something like this and loop through all the connections; this is just the first one:

        1 JMS Connections found for AdminServer with JMSServers myJMSServer
        JMS Connection:
          Host Address = 127.0.0.1
          ClientID = None
          Sessions Current = 16
          Consumer:
            Name = consumer40
            Messages Received = 1
            Member Destination Name = myJMSModule!myQueue

    Notice that it has the IP address of the client. There are 16 sessions open because I'm using an MDB, which defaults to 16 connections, so this matches what I expect. Let's see what the full output actually looks like:

        D:\Oracle\fmw11gr1ps3\user_projects\domains\offline_domain>java weblogic.WLST d:\temp\jms.py

        Initializing WebLogic Scripting Tool (WLST) ...
        Welcome to WebLogic Server Administration Scripting Shell
        Type help() for help on available commands

        Connecting to t3://localhost:7001 with userid weblogic ...
        Successfully connected to Admin Server 'AdminServer' that belongs to domain 'offline_domain'.
        Warning: An insecure protocol was used to connect to the server. To ensure on-the-wire security, the SSL port or Admin port should be used instead.
        Location changed to serverRuntime tree. This is a read-only tree with ServerRuntimeMBean as the root. For more help, use help(serverRuntime)

        1 JMS Connections found for AdminServer with JMSServers myJMSServer
        JMS Connection:  Host Address = 127.0.0.1  ClientID = None  Sessions Current = 16
        Consumer:  Name = consumer40  Messages Received = 2  Member Destination Name = myJMSModule!myQueue
        Consumer:  Name = consumer34  Messages Received = 2  Member Destination Name = myJMSModule!myQueue
        Consumer:  Name = consumer37  Messages Received = 2  Member Destination Name = myJMSModule!myQueue
        Consumer:  Name = consumer16  Messages Received = 2  Member Destination Name = myJMSModule!myQueue
        Consumer:  Name = consumer46  Messages Received = 2  Member Destination Name = myJMSModule!myQueue
        Consumer:  Name = consumer49  Messages Received = 2  Member Destination Name = myJMSModule!myQueue
        Consumer:  Name = consumer43  Messages Received = 1  Member Destination Name = myJMSModule!myQueue
        Consumer:  Name = consumer55  Messages Received = 1  Member Destination Name = myJMSModule!myQueue
        Consumer:  Name = consumer25  Messages Received = 1  Member Destination Name = myJMSModule!myQueue
        Consumer:  Name = consumer22  Messages Received = 1  Member Destination Name = myJMSModule!myQueue
        Consumer:  Name = consumer19  Messages Received = 1  Member Destination Name = myJMSModule!myQueue
        Consumer:  Name = consumer52  Messages Received = 1  Member Destination Name = myJMSModule!myQueue
        Consumer:  Name = consumer31  Messages Received = 1  Member Destination Name = myJMSModule!myQueue
        Consumer:  Name = consumer58  Messages Received = 1  Member Destination Name = myJMSModule!myQueue
        Consumer:  Name = consumer28  Messages Received = 1  Member Destination Name = myJMSModule!myQueue
        Consumer:  Name = consumer61  Messages Received = 1  Member Destination Name = myJMSModule!myQueue

        Disconnected from weblogic server: AdminServer
        Exiting WebLogic Scripting Tool.

    Thanks to Tom Barnes for the hints and the inspiration to write this up. Image of telephone switchboard courtesy of http://www.JoeTourist.net/ JoeTourist InfoSystems

    Read the article

  • PO Communication in PDF

    - by Robert Story
    Upcoming Webcasts

    Date: March 29, 2010. Time: 2 pm London, 9:00 am EDT, 6:00 am PDT, 13:00 GMT. Click here to register for this session.
    Date: March 29, 2010. Time: 9 am London, 4:00 am EDT, 1:00 am PDT, 8:00 GMT. Click here to register for this session.

    Product Family: Procurement

    Summary: This one-hour session is recommended for technical and functional users who would like to know about the PO Communication functionality in procurement. Topics will include:

    - Introduction to PO PDF communication - 11.5.10
    - Key Concepts, Prerequisites, Scope
    - Overview of PDF document generation
    - PDF solution overview
    - Technical Overview of PDF generation
    - Setup steps
    - Triggering Points of PDF generation: PO Output for communication - Concurrent program, Enter PO form: View Doc, iSupplier portal/Contracts preview
    - Enhancements: PDF Generation in Custom Layouts, Attachments in fax communication, R12 Communication Nontext Attachments through Email, Customizing templates
    - Advantages of PDF communication
    - Troubleshooting (Tips)

    A short, live demonstration (only if applicable) and question and answer period will be included.

    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • Pfsense: Inbound Load Balancing https with sticky connection

    - by Zeux
    First of all I'm very sorry for my English... This is my scenario: Internet -> Firewall+LB: pfsense_1 (active) + pfsense_2 (passive) in CARP. Pool servers: 3 x nginx (PHP5+HTTP+HTTPS). Pfsense 1 and 2 are CARP-configured with a public virtual IP; the nginx servers' IPs are all private. I want to load balance inbound HTTP and HTTPS connections between the 3 nginx web servers. An important thing is that the HTTPS connections must be "sticky connections": in HTTPS connections, after login by username and password, I set up a PHP session, and therefore when a client starts an HTTPS connection it must always be redirected to the same nginx server, until it disconnects, closes the page/browser, or after a timeout (30 minutes?) without activity. Is this possible with the latest release (2.0.1) of pfsense? Thank you very much...
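    pfSense 2.0's inbound load balancer is based on relayd, and since an HTTPS session cannot be inspected without terminating TLS, persistence is normally done per source IP rather than per PHP session. A rough relayd-style sketch of that idea (addresses are placeholders; in the GUI this corresponds to enabling sticky connections):

        table <webfarm> { 10.0.0.11 10.0.0.12 10.0.0.13 }

        redirect "www-ssl" {
            listen on 203.0.113.10 port 443
            forward to <webfarm> check tcp
            sticky-address          # keep a given client IP on the same backend
        }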

    Read the article

  • Making an apache module work only for a particular sub domain

    - by Abhinav Upadhyay
    Hi everyone. I am not a regular webmaster; I am a student working on an internship project which is an Apache2 module, so my knowledge of Apache configuration is limited. I want to configure my module for a particular subdomain of the host server. Let's say I want my module to work only for onclick.localhost: what are the directives required? I want the module to be invoked only for the hosts/virtualhosts for which it has been configured explicitly. I know how to set up subdomains in Apache, so that's not a problem. Thanks.
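    One common pattern, assuming the module exposes an on/off directive of its own (the directive and module names below are hypothetical stand-ins): load it globally but enable it only inside the virtual host for the subdomain, so the other hosts never invoke it.

        LoadModule onclick_module modules/mod_onclick.so   # hypothetical module

        <VirtualHost *:80>
            ServerName onclick.localhost
            OnclickEnable On          # hypothetical per-server directive
        </VirtualHost>

        <VirtualHost *:80>
            ServerName localhost
            # directive absent here, so the module's handler/filter stays inactive
        </VirtualHost>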

    Read the article

  • Configure Nagios To Alert Depending On Host Group That Service Alert Originates From

    - by StrangeWill
    So my setup: services are shared between all hosts (CPU/RAM/Disk/Services). Hosts are split into two main groups: "Production" and "Development". We have two contact groups: "Production" and "Development". Let's say my development SQL server runs low on RAM; I want it to only alert those in the "Development" contact group (this service is of course assigned to a host in the "Development" host group, using the shared RAM monitoring service). I'm pretty much stumped on this... I can't configure it at the service level (they're shared there), and I can't seem to get escalations to do it for me either... Do I need to use service groups along with escalations and bite the bullet on building that list? Or am I missing something stupidly simple? I'm using Centreon for configuration if that helps.
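    One way to get this without escalations is to stop sharing the single service definition and instead define it once per host group, each copy pointing at its own contact group; a sketch (object names are placeholders):

        define service{
            use                     generic-service
            hostgroup_name          development
            service_description     RAM
            check_command           check_ram
            contact_groups          development
            }

        define service{
            use                     generic-service
            hostgroup_name          production
            service_description     RAM
            check_command           check_ram
            contact_groups          production
            }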

    Read the article

  • Synchronize pasteboard between remote tmux session and local Mac OS pasteboard

    - by bhargav
    Setup: I use iTerm2 on MacOS to connect to a remote server. The remote server runs tmux, in which I open files and edit in vim sessions. Problem: I can't copy/paste between the remote tmux session and the local iTerm client. I can use iTerm 2's alt/option + mouse selection to select text, but this copies over multiple vim panes/tmux panes - bad. Is there any elegant solution to make selections in tmux panes synchronize between the remote pasteboard and the local (MacOS pasteboard)? I've seen reattach-to-user-namespace, but I'm pretty certain it doesn't do what I want.
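    One option that works across a plain ssh connection is letting tmux publish copies via the OSC 52 terminal escape, which iTerm2 can turn into the local Mac pasteboard; a sketch only, and it also needs "Applications in terminal may access clipboard" enabled in iTerm2's preferences (reattach-to-user-namespace is indeed only relevant when tmux runs locally on the Mac):

        # ~/.tmux.conf on the remote server -- sketch
        # text copied in tmux copy-mode is offered to the terminal as OSC 52,
        # which iTerm2 can place on the Mac pasteboard
        set-option -g set-clipboard on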

    Read the article

  • Setting up a wireless access point with Ubuntu server

    - by Solignis
    I am trying to set up a wifi access point with my Ubuntu server. Everything "seems" to be good since I got a better wifi radio (long story). But now I am a little confused: when my Android phone tries to associate with the AP it is able to with no problem, but then it cannot get an IP address from my DHCP server. I have tried messing with firewall rules, nothing... I tried making a bridge interface, as per what I am told I had to do (not sure if I did it right though). What I am trying to do is make the wireless interface an extension of my eth0 network. Am I on the right track or am I going about this the wrong way?
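    Bridging is the usual way to make the wireless side an extension of eth0: put eth0 in a bridge and let hostapd attach the wireless interface to it, so DHCP traffic from the wired network reaches wireless clients. A sketch, assuming hostapd drives the radio (interface names, addresses and the passphrase are examples):

        # /etc/network/interfaces -- sketch
        auto br0
        iface br0 inet static
            address 192.168.1.2
            netmask 255.255.255.0
            bridge_ports eth0        # wlan0 is added to the bridge by hostapd

        # /etc/hostapd/hostapd.conf -- sketch
        interface=wlan0
        bridge=br0
        ssid=MyAP
        hw_mode=g
        channel=6
        wpa=2
        wpa_key_mgmt=WPA-PSK
        wpa_passphrase=changeme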

    Read the article

  • Squid traffic tunneled through VPN

    - by NerdyNick
    So what I'm trying to do is have a Squid proxy run on one machine alongside a VPN connection. What I want is for all traffic running through the Squid proxy to go out through the VPN, i.e. Desktop -> (Squid Proxy -> VPN). The goal is to allow my desktop selective tunneling through the VPN, so that instant messaging and the like that do not need to run through the VPN can go through my normal traffic. Typically I would go through an SSH proxy, but currently I am forced to use a VPN to gain entry into the office, and a Squid proxy seemed like it might work out the easiest for what I am needing. EDIT: Realized I forgot to actually state what problem I'm running into. I have Squid set up and verified it works, but once I connect to the VPN, all requests to Squid get accepted, yet Squid is unable to make the request over the VPN. So the client ends up just sitting there.
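    A sketch of one way to push only Squid's outbound traffic into the tunnel, assuming the VPN appears as tun0 with address 10.8.0.6 (addresses and the table number are placeholders): bind Squid's outgoing connections to the tunnel address, then add a policy-routing rule so packets sourced from that address default-route via tun0.

        # /etc/squid/squid.conf
        tcp_outgoing_address 10.8.0.6

        # policy routing on the Squid/VPN machine
        ip route add default dev tun0 table 100
        ip rule add from 10.8.0.6 table 100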

    Read the article

  • Tunnel only one program (UDP & TCP) through another server

    - by user136036
    I have a Windows machine at home and a server with Debian installed. I want to tunnel the UDP traffic from one (and only this) program on my Windows machine through my server. For TCP traffic this was easy using PuTTY as a SOCKS5 proxy and then connecting via ssh to my server, but this does not seem to work for UDP. Then I set up Dante as a SOCKS5 proxy, but it seems to create a new instance/thread per connection, which leads to huge RAM usage on my server, so this was no option either. Most people recommend OpenVPN, so my question: can I use OpenVPN to just tunnel this one program through my server? Is there a way to maybe create a local SOCKS5 proxy on my Windows machine, set it as a proxy in my program, and have only this proxy use OpenVPN? Thank you for your ideas.

    Read the article

  • Using DNS in iproute2

    - by Oliver
    In my setup I can redirect the default gateway based on the source address. Let's say a user connected through tun0 (10.2.0.0/16) is redirected to another VPN. That works fine!

        ip rule add from 10.2.0.10 lookup vpn1

    In a second rule I redirect the default gateway to another gateway if the user accesses a certain IP address:

        ip rule add from 10.2.0.10 to 94.142.154.71 lookup vpn2

    If I access the page on 94.142.154.71 (myip.is) the user is correctly routed and I can see the IP of the second VPN. On any other pages the IP address of vpn1 is shown. But how do I tell iproute2 that all requests to e.g. google.com should be redirected through vpn2?
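    ip rules match addresses rather than names, so a hostname has to be resolved first and one rule added per returned address (and refreshed whenever the DNS answers change); a rough sketch:

        # route every address currently published for the name via the vpn2 table
        for ip in $(dig +short google.com); do
            ip rule add from 10.2.0.10 to "$ip" lookup vpn2
        done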

    Read the article

  • OpenVPN vs. IPSec - Pros and Cons, what to use?

    - by jens
    Interestingly, I have not found any good search results when searching for "OpenVPN vs IPSec". I need to set up a private LAN over an untrusted network, and as far as I know both approaches seem to be valid, but I do not know which one is better. I would be very thankful if you could list the pros and cons of both approaches and maybe your suggestions and experiences about what to use. Update (regarding the comment/question): In my concrete case the goal is to have any number of servers (with static IPs) connected transparently with each other. But a small portion of "dynamic clients like road warriors" (with dynamic IPs) should also be able to connect. The main goal, however, is having a "transparent secure network" run on top of an untrusted network. I am quite a newbie, so I do not know how to correctly interpret "1:1 point-to-point connections"; the solution should support broadcasts and all that stuff so it is a fully functional network... Thank you very much!! Jens
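    For a sense of scale, the OpenVPN side of such a setup is roughly a routed server that both the static servers and the road warriors connect to, with peer-to-peer traffic allowed; a sketch only (certificates and keys omitted), and note that true layer-2 broadcasts would need "dev tap" plus bridging rather than "dev tun":

        # server.conf -- sketch
        port 1194
        proto udp
        dev tun
        server 10.8.0.0 255.255.255.0   # address pool handed to clients
        client-to-client                # let connected peers reach each other
        keepalive 10 120
        persist-key
        persist-tun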

    Read the article

  • Can connect to Samba server but cannot access shares?

    - by jlego
    I have set up a stand-alone box running Fedora 16 to use as a file-sharing and web development server. It needs to be able to share files with a PC running Windows 7 and a Mac running OSX Snow Leopard. I've set up Samba using the Samba configuration GUI tool, added users to Fedora and connected them as Samba users (which are the same as the Windows and Mac usernames and passwords). The workgroup name is the same as the Windows workgroup. Authentication is set to User. I've allowed Samba and Samba client through the firewall and set the ethernet to a trusted port in the firewall.

    Both the Windows and Mac machines can connect to the server and view the shares; however, when trying to access the shares, Windows throws error 0x80070035 "Windows cannot access \\SERVERNAME\ShareName." The Windows user is not prompted for a username or password when accessing the server (found under "Network Places"). This also happens when connecting with the IP rather than the server name. The Mac can also connect to the server and see the shares, but when choosing a share it gives the error "The original item for ShareName cannot be found." When connecting via IP, the Mac user is prompted for username and password, which when authenticated gives a list of shares; however, when choosing a share to connect to, the error is displayed and the user cannot access the share. Since both machines are acting similarly when trying to access the shares, I assume it is an issue with how Samba is configured.

    smb.conf:

        [global]
        workgroup = workgroup
        server string = Server
        log file = /var/log/samba/log.%m
        max log size = 50
        security = user
        load printers = yes
        cups options = raw
        printcap name = lpstat
        printing = cups

        [homes]
        comment = Home Directories
        browseable = no
        writable = yes

        [printers]
        comment = All Printers
        path = /var/spool/samba
        browseable = yes
        printable = yes

        [FileServ]
        comment = FileShare
        path = /media/FileServ
        read only = no
        browseable = yes
        valid users = user1, user2

        [webdev]
        comment = Web development
        path = /var/www/html/webdev
        read only = no
        browseable = yes
        valid users = user1

    How do I get Samba sharing working?

    UPDATE: Before this box I had another box with the same version of Fedora installed (16) and Samba working for these same computers. I started up the old machine and copied the smb.conf file from the old machine to the new one (editing the share definitions for the new shares, of course) and I still get the same errors on both client machines. The only difference in environment is the hardware and the router. On the old machine the router received a dynamic public IP and assigned dynamic private IPs to each device on the network, while the new machine is connected to a router that has a static public IP (still dynamic internal IPs though). Could either one of these be affecting Samba?

    UPDATE 2: As the directory I am trying to share is actually an entire internal disk, I have tried two things: 1) changing the owner of the mounted disk from root to my user (which is the same username as on the Windows machine); 2) making a share that only included one of the folders on the disk instead of the entire disk, with my user again as the owner. Both tests failed, giving me the same errors regarding the network address.

    UPDATE 3: Not sure exactly what I did, but now whenever I try to connect to the share on the Windows 7 client I am prompted for my username and password. When I enter the correct credentials I get an access denied message. However I did notice that under the login box "domain: WINDOWS-PC-NAME" is listed. I believe this could very well be the problem. Any suggestions?

    UPDATE 4: So I've completely reinstalled Fedora and Samba now. I've created a share on the first hard drive (the one Fedora is installed on) and I can access that fine from Windows. However, when I try to share any data on the second disk, I receive the same error. This I believe is the problem. I think I need to change some things in fstab or fdisk or something.

    UPDATE 5: So in fstab I mapped the drive to automount in a folder, which works correctly. I also added the samba_share_t SELinux label to the mountpoint directory, which now allows me to access the shares on the Windows machine; however I cannot see any of the files in the directory on the Windows machine. (They are there, I can see them in the Fedora file browser locally.)

    UPDATE 6: Figured it out. See answer below.
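    For reference, the SELinux relabelling mentioned in UPDATE 5 looks roughly like this (the path is the mount point from the share definition; semanage makes the label permanent, restorecon applies it):

        semanage fcontext -a -t samba_share_t "/media/FileServ(/.*)?"
        restorecon -R -v /media/FileServ

        # one-off, non-persistent alternative for a quick test
        chcon -R -t samba_share_t /media/FileServ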

    Read the article

  • centos: nginx + thin webserver, incoming connections not allowed

    - by cbrulak
    I set up a fresh CentOS 5 install, compiled nginx from source, and am using thin as the Rails server. If I visit the IP address from another machine on the LAN (for example 1.2.3.4) I get a "website not found" error. However, I can ssh into the machine. If I use links (the text-mode browser) on the server itself to visit the IP address, I get the landing page. Any suggestions? Thanks. EDIT: I ran system-config-securitylevel and was then able to change the security settings to allow incoming connections.
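    For reference, system-config-securitylevel is essentially editing the iptables rules; on a stock CentOS 5 box, opening the web port by hand looks roughly like this (chain name as used by the default firewall script):

        iptables -I RH-Firewall-1-INPUT -p tcp -m tcp --dport 80 -j ACCEPT
        service iptables save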

    Read the article

  • Can't Install Windows 2008 R2 on Lenovo Laptop With SSD

    - by Ben
    I am trying to install Windows Server 2008 R2 on a new Lenovo X201 or T410 laptop. Setup halts with the following pop-up: A required CD/DVD device driver is missing. If you have a driver floppy disk, CD, DVD, or USB flash drive, please insert it now. Note: If the windows installation media is in the CD/DVD drive you can safely remove it for this step. The CD drive is obviously working, as it's booting from CD to get to this point. The only thing I can think is that it is to do with the fact they have SSD disks in - but that's just a guess. (Edit - One extra thing that may or may not be relevant: it's the 64 bit version of Server 2008 R2)

    Read the article

  • How do you sync tabs between computers with Mozilla Firefox Sync?

    - by 1.21 gigawatts
    Mozilla says that with Firefox Sync, if you are working on one computer with 5 tabs open, you can then switch to another computer or device and Sync can or will update the second computer with those tabs. How do I do this? BACKGROUND: I have set up Firefox Sync on both devices and they have been synced. How do you sync them again? Is it automatically syncing them in the background? How often? How do you sync the tabs? Are the passwords synced? Documentation: The documentation below describes how to add another device or computer to Sync. It says that when you add the device it syncs it, but it doesn't describe how and when it keeps it in sync, or how to sync the tabs. [1] http://support.mozilla.org/en-US/kb/what-firefox-sync

    Read the article
