Search Results

Search found 9715 results on 389 pages for 'servers'.


  • Migrate openldap users and groups

    - by user53864
    I have an OpenLDAP server running on one of my Ubuntu 8.10 servers. I used the command line only for the OpenLDAP installation and some basic configuration; everything else I configure with the Webmin GUI tool. I'm trying to migrate to Ubuntu 10.04, and I was able to migrate all the other services, applications and databases, but not LDAP. I'm an LDAP beginner: I installed the OpenLDAP server and client on the Ubuntu 10.04 server using the link, and used the following commands to export and import the LDAP users and groups.

    To export from the 8.10 server:

        slapcat > ldap.ldif

    To import to the 10.04 server, stop LDAP, run the following, then start LDAP again:

        slapadd -l ldap.ldif

    Then I accessed Webmin, checked under LDAP users and groups, and I could see all the users and groups of my old LDAP server. Whenever I create an LDAP user from Webmin (on 8.10 or 10.04), a Unix user is also created with its home directory under /home. But the users imported into 10.04 from 8.10 are not present as Unix users (/etc/passwd). How can I make the imported LDAP users available as Unix users, and is there a proper way to export and import? I also wanted to check from the terminal whether the passwords were exported properly, but I don't know how to access LDAP users which are not available as Unix users. On 8.10 I just use su - ldapuser, and that does not work on 10.04 because no Unix users are created for the exported LDAP users. If everything works fine, then CVS works too, as it uses LDAP authentication. Can anybody help me?
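    A minimal sketch of the export/import sequence described above, with slapd stopped on both ends and an ldapsearch check at the end; the base DN and username are assumptions, not the poster's real values:

        # On the 8.10 server: dump the whole directory to LDIF
        sudo /etc/init.d/slapd stop
        sudo slapcat -l ldap.ldif
        sudo /etc/init.d/slapd start

        # On the 10.04 server: import, then fix ownership of the database
        # files, which slapadd creates as root
        sudo service slapd stop
        sudo slapadd -l ldap.ldif
        sudo chown -R openldap:openldap /var/lib/ldap
        sudo service slapd start

        # Check an imported entry without needing a local Unix account
        # (requires the ldap-utils package; the base DN is hypothetical)
        ldapsearch -x -b "dc=example,dc=com" "(uid=someuser)"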


  • Problems installing GIT on Ubuntu through SSH

    - by jamadri
    I'm having trouble installing git using this command:

        sudo apt-get install git-core

    It gives me the problems below, and I'm not quite sure how to get this to work correctly. I've tried running sudo apt-get update, and afterwards it still gives me problems. If anyone knows how to solve this, or a possible way of getting git onto the machine differently, it would be of much help. I've never had a problem with using apt-get.

        Do you want to continue [Y/n]? y
        WARNING: The following packages cannot be authenticated!
          liberror-perl git-core patch
        Install these packages without verification [y/N]? y
        Err http://us.archive.ubuntu.com jaunty/main git-core 1:1.6.0.4-1ubuntu2
          404 Not Found [IP: 91.189.92.183 80]
        Err http://us.archive.ubuntu.com jaunty/main patch 2.5.9-5
          404 Not Found [IP: 91.189.92.183 80]
        Failed to fetch http://us.archive.ubuntu.com/ubuntu/pool/main/g/git-core/git-core_1.6.0.4-1ubuntu2_amd64.deb 404 Not Found [IP: 91.189.92.183 80]
        Failed to fetch http://us.archive.ubuntu.com/ubuntu/pool/main/p/patch/patch_2.5.9-5_amd64.deb 404 Not Found [IP: 91.189.92.183 80]
        E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

    Any reply that can help fix this would be helpful. I'm not sure if it's the git servers or my connection that might be the problem. I've used apt-get to pull other things; it's just failing with git.
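    One common cause of these 404s is that the release (jaunty here) has reached end of life, at which point its packages move off the main mirrors to old-releases.ubuntu.com. A hedged sketch of repointing apt in that case; check the codename in /etc/lsb-release first:

        # assumption: this box really is running jaunty and the main mirror
        # has dropped it; sed keeps a backup of sources.list before editing
        sudo sed -i.bak 's/us.archive.ubuntu.com/old-releases.ubuntu.com/g' /etc/apt/sources.list
        sudo apt-get update
        sudo apt-get install git-core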


  • SQL Server architecture - they want to move my database to a new instance... Why?

    - by O'MALLEY
    Our current production database environment contains about 10 similarly managed databases. Our agency has just purchased and is installing new blade chassis, and wants to move my database to a new instance (leaving the other 9 on another). This decision is being driven by one of our IT staff, not a DBA. I am a project manager, not a DBA, but I know enough not to necessarily have a good feeling about this decision, and I am urging our IT department to make a sound decision based on what is best for the database. Our IT department has stated that it is not good to have all our eggs in one basket, and has also stated that my database contains "regulatory data" so it should be on its own instance.

    A couple of truths:

    - None of the databases on the current instance are OLTP databases, nor are any of them data warehouses
    - My database currently has joins/views to a couple of the other databases in the production environment

    So my questions are as follows:

    - Am I wrong to disregard a statement about eggs in baskets? (Hello, this is why we have maintenance plans/disaster recovery plans.) I'll mention that other databases hold regulatory data too.
    - What types of questions do I need to ask to determine whether this is a sound decision? (A DBA friend mentioned that if the service level agreement of said database does not radically differ from the others, then why do they want to do this?)
    - I have done some research on linked servers. What arguments should I bring forth about the fact that I have views set up that rely on data from other DBs currently? (See the sketch below.)
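    For concreteness, this is what the linked-server concern looks like in T-SQL; every object and server name here is hypothetical, not the poster's real schema:

        -- Today: a cross-database join works with three-part names because
        -- both databases live on the same instance
        SELECT o.OrderID, c.Name
        FROM   dbo.Orders AS o
        JOIN   OtherDb.dbo.Customers AS c ON c.CustomerID = o.CustomerID;

        -- After the split: the old instance must be registered as a linked
        -- server, and every such view needs four-part names (plus the
        -- extra network hop and distributed-query overhead)
        EXEC sp_addlinkedserver @server = N'OLDINSTANCE',
                                @srvproduct = N'',
                                @provider = N'SQLNCLI',
                                @datasrc = N'oldhost\PROD1';

        SELECT o.OrderID, c.Name
        FROM   dbo.Orders AS o
        JOIN   OLDINSTANCE.OtherDb.dbo.Customers AS c ON c.CustomerID = o.CustomerID;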


  • Why do you use a 3PAR SAN? [closed]

    - by Starfish
    If you use a 3PAR SAN, I'd like to hear what you think about it, particularly compared to the HP EVA. What do you see as its advantages over other SANs like the EVA? What's so special about the ASIC? We had HP quote us an EVA P6500 and a 3PAR V400 with equivalent storage, and the 3PAR was nearly twice the cost.

    My site has two EVA SANs with a combined capacity of ~80 TB. We want to replace the older and larger of the two, and we've been looking at the EVA and the 3PAR to see which would be a better fit for us. I'm struggling to understand how the 3PAR differs from the EVA from a practical technical standpoint. When I read the sales literature and speak with the HP sales engineers, they spend a lot of time talking about how the 3PAR is better because of its ASIC. It's ASIC this and ASIC that, but when I press them on how a 3PAR with thin provisioning is better than an EVA with thin provisioning, I can't get a straight answer. Meanwhile, one of my colleagues, who has more say regarding which SAN we get, is enamored with the 3PAR, and he can't explain clearly to me why he wants it over the EVA.

    Our needs are pretty simple. We have 10 servers running VMware and ~100 VMs. We use VMware's thin provisioning currently, but we would like to start using thin provisioning on the new SAN. We don't have a need for SSDs or migration between storage tiers. We plan on having FC or SAS drives for our most used data and SATA/FATA drives for the lesser used data, which is how we have the EVAs configured. We also do not need any SAN-level snapshotting or replication.


  • Why does my DD-WRT not accept SSH connections from my laptop?

    - by Vlad Seghete
    So, here is my system: I have a 2Wire AT&T modem/router which I use for wireless, and a Buffalo router flashed with DD-WRT which is physically attached to the 2Wire and set in the DMZ. I set everything up on the DD-WRT so that I can connect to it using SSH, and also so that it forwards SSH requests on a different port to one of the servers behind it.

    Now, when I am physically connected to the DD-WRT, all of this works great and exactly as I would want it to: I ssh to the two different ports using the WAN IP of my network, and I land where I expect to. If, however, I am connected over Wi-Fi to the 2Wire, the same commands do not work. I do not get an error, simply a timeout. I have trouble understanding this, since the DD-WRT is set in the DMZ and everything should pass to it. To further complicate the problem, I tried connecting to the same IP using my phone (wireless disabled, so really from the WAN) and, surprise, it works! If I go back on the local network by enabling Wi-Fi, the SSH connection times out. To make this even stranger, my WAN IP address always responds to pings (meaning in all the above situations).

    What could be going on here? I know what I should do: completely disable the 2Wire as a router, use it strictly as a modem, and then use all the routing capabilities of the DD-WRT. It's what I will probably end up doing anyway, but my question remains, because I really want to know what is happening here.
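    The pattern described (WAN IP reachable from outside but not from inside the 2Wire's LAN) is the classic signature of missing NAT loopback, a.k.a. hairpin NAT, on the modem/router; whether that is really the cause here is an assumption. A quick way to confirm where the packets die, with a made-up WAN IP and port, assuming a DD-WRT build that ships tcpdump:

        # from the Wi-Fi side, try the connection verbosely
        ssh -v -p 2222 203.0.113.5

        # meanwhile on the DD-WRT, watch its WAN interface for the SYN
        # (interface name varies by build; vlan2 is an assumption)
        tcpdump -ni vlan2 'tcp port 2222'

    If nothing shows up in tcpdump, the 2Wire is swallowing the hairpinned traffic rather than reflecting it back into the DMZ.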


  • Why *do* Windows print queues occasionally choke on a print job?

    - by Ian
    Y'know the way Windows print queues will occasionally stop working, with a print job at the head of the queue which just won't print and which you can't delete? Anyone know what's going on when this happens? I've been seeing this since the NT4 days and it still happens on 2008. I'm talking about standard IP-connected laser printers - nothing fancy.

    I support a lot of servers and loads of workstations and see this happen a few times a year. The user will call saying they can't print. When you examine the print queue, which in my case will generally be a server-based queue shared out to the workstations, you find a print job which you cannot cancel. You also can't pause it, reinitialize it, nothing. Stopping the spooler is the usual trick and works sometimes. However, I occasionally see cases where even this doesn't cure it and a reboot is the only solution: pause the queue, reboot, and when it comes back up the job can then be deleted. Once it's gone, the printer happily goes back to its normal state. No action is ever necessary on the printer itself.

    I regard having to reboot as a last resort and don't like it. What on earth can be going on when stopping the process (spooler) and restarting it doesn't clear the problem? It's not linked to any manufacturer either. I've seen this on HPs, Lexmarks, Canons, Ricohs, on lasers, on plotters... can't say I ever saw it on a dot matrix. Anyone got any ideas as to what may be going on? Ian
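    For reference, the "stop the spooler" trick is usually paired with clearing the on-disk spool files, which is what actually removes the stuck job; a sketch of the standard sequence, using the default Windows spool path:

        net stop spooler
        del /Q %SystemRoot%\System32\spool\PRINTERS\*.*
        net start spooler

    When even that fails, one plausible explanation is a driver or port-monitor thread blocked in kernel mode while holding a handle to the job, which only a reboot releases.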


  • VPN on an Ubuntu server limited to certain IPs

    - by Hultner
    I've got a server running Ubuntu Server 9.10, and I sometimes need access to it and other parts of my network when not at home. There are two places I need to access the VPN from: one has a static IP, and the other has a dynamic one, but with DynDNS set up so I can always get the current IP if I want to.

    Now, when it comes to servers, people call me kind of paranoid, but security is always my number one priority and I never like to allow access to the server from outside the network. Therefore there are two things this VPN has to have: one, it shouldn't be accessible from any IP other than these two; and two, it has to use a very secure key, so it will be virtually impossible to brute-force even from the said IPs.

    I have no experience whatsoever in setting up VPNs. I have used SSH tunneling, but never an actual VPN. So what would be the best, most stable, safest and most performance-efficient way to set this up on an Ubuntu server? Is it possible, or should I just set up some kind of SSH tunnel instead? Thanks in advance for answers.
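    A hedged sketch of the source-restriction half of this, assuming OpenVPN on its default UDP port 1194; the static address is invented, and the dynamic peer has to be handled by re-resolving its DynDNS name periodically, since iptables resolves hostnames only once, at rule-load time:

        # allow the fixed site, drop everyone else
        iptables -A INPUT -p udp --dport 1194 -s 198.51.100.7 -j ACCEPT
        iptables -A INPUT -p udp --dport 1194 -j DROP

        # cron-job sketch for the DynDNS peer (hostname is hypothetical);
        # a real script would also delete the previously inserted rule
        IP=$(dig +short myhome.dyndns.example)
        iptables -I INPUT 1 -p udp --dport 1194 -s "$IP" -j ACCEPT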


  • Free, simple, configurable SOCKS5 server

    - by Pooria Azimi
    I've been looking (for the past 6-7 hours) for a fast, free and configurable SOCKS5 server. I haven't found anything that matches my needs. They are either too complicated, too bare-bones, or simply buggy as hell. This is (all) I need:

    - I want it to run on Linux (and also OS X, preferably)
    - I want it to listen on localhost:8888
    - When my app (say wget.. or curl --socks5=localhost:8888) requests http://www.google.com/search?q=asd (or any other URL - both http and https), I want it to fetch the page not from Google's servers, but from http://localhost:4444/cached?uri=http://www.google.com/search%3Fq%3Dasd

    Nothing more! I don't need caching, or anything else. I just want a SOCKS5 server, running locally, which redirects all queries to my own (local) server. It could be written in C, C++, Python, PHP, Perl, Node.js or any other language. I don't care, as long as it supports my (very limited) needs, or I can easily change the source to make it so. Thanks a lot.
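    For what it's worth, the SOCKS5 handshake is small enough that a sketch fits on a page. This one (Python, one of the languages the poster allows) accepts any CONNECT, ignores the requested destination, and pipes the connection to localhost:4444 instead; it does no auth, assumes well-behaved local clients (no short-read handling), and leaves the /cached?uri=... rewriting to the backend, since HTTPS payloads can't be rewritten in transit anyway:

        import socket, struct, threading

        BACKEND = ("127.0.0.1", 4444)   # the local cache server (assumption)

        def relay(src, dst):
            # shovel bytes one way until the peer closes
            try:
                while True:
                    data = src.recv(65536)
                    if not data:
                        break
                    dst.sendall(data)
            finally:
                src.close(); dst.close()

        def handle(client):
            # greeting: VER NMETHODS METHODS...; answer "no auth required"
            _ver, n = client.recv(2)
            client.recv(n)
            client.sendall(b"\x05\x00")
            # request: VER CMD RSV ATYP DST.ADDR DST.PORT; parse and discard
            _ver, _cmd, _rsv, atyp = client.recv(4)
            if atyp == 1:
                client.recv(4)                    # IPv4 address
            elif atyp == 3:
                client.recv(client.recv(1)[0])    # domain name
            elif atyp == 4:
                client.recv(16)                   # IPv6 address
            client.recv(2)                        # port
            upstream = socket.create_connection(BACKEND)
            # success reply; most clients ignore the bound-address fields
            client.sendall(b"\x05\x00\x00\x01" + socket.inet_aton("0.0.0.0")
                           + struct.pack(">H", 0))
            threading.Thread(target=relay, args=(upstream, client), daemon=True).start()
            relay(client, upstream)

        srv = socket.socket()
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", 8888))
        srv.listen(32)
        while True:
            conn, _addr = srv.accept()
            threading.Thread(target=handle, args=(conn,), daemon=True).start()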


  • Copy UNC network path (not drive letter) for paths on mapped drives from Windows Explorer

    - by Ernest Mueller
    I frequently want to share network paths to files with other folks on my team via email or chat. We have a lot of mapped drives here, both ones we set up ourselves and ones set up by our IT overlords. What I'd like to be able to do is copy the full real path (not the drive letter) from Windows Explorer to send to folks.

    Example: I have a file in my "Q:" drive, \\cartman\users\emueller, and I want to send a link to the file foo.doc therein to coworkers. When I copy the file path (shift+right click, "copy as path") it gets the file name "Q:\foo.doc". This is unhelpful to others, who would need to see \\cartman\users\emueller\foo.doc to be able to consume the link. Explorer clearly knows it - in the address bar I see "Computer - emueller (\\cartman\users) (Q:) -". Is there a way to say "hey man, copy that path as text with the \\cartman\users\emueller, not the Q:, in it?"

    I know I could just set up mapped network locations instead of the mapped drives for the ones I set up personally and avoid this problem, but most of the mapped drives, like the "users" share, come from our IT policy. I could just make a separate network location and then ignore my Q: drive, but that's inconvenient (and they do it so they can move accounts across servers). Sure, my emailed path might eventually break because I'm losing the drive letter indirection, but that's OK with me.
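    Absent a built-in Explorer option, a hedged PowerShell sketch of the translation: it looks up the UNC root behind the drive letter via WMI and splices it onto the rest of the path (the input path is the poster's example):

        $path  = "Q:\foo.doc"
        $drive = $path.Substring(0, 2)   # "Q:"
        $root  = (Get-CimInstance Win32_MappedLogicalDisk `
                    -Filter "DeviceID='$drive'").ProviderName
        $root + $path.Substring(2)       # -> \\cartman\users\emueller\foo.doc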


  • Is having a [high-end] video card important on a server?

    - by Patrick
    My application is quite an interactive application, with lots of colors and drag-and-drop functionality, but no fancy 3D stuff, animations or video, so I only used plain GDI (no GDI+, no DirectX). In the past my applications ran on desktops or laptops, and I suggested that my customers invest in a decent video card with:

    - a minimum resolution of 1280x1024
    - a minimum color depth of 24 bits
    - X megabytes of memory on the video card

    Now my users are switching more and more to terminal servers, hence my questions:

    - What is the importance of a video card on a terminal server? Is a video card needed on the terminal server at all?
    - If it is, is the resolution of the remote desktop client limited to the resolutions supported by the video card on the server?
    - Can the choice of video card in the server influence the performance of the applications running on the terminal server (but shown on a desktop PC)?
    - If I start to make use of graphical libraries (like Qt) or things like DirectX, will this have an influence on the choice of video card on the terminal server? Are calculations in that case offloaded to the video card, even on the terminal server?

    Thanks.


  • Can't kill process on Windows Server 2008!! - Thread in Wait:Executive state

    - by adrian
    I hope someone can help me with the issue we are having. We have a major problem with a process that we cannot kill, and the only way to get rid of it is to reboot the machine.

    - I have tried killing it from the normal Task Manager, but no joy.
    - I have tried killing it using the taskkill /F command from a command prompt, and no joy. The command reports success but the process remains.
    - I have tried starting Task Manager with SYSTEM rights by calling "psexec -s -i -d taskmgr" and attempting to kill the process, but no joy.
    - I have tried killing it from Process Explorer, but again the process remains.
    - I have tried creating a scheduled task that runs as SYSTEM to kill the task, but that also does not kill it: schtasks /create /ru system /sc once /st 13:16 /tn test1 /tr "taskkill /F /PID 1576" /it

    Nothing I do will kill this process. Even logging off and logging back on will not kill it. Using Process Explorer, I notice that there is one stubborn thread that is in the Wait:Executive state. I have tried to kill that thread using Process Explorer, but again no joy.

    We are using Windows Server 2008 R2 64-bit. The server is brand new and Windows is freshly installed. Now here's the thing: we bought two identical servers from Dell with the same specs and the same OS installed, and I cannot replicate this issue on the other server. Only on this server, under certain circumstances, does this process hang and refuse to be restarted! I have also changed the compatibility mode by setting the process to "Windows 2003", but this has not helped. I have noticed in Process Explorer that DEP is turned on, but I'm not sure whether that has any bearing on the issue. Please, can someone help??


  • limiting connections from tomcat to IIS - proxy? iptables?

    - by Chris Phillips
    Howdy, I have a webapp on Tomcat 6 which is connecting to an M$ PlayReady DRM instance on IIS 6.0. Performance is seen to be best when we benchmark (using ab) the DRM service with 25 concurrent connections, which gives about 250 requests per second, which is ace. Higher concurrency results in TCP/IP timeouts and other lower-level mess. But there is no way to control how the Tomcat app connects to the service - it's not internally managing a pool of connections or anything; they are all isolated HTTP connections to the server.

    Ideally I'd like a situation where we can have 25 HTTP 1.1 connections kept alive permanently from Tomcat, with the licenses requested through this static pool of connections, which I think would give the best performance. But this is not in the code, so I was looking for a way to possibly simulate this at the Linux level. I was thinking that iptables connlimit might be able to gracefully handle these connections, but whilst it could limit them, it'd probably still annoy the app.

    What about a proxy? nginx (or possibly squid) seems potentially appealing to run on the Tomcat server and hit on localhost, as we might want to add additional DRM servers under load balance anyway. Could this take 100 incoming connections from Tomcat, accept them all, and proxy over to the IIS server in a more respectful manner? Any other angles?

    EDIT - looking over mod_proxy for Apache, which we are already using for conventional purposes on an Apache instance in front of this Tomcat instance, might be ideal: I can set a max value on the ProxyPass to only allow 25 connections, and keep them alive permanently. Is that my answer? (See the sketch below.) Many thanks, Chris
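    A hedged sketch of that mod_proxy idea; the hostnames and paths are invented, and note that max= is enforced per httpd process, so with the prefork MPM the effective cap is max multiplied by the number of child processes:

        <IfModule mod_proxy.c>
            ProxyPass        /playready http://iis.internal/playready \
                             max=25 smax=25 ttl=300 keepalive=On
            ProxyPassReverse /playready http://iis.internal/playready
        </IfModule>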


  • Best Practices for adding Exchange Archive to current 3 server setup

    - by ADquestion
    I'm looking to add an archive database (which I know is just a mailbox database) to our current Exchange 2010 environment. I have done this in the past at a previous job, but we had a simpler setup than at this current job. I've been trying to find some best practices to make sure it's set up in an ideal way, but so far I'm not finding the details I would prefer. Hoping someone on here can give me a few pointers.

    Currently we have a 3-server setup: Server1, Server2 and Server3, with three databases of course: DB1, DB2 and DB3, and a DAG set up between them.

    - Server1 has DB1 and DB3 on it; DB1 is not active, DB3 is active.
    - Server2 has DB1 and DB2 on it; both are active.
    - Server3 has DB2 and DB3 on it; both are not active.

    All three servers are virtual (VMware). Each one is set up identically to the others, as follows:

    - C:\ 60GB - OS
    - E:\ 600GB - DB (currently only 90GB used, pointing to the datastore just for Server2)
    - F:\ 200GB - Log (2GB used, pointing to the same datastore as above)
    - G:\ 200GB - Restore (0 used, pointing to the same datastore as above)

    The drives are all set to thin provisioning, and it looks as though I have 600GB of available space. They have not been on Exchange that long and only have about 70GB worth of PSTs to import back in, which will be going to the archive database, plus anything older than 2 years from their current inboxes that will be moved into there.

    I was considering placing the archive DB on the E:\ drive of Server3 (only), like the current DBs, but wasn't sure if that was acceptable. I don't plan on setting the archive DB up with the DAG; I just plan on having it as a single repository for older emails and manually backing it up every now and then. If anyone has any suggestions on this I would appreciate the input. I've done it on a slightly smaller scale before and it worked well, but I like to think it through before pulling the trigger, especially at a new job. :) Thanks again!
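    For reference, a hedged Exchange Management Shell sketch of creating such a non-replicated database on Server3 and pointing a mailbox's personal archive at it; the database name, paths and user are assumptions:

        New-MailboxDatabase -Name "ArchiveDB" -Server SERVER3 `
            -EdbFilePath "E:\ArchiveDB\ArchiveDB.edb" `
            -LogFolderPath "F:\ArchiveDB"
        Mount-Database "ArchiveDB"

        # enable a personal archive for one user in that database
        Enable-Mailbox -Identity "jsmith" -Archive -ArchiveDatabase "ArchiveDB"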


  • TCP Server Memory management: #Connections Vs. #Requests

    - by Andrew
    Given that there is no theoretical limit to the number of concurrent TCP connections a Windows 2008 server can handle, the only thing that happens is that each connection consumes memory in the server. Unfortunately, memory is not unlimited (and I want to use only physical memory). For example, let's say the server has 2GB of memory. Now there are two extreme cases:

    - Case 1: If we allocate a 64KB buffer for each connection (only to receive incoming requests), then 32,768 connections will consume all 2GB of memory. That leaves no memory to queue/process incoming requests from those connections.
    - Case 2: On the other hand, let's say a single connection (or very few) continuously keeps sending request buffers (for example, video streaming from one connection to another) and the server cannot process them in time. Those buffers will pile up in the server and eventually occupy most of the server's memory, leaving no memory for new connections thereafter.

    This is the real dilemma in server design that has been bugging me badly for many days. If I can decide on a max request-buffer size per connection and a max number of queued requests per connection, then, based on available server memory, a limit on the max number of concurrent connections follows automatically. How do I decide on these limits to achieve the best performance and throughput? I am just looking for perfect utilization of server resources. Are there any standard guidelines, or empirical data someone can share with me, please? (A worked version of the arithmetic is sketched below.)
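    The budgeting the poster describes reduces to one line of arithmetic once the per-connection ceilings are fixed; all the numbers below are assumptions chosen to match the examples above:

        MEM         = 2 * 1024**3   # usable memory: 2GB
        RECV_BUF    = 64 * 1024     # receive buffer per connection
        REQ_SIZE    = 64 * 1024     # worst-case size of one queued request
        QUEUE_DEPTH = 4             # max requests queued per connection

        per_conn = RECV_BUF + QUEUE_DEPTH * REQ_SIZE
        print(MEM // per_conn)      # -> 6553 concurrent connections

    Raising QUEUE_DEPTH trades connection count for burst tolerance; setting it to 0 recovers the 32,768 figure from Case 1.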


  • CLOSE_WAIT sockets burst - perhaps because of iptables settings?

    - by Fabrizio Giudici
    I have an Ubuntu 12.04 server virtual box where basically the installed software and configuration are the defaults, plus the installation of a Jetty 6 server which serves a few websites. To keep things simple I didn't install Apache httpd, and used iptables to expose Jetty (which runs on port 8080) on port 80. These are the results of /sbin/iptables -t nat -L:

        Chain PREROUTING (policy ACCEPT)
        target     prot opt source    destination
        REDIRECT   tcp  --  anywhere  localhost                     tcp dpt:http redir ports 8080
        REDIRECT   tcp  --  anywhere  Ubuntu-1104-natty-64-minimal  tcp dpt:http redir ports 8080

        Chain INPUT (policy ACCEPT)
        target     prot opt source    destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source    destination
        REDIRECT   tcp  --  anywhere  localhost                     tcp dpt:http redir ports 8080
        REDIRECT   tcp  --  anywhere  Ubuntu-1104-natty-64-minimal  tcp dpt:http redir ports 8080

        Chain POSTROUTING (policy ACCEPT)
        target     prot opt source    destination

    I must confess I have a shallow comprehension of how iptables works, in particular of the different kinds of chains. This thing works, but sometimes I get an explosion of sockets that stay permanently in the CLOSE_WAIT state. I know what this state means, but since I didn't write the code that manages the servlets (they are handled by Jetty), I can't fix the problem by patching my code. Eventually the number of CLOSE_WAIT sockets builds up and makes the server unresponsive, so I have to restart Jetty.

    I've looked around for similar problems with CLOSE_WAIT, and only found cases related to the programmer's own code, or problems with Tomcat, not Jetty. I was wondering whether they could be related to a partially broken iptables configuration (the alternative is a bug in Jetty 6, but I first want to exclude other possible causes). Thanks.
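    A hedged reconstruction of the commands that would produce a nat table like the one above (the PREROUTING rule catches external clients, the OUTPUT rule catches locally generated traffic):

        iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080
        iptables -t nat -A OUTPUT -p tcp -d 127.0.0.1 --dport 80 -j REDIRECT --to-ports 8080

    Worth noting as a diagnostic rather than a verdict: REDIRECT only rewrites destination ports and does not hold connections open, so a pile-up of CLOSE_WAIT sockets normally points at the application never calling close() on its side, not at the firewall. Counting sockets per state with ss -tan | awk '{print $1}' | sort | uniq -c can help confirm which side is stuck.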


  • Monitoring AWS Systems Behind ElasticBeanStalk

    - by A. Avadis
    So I'm getting a company set up in the Amazon cloud - creating IaaS protocols/solutions/standardized implementations, etc., while also being the sysadmin for individual systems, app environments, and day-to-day uptime. One of the biggest issues I'm having is tracking various system/application logs, as well as logging/monitoring/archiving system metrics like memory usage, CPU usage, etc., in a centralized fashion. E.g. - Nagios + Urchin.

    The BIGGEST impediment to my endeavors is the following: the company application is deployed in the form of a Java *.WAR file, uploaded to an Elastic Beanstalk application environment, load balancing and auto-scaling between 3 (min) and 10 (max) servers, and the EC2s that run the application are fired up and disposed of ad hoc. That is to say, I can't monitor the individual EC2s for very long, because so many are being terminated and then auto-provisioned/auto-scaled on the fly - so I'd constantly be having to "monitor what I'm monitoring" and continuously remove/add EC2 machine addresses to my monitoring lists.

    Is there some way to use monitoring tools like Zabbix or Nagios to monitor the Elastic Beanstalk environment, and have them automatically add new EC2s to, and remove terminated/failed EC2s from, the monitoring list? Furthermore, is there anything I can do with Graylog to achieve similar results with the aggregation/centralization of my application logs from multiple EC2 instances into ONE consolidated set of logs/events? If not Graylog, is there ANYTHING LIKE Graylog that can automatically detect which EC2 members are being added/removed from the environment and collect the logs from them automatically?

    Any and all advice or direction is appreciated. Thanks much, and cheers!!
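    One hedged building block for the auto-discovery half: Elastic Beanstalk tags its instances with the environment name, so a cron job can regenerate the monitoring host list from the EC2 API (the environment name here is invented; requires the AWS CLI and credentials):

        aws ec2 describe-instances \
          --filters "Name=tag:elasticbeanstalk:environment-name,Values=my-env" \
                    "Name=instance-state-name,Values=running" \
          --query "Reservations[].Instances[].PrivateIpAddress" \
          --output text

    Feeding that list into a templated Nagios/Zabbix host file (and reloading the monitor) on each run approximates the add/remove behavior asked about.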


  • Gmail won't forward mail sent to myself.

    - by BHare
    I own a dedicated server with a domain; we'll say foobar.com. I use Google Apps to manage my email SMTP servers. Now, I don't want to check two Gmail inboxes: I have my own personal one, and then I have foobar.com's inbox from Google Apps. Naturally the easiest thing to do is have all of foobar's email forwarded to my personal one, so that I am only checking one inbox. This is all fine and dandy.

    I use msmtp with a wrapper that uses /etc/aliases. I have it set so any mail addressed to root (things from cron, etc.) will go to [email protected]. But when Google Apps (foobar.com) gets an email from the address I have set up with it ([email protected]), it doesn't forward the message on. This is a "feature" of Gmail/Google Apps, I suppose.

    How do I get around it? Workarounds? Etc. I could just set the alias to my personal email, but I wanted a place to have all foobar-related emails archived in one place (Google Apps).
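    A hedged sketch of the moving parts described (the addresses are placeholders, since the real ones are redacted above): msmtp can read an aliases file, so root mail gets rewritten before it ever reaches Google Apps:

        # /etc/msmtprc (fragment)
        aliases /etc/aliases

        # /etc/aliases
        root: admin@foobar.com
        default: admin@foobar.com

    The non-forwarding behavior itself is on Google's side: mail the account effectively sends to itself skips the account-level forwarding rule, so the usual workarounds are sending system mail to a different alias, or forwarding via a filter rather than the account-level setting.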


  • Windows Server 2012 Hyper-V very slow

    - by Matt Taylor
    I have been running several Hyper-V VMs on Windows Server 2008 R2 for the past couple of years, enjoying perfectly adequate performance for my testing/development/R&D environments. I'm a software developer, so my hardware knowledge is basic; however, I built the rig using:

    • Gigabyte GA-X58A-UD3R Intel X58 (Socket 1366) DDR3 motherboard
    • Intel Core i7 960 3.20GHz (Bloomfield) (Socket LGA1366)
    • 24GB triple-channel RAM

    The host OS runs on an OCZ SSD, and all the VMs run on a 2TB Marvell SATA3 RAID 0 array consisting of two Western Digital Caviar Black 7,200rpm drives. I have tested the speed of the 2TB drive and appear to be getting less than 3Mbs, but it can adequately run a 4-VM farm including a DC, a (SQL) database server and IIS application servers.

    I recently upgraded the SSD the host runs on to a 256GB OCZ Vertex 4, took the opportunity to upgrade to Windows Server 2012, and installed the Hyper-V role. I tried importing one of my existing Windows Server 2008 R2 VMs (converted to .vhdx), and I also tried creating a brand new Windows Server 2008 R2 VM, but both run extremely slowly, and I can see nothing obvious using the host and guest Task Manager/Resource Monitor tools. In both cases the VM has 8GB RAM (fixed), 4 CPUs, a fixed-size (not expanding) HD, and uses an external virtual network running on a separate NIC from the host's. I have upgraded the BIOS to the latest available version and checked the virtualization settings. I have run out of "obvious" (to a developer) things to check/configure, and my next option will be to reinstall the host OS, but before I do I would very much appreciate any advice from the experts out there. Thanks


  • Write permissions on uploaded files - Linux, Apache, PHP

    - by letseatfood
    I am working on a PHP script that transfers files using FTP functions. It has always worked on my production server (which is a hosting service). The development server I have just set up (I am a novice with servers) is Debian Lenny with Apache2, PHP5 and MySQL5. The file transfer works correctly, but once a file has been written to the server, it has permissions of 600. This makes it impossible for me to view the file (a JPEG) in the web browser, as permission is denied.

    I have scoured the internet and even broken my server installation and reinstalled it trying to figure this out (which has been fun, nonetheless!). I know it is unwise to set 777 permissions on publicly accessible files, but even that will not solve the problem. The only thing that works is if I chmod 777 thefile.jpg after it has been transferred, which is not a workable solution. I tried changing the owner of my site files to www-data per this post, but that also does not work. My user is mike, and it does not work whether the owner of the files is mike or root. Would somebody point me in the right direction? Thanks! And, of course, let me know if I can clarify anything.
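    A hedged illustration of the two usual levers here, using PHP's real FTP functions (the filenames and credentials are made up, and it's an assumption that the FTP daemon is vsftpd, whose default local_umask of 077 produces exactly these 600 files):

        <?php
        // after the upload, relax the mode so Apache can read the file
        $conn = ftp_connect('localhost');
        ftp_login($conn, 'mike', 'secret');
        ftp_put($conn, 'thefile.jpg', '/tmp/thefile.jpg', FTP_BINARY);
        ftp_chmod($conn, 0644, 'thefile.jpg');   // rw-r--r--
        ftp_close($conn);

    Alternatively, setting local_umask=022 in the FTP daemon's configuration makes uploads land as 644 in the first place.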


  • Single m0n0wall - Two LAN Subnets - How To Set It Up

    - by SnAzBaZ
    I have two LAN subnets that I need to link together: 192.168.4.0/24 and 192.168.5.0/24.

    There is a m0n0wall running on 192.168.4.1. Its LAN connection goes out to our network switch, and its WAN port goes out to our ADSL modem; the WAN is connected via PPPoE. The 192.168.4.0 subnet contains all of our office workstations. The 192.168.5.0 subnet contains development servers and test machines that need to obtain internet access and be "managed" by computers on the 192.168.4.0 subnet, but need to be on their own subnet as well. I have a DrayTek 2820N configured as 192.168.5.1, with its WAN2 port configured as 192.168.4.25 and a default gateway of 192.168.4.1. Machines on the 5.0 subnet can connect to the internet via the m0n0wall just fine.

    I configured a static route on the m0n0wall LAN interface: network 192.168.5.0/24, gateway 192.168.4.25. Machines on the 5.0 subnet can ping machines on the 4.0 subnet, but the reverse does not work. I configured a new firewall rule on the m0n0wall that allows any traffic on the LAN interface with a source IP of 192.168.4.25. The DrayTek firewall is currently configured to pass all traffic regardless. When I try to ping a machine on the 5.0 subnet from 4.0, I see this in my m0n0wall log:

        BLOCK 14:45:27.888157 LAN 192.168.4.25 192.168.4.37, type echoreply/0 ICMP

    So the reply is being sent from the 5.0 subnet but is not being allowed to reach my workstation, because the firewall is blocking it. Why is the firewall blocking it? I hope the explanation of my network is clear; please ask if you require further clarification. Thank you.


  • How should I configure nginx caching headers for a "baked" static file blog? (Octopress)

    - by Doug Stephen
    I recently deployed an Octopress blog (Octopress is a blogging platform built around Jekyll). It's a static-site blog generator, with no dynamic content or databases to muck about with, and it's being served up by nginx.

    My question is: what is the appropriate expires directive or Cache-Control header to set so that visitors get the most up-to-date version of the site when they visit, without having to refresh manually? Since the site is just .html files, it seems to get cached pretty aggressively. I've tried a million different combinations of expires modified + xxxx and even straight-up expires off, but I can't seem to wrap my head around it. I'm very new to dealing with caching like this - specifically, on static files that change frequently - and obviously, if the site hasn't changed, then I'd like it to be served out of the cache.

    Update (still not solved, though): I found open_file_cache and tweaked that. Still no dice. It seems like what I might want to do is use nginx as a proxy cache and use Apache with ETags? Is there really no convenient way to make nginx play nicer with conditional requests from the client?

    TL;DR: I'm running a static-file blog and I'd like to set up nginx to only serve from the cache if the blog hasn't been updated recently, but I'm too stupid to figure it out myself because I'm relatively new to web servers.
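    A hedged sketch of the usual split for a baked site: make the HTML revalidate on every visit (nginx answers conditional requests for static files with cheap 304s based on file mtime, no Apache needed), and let long-lived assets cache hard; the extensions and lifetimes are assumptions:

        location / {
            expires -1;    # sends Cache-Control: no-cache
        }

        location ~* \.(css|js|png|jpg|gif)$ {
            expires 30d;
            add_header Cache-Control "public";
        }

    "no-cache" here means "cache, but revalidate before use", so unchanged pages still come back as 304 Not Modified rather than full transfers.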


  • Self-connecting printers

    - by Martin Cerny
    Hello, I work as an administrator in a small company using XP Professional on all computers and two servers with Windows 2003 Server. Recently a very unusual problem occurred: one of the computers keeps connecting to all the printers on the network. It doesn't matter whether it's an administrator or a domain user; as soon as somebody logs in, the computer connects all the printers. The printers are either installed on local computers or on the server and shared. There is no log-on script connecting the printers; I install them manually, and none of the other computers shows such behaviour.

    We have a printer which is installed on two computers, and both of them share it (I'm moving it to the server from a small PC which shared it up to now, but some computers still use the old connection), meaning this specific computer connects to one of the printers twice and can't use either of the connections.

    How do I prevent this self-connecting to all printers (none of the other computers has this problem)? If I delete them from the "Printers" folder, everything works fine until I reconnect, and the folder is once again full of all the printers we have. I solved the smaller problem: the computer is now capable of printing on all of the printers (it seems there were some registry issues; after cleaning the registry and reinstalling the printer, it seems to work just fine). But the second problem remains: the computer connects to all the printers on the network, and when I remove one or several, they are reconnected right after the next log-in by any user.


  • Issues with VSFTPD / FTP on Linux Ubuntu server - Steps for Troubleshooting?

    - by jnolte
    I am dealing with an issue I am unclear on how to resolve and have been pulling my hair out for some time. I have been trying to configure an FTP user using the following steps (we use this same documentation on all servers):

    - Install the FTP server: apt-get install vsftpd
    - Set local_enable and write_enable to YES, and the anonymous user to NO, in /etc/vsftpd.conf
    - Restart with "service vsftpd restart" to allow the changes to take effect
    - Add a WordPress user for FTP access in WP Admin
    - Create a fake shell for the user: add "/usr/sbin/nologin" to the bottom of the /etc/shells file
    - Add an FTP user account: useradd username -d /var/www/ -s /usr/sbin/nologin, then passwd username
    - Add these lines to the bottom of /etc/vsftpd.conf: userlist_file=/etc/vsftpd.userlist, userlist_enable=YES, userlist_deny=NO
    - Add username to the list at the top of /etc/vsftpd.userlist
    - Restart vsftpd: "service vsftpd restart"
    - Make sure the firewall is open for FTP: "ufw allow ftp"
    - Allow username to modify the /var/www directory: "chown -R username /var/www"

    I have also gone through everything listed on this post and had no luck. I am getting connection refused. Sorry for the poor text formatting above; I think you get the idea. This is something we do over and over, and for some reason it is not cooperating here. The setup is Ubuntu 12.04 LTS and vsftpd 2.3.5. Thank you in advance.
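    For comparison, a hedged consolidation of what /etc/vsftpd.conf should contain after those steps, plus a quick check that the daemon is actually listening (a "connection refused" usually means nothing is bound to port 21 at all):

        # /etc/vsftpd.conf (fragment)
        listen=YES
        anonymous_enable=NO
        local_enable=YES
        write_enable=YES
        userlist_enable=YES
        userlist_file=/etc/vsftpd.userlist
        userlist_deny=NO

        # is vsftpd running and bound to port 21?
        service vsftpd status
        netstat -tlnp | grep :21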


  • Where to put the SPF TXT record?

    - by YellowSquirrel
    I've set up Google Apps for my domain: I've registered the domain with Google by adding the CNAME Google asked for, and I've apparently successfully set up the MX records for Google's mail servers. So far I don't have a dedicated server; I just have a domain at a registrar. Now I want to activate SPF, and I'm confused. On the following short page, http://www.google.com/support/a/bin/answer.py?answer=178723, it is written that I must add a TXT record containing:

        v=spf1 include:_spf.google.com ~all

    Where should I enter this? Should this go in the zone (?) file, like I did for the CNAME and the MX records? So far I have something like this:

        @                      10800 IN A      217.42.42.42
        @                      10800 IN MX 5   ASPMX3.GOOGLEMAIL.COM.
        @                      10800 IN MX 5   ASPMX2.GOOGLEMAIL.COM.
        @                      10800 IN MX 3   ALT2.ASPMX.L.GOOGLE.COM.
        @                      10800 IN MX 3   ALT1.ASPMX.L.GOOGLE.COM.
        @                      10800 IN MX 1   ASPMX.L.GOOGLE.COM.
        google8a70835987f31e34 10800 IN CNAME  google.com.

    Does adding the SPF TXT record mean I should literally have something like this?

        @                      10800 IN A      217.42.42.42
        @                      10800 IN MX 5   ASPMX3.GOOGLEMAIL.COM.
        @                      10800 IN MX 5   ASPMX2.GOOGLEMAIL.COM.
        @                       3600 IN TXT    "v=spf1 include:_spf.google.com ~all"
        @                      10800 IN MX 3   ALT2.ASPMX.L.GOOGLE.COM.
        @                      10800 IN MX 3   ALT1.ASPMX.L.GOOGLE.COM.
        @                      10800 IN MX 1   ASPMX.L.GOOGLE.COM.
        google8a70835987f31e34 10800 IN CNAME  google.com.

    I made that one up and put the record right in the middle to show how confused I am. What I'd like to know is the exact syntax and where/how I should put this TXT record.
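    For what it's worth, record order inside a zone file does not matter: the TXT record is just one more record on the zone apex (@), so appending this single line anywhere among the @ records is enough (the TTL is a matter of taste):

        @ 3600 IN TXT "v=spf1 include:_spf.google.com ~all"

    Once the registrar has published the zone, the record can be verified from any machine with dig (the domain here is a placeholder):

        dig +short TXT example.com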


  • Roaming Profiles & Redirected Folders - storage consumption? offline files and caching?

    - by Ben Swinburne
    I understand the concepts of both roaming profiles and folder redirection, and have used both separately before. I am about to set up a network from scratch and would ideally like to use both, primarily for the following reasons:

    - Roaming profiles allow users to log on to any machine and have their profile follow them.
    - Redirected folders allow users to have their My Documents, Desktop, etc. backed up without the need to log off at the end of the day: the servers can run their backups overnight and there are no missing files due to the user not logging off.
    - Redirected folders largely alleviate the slow log-in times caused by large profiles.

    My question is: if some of the folders are redirected and therefore not part of the roaming profile, what happens on machines which truly roam (i.e. laptops)? If there are offline files or a cache, does this mean that the problem whereby a user has to log off comes back? And by having both enabled, is there any duplication - i.e., if I have a users$ share and a profiles$ share, would I have Desktop twice, for example?

