Search Results

Search found 28280 results on 1132 pages for 'having clause'.

Page 162/1132 | < Previous Page | 158 159 160 161 162 163 164 165 166 167 168 169  | Next Page >

  • Separate external and intranet portals using the same functions with .htaccess

    - by jezzipin
    We are currently struggling to set up rules in a .htaccess file for a website built upon our company product. The product is built using PL/SQL, and procedures can be accessed using URLs. We use this functionality to present different options to our users. These options can be injected into HTML pages using replacement tags, so the tag [user_menu] is always replaced with:

        /wd_portal_cand.menu?p_web_site_id={variable1}&p_candidate_id={variable2} for external sites and
        /intranet/wd_portal_cand.menu?p_web_site_id={variable1}&p_candidate_id={variable2} for internal sites.

    The issue we are having is twofold. We need to write our .htaccess rules so that the user can access the functionality whether they are internal or external, so the links should work as follows:

        http://www.example.com/wd_portal_cand.menu?p_web_site_id={variable1}&p_candidate_id={variable2} or
        http://www.example.com/internal/wd_portal_cand.menu?p_web_site_id={variable1}&p_candidate_id={variable2}

    This is the other problem: as you can see from the internal link above, the procedure needs to be prefixed with internal instead of intranet. We cannot change this in our standard tags, as that would affect other sites, so we need to achieve this with .htaccess as well.

    Could anyone assist with this issue? I apologise if this is brief or confusing, but it's something I've never done before and have been given the task of doing. I also apologise for the lack of code above; I am a front-end developer and have been left to make these changes with no prior experience of .htaccess, so please bear with me.
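    A minimal mod_rewrite sketch of one possible approach, written as a shell heredoc so the file contents are explicit. It is untested and assumes the .htaccess sits at the document root; external URLs need no rewriting at all:

        # map the /internal/ prefix that the tags cannot emit onto the real /intranet/
        # procedures, leaving the query string intact
        cat >> .htaccess <<'EOF'
        RewriteEngine On
        RewriteRule ^internal/(.*)$ /intranet/$1 [PT,QSA,L]
        EOF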

    Read the article

  • Strange File-Server I/O Spikes - What Is Causing This?

    - by CruftRemover
    I am currently having a problem with a small Linux server that is providing file-sharing services to four Windows 7 32-bit clients. The server is an AMD PhenomX3 with two Western Digital 10EADS (1TB) drives, attached to a Gigabyte GA-MA770T-UD3 mainboard and running Ubuntu Server 10.04.1 LTS.

    The client machines are taking an extremely long time to access/transfer data on the file server. Applications often become non-responsive while trying to open files located remotely, or one program attempting to open a file but having to wait will prevent other software from accessing network resources at all. Other examples include one image taking 20 seconds or more to open, and in one instance a user waited 110 seconds for Microsoft Word 2007 to save a document.

    I had initially thought the problem was network-related, but this appears not to be the case. All cables and switches have been tested (one cable was replaced) for verification. This was additionally confirmed when closing down all client machines and rebooting the server resulted in the hard-drive light staying on solid during the startup process. For the first 15 minutes during boot, logon and after logging on (with no client machines attached), the system displayed a load average of 4 or higher. Symptoms included waiting several minutes for the logon prompt to appear, and then several minutes for the password prompt to appear after typing in a user name. After logon, it also took upwards of 45 seconds for the 'smartctl' man page to appear after the command 'man smartctl' was issued. After 15 minutes of this behaviour, the load average dropped to around 0.02 and the machine behaved normally.

    I have also considered that the problem is hard-drive-related, however diagnostic programs reveal no drive problems. Western Digital DLG, Spinrite and SMARTUDM show no abnormal characteristics - the drives are in perfect health as far as the hardware is concerned. I have thus far been completely unable to track down the cause of this problem, so any help is greatly appreciated.

    Requested Information:
    Output of 'free': hxxp://pastebin.com/mfsJS8HS (stupid spam filter)
    The command 'hdparm -d /dev/sda1' reports: HDIO_GET_DMA failed: Inappropriate ioctl for device (the BIOS is set to AHCI - I probably should have mentioned that).
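    A few non-destructive checks that usually narrow this kind of stall down; the device names are assumptions and iostat comes from the sysstat package:

        iostat -dx 5 3                      # per-disk utilisation and await times while the stall is happening
        smartctl -a /dev/sda                # full SMART attribute dump, not just the overall PASS flag
        hdparm -tT /dev/sda                 # buffered vs cached read throughput on the whole device
        dmesg | grep -iE 'ata|reset|error'  # look for link resets or NCQ errors around the slow periods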

    Read the article

  • What is my BaseDN supposed to be with the following configuration of OpenLDAP?

    - by fuzzy lollipop
    I have the following in my OpenLDAP configuration, using the latest version of OpenLDAP on CentOS 5.3, installed using yum.

    From my /etc/openldap/slapd.conf:

        database bdb
        suffix "dc=company,dc=com"
        rootdn "cn=Manager,dc=company,dc=com"

    From my /etc/openldap/ldap.conf:

        BASE dc=company,dc=com

    I have successfully added an entry with ldapadd and retrieved it with ldapsearch from a local bash shell on the box. Now I am trying to get a graphical editor to connect to this server remotely so I can enter people from my laptop, but I am having no luck.

    I tried JXplorer, and it connects with an anonymous bind without me having to specify a BaseDN, but I can't edit anything that way. If I try to give it a user name and password - using Manager and my rootpw, which I have in clear text just for testing - every GUI client on my remote laptop complains about my BaseDN not being in the correct format when I enter dc=company,dc=com, and I also tried cn=Manager,dc=company,dc=com:

        Error opening connection: [LDAP: error code 34 - invalid DN]

    I have tried multiple clients and all of them connect as anonymous; none let me connect authenticated so that I can actually create or edit anything. I am using Manager as my username and the password from rootpw - is that correct?
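    For comparison, a command-line bind that separates the two DNs the GUI clients ask for; the host name and password below are placeholders:

        # Base DN = where to search, Bind DN = who you are
        ldapsearch -x -H ldap://ldap.company.com \
          -D "cn=Manager,dc=company,dc=com" -w secret \
          -b "dc=company,dc=com" "(objectClass=*)"

    If this works remotely but a GUI still reports error 34, the client is most likely being handed the bare word Manager (not a full DN) in its Bind DN field.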

    Read the article

  • rsync per-site configuration file?

    - by Scott
    I know how to configure a per-site entry for ssh, but is there any kind of client configuration for rsync that allows per-site configuration options and aliases, or similar shortcuts to what .ssh/config gives you?

    I'm curious because I have a minimal ssh server installed on my Android phone and I also have a minimal rsync tool on it. I'm getting tired of having to root login onto the phone and symlink both tools to the standard places the Android OS looks for executables. Since the ssh server is bare bones and ships a typical multi-link binary for the basic unix commands (one that does not include rsync), I end up having to include --rsync-path=/path/to/rsync/android/files/rsync every time I want to rsync any files on my phone, and this path is always the same. I've gotten around it in the meantime with a glob approach in a shell script wrapper, but this sometimes limits the customization I can do with the rsync call.

    I'm just wondering if there is anything similar to the .ssh/config file where I can create an alias for my phone (e.g. 'android') so that specifying rsync android:/mnt/sdcard will automatically assume --rsync-path=/blah/blah/blah --no-g --no-p --no-t etc.
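    rsync itself has no per-host config file, but a small wrapper can fake one. This is only a sketch; the host alias is assumed to exist in ~/.ssh/config and the on-phone rsync path is a placeholder:

        #!/bin/sh
        # rs -- add per-host rsync options for aliases defined in ~/.ssh/config
        extra=""
        for arg in "$@"; do
            case "$arg" in
                android:*) extra="--rsync-path=/data/local/bin/rsync --no-g --no-p --no-t" ;;
            esac
        done
        exec rsync $extra "$@"   # $extra deliberately unquoted so the options word-split

    With Host android pointing at the phone in ~/.ssh/config, running rs -av photos/ android:/mnt/sdcard/photos/ behaves like the long command line.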

    Read the article

  • Excel controls not visible for certain users

    - by Nossidge
    One of the users of an Excel program I've written is having a weird problem: none of the control objects (Command Button, ComboBox, etc.) are visible to him when he opens the file on his laptop. He is using Excel 2003, the same version I used to create the program, and enables macros using the pop-up when the file loads. I have Googled this and found these people who seem to be having the exact same problem, with various versions of Excel. Unfortunately, none of their questions were answered.

    I can't really explain it any better than this user:

    "If I enter design mode and pull a control from the control toolbar onto a sheet, all I see are the drag handles. When not in design mode I have to feel around with the mouse and can click the button, which executes the button click code correctly and opens another sheet where again I have to feel around for the buttons to return me to the original sheet. The button I managed to click is now visible, but as soon as I click anywhere on the sheet it disappears. I have verified that the Visible property of the buttons is set and that Show All Objects on the Options View tab is selected. If I pull buttons from the Forms toolbar onto a sheet they are visible. If I try to find objects using F5 when not in design mode, Excel reports no objects on the sheet."

    So, Super Users, can you help?

    UPDATE: Thanks for your replies, but much like the person in the ozgrid link, the problem has gone away. Not sure why it went, but I can confirm that the user rebooted again and also started up other Excel files that didn't contain controls in the interim. Perhaps that fixed it, or maybe it'll be back again. I'll keep updating with progress, and close if the problem doesn't reoccur for the next few days. Thanks again.

    Read the article

  • Router intermittently failing

    - by nomen
    My old Asus router died a few weeks ago, so I thought I'd set up my Debian box to deal with routing my home network. I have a few complications, but I adapted my configuration from a previously working configuration, and I don't see why I am having intermittent problems. But I am having them!

    Every so often, my SSH connections to the router (and to the Xen virtual machines hosted by the router) just drop. I am unable to use the router's DNS server. I can't ping the router. Etc. All of these things work most of the time, but break down intermittently, for a few minutes at a time. (I can provide more details, but I'm not sure what will be helpful.)

    /etc/network/interfaces:

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # Gigabit ethernet, internal network
        auto eth0
        allow-hotplug eth0
        iface eth0 inet manual

        # USB ethernet, internet
        auto eth1
        allow-hotplug eth1
        iface eth1 inet dhcp

        # Xen Bridge
        auto xlan0
        iface xlan0 inet static
            bridge_ports eth0
            address 10.47.94.1
            netmask 255.255.255.0

    As I understand it, this is sufficient to create the network interfaces, and even do some switching between Xen hosts and my eth0 interface. I installed and configured Shorewall to manage routing between the bridge and my internet-facing interface:

    /etc/shorewall/zones:

        fw    firewall
        net   ipv4
        lan   ipv4

    /etc/shorewall/interfaces:

        net   eth1    detect   dhcp,tcpflags,nosmurfs,routefilter,logmartians
        lan   xlan0   detect   dhcp,tcpflags,nosmurfs,routefilter,logmartians,routeback,bridge

    /etc/shorewall/policy:

        net   all   DROP     info
        fw    net   ACCEPT   info
        all   all   REJECT   info

    /etc/shorewall/rules:

        DNS(ACCEPT)    fw    net
        DNS(ACCEPT)    lan   fw
        Ping(ACCEPT)   lan   fw

    ... and so on; these all work, when the router is accepting traffic at all.

    /etc/shorewall/masq:

        eth1   10.47.94.0/24

    Also, the router is currently "working", and I checked on a problematic client:

        arp infrastructure
        infrastructure.mydomain (10.47.94.1) at 0:23:54:bb:7d:ce on en0 ifscope [ethernet]

    I tried it when the router was down, and I (eventually) got the same response. It took about 30 seconds to return, though.
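    A handful of read-only checks that tend to separate a bridge/ARP problem from the USB NIC flaking out; interface names follow the config above:

        brctl show xlan0                      # eth0 should be the only port and the bridge should be up
        ip -s link show xlan0                 # watch for growing error/drop counters during an outage
        arping -D -c 3 -I xlan0 10.47.94.1    # duplicate-address detection: any reply means another box also claims 10.47.94.1
        dmesg | grep -iE 'eth1|usb'           # USB ethernet adapters resetting is a classic cause of intermittent drops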

    Read the article

  • Server format & Reinstall while keeping Server & domain ID

    - by Chris
    Hi everyone,

    I want to reinstall my 2008 R2 server from scratch, due to multiple Active Directory issues. I have only one server running AD and a spare machine to use if necessary. Is there a way to save just the user accounts and the domain SID, so that I can start with a clean server that uses the same name as before? I can reassign file security, but I do not want to have to rejoin all the users to a new domain. Also, all users are mapped to folders on the server. What I hope to do is a clean install of the server without having to mess with the users' machines. Can someone please tell me the procedure to accomplish this? Any help appreciated!

    Thanks guys, but I could be here all day telling you every error I am getting. Can we please keep this to the question of how to do a reinstall and keep the same SID? I just want to start over without having to rejoin all the clients to a new domain. Is there a tool that can back up the server SID and the AD domain name so that I could restore them, without restoring any other data? I might not be using the correct terminology here, but hopefully you understand what I am asking. Thanks

    Read the article

  • PowerShell vs. GPO for installation, configuration and maintenance

    - by user52874
    My question is about using PowerShell scripts to install, configure, update and maintain Windows 7 Pro/Ent workstations in a 2008 R2 domain, versus using GPO/ADMX/MSI.

    Here's the situation: because of a comedy of cumulative corporate bumpfuggery we suddenly found ourselves having to design, configure and deploy a full Windows Server 2008 R2 and Windows 7 Pro/Enterprise environment on a very short notice and delivery schedule. Of course, I'm not a Windows expert by any means, and we're so understaffed that our buzzword bingo includes 'automate', 'one-button' and 'it needs to Just Work'. (FWIW, I started with DEC, then moved on to Solaris and Cisco, then Linux of various flavors with a smattering of BSD nowadays. I use Windows for email and to fill out forms.)

    So we decided to bring in a contractor to do this for us, and they met the deadline. The system is up and mostly usable, and this is good; we would not have been able to do it ourselves. But it's the 'mostly' part that is proving to be the pain now, and I'm having to learn the Microsoft stuff anyway until/if we can get a new contract with these guys for ongoing operations.

    Here's my question. The contractor used PowerShell almost exclusively for deployment, configuration and updating. My intensive reading over the last week leads me to think that the generally accepted practice for deploying, configuring and updating Microsoft environments uses elements of GPOs and ADMX templates, along with maybe some third-party stuff like PolicyPak. Are there solid reasons that I've not found yet why PowerShell scripts would be preferred over the GPO methods? I'm going to discuss this with the contractor lead when he gets back from his vacation, and he'll be straight with me (nor do I think they set us up). But I can also see this might be a religious issue, so I would still like some background on it.

    Thoughts? Or weblinks? Thanks!

    Read the article

  • What causes a switch port to receive data not destined for it?

    - by user1693454
    We are having an intermittent fault which is affecting one of our control systems on one of our HP ProCurve switches. For some reason, this PLC (10 Mbit port - 192.168.6.56), which is attached directly to the HP switch, intermittently starts receiving data which is not destined for it. The data is being sent from a Thecus NAS with the latest firmware (192.168.6.218) to a physical IBM server running Win2003R2 and SAP (192.168.6.225). The destination is not always this server - it has been other physical servers in the past too - but the traffic always comes from the Thecus NAS.

    I am using a monitor port to wireshark what is going in/out of the PLC. Normally there would be about 1 MB in/out per 2 or 3 minutes - only a server asking the state of the coils. When the problem occurs, there is a flood of data being put onto the PLC line - in this captured instance, about 67 MB in less than a minute. Because of this, there is no way that the PLC can be queried, as the port is effectively DoSed, in turn killing part of our factory.

    I know that having production on the same VLAN as IT is not a good idea - I agree - however it cannot be changed at the moment (that will have to wait 3 months), and the problem has only started happening in the last 3 months.

    Here is a screen cap of one of the packets being sent from the Thecus NAS, captured from the PLC port on the HP switch - and there are over 700 of these in this one 1024 KB file.

    If anyone has any idea what could be going on, some help would be greatly appreciated. If you need to know anything more, let me know! Cheers!
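    If the switch is flooding unicast frames (a full or freshly-aged MAC table, or a topology change, will do this), a capture on the monitor port filtered down to frames that should never reach the PLC makes it obvious; the PLC's MAC address below is a placeholder, and the equivalent Wireshark display filter works just as well:

        tcpdump -e -n -i eth0 'not ether dst 00:11:22:33:44:55 and not ether broadcast and not ether multicast'

    Anything that shows up here is traffic the switch could not switch to the correct port, which points at unicast flooding rather than at the NAS deliberately sending to the PLC.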

    Read the article

  • MySQL: table organisation for very large sets with high update frequency

    - by Remiz
    I'm facing a dilemma in the choice of my MySQL schema for an application. Before I start, here is an extremely simplified picture of my database - schema here: http://i43.tinypic.com/2wp5lxz.png

    In one sentence: for each customer, the application harvests text data and attaches tags to each piece of data collected. As an approximation of the usage of each table, here is what I expect:

        customer: ~5000, shouldn't grow fast
        data: 5 million per customer, could double or triple for big customers
        tag: ~1000, fairly fixed size
        data_tag: easily hundreds of millions per customer; each piece of data can be tagged a lot

    The harvesting process is permanent, meaning that around every 15 minutes new data comes in and is tagged, which requires very constant index refreshing. A lot of my queries are a SELECT COUNT of data between specific dates, tagged with a specific tag, for a specific customer (very rarely will it involve several customers).

    Here is the situation: you can imagine that with this kind of data volume I'm facing a challenge in terms of data organization and indexing. Again, this is a very minimalistic and simplified version of my structure. My question is, is it better:

        to stick with this model and manage crazy index optimization (which potentially means billions of rows in the data_tag table), or
        to change the schema and use one data table and one data_tag table per customer (which means having 5000 tables in my database)?

    I'm running all of this on a MySQL 5.0 dedicated server (quad-core, 8 GB of RAM), replicated. I only use InnoDB; I also have another server that runs Sphinx. Knowing all of this, I can't wait to hear your opinion. Thanks.
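    One common middle ground is to denormalise the customer id and timestamp onto data_tag so the frequent COUNT becomes a single index range scan instead of a join. This is only a sketch; the connection details and every table/column name below are guesses at the real schema:

        mysql -u app -p app_db <<'SQL'
        ALTER TABLE data_tag
          ADD COLUMN customer_id INT UNSIGNED NOT NULL,
          ADD COLUMN created_at  DATETIME     NOT NULL,
          ADD INDEX  idx_cust_tag_date (customer_id, tag_id, created_at);

        -- the hot query then never touches the data table:
        SELECT COUNT(*) FROM data_tag
        WHERE customer_id = 42 AND tag_id = 7
          AND created_at BETWEEN '2010-05-01' AND '2010-05-31';
        SQL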

    Read the article

  • Oracle: 1 Large Server vs. 2 Smaller Servers?

    - by nvahalik
    We are in the planning stages of setting up our production Oracle 10gR2 environment. Our budget gives us the ability to buy 2 processor licenses of Oracle DB Standard Edition. We have minimal experience with Oracle, so I'll defer to anyone who has used it.

    We are trying to decide if we should set up a single dual quad-core box or 2 individual quad-core boxes in a RAC configuration. Our DB right now is about 60 GB, and at our peak we'll have up to 150 concurrent users. Most of the big stuff is done via batch processing at night.

    My gut tells me that having 2 boxes in a RAC configuration can't be a bad thing, because it provides a true hardware failover solution. The DB would be stored in a shared LUN on a SAN via iSCSI. Plus, if we ever need to add capacity, we already have boxes in place that can be upgraded with extra procs (I assume with zero downtime, since it's set up in a RAC config) if we add extra licenses, or RAM.

    Does RAC have any performance penalties? Will it add extra latency? Is there any true advantage to having dual processor boxes running these systems? If we build out the Oracle boxes with special hardware - hardware iSCSI cards, TOE NICs - will these boxes be solid? We are deploying on 64-bit Windows.

    So what would you do? One box or two?

    Read the article

  • Can't access certain web sites - reset router, any ideas?

    - by IniTech
    EDIT: This problem was resolved by my ISP - it had to do with damaged fiber in one of their locations. Thanks to everyone who helped.

    Not sure if this is the right site (I'm a Stack Overflow user) so I thought I'd give it a shot. I'm having trouble connecting to certain sites on any of the 3 machines that are on my LAN. The following sites are returning "Problem Loading Page - The connection has timed out":

        Sourceforge.net
        CNet.com
        Microsoft.com
        OpenDNS.com
        even my company's website

    I was worried about possible malware/viruses, but I don't think that is the case, given the inability to access my company's site and the fact that all 3 machines are having the same issues. I've tried with IE8, FF, and Chrome. I have reset my router (WRT54G) and my machine(s) multiple times.

    EDIT: It is also worth noting that this page spins constantly and no avatars show up (I'm assuming it is trying to access gravatar.com with no success).

    EDIT: I have the same issues directly connected to the modem, so any router config is probably not the issue.

    I'm a programmer, not a network guy - any ideas?
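    For this class of "some sites time out, others are fine" problem, comparing routes and checking the path MTU from any machine on the LAN narrows things down quickly. A sketch only, using the usual Unix tools (on Windows the equivalents are tracert and ping -f -l 1472):

        traceroute sourceforge.net                 # a failing site
        traceroute google.com                      # a working site - if both die at the same ISP hop, it's upstream
        ping -c 4 -M do -s 1472 sourceforge.net    # 1472 + 28 header bytes = 1500; failures here suggest an MTU/fragmentation problem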

    Read the article

  • Alias for Drupal "Sites" folder with Apache on Windows Server 2008

    - by sgtbeano
    I'm having to move a number of sites from a LAMP stack to a WAMP one, provided by Zend, and I've hit a problem.

    Our architecture is a number of load-balanced web servers which have their own local webapp drives that are kept in sync, with one server acting as the master copy. There is then a separate DFS share provided to all web servers from our Pillar SAN. Usually a Drupal install under our LAMP cluster would have the main Drupal web app in a local HTDOCS mount for each server, and the SITES directory within Drupal would then be symlinked out to the DFS or NFS share so that there is a common FILES and TMP directory.

    The problem I'm having is that there seems to be no equivalent of symlinks on Windows Server 2008; shortcuts have a .lnk at the end, making Apache see them as a distinct file. So I've tried using an Alias call in the vhost file like this:

        <Location /drupal-626/sites>
            Order deny, allow
            Allow from all
        </Location>
        Alias /drupal-626/sites "Z:\Path to alternate sites directory"

    The root for this test is http://main-domain-url/drupal-626/

    Unfortunately this isn't working, so I'm wondering if any of you have a solution that would work? Many thanks for taking the time to read this.

    Read the article

  • CentOS server. What does it mean when the total used RAM does not equal the sum of RES?

    - by Michael Green
    I'm having a problem with a virtual hosted server running CentOS. In the past month a process (Java-based) that had been running fine started having problems getting memory when the JVM was started.

    One strange thing I've noticed is that when I start the process, top says the PID is using 470 MB of RAM while the 'used' memory immediately drops by over 1 GB. If I run 'top', the total RES used across all processes falls short of the 'used' listed at the top by almost 700 MB.

    The support person says this means I have a memory leak in my process. I don't know what to believe, because I would expect a memory leak to simply waste the memory the process is allocated, not to consume additional memory that doesn't show up using 'top'. I'm a developer and not a server guy, so I'm appealing to the experts. To me, if the total RES memory doesn't add up to the total 'used', it indicates that something is wrong with my virtual server set-up. Would you also suspect a memory-leaking Java process in this case?

    free before:

                         total       used       free     shared    buffers     cached
        Mem:           2097152     149264    1947888          0          0          0
        -/+ buffers/cache:          149264    1947888
        Swap:                0          0          0

    free after:

                         total       used       free     shared    buffers     cached
        Mem:           2097152    1094116    1003036          0          0          0
        -/+ buffers/cache:         1094116    1003036
        Swap:                0          0          0

    So it looks as though the process is using (or causing to be used) nearly 1 GB of RAM. Since the process (based on top) is only using 452 MB, does that mean that the kernel is all of a sudden using an additional 500 MB?
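    Two quick read-only checks help separate "a process is holding it" from "the kernel is holding it"; this is only a sketch of the comparison:

        # sum of resident set sizes across every process, in MB
        ps -e -o rss= | awk '{sum += $1} END {printf "%d MB\n", sum/1024}'

        # kernel-side consumers that top never attributes to a PID
        grep -E '^(Slab|PageTables|Buffers|Cached)' /proc/meminfo

    On a virtualised CentOS guest, a large gap between the RSS sum and 'used' is often slab or page-cache usage rather than a leak in the Java process.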

    Read the article

  • Windows 7 pc freezes for an indeterminate amount of time after unlocking

    - by pikes
    Not sure if this type of question is appropriate for this forum, but I've tried everything I can think of to solve this problem aside from a format/reinstall.

    I recently got a new work PC (Dell OptiPlex 755) with Windows 7 Professional x64 and the standard developer software installed for .NET development: VS2008, VS2005, SQL Management Studio, Office 2007, etc.

    Recently I've been having this weird problem where, after I lock my PC, the screen stays black for a while after unlocking. I can press Ctrl+Alt+Del and put my password in, but then it just goes black. The amount of time on the black screen seems to be related to the amount of time I am away from my PC. If I'm only away a few minutes, it'll take about a minute to get to the desktop. If I'm away for an hour, it can take up to 15 minutes. If I lock it and go home for the night, I have to restart my PC in the morning (I've let it sit for an hour after a night of being locked and nothing happened). It doesn't do it every time, but definitely the majority of the time.

    One weird thing I've seen is that if I remote into my machine before trying to log back in, it does not do it.

    I uninstalled all software back to the point when I remember it started happening and it still does it. I was using this PC for a few weeks without this problem happening at all. Anyone know what my next troubleshooting steps could be? My IT department tried to fix it by moving my old profile to another disk and having me log in, effectively recreating a profile from scratch, but that didn't solve it.

    As I said above, if this isn't the right forum for these types of questions please let me know. Thanks in advance!

    Read the article

  • Wireless Internet Connection Sharing in Ubuntu

    - by klutch2
    As the title states, I need to share a wireless connection with a laptop running Ubuntu as the AP. The setup will be as follows:

        Corporate WiFi <<== Laptop <<== Other Devices i.e. (iPad, iPhone)

    I want to be able to connect the "Other Devices" via WiFi to the laptop. I have thought of setting up an ad-hoc network by connecting to the Corporate WiFi and then setting up a new network and hoping the connection to both would stay, but that doesn't seem to work. If I set up the ad-hoc network by itself, I can see it from my "Other Devices".

    The reason I need this is because, for some reason, my iPad and my iPhone will not connect to my corporate WiFi, and I need to use them, so I want to use my laptop to share the connection and act as an AP for my "Other Devices". My laptop is a Chrome CR-48 running Ubuntu and, as some of you might know, it does not have an ethernet port, so having a wired connection and then setting up a network is out of the question. I want to connect to the Corporate WiFi and share that connection by having the laptop act as an AP for other devices.
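    Whether this can work at all depends on the wireless chipset: it has to support AP mode, ideally simultaneously with the client connection, and many laptop cards don't. A quick check before fighting with hostapd or ad-hoc setups (newer iw/kernel versions print the combinations section; older ones may omit it):

        iw list | grep -A 10 'Supported interface modes'      # look for "AP" in the list
        iw list | grep -A 8 'valid interface combinations'    # shows whether managed + AP can run concurrently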

    Read the article

  • Does OS X support linux-like features?

    - by Xeoncross
    I have been using XP for almost a decade. Contrary to popular belief, it has served me well. In the last 4 years I don't remember it ever having crashed on me. It has the most stable GUI I have ever used. However, an OS is only as good as its GUI and command line combined, and the Windows command line is awful and totally useless.

    So I have been using Ubuntu for a couple of years and Debian on my servers. The only problem is that GNOME applications (Ubuntu 6-10) constantly crash on me (Ubuntu Studio was the most unstable OS I ever used). I have high-quality Gigabyte, MSI, and Asus motherboards and CPUs from old Semprons/Athlons to Celerons/Core 2 Quads. What are the odds that every PC I have ever owned can't remain stable with a Linux GUI? Not to mention that the Adobe CSx Suite doesn't work on Linux.

    Anyway, I am now looking at moving to a Mac in the hope of finding a stable GUI and a feature-packed command line. Does Mac OS have an integrated command line where I can do linux-like-awesomeness like rsync, ssh, wget, cron jobs, package updates, and git without having an unstable GUI? Basically, until the Linux GUI applications get a little better, is OS X what I need?

    Read the article

  • Troubleshoot port forwarding. Could it be ISP blocking incoming connections?

    - by Gravy
    Had a new Axis IP camera delivered yesterday. Plugged it into a Cisco E2400 wireless router but am having problems. Example topology:

        WAN IP: 10.10.10.10 (example)
        Cisco Router: 192.168.1.1
        Axis Camera: 192.168.1.10:80

    Port forwarding rule set up on the router:

        External Port: 999
        Internal Port: 80
        Protocol: TCP & UDP
        Device IP: 192.168.1.10:80
        Enabled: True

    Trying to connect from within the LAN to 192.168.1.1:80 in a browser - works properly.
    Trying to connect from within the LAN to 10.10.10.10:999 in a browser - works properly.
    Trying to connect from outside the LAN (e.g. via 3G or another ISP) to 10.10.10.10:999 in a browser - doesn't work. I get the following errors from different machines and browsers:

        Safari could not open the page because the server stopped responding (iOS)
        The server at xx.xx.xx.xx is taking too long to respond. (Firefox)

    This problem is not just with the Axis camera; I am also having similar problems connecting to my NAS drive. After using a web-based port scanning tool, it appears as though port 999 is closed - not certain why, when I have set up port forwarding within the router.

    Any troubleshooting suggestions to help me determine whether the problem is with my Cisco settings/firewall or whether it could be my ISP blocking incoming connection requests? Many thanks
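    A quick way to tell "router misconfigured" from "ISP filtering inbound": probe the WAN address from a host that is genuinely outside (a 3G tether or a friend's connection), then move the forward to a different external port and probe again. Assuming netcat is available on the outside host:

        nc -vz -w 5 10.10.10.10 999     # the current forward
        nc -vz -w 5 10.10.10.10 8080    # retest after changing the rule's external port to 8080

    If common alternate ports (8080, 443) connect but 999 never does, the ISP or the modem in front of the E2400 is filtering; if nothing external ever connects while the LAN test against the WAN IP succeeds, look at the router/modem configuration instead.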

    Read the article

  • Ubuntu 12.04 KVM virtual server network setup, can't get the machine to be connectable

    - by xyious
    I have worked on my Ubuntu Server host for weeks now and I just cannot manage to get the virtual machines onto the network. Here's what I need to do:

        I need to be able to create virtual machines that have IP addresses reachable from the outside (the 192.168 network).
        I need to be able to connect to the virtual machines through ssh, ftp, http and preferably https; anything else doesn't matter that much.

    So far everything seems simple enough, and I have a lot of leeway in terms of IP address range and server/client configuration. I have the option of taking part of a /24 net, as most IPs aren't used, and if it's absolutely necessary I have the option of creating a new /24 subnet. I also have the option of reformatting and reinstalling the OS on the host and recreating the virtual machines, as nothing has been done other than trying to get the virtual machines to work. I would prefer it if the virtual machines were just part of the normal network, which would be 192.168.5.0/24. The host machine has two network cards, so I don't even necessarily need the host to be connectable on the same /24 network.

    I have tried (I think) just about everything from about 5 different tutorials on bridging: giving br0 the same IP that eth0 used to have (host is able to connect to VM and vice versa, but the VM has no outside network access), having eth0 set up like it always was and giving br0 a different IP (same result as above), NAT with port forwarding (which I would have preferred not to use but will if it works), turning off one of the host's network cards and just using one of them, different subnets... etc. I do know my way around iptables fairly well. The host is 64-bit Ubuntu Server 12.04, using libvirt/kvm.

    Edits: The local network is 192.168.5.0/24; the host has static IP 192.168.5.254, GW .5.1, which is also the nameserver. We have a second local network at 192.168.10.0/24 with GW .10.1, but both the host and the VMs were supposed to go into the .5 subnet. The .10 subnet isn't required, but it wouldn't be horrible if the host were only accessible on the .10 subnet.
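    For reference, a few read-only commands that show whether the guests are actually attached to the bridge and whether the host is filtering bridged traffic; the guest name below is a placeholder:

        brctl show                                         # br0 should list eth0 plus one vnetN per running guest
        virsh domiflist myguest                            # the guest NIC must be on br0, not on virbr0 (the default NAT network)
        sysctl net.ipv4.ip_forward                         # only needs to be 1 if the host routes or NATs
        cat /proc/sys/net/bridge/bridge-nf-call-iptables   # 1 means bridged frames also pass through iptables rules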

    Read the article

  • Server periodically freezing - Help Stabilizing

    - by JonDog
    We run an ASP.NET/SQL Server data collection website with a handful of clients dumping data in and running reports. We moved to a new server (specs below) and have had issues with it freezing, having to reboot it a dozen times over the past six months. The hosting company has mentioned possible causes (listed below) but can't give a definite answer on what is going wrong. They have offered to reconfigure it however I like.

    We have benefited from having a much faster system and really don't want to get rid of the SSDs unless they are the issue. Two possible setup changes that I've talked with them about are also listed below. Any suggestions on what may be causing the freezing issue, as well as suggestions for a new setup, would be great.

    My main questions are: do SSDs generally have problems running the OS and SQL Server on the same RAID array? And are the new SSDs still unrefined enough to be running in a production environment? Thanks

    Current:

        Xeon Quad Core E3-1270 3.40 GHz
        16 GB DDR3-1333 ECC SDRAM
        First Hard Drive: 120 GB Intel SSD
        Second Hard Drive: 120 GB Intel SSD
        Third Hard Drive: 120 GB Intel SSD
        Fourth Hard Drive: 120 GB Intel SSD
        SAS 4 Port RAID Card
        Windows 2012 Standard Edition - 64 Bit
        MSSQL 2008 Web Edition

    Possible causes:

        Running SQL Server & OS on the same RAID array
        OS software issues
        Using SSDs
        CPU underpowered
        Not enough RAM

    Option 1:

        2x Xeon Quad Core E5-2603 1.80 GHz
        16 GB DDR3-1333 ECC SDRAM
        1 x 240 GB Intel SSD - OS
        3 x 1 TB SATA HDD (7200 RPM) - SQL Server
        SATA 4 Port RAID Card
        Windows 2012 Standard Edition - 64 Bit

    Option 2:

        Dell PowerEdge E3-1270v2 3.5 GHz 4 Cores
        16 GB DDR3-1600 UDIMM
        4 x 128 GB Samsung 840 Pro SSD
        Add-in H200 (SAS/SATA Controller), 4 Hard Drives - RAID 10
        Windows 2012 Standard Edition - 64 Bit

    Read the article

  • How many bootable partitions are possible to have on one hard drive?

    - by draiden
    This may not be the correct place to post this; if that's the case, just let me know and point me in the right direction please!

    I'm thinking of building a box that needs to be lightweight and portable, and would need to be able to boot multiple installations of Windows. I need multiple installations so that I can, for example, plug the box in to the network at one location, boot in to that location's partition, and have full access to everything I would normally need on a computer that has already been set up on that network. Then, when I go to the next client, I would be able to do the same thing with the new location's partition and have all of those network settings, drive mappings, etc., available there.

    Obviously I'd need to go through and set them all up on the different locations/networks; I'm not expecting it to magically know where I am and what I'm doing. It would be like carrying around a computer that is configured for each place I need to go in one little box, instead of having multiple computers or having to reconfigure all the settings every time I go to another client. Or is there an easier way to do this that I haven't learned of?

    Read the article

  • ASP.NET, IIS7 and IE8 caching?

    - by jdege
    We're suddenly having problems with some of our sites serving old versions of .css and .js files to the browser. Generally, these problems go away when the user clears the browser cache. Is there something we can do, either in the code or in IIS7, to convince the browser not to use the cached files?

    In our weirdest case, we have one customer whose users hit our site and get an old version of a js file. They clear the cache, load the page, get the current version, and the page runs fine. Then they load the file again, and suddenly have the old version again. Any ideas as to how that might be happening? I can think of three:

        The browser is somehow holding on to the old version when we clear the cache, and is putting it back in the cache before the second page load.
        One of our servers has an old version of the file, and while the first page load after a cache clear pulls it from one of the servers with the current version, second and subsequent page loads pull it from the server that has the old version.
        The first load after a cache clear goes straight to our servers, while subsequent loads pull the file from the cache on the customer's web proxy.

    I have to say, all three of those scenarios seem outlandishly unlikely, but it's a repeatable behavior. Any ideas?

    Read the article

  • Page pool memory

    - by legiwei
    I'm currently using Windows XP SP3 32-bit, on a C2D E6320 with 2 GB of RAM. When I play StarCraft 2, I encounter an error saying that my system is running low on paged pool memory. StarCraft's graphics settings suggested high settings for me, so I do not think this has to do with my graphics card but with my RAM.

    I then searched for a way to rectify the problem. Apparently it's something to do with my virtual memory, so I tried the suggested solution, which is to edit the registry and limit the paged pool memory to 384 MB. However, having done so, I still could not achieve it. I've seen screenshots of Windows XP settings with 2 GB of RAM having 384 MB of paged pool memory. My default setting puts it at 195 MB, whereas when I try to increase the pool limit, it can only go to a max of 229 MB. I tried increasing my RAM capacity to 3 GB, but the pool limit still remains the same.

    I'd like to know how to increase my paged pool memory. I've tried searching for a solution, but to no avail other than the one mentioned above (which didn't solve my problem completely).

    Read the article

  • Is it possible to guide installation of new programs using %ProgramFiles%? [closed]

    - by ??????? ???????????
    The purpose of this is to have the default "program files" (32- and 64-bit) folders located under an arbitrary path, possibly on a drive separate from where Windows lives.

    Initially I thought that this might be done with a system environment variable through the dialog located under Control Panel - System - Advanced - Environment Variables. These variables turned out to be set in the registry under the key HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion. However, one particular entry is confusing: the ProgramFilesPath entry seems to point at an environment variable that is not defined under the same registry key. I could assume that there is no difference between ProgramFilesDir and ProgramFilesPath and that one of them exists for backwards compatibility, but having some legitimate resource from Microsoft to look at would be better than guessing.

    After receiving some worrying feedback about having both 32- and 64-bit applications in the same folder, I have decided not to ask about the feasibility of this, to avoid discussion. The real question is whether the desired effect can be attained by "cutting into" the Windows setup process and modifying those registry entries as early as possible. These settings should be system-wide and not only for software installed by a particular user.

    If this is indeed something that can be done, I wonder if there are any subtle pitfalls. Programs that expect libraries and other resources to be in default locations can probably be dealt with using the same technique Windows employs to re-map the "Documents and Settings" folders and the like (i.e. breaking legacy applications is not a real concern).

    Read the article
