Search Results

Search found 3414 results on 137 pages for 'happy emi'.

  • Am I able to forward traffic from an external subdomain to a specific local host?

    - by George Bowman
    I apologise in advance if the question doesn't make sense; please let me know. I've got a small LAN (~10 virtual servers) using Win Server 2008 as a DNS server. This is behind a Smoothwall Express 3.0 firewall with ports forwarded for specific services. I have a domain (123-reg) whose nameservers are those of afraid.org (dynamic DNS), with subdomains pointed to my (dynamic) IP address, e.g. subdomain1.example.com -> 123.456.789.101. I think that adequately explains my setup.

    My question is: am I able to have subdomains, e.g. subdomain1.example.com, point only to a specific local host? Like so:

        subdomain1.example.com:80 -> firewall (external facing) -> server1.example.com:80
        subdomain2.example.com:80 -> firewall (external facing) -> server2.example.com:80

    I don't necessarily want to use port 80, otherwise I would just use VirtualHosts on Apache; it is just an example port. Currently I can use either subdomain1.example.com or subdomain2.example.com and they will both point to server1.example.com:80.

    I don't have to stay with Win Server 2008 for DNS; I am more than happy to move over to BIND if need be, it was just easier to use Win Server 2008's DNS. I don't know if this is even possible, and I have a feeling it isn't since I've only got one external IP address, but any information is useful!
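
    For reference, a hedged sketch of the usual workaround: with one external IP, the firewall can only forward a given port to one internal host, but an internal reverse proxy can route by the HTTP Host header (this only helps HTTP-like protocols; the hostnames below are from the question, while the proxy itself is an assumption):

        # nginx on the internal host that receives the forwarded port
        server {
            listen 80;
            server_name subdomain1.example.com;
            location / { proxy_pass http://server1.example.com:80; }
        }
        server {
            listen 80;
            server_name subdomain2.example.com;
            location / { proxy_pass http://server2.example.com:80; }
        }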

  • Cannot open ports in iptables on CentOS 5?

    - by abszero
    I am trying to open up ports in CentOS's firewall and am having a terrible go at it. I have followed the HowTo at http://wiki.centos.org/HowTos/Network/IPTables as well as a few other places on the Net, but I still can't get the bloody thing to work.

    Basically I wanted to get two things working: VNC and Apache over the internal network. The problem is that the firewall is blocking all attempts to connect to these services. If I issue service iptables stop and then try to access the server via VNC or hit the webserver, everything works as expected. However, the moment I turn iptables back on, all of my access is blocked.

    Below is a truncated version of my iptables file as it appears in vi:

        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 5801 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 5901 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 6001 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 5900 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT

    Really I would just be happy if I could get port 80 opened up for Apache, since I can do most stuff via PuTTY, but if I could figure out VNC as well that would be cool. As far as VNC goes, there is just a single user desktop that I am trying to connect to via [ipaddress]:1. Any help would be greatly appreciated!
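
    A likely cause (an assumption, not confirmed by the post): iptables matches rules top-down, and the stock CentOS RH-Firewall-1-INPUT chain ends with a catch-all REJECT, so ACCEPT lines appended below it never match. A minimal sketch of inserting a rule above the reject and persisting it:

        # insert at the top of the chain so it precedes the catch-all REJECT
        iptables -I RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
        # persist across restarts of the iptables service
        service iptables save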

  • What causes PHP pages to consistently download instead of running normally

    - by Jonathan
    I'm running a Ubuntu Server on a VM to test out different web forum solutions. I have set up ~/public_html/ to be accessible with the apache2 web server, and that works fine. However, when I go to a .php file in a browser (using my VM's IP address/~username/phpfile.php) it does not display as it should. Instead it offers to save the file or asks what program to open it with. Interestingly, that dialog box does recognise that it is a PHP file.

    I have the following version of PHP installed on the system:

        PHP 5.3.2-1ubuntu4.5 with Suhosin-Patch (cli) (built: Sep 17 2010 13:49:46)
        Copyright (c) 1997-2009 The PHP Group
        Zend Engine v2.3.0, Copyright (c) 1998-2010 Zend Technologies

    And the following server:

        Server version: Apache/2.2.14 (Ubuntu)
        Server built:   Nov 18 2010 21:19:09

    If anyone knows what might be causing this, or potential solutions, it would make me very happy :)

    EDIT: Turns out this behaviour was only apparent on files in the ~/public_html/ directory. All PHP files in /var/www/ work fine. Prizes go to whoever can explain why? :D (And by prizes I just mean a well done, no actual prizes I'm afraid.)
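
    A hedged explanation that fits the /var/www-works/userdir-doesn't symptom: Ubuntu's packaged mod_php deliberately disables the PHP engine in per-user public_html directories, so those files fall through to the default handler and get offered as downloads. A sketch, assuming the stock config (the exact directive may differ slightly by release):

        # /etc/apache2/mods-available/php5.conf ships a block like this;
        # commenting it out re-enables PHP in userdirs:
        #<IfModule mod_userdir.c>
        #    <Directory /home/*/public_html>
        #        php_admin_value engine Off
        #    </Directory>
        #</IfModule>
        # then reload Apache:
        sudo /etc/init.d/apache2 reload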

  • Nginx won't send POST to fastcgi backend, but GET works fine?

    - by xyld
    Not sure why, but nginx is happy sending a GET to the fastcgi backend (Mercurial hgwebdir in this case), but simply resorts to the filesystem if the request is a POST.

    Relevant parts of nginx.conf:

        location / {
            root /var/www/htdocs/;
            index index.html;
            autoindex on;
        }

        location /hg {
            fastcgi_pass unix:/var/run/hg-fastcgi.socket;
            include fastcgi_params;
            if ($request_uri ~ ^/hg([^?#]*)) {
                set $rewritten_uri $1;
            }
            limit_except GET {
                allow all;
                deny all;
                auth_basic "hg secured repos";
                auth_basic_user_file /var/trac.htpasswd;
            }
            fastcgi_param SCRIPT_NAME "/hg";
            fastcgi_param PATH_INFO $rewritten_uri;
            # for authentication
            fastcgi_param AUTH_USER $remote_user;
            fastcgi_param REMOTE_USER $remote_user;
            #fastcgi_pass_header Authorization;
            #fastcgi_intercept_errors on;
        }

    GETs work fine, but a POST delivers this error to the error_log:

        2010/05/17 14:12:27 [error] 18736#0: *1601 open() "/usr/html/hg/test" failed (2: No such file or directory), client: XX.XX.XX.XX, server: domain.com, request: "POST /hg/test HTTP/1.1", host: "domain.com"

    What could possibly be the issue? I'm trying to allow read-only access via GETs to the page, but require authorization when using hg push to the same URL, which sends a POST request.
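
    A hedged guess at the cause: limit_except creates its own configuration context for non-GET requests, and fastcgi_pass is not valid inside it, so POSTs end up with no FastCGI handler and fall through to static file serving with the compiled-in default root (/usr/html), which matches the error exactly. One pattern that keeps fastcgi_pass in force for both cases while still demanding auth for POSTs is the return-plus-error_page redirect to a named location; treat this as an untested sketch, not a verified fix (the PATH_INFO handling from the original config is elided for brevity):

        location /hg {
            include fastcgi_params;
            fastcgi_param SCRIPT_NAME "/hg";
            error_page 418 = @hg_auth;
            if ($request_method = POST) { return 418; }
            fastcgi_pass unix:/var/run/hg-fastcgi.socket;
        }
        location @hg_auth {
            # same backend, but behind basic auth
            auth_basic "hg secured repos";
            auth_basic_user_file /var/trac.htpasswd;
            include fastcgi_params;
            fastcgi_param SCRIPT_NAME "/hg";
            fastcgi_pass unix:/var/run/hg-fastcgi.socket;
        }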

  • Is it okay to use an administrator account for everyday use if UAC is on?

    - by Valentin Radu
    Since I switched to Windows 7 about 3 years ago (I'm now using Windows 8.1), I have become familiar with the concept of User Account Control and have used my PC the following way: a standard account for everyday work, plus the built-in Administrator account, activated and used only to elevate processes when they request it, or to "Run as administrator" when I need to.

    However, after recently reading more about User Account Control, I started wondering whether my way of working is good, or whether I should use an administrator account for everyday work, since an administrator account is not elevated until an app requests it, or until I request it via the "Run as administrator" option.

    I am asking because I read somewhere that the built-in Administrator account is a true administrator, by which I mean UAC doesn't pop up when logged in with it, and I am scared of having problems if potentially malicious software comes into the picture. I should mention that I do not use it on a daily basis, just when I need to elevate some apps; I barely log into it 10 times a year.

    So, which is better? Thanks for your answers! And Happy New Year, of course!

    P.S. I asked this a year ago (:P) and I think I should reiterate it: is an administrator account as safe these days as a standard account coupled with the built-in Administrator account when needed?

  • Task scheduled to wake laptop - only works when lid is open

    - by JD Pack
    I am running Windows 7 Starter on an Acer Aspire One laptop. I want my laptop to automatically run a task (back up the HDD to a network drive) once a week in the middle of the night. I scheduled the task in Task Scheduler and checked the box to wake the computer to run the task. I also changed the advanced power settings to allow wake timers.

    This was half of the solution. It now works flawlessly when the lid is open: the computer can wake itself up from either sleep or hibernate mode to perform the backup. When the lid is closed, however, it's sleeping beauty. Any ideas? I don't want to have to remember to open the lid once a week; it sort of defeats the purpose of an "automatic" backup.

    Update: I discovered that it can wake from sleep (or hybrid sleep), but not from hibernate, when the lid is closed. This is good news. I'd still be curious how to get it to work from hibernate, but I'm pretty happy about waking from sleep at least.
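
    A hedged note: many laptops' firmware simply will not fire wake timers from hibernate (S4) with the lid shut, so there may be no software fix for that last case. The relevant settings can at least be inspected and adjusted from an elevated command prompt with powercfg (SCHEME_CURRENT, SUB_BUTTONS and LIDACTION are the standard aliases, but worth verifying on the machine):

        :: confirm which sleep states the hardware supports
        powercfg /a
        :: take no action on lid close while on AC power (0 = do nothing)
        powercfg /setacvalueindex SCHEME_CURRENT SUB_BUTTONS LIDACTION 0
        powercfg /setactive SCHEME_CURRENT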

  • Fixed ruby/mysql connection with new libmysql.dll, and broke Apache in the process

    - by jmtoporek
    OK, so a bit of background: all my development has been on a local Windows 7 machine. I had Apache with PHP/MySQL running with no issues. I've been using Ruby (1.9.3, with the latest Rails release, 3.2.9) with the built-in WEBrick server, but had a devil of a time connecting to MySQL. Did some research, updated my libmysql.dll file in C:/ruby/bin, and it worked! Very happy... except now Apache has stopped working.

    In my attempt to resolve the issue I found an older copy of libmysql.dll, renamed the new file, copied the old file back to C:/ruby/bin, and now Apache works and Ruby does not. So I can take this ass-backwards approach, but obviously that seems pretty stupid.

    I was surprised that Apache was using the DLL in the ruby/bin folder; I presume this is related to path variables? I guess I was hoping someone could direct me as to how I can use one DLL file for Apache and another for Ruby, or suggest some other, smarter approach. I'm smart enough to follow directions to install Apache from scratch and enable PHP on Windows as well as Ubuntu, but I'm not much of a sysadmin, just a semi-competent web developer.
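
    For what it's worth, a hedged explanation: Windows resolves DLLs by searching the application's own directory first, then the system directories, then each entry of PATH, so Apache only picks up the copy in C:\ruby\bin if its own bin directory lacks one and C:\ruby\bin is on PATH. A sketch of checking and fixing this (the Apache path below is an assumption):

        :: show every libmysql.dll visible on the search path, in search order
        where libmysql.dll
        :: giving each program a private copy next to its executable avoids the shared PATH copy
        copy C:\old\libmysql.dll "C:\Program Files\Apache Software Foundation\Apache2.2\bin\libmysql.dll"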

  • Pulling application updates from closest server?

    - by Mike Morris
    Setup: 6 major sites with Server 2003/2008 DCs doing DHCP and AD-integrated DNS, each on its own subnet, all connecting back to the datacenter through a 3 Mbps WAN. An ERP server runs in the datacenter, accessed by clients at all sites.

    Currently, when we update the software, I manually push a copy of the updated client/config files down to each DC. I have a script that we run on each PC to update the clients: it determines what subnet the PC is on and pulls the software from that DC. It's messy, but it works.

    The client has an autoupdate feature, but it will only pull from the application server (which is housed in the datacenter, over the 3 Mbps link). It takes forever, since the updates are not patches but a full version of the client, even for minor upgrades (bad design). After the most recent patch, you can configure the clients to pull from a different server; unfortunately, it is the same server for all clients.

    Is there some kind of DNS magic I can use to pull from the local server? For instance, if I tell the clients their update server is ERPUPDATE, can I have their local DNS server return a different IP for ERPUPDATE than the other sites? Example: client 1 is at site A, client 2 is at site B. They each run the software and a version change is detected. As per the config files, the clients look to ERPUPDATE for their updated client.

        Client 1 queries DNS for the IP of ERPUPDATE at its current location (site A)
        DNS at site A returns 192.1.1.5
        Client 1 pulls update from 192.1.1.5
        Client 2 queries DNS for the IP of ERPUPDATE at its current location (site B)
        DNS at site B returns 192.1.2.5
        Client 2 pulls update from 192.1.2.5

    Excuse the poor explanation; I worked 61 hours over the weekend and haven't completely rebounded. I'll be happy to clarify if needed!
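
    A hedged sketch of one way to do this (zone name is hypothetical): because the AD-integrated zone replicates to every DC, the per-site record has to live in a small zone that is not AD-integrated. Create a standard primary zone for just the update host on each site's DC, stored in a local file, and give it a different A record per site; clients then resolve the name against whichever DC is local.

        :: on the site-A DC: standard primary zone, local file, not replicated
        dnscmd /zoneadd erpupdate.example.local /primary /file erpupdate.example.local.dns
        dnscmd /recordadd erpupdate.example.local @ A 192.1.1.5
        :: on the site-B DC: same zone name, different local record
        dnscmd /zoneadd erpupdate.example.local /primary /file erpupdate.example.local.dns
        dnscmd /recordadd erpupdate.example.local @ A 192.1.2.5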

  • Is the Windows VPN secure?

    - by Tor Haugen
    I have used a few VPN solutions over the years. Most are hard to set up, slow to connect and/or rather ill-behaved (replacing system drivers, disrupting each other, etc.). One solution I had never used before is the one built into Windows, mostly because the infrastructure guys always refuse to use it, claiming it's "not secure".

    Now I have finally had the chance to use it (on Windows 7), and wow, it's a breeze! Easy to set up and well-behaved; it connects almost instantly, automatically authenticates with my logged-in credentials, and integrates excellently with the UI. I have to say, unless it really isn't secure, I'll be happy if I never have to use another VPN product again.

    I gather the Windows VPN used to rely on PPTP, which is not considered secure. But in Windows 7/2008 it supports L2TP/IPsec, SSTP and IKEv2, and authenticates with EAP or CHAP/CHAPv2. That seems pretty up to date to me. But I'm just a lowly developer; can someone in the know give me the lowdown on this?

  • How to restore Active Desktop if Themes is locked via GPO?

    - by pepoluan
    My company forces Active Desktop upon everybody so that it can display a (monthly rotated) corporate wallpaper.jpg. Problem is, some computers (including my laptop) have somehow experienced errors resulting in the dreaded "Active Desktop Recovery" screen, and clicking the "Restore My Active Desktop" button always results in "Internet Explorer Script Error". The various workarounds I found on the Internet either do not work or require me to first change the theme to something else, which I can't do because the Desktop Settings screen is locked via GPO. As it happens, due to the nature of the programs I use, I'm granted Administrator-level access on my computer.

    The question is: how do I fix my situation? Note: I don't need to put on my own wallpaper; I'm quite happy with the corporate wallpaper and just need to somehow recover my Active Desktop. But watching the "Active Desktop Recovery" screen (with its BLANK WHITE!!1! OH MY EYES!1!!one!eleven!! background) gets tiresome.

    More information:

        OS: Windows XP Professional SP3 (yeah, the company's too afraid to even experiment with Windows 7)
        Antivirus: Symantec Endpoint Protection

    If you need any additional information, feel free to ask.
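
    For reference, the widely circulated registry workaround for this symptom (an assumption that it applies here; it resets the per-user Active Desktop "safe mode" flag, and needs no admin rights since it lives under HKCU):

        reg add "HKCU\Software\Microsoft\Internet Explorer\Desktop\SafeMode\Components" /v DeskHtmlVersion /t REG_DWORD /d 0 /f
        :: then refresh the desktop (F5) or log off and back on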

  • Every month, scheduled task fails and password must be reset - why?

    - by Ducain
    [NOTE: I posted this originally at Stack Overflow but it got no traction there - reposting here.]

    We have a bit of software installed at a few client locations that runs (via Windows Task Scheduler) a few times each day. In only one of the client locations we have a unique problem: each month, the task will stop working after running every day for weeks. Twice now it's failed on the 2nd of the month. When I walk the client through troubleshooting, we find that it can't start: access denied. To fix it, we simply re-enter the same exact password, and then off it goes, happy as a clam.

    I've never heard of this issue, and their IT people say they don't have anything running once a month that might cause it. I'm at a complete loss here. Any ideas as to why this might be happening?

    Further details: Windows XP Pro machine. The task is fired with credentials from a local admin account. The computer is always on and connected to the net.
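
    A hedged guess worth ruling out: a password-expiry policy on the account running the task at that one site (a roughly 30-day maximum password age would fit the monthly pattern, and re-entering the password re-saves the stored credential). The effective policy can be read from a command prompt on the affected machine:

        :: maximum password age and related policy for local accounts
        net accounts
        :: the domain policy, if the machine is domain-joined
        net accounts /domain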

  • Basics about file/folder permissions on Win 7

    - by Altar
    Hi. Under Win XP I never touched the permissions of a file/folder; I was happy with the way it worked. But recently I installed Windows 7 on a drive that previously hosted Windows XP. Now some programs do not have read and/or write access to their own folders, and I am not talking about system folders like Program Files but normal folders like C:\my data\my own folder\program folder.

    I see that folders created under Win XP have some user groups that do not exist on "normal" folders (folders created by me recently under Windows 7). For example, for a Win XP folder I have:

        Creator Owner
        System
        Account Unknown (S-1-5-21-blablabla...)
        Admins
        Users

    For Win7 folders I have:

        Authenticated Users
        System
        Admins
        Users

    How should I proceed? Should I give the Users account the right to write to the XP folders? Should I make the old XP folders carry the same groups as the normal (Win7) ones by adding the Authenticated Users account to them? Should I delete the Account Unknown entry from my system (and in that case, how)? Many thanks.
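
    For what it's worth, a minimal sketch of aligning an old XP-era folder with the Windows 7 defaults (hypothetical path; run from an elevated prompt). The Account Unknown entry is usually an orphaned SID from a user on the old install, which nothing on the new system maps to:

        :: grant Authenticated Users modify rights, inherited by files and subfolders
        icacls "C:\my data\my own folder" /grant "Authenticated Users":(OI)(CI)M
        :: remove the orphaned SID (paste the full S-1-5-21... string from the Security tab)
        icacls "C:\my data\my own folder" /remove *S-1-5-21-xxxxxxxx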

  • Can next hop address be same as destination address?

    - by Raj
    Say the host address is 100.0.0.1, the next-hop address is 100.0.0.2, and the destination IP address is also 100.0.0.2. Is this a valid use case? Is there any real-life usage?

        <dest ip>          <next hop>
        ip route 100.0.0.2 255.255.255.255 100.0.0.2 weight 1 next-hop-vrf GlobalRouter

    The above is the command on a router inside a VRF. 100.0.0.2 is pingable from the host. 100.0.0.1 and 100.0.0.2 are IP addresses assigned to a VLAN on the host and destination respectively.

    On a Linux box, such a configuration is valid:

        [root]# netstat -r -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
        55.55.55.55     55.55.55.55     255.255.255.255 UGH       0 0          0 eth0
        [root]# ip route show
        55.55.55.55 via 55.55.55.55 dev eth0

    As per my understanding, if a destination IP is reachable (i.e. in the same subnet as the host IP) we don't need a next hop. I came across one application of using a next hop for a destination in the same subnet (VPN); see: Will packets sent to the same subnet go through routers? If next hop != destination IP but both are in the same subnet as the host, that is a valid scenario for VPN, so I am wondering: what are the applications of next_hop == dest_ip with the subnet the same as the host's?

    This is my first post on Super User. Extremely happy with the quick and warm response.

  • Revamping an old and unstable office IT-solution using Windows Server and OpenVPN

    - by cmbrnt
    I've been given the cumbersome task of totally redoing the IT infrastructure for a customer's office. They are currently running Windows XP everywhere, with one computer acting as a file server with no control over which users have access to which files, and so on. To top it off, this file server also functions as a workstation, which means it gets rebooted every time the user notices some sluggish behavior or experiences problems with Flash games. To say the least, this isn't working for them.

    Now, I've got a very slim budget, but I need to set up a new server, and I wish to run Windows Server 2008 on it. I also need the ability to access the network remotely via VPN. Would it be a good idea to install VMware ESXi 4.1 onto the new server, and then run Windows Server 2008 as well as a separate Debian install for OpenVPN on it? I don't like the Domain Controller for the future AD also running a VPN server, because of stability issues when something goes to hell with either of them. There will be no redundancy, though. However, I'm not sure if there is something to gain by installing a VPN solution on the Windows Server itself, when it comes to accessing file shares on the network via VPN.

    I don't know how to enable users logging in via the VPN to access the remote files, since they will be accessing the network from their own home computers (which is indeed a really bad idea, but this is what I've got to work with). They won't be logged in to the Windows domain, but rather their home workgroups. I need to be able to grant access to files in certain directories based on the logged-in AD user, but every computer won't necessarily be configured to log into the domain. I'm not sure how to explain this in a good way, but I'd be happy to clarify if something's not clear.

    Any help would be great, because I've got a feeling that I can't do this without introducing a bunch of costly new rules in their IT solution. I'd rather leave that untouched and go on my merry way to the next assignment.
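
    A hedged sketch of the OpenVPN side of this design (addresses are hypothetical; keeping it on a separate Debian VM does keep it off the DC). The server pushes the office subnet to clients, so home machines can reach the file server's shares without joining the domain; share-level authorization then happens when they connect with AD credentials:

        # /etc/openvpn/server.conf (minimal sketch)
        port 1194
        proto udp
        dev tun
        ca ca.crt
        cert server.crt
        key server.key
        dh dh1024.pem
        server 10.8.0.0 255.255.255.0
        # route VPN clients to the office LAN (assumed 192.168.1.0/24)
        push "route 192.168.1.0 255.255.255.0"
        keepalive 10 120
        persist-key
        persist-tun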

  • Free blog sites where the blogs can (if the template does) validate as XHTML Strict 1.0?

    - by Deleted
    I'm looking for a site where I may register a free blog and have the blog validate as XHTML 1.0 Strict. On the surface this may seem like a trivial problem related only to the theme/template in use, but unfortunately that's not the case. One example of a provider which can't fulfil this requirement is Blogger: although the pages of the blogs there present themselves as XHTML 1.0 Strict, it is impossible to actually comply with the requirements inherited by that markup type, as the XHTML generated by Blogger makes the page as a whole invalid.

    I've sent a mail to Tumblr to see if it is possible with them, but so far my reply consists of them having forwarded my mail with a "suggestion" to the development department. I don't know if we had a communication error or if I'm actually going to receive a proper answer later; time will tell. I haven't had time to investigate Tumblr myself, so they may very well be the solution to this problem.

    To sum things up, I'm looking for any provider:

        That offers a free blog.
        Where the blog can validate as XHTML 1.0 Strict. By "can" I mean that the system shouldn't get in the way of creating/using a theme which complies with XHTML 1.0 Strict.
        That preferably is large, or at least likely to stay around for a couple of years to come. But I'm willing to take my chances if none of the established providers are up to the task.

    Thank you for reading! I hope you know of a provider which would be suitable, preferably with proof by linking to a blog there which validates. I'm not looking for suggestions to look into, as there are far too many to investigate and far too little time. If you know of something for sure, I'd be very happy to hear about it.

  • VirtualName-based local development host behind corporate proxy (MAMP)

    - by geerlingguy
    I am behind a corporate proxy server/firewall, and this firewall seems not too happy with my idea of local development. On my home computer (Mac/Leopard) I have MAMP running, with a rule in /etc/hosts that directs dev.example.com to 127.0.0.1, and a virtualhost set up in the httpd.conf file, which works great for me. However, at work I set up the exact same configuration but am not able to access dev.example.com, likely due to some address/DNS translation going on via the proxy server.

    Here are the relevant details from Terminal:

        $ ping dev.example.com
        PING dev.example.com (127.0.0.1): 56 data bytes
        64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.025 ms

        $ host dev.example.com
        Host dev.example.com not found: 3(NXDOMAIN)

    I've tried adding dev.example.com to the list of bypass addresses in System Preferences (the "Bypass proxy settings for these Hosts & Domains" list), but that had no effect. Is there any way I can develop locally using name-based hosts at work? I can access localhost, but can't get to dev.example.com (or any other custom virtualhosts) here at work, which complicates other matters related to the sites on which I'm working...

  • Why is piping dd through gzip so much faster than a direct copy?

    - by Foo Bar
    I wanted to back up a path from a computer in my network to another computer in the same network over a 100 MBit/s line. For this I did

        dd if=/local/path of=/remote/path/in/local/network/backup.img

    which gave me a very low network transfer speed of about 50 to 100 kB/s, which would have taken forever. So I stopped it and decided to try gzipping it on the fly to make it much smaller, so that the amount to transfer would be less. So I did

        dd if=/local/folder | gzip > /remote/path/in/local/network/backup.img.gz

    But now I get something like 1 MB/s network transfer speed, a factor of 10 to 20 faster. After noticing this, I tested it on several paths and files, and it was always the same.

    Why does piping dd through gzip also increase the transfer rates by a large factor, instead of only reducing the byte length of the stream by a large factor? I'd expected even a small decrease in transfer rates instead, due to the higher CPU consumption while compressing, but now I get a double plus. Not that I'm not happy, but I'm just wondering. ;)
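
    One hedged explanation worth testing: dd defaults to 512-byte blocks, and when of= points at a network filesystem each tiny block can become a synchronous round trip; with the pipe, dd writes to a fast local pipe while gzip issues large buffered writes. If that is the cause, a larger block size should speed up the direct copy too:

        # same direct copy, but with 1 MiB blocks instead of the 512-byte default
        dd if=/local/path of=/remote/path/in/local/network/backup.img bs=1M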

  • Laptop seemingly randomly "freezes" to the point of no longer executing applications

    - by Aierou
    After upgrading to Windows 8 Pro on my Samsung Series 7 Chronos NP700Z5C-S04US (which may be relevant, I'm not sure), my computer began to stop allowing the execution of any service or application, and to stop updating the clock, until a hard shutdown was performed. This seems to occur randomly after periods of inactivity, and I've no idea of the cause.

    These are the measures I have already taken to attempt to stop this:

        Googling potential answers to this problem (obviously)
        Updating all drivers
        Researching all events that occurred around the time of the failure (with no results)
        Applying "bcdedit /set disabledynamictick no", a hotfix for what seemed to be the same error, but was not

    Here is some more, potentially related, information about the error:

        No BSOD (actually, I haven't experienced a BSOD with Windows 8 at all)
        The computer seems to have a problem shutting down/restarting most of the time (it hangs at the point where it should completely turn off)
        New sound instances cannot play, but previously loaded containers function properly
        As mentioned before, the clock freezes at the time of the error
        USB devices function properly
        Servers that I was running fail to respond on my end, but stay online

    If you require more information, please request it specifically and I will be happy to oblige. Thanks.

  • Best solution to keep data secure

    - by mrwooster
    What is the simplest and most elegant way of storing a small amount of data in a reasonably secure way? I am not looking for ridiculous levels of advanced encryption (AES-256 is more than enough), and I am only looking to encrypt a small number of files. The files I wish to encrypt are mostly password lists and SSH keys for servers. It is unfortunately impossible to keep track of the ever-changing passwords for my servers (and SSH keys) in my head, so I need to keep a list of them. Obviously this list needs to be secure, and also portable (I work from multiple locations).

    At the moment I use a 10 MB encrypted disk image on my Mac (standard .dmg, AES-256) and just mount it whenever I need access to the data. To my knowledge this is very secure, and I am very happy using it. However, the data is not very portable: I would like to be able to access it from other machines (especially ones running Linux), and I am aware that there are quite a few issues with trying to mount an encrypted .dmg on Linux.

    An alternative I have considered is to create a tar archive containing the files and use gpg --symmetric to encrypt it, but this is not a very elegant solution, as it requires gpg to be installed on every system.

    So, what other solutions exist, and which ones would you consider the most elegant? Ty
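
    For reference, a minimal sketch of the tar + gpg approach mentioned above (gpg is preinstalled or an easy install on most Linux systems and readily available for the Mac, which softens the portability objection; the directory name is a placeholder):

        # encrypt: pack the secrets directory and symmetrically encrypt with AES-256
        tar czf - secrets/ | gpg --symmetric --cipher-algo AES256 -o secrets.tgz.gpg
        # decrypt on any machine that has gpg
        gpg -d secrets.tgz.gpg | tar xzf -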

  • How to resolve 'No internet connectivity' issues with a virtualised 2008 R2 server using Forefront UAG

    - by user684589
    I have spent considerable time reading as many blogs and articles as I can to help me work out why my VM (running on Hyper-V) for DirectAccess has suddenly stopped being able to access the internet. The VM setup shares the same internet connection on which I have written and submitted this question, so I know that the underlying internet connection is fully functional. Up until last week DirectAccess was fully functional and had no issues; this is a recent problem, preceded by a number of consistent crashes on the DA machine whenever access was attempted. Upon reboot all seemed well, until recently.

    I am not certain whether it is relevant, but before this I had power issues where the entire VM host shut down unexpectedly, leaving around 8 VMs in a bad way. On restart, the UAG DirectAccess machine was unable to access its configuration service (although the service was started), but this seemed to relate to the Active Directory Lightweight Directory Services (AD LDS) instance, which had a corrupted database. Having repaired that database, I restarted the service and could subsequently reconnect to the configuration service. For good measure I re-bound the network adapters (virtualised through Hyper-V) and DirectAccess claimed to be all happy again.

    However, as it stands, my machine is still unable to access the internet, showing the "No internet connectivity" exclamation mark for the external-facing NIC. I have also tried removing the adapters and disabling and re-enabling them, but the problem persists. The intranet part of the VM (CorpNet) seems to be fully functional as before, and I'm running out of ideas. Any input would be greatly appreciated. I am not an advanced domain administrator, so please be gentle.

  • Computer freezes after watching Youtube videos

    - by Roberts
    I had Windows 7 installed all September, but I reinstalled Windows XP Professional because my computer couldn't handle the new OS. After the first boot I tried to install the newest Flash player (from the Adobe website), but it failed. I had my old setup on a USB drive and that worked; I don't know if that is important or not.

    I watch YouTube videos in my free time (almost every hour). After a few days, the computer started to freeze when I open or close a page with a video, though not while I watch one. There are no BSODs and nothing in Event Viewer. I use Firefox only. When the computer freezes, the sound doesn't: if iTunes is playing a radio station, or another video is playing in the background, the sound keeps going. In the last few days the mouse hasn't frozen immediately, which is a strange symptom; if I click a few times, then the cursor will actually freeze.

    I just want to know where this problem comes from (hardware, such as the graphics card or the old motherboard, or just some glitch in setups). If it's not the graphics card, I will be happy. The graphics card is an ATI Radeon HD 4650, brand new, with Catalyst 11.8 installed.

    Things I have tried:

        Installed the newest Flash player after a week (the setup didn't fail this time)
        Installed the latest video drivers
        Deleting cookies
        Defragmenting the hard drive
        Using TuneUp Utilities for computer cleanup
        Installed the latest Mozilla Firefox
        Cleaned the PC
        Changed the CPU fan speed to almost max (just to be sure)

    Things I haven't tried yet:

        Playing videos in other browsers

    What can I do now?

  • cheap gigabit switch for small business

    - by neoice
    My friend's business is currently borrowing my Adtran 1224R and is very happy with it. It's configured with a few VLANs to segment customers, internal traffic and public wifi. Port 1 is a "trunk" port to the router, a chunky Linux box with iptables+NAT. They push a lot of traffic over the LAN (data backups) and really need gigabit. Besides, I'd like my Adtran back :P

    My goal is to find a cheap(ish) switch that can function as a drop-in replacement. It looks like VLAN trunking is actually part of the 802.1Q spec, so anything with VLAN support should cover the current trunk-to-router setup. It's nice to have both a web interface and SSH, but I can configure it either way if needed.

    Things like the Netgear GS724T have caught my eye, but it seems like none of the hardware in the $300-500 range has really solid reviews. I'm concerned that "cheaper" hardware might not work for a network full of power users. Does anyone have a recommendation for the Netgear GS724T, or for another switch that will meet my needs?

  • Performance-optimizing Oracle 10g on a server that is also a Tomcat JSP app server?

    - by PKHunter
    I have inherited a simple RedHat 5 64-bit platform: SCSI disks in RAID 1, 16 GB of RAM, a dual-core CPU, and Oracle 10g Release 2. This would perhaps be a decent platform for running the DB only, but the same server, in a very simple "A-A mode" clustering, also runs Tomcat, with several Java servlets on it. Sadly there is no caching platform of any kind; we only use an external CDN for some HTML caching. I am personally more familiar with web environments on the LAMP platform (Apache, PHP, MySQL, PostgreSQL).

    PROBLEM: Because the server runs both Tomcat JSP/Java and Oracle 10g, with no caching, I have issues with the server going down. Often, sadly.

    QUESTION: What are my options for improving the performance of all these different apps?

        Connection pooling? For example, in the PostgreSQL world we have PgBouncer, which really helps things. Does Oracle have something similar, or is there a well-known Java-based external pooler that people use in production environments? (I'm not familiar with Java.)
        Any SQL cache, as in the MySQL and PostgreSQL world?
        Any other kind of application cache, like APC or eAccelerator in the PHP world? The OSCache stuff from the Java world (a JSP thing I found on Google: http://onjava.com/pub/a/onjava/2005/01/05/jspcache.html?page=2)?
        What else?

    Sorry if this is a noob question. I have googled and googled, but the problem is I don't know what to google for, other than the broad general concepts above. So if not full answers, I would even appreciate basic pointers, and I am happy to JFGI myself. Thanks!
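
    On the pooling question, a hedged sketch: Tomcat of that era ships a JNDI DataSource backed by Commons DBCP, which pools Oracle JDBC connections without any external pooler. Names, credentials and limits below are placeholders to be sized against the server:

        <!-- conf/context.xml: a pooled Oracle DataSource (DBCP-era attribute names) -->
        <Resource name="jdbc/appDS" auth="Container"
                  type="javax.sql.DataSource"
                  driverClassName="oracle.jdbc.OracleDriver"
                  url="jdbc:oracle:thin:@localhost:1521:ORCL"
                  username="app" password="secret"
                  maxActive="20" maxIdle="5" maxWait="10000"/>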

  • How should I configure backup of my server?

    - by ed209
    I have just rented a dedicated server. If it helps, this is the config I have:

        CPU1: Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz (8 cores)
        RAM:  15975 MB
        Disk /dev/sda: 120.0 GB (=> 114 GiB)
        Disk /dev/sdb: 3000.6 GB (=> 2861 GiB)
        Disk /dev/sdc: 3000.6 GB (=> 2861 GiB)
        (none of the disks contains a valid partition table yet)

    /dev/sda is a 120 GB SSD. This is where I have Ubuntu/LAMP installed, and it's the drive that will run my site. The account came with two other drives of 3000 GB each, which I really don't need but got anyway. I figured I could use them to back up my main 120 GB drive. So a couple of things I wondered were:

        Should I use these drives for backups?
        How should I back up?

    The data I want to back up is a user uploads directory full of images, plus the database. Everything else is either in a code repo or backed up some other way. For example, it would be nice to know there is a disk image of the 120 GB drive somewhere that I can copy over should there be any problems, but equally I don't mind doing a fresh install of all the software and copying over just the images and a database dump.

    Thanks for your advice! (Also, happy to not use the two other drives and back up elsewhere if that's more sensible.)
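
    For what it's worth, a minimal sketch of the "copy uploads plus dump the database nightly" approach, using one of the 3 TB drives mounted at /backup (the mount point, database name and upload path are assumptions):

        #!/bin/sh
        # /etc/cron.daily/site-backup (sketch)
        DAY=$(date +%F)
        # incremental copy of user uploads; adding --link-dest would give cheap daily snapshots
        rsync -a /var/www/site/uploads/ /backup/uploads/
        # consistent, compressed database dump, one file per day
        mysqldump --single-transaction sitedb | gzip > /backup/db/sitedb-$DAY.sql.gz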
