Search Results

Search found 14184 results on 568 pages for 'peter small'.


  • Why is my cat5e cable not okay?

    - by torbengb
    My LAN cable seems to work (the indicator LED lights up), but the computer can't find a connection. What's wrong? Setup: I had to run a network cable from a router in one room to a computer in another room, through a hole in the wall that was too small to pass the RJ-45 plug through. The plug was cut off and the cable passed through the wall. Then a new plug was crimped on, following this detailed explanation. The connection didn't work because the (factory-made!) plug on the other end used a non-standard wire order. I crimped on a new plug again, using the exact same order as the factory-made plug. The LED indicator lights up on both ends, but the computer still cannot find a connection. What can be wrong? How can I find out? I don't have a cable tester. From a visual inspection, my new plug looks good: the wire order matches the other end, and all wires go all the way into the plug and reach the contacts. I've used the cable before (with both factory-made ends), so I don't think the cable itself is defective.
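
    Without a cable tester, the link and negotiation status reported by the NIC can stand in for one. A minimal sketch, assuming a Linux machine sits on one end of the cable (the interface name eth0 is a placeholder):

        # does the kernel see a link, and at what speed did it negotiate?
        sudo ethtool eth0          # look at "Link detected" and "Speed"
        sudo mii-tool -v eth0      # older alternative, shows the link partner's advertised modes
        # a miswired pair often shows up as no carrier, or a forced drop to 10/100 half duplex,
        # even while the link LED on the switch or NIC stays lit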

    Read the article

  • o3d javascript uncaught referenceerror

    - by David
    Hey, I'm new to JavaScript and am interested in creating a small o3d script:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
            <title>Test Game Website</title>
        </head>
        <body>
            <script type="text/javascript" src="o3djs/base.js"></script>
            <script type="text/javascript" id="myscript">
                o3djs.require('o3djs.camera');
                window.onload = init;
                function init(){
                    document.write("jkjewfjnwle");
                }
            </script>
            <div align="background">
                <div id="game_container" style="margin: 0px auto; clear: both; background-image: url('./tmp.png'); width: 800px; height:600px; padding: 0px; background-repeat: no-repeat; padding-top: 1px;"></div>
            </div>
        </body>
        </html>

    The browser can't seem to find o3djs/base.js in this line:

        <script type="text/javascript" src="o3djs/base.js"></script>

    and gives me an uncaught ReferenceError at this line:

        o3djs.require('o3djs.camera');

    Obviously that's because it can't find o3djs/base.js. I have installed the O3D plugin from Google, and they say that should be it. I've tried Firefox, IE and Chrome. Thanks.
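
    One quick check, as a hedged sketch: the plugin only provides the renderer, while the o3djs utility scripts have to be available next to the page. Assuming the page is served over HTTP from a local web server (the URL below is a placeholder), you can confirm whether the relative path actually resolves:

        # should return 200 if base.js really sits in an o3djs/ directory beside the page
        curl -I http://localhost/o3djs/base.js
        # if it returns 404, copy the o3djs directory that ships with the O3D samples next to
        # the HTML file, or change the script tag's src to wherever base.js actually lives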

    Read the article

  • Unable to connect to EC2 instance after "reboot"

    - by KPL
    I am not able to connect to my m1.small instance after rebooting it. I have already associated the public IP with this instance. Upon checking the system log, this seems to be the issue:

        cloud-init-nonet[11.84]: waiting 10 seconds for network device
        cloud-init-nonet[21.85]: waiting 120 seconds for network device
        cloud-init-nonet[141.85]: gave up waiting for a network device.
        Cloud-init v. 0.7.3 running 'init' at Sun, 18 May 2014 07:02:55 +0000. Up 142.54 seconds.
        ci-info: +++++++++++++++++++++++Net device info++++++++++++++++++++++++
        ci-info: +--------+-------+-----------+-----------+-------------------+
        ci-info: | Device |   Up  |  Address  |    Mask   |     Hw-Address    |
        ci-info: +--------+-------+-----------+-----------+-------------------+
        ci-info: |   lo   |  True | 127.0.0.1 | 255.0.0.0 |         .         |
        ci-info: |  eth0  | False |     .     |     .     | 02:43:xx:xx:xx:xx |
        ci-info: +--------+-------+-----------+-----------+-------------------+
        ci-info: !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!Route info failed!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

    A bunch of these follow the above message:

        2014-05-18 07:02:56,178 - url_helper.py[WARNING]: Calling http://169.254.169.254/2009-04-04/meta-data/instance-id failed [0/120s]: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by: [Errno 101] Network is unreachable)]

    This is obviously related to the network interface not working correctly. I have tried the following so far:

    - Relaunching a new instance from the custom AMI (created from EBS) of the failing instance. The same error shows up in the logs.
    - Attaching a new network interface to the EC2 instance. The error still persists; eth1 shows up in the list, but its "Up" column is False.
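
    When eth0 itself never appears, the usual suspects are the host-side NIC or stale network state baked into the AMI. A hedged sketch of checks from outside the instance with the AWS CLI (the instance ID is a placeholder, and a stop/start is only a workaround, not a root cause):

        # confirm what AWS thinks about the attached interfaces
        aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
            --query 'Reservations[].Instances[].NetworkInterfaces[]'
        # a stop/start (unlike a reboot) moves the instance to different hardware, which often
        # clears a "network device never appeared" state; the Elastic IP re-attaches afterwards
        aws ec2 stop-instances  --instance-ids i-0123456789abcdef0
        aws ec2 start-instances --instance-ids i-0123456789abcdef0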

    Read the article

  • NAT, iptables and problematic ports

    - by Rajie
    I am building a small office network with virtual machines. My schema is this:

        Computer A: gateway, IP 1.1.1.1, iptables used for NAT [eth0 = public internet, dhcp; eth1 = gateway]
        Computer B: client, IP 1.1.1.2, using Computer A as its gateway.

    NAT is working, and Computer B can access the internet using A's gateway. I redirected some incoming ports from A to B (for instance, if A receives a request on port 80, it goes automatically to Computer B's Apache). The thing is that I do not really understand how to open/close ports for Computer B from Computer A. I know how to close a port:

        iptables -A INPUT -p tcp --dport 80 -j DROP

    and it will refuse all incoming (not outgoing) connections to port 80. However, this works for the main interface eth0. I tried, for instance, to drop incoming and outgoing connections for Computer B on port 80:

        iptables -A FORWARD -i eth1 -o eth0 -p tcp --dport 80 -j DROP
        iptables -A FORWARD -i eth0 -o eth1 -p tcp --dport 80 -j DROP

    But it does not work, and I cannot figure out what I am doing wrong. Any clue?
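
    One thing worth checking, as a hedged sketch using the addresses from the question (the rest assumes a typical NAT setup): rules appended to FORWARD only matter if no earlier ACCEPT rule matches first, and traffic that was DNATed to Computer B still traverses FORWARD, so both rule position and direction matter:

        # insert the drops at the top of FORWARD so they are evaluated before any ACCEPT rules
        iptables -I FORWARD 1 -i eth0 -o eth1 -p tcp -d 1.1.1.2 --dport 80 -j DROP   # traffic coming in to B's port 80
        iptables -I FORWARD 2 -i eth1 -o eth0 -p tcp -s 1.1.1.2 --dport 80 -j DROP   # B going out to port 80 elsewhere
        # the packet counters show whether the DROP rules are being hit at all
        iptables -L FORWARD -v -n --line-numbers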

    Read the article

  • SMB access from XP to Windows 2008 R2

    - by Pablo
    Here's the thing: I have very slow file copy performance from Windows XP clients to Windows 2008 R2 servers. Here are the facts:

        Windows XP to Windows 2K3: fast
        Windows XP to Windows 2K8: very slow
        Windows 7 to Windows (any): fast

    Despite the fact that the obvious solution would be to upgrade to Windows 7, we have 900 desktops, so it's not an option in the short term. I have tried everything: disabling SMB 2.0, disabling security signatures, changing the TCP window size, disabling the W2K8 auto-tuning, upgrading the drivers, etc. We have eliminated the network: both the server and the client are connected to the same core switch (no hops, no routers, same VLAN). Upon monitoring the network with a packet capture utility, we see that the SMB packets being exchanged between the W2K8 and the XP machines are very small (256 bytes), despite the fact that the MTUs are properly set (1500) and there is no fragmentation whatsoever. In fact, those SMB packets show, in the IP datagram, that the window is 65535 or close to it. The same trace, made using the same application but against a Windows XP share instead of a W2K8 share (and that goes fast), shows SMB packets of 4096 bytes. I can post the traces if necessary. So, why does the XP-W2K8 negotiation arrange for 24-byte SMB payloads, whereas the XP-XP negotiation arranges for 4096-byte SMB packets? Any ideas? I am running short of them...
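
    For what it's worth, on the SMB1 path the server advertises its maximum buffer size during session negotiation, and on Windows that has historically been governed by the LanmanServer SizReqBuf parameter. A hedged sketch for inspecting, and if needed raising, it on the 2008 R2 server; treat this as an assumption to test against a fresh packet capture, not a known fix:

        reg query "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v SizReqBuf
        rem raise it towards the 65535 maximum, reboot, and re-run the trace to see if the
        rem negotiated buffer changes
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v SizReqBuf /t REG_DWORD /d 65535 /f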

    Read the article

  • Remote search system for samba shares

    - by fostandy
    I have several shares residing on a Samba server in a small business environment that I would like to provide search facilities for. Ideally this would be something like Google Desktop with some extra features (see below), but lacking that, the idea is to take what I can get, or at least get an idea of what is out there. Using Google Desktop Search as a reference model, the principal additional requirement is that it is usable from clients over the network. In addition there are some other notes (none of these are hard requirements):

    - The content is always files, residing on a single server, accessible from Samba shares.
    - Standard MS Office document fare, plus a lot of rars and zips which it is necessary to search inside.
    - Permissions support, allowing user-based control that reflects the current access permissions on the Samba shares. The user base will remain fairly static, so manual management of users is fine.
    - The majority of users will be Windows based.

    I know there are plenty of search indexers out there: Beagle and Tracker seem to be the most popular. Most do not seem to offer access control, and web-based/remote search does not seem to be a high priority. I've also seen a recent post on the Samba mailing list asking for pretty much the exact same thing. (They mention a product called IBM OmniFind Yahoo! Edition, and while the initial reception seems positive, I am pretty skeptical. RHEL 4? Firefox 2? Updated much?) What else is out there? Are you in a similar situation? What do you use?

    Read the article

  • Parallel port blocking

    - by asalamon74
    I have a legacy Java program which drives a special card printer by sending binary data to the LPT1 port (no printer driver is involved; the Java program creates the binary stream). The program was working correctly on the client's old computer: the Java program sent all the bytes to the printer, and after sending the last byte the program was not blocked. It took another minute to finish the card printing, but the user was able to continue working with the program. After changing the client's computer (but not the printer or the Java program), the program does not return until the card is ready; it is blocked until the last second. It seems to me that LPT1 behaves differently now than it did before. Is it possible to change this in Windows? I've checked the BIOS for parallel port settings: the parallel port is set to EPP+ECP (but I also tried the other two options, Bidirectional and Output only). Maybe some kind of parallel port buffer is too small? How can I increase it?

    Read the article

  • Dynamically add Server 2008 NLB Nodes

    - by Nick Jacques
    Hi all, I have a small NLB cluster for Terminal Servers. One of the things we're looking at doing for this particular project (this is for a college class) is dynamically creating Terminal Servers. What we've done is create policies for a certain OU that set the proper TS Farm properties and install the Terminal Server role and NLB feature. Now what we'd like to do is create a script to be run on our Domain Controller to add hosts to the preexisting NLB cluster. On our Server 2008 R2 Domain Controller, I was thinking of running the following PowerShell script I've kind of hacked together. Any thoughts on whether this will work? Is there any way I can trigger this script to run on the DC once all the scripts to install roles are done on the various Terminal Servers? Thanks very much in advance!!

        Import-Module NetworkLoadBalancingClusters

        $TermServs = @()
        $Interface = "Local Area Connection"
        $ou = [ADSI]"LDAP://OU=Term Servs,DC=example,DC=com"

        foreach ($child in $ou.psbase.Children)
        {
            if ($child.ObjectCategory -like '*computer*') {$TermServs += $child.Name}
        }

        foreach ($TS in $TermServs)
        {
            Get-NlbCluster 172.16.0.254 | Add-NlbClusterNode -NewNodeName $TS -NewNodeInterface $Interface
        }

    Read the article

  • Public DNS redirect subdomain to Windows Server 2003 DNS

    - by user125248
    I'm a programmer by trade but often dabble in sysadmin tasks and responsibilities. I have recently been tasked with setting up a Windows Server 2003 networking environment for a small business with multiple branches. The business already has a domain name it uses to host a website at www.example.com. Currently the DNS nameservers are at Zerigo, and I would very much like it to remain that way (as they specialize in just providing DNS services and they do this very well). We also have a bunch of other subdomains we use to conveniently connect to the various branches that have static IPs assigned by their ISPs, so we're able to connect easily to branch1.example.com. Is it possible to 'redirect' all intranet.example.com DNS requests to a Windows box? I've been doing a little reading and I see there are NS records that might be able to do this; the Windows DNS server could then perform all of the lookups for that subdomain, say, server1.intranet.example.com or client5.intranet.example.com. This would seem better to me than registering a new domain name for the organisation, as keeping a single domain name makes more organizational sense.
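
    That is exactly what an NS delegation does. A hedged sketch of the records to add at Zerigo (names and the IP are placeholders); the Windows Server 2003 DNS server then just needs a forward lookup zone named intranet.example.com:

        # records added in the example.com zone at Zerigo (zone-file style):
        #   intranet.example.com.      NS   ns1.intranet.example.com.
        #   ns1.intranet.example.com.  A    203.0.113.10    ; an address that reaches the Windows DNS box
        # verify the delegation from a client once it has propagated:
        nslookup server1.intranet.example.com
        dig +trace server1.intranet.example.com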

    Read the article

  • Xbox360 Universal Media Remote - out of sync?

    - by Traveling Tech Guy
    Hi, I have the Universal Media Remote from Microsoft, which was included with my HD-DVD package. I've been using it for over a year to watch videos/DVDs on my Xbox 360, and it saved me the hassle of navigating with the game controller (which turns itself off every 5 minutes). All of a sudden (it didn't fall or suffer any severe trauma), it does not communicate with the Xbox anymore: it is on, and I have replaced the batteries several times, but the Xbox does not respond to commands. The TV does - volume, channels, etc. - but I need the Xbox functionality. As far as I can see, there's no way to sync the remote with the Xbox - it lacks the small sync button that the game controllers have. I called Microsoft Support and spoke for an hour to someone who, I guess, didn't know what to do at all. Bottom line: since it's been over a year, they won't fix or replace it, so I have to get a new one. Before I do (if I do), I need to know if there's anything I can do with the existing remote, and whether I will have the same problem with a new one (i.e. whether the problem is with the Xbox itself). Thanks!

    Read the article

  • compose-key mappings differ between gtk and qt apps

    - by intuited
    I'm noticing that there is an inconsistency in the output of one of the compose-key combos. When I type ( [Compose] . . ) under Chrome, gedit, gnome-terminal, or roxterm I get the character '˙'. This is a small raised dot:

        $ echo -n '˙' | xxd
        0000000: cb99                                     ..

    When I type the same combo under konsole, yakuake, or kate, I get the character '…'. This is an ellipsis:

        $ echo -n '…' | xxd
        0000000: e280 a6                                  ...

    This is not a font issue: if I copy-paste the characters from an app using one toolkit to an app using the other, their appearance is maintained. I use a few other combos pretty regularly and they seem to work consistently across toolkits. I think this is a pretty recent phenomenon. I upgraded from Ubuntu 8.10 to 9.10 fairly recently, so this might be related. I'm not sure if this will reoccur if I restart X, and I'd rather not find out. Can someone explain how this is possible, and what I can do to resolve it? I'd like to have the ellipsis appear in all apps when that combo is entered.
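
    If the goal is simply to get '…' everywhere, a per-user override is one option. A hedged sketch, assuming an X session where the toolkits can be pointed at the X input method so that both honour ~/.XCompose:

        # append a per-user override for Compose, ., . to ~/.XCompose
        printf '%s\n' 'include "%L"' '<Multi_key> <period> <period> : "…"' >> ~/.XCompose
        # route GTK and Qt apps through the X input method so both read ~/.XCompose
        export GTK_IM_MODULE=xim
        export QT_IM_MODULE=xim
        # restart the applications (or the whole session) afterwards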

    Read the article

  • How to enable telnet with port 3306 during Master to master replication on MySQL Server

    - by Mainio
    I am trying to do master-to-master replication on Windows Server 2008. I am successfully able to replicate all the databases from Master 1 to Master 2, but I am unable to replicate the changes made on Master 2 back to Master 1. Later on I found that I can telnet from Master 2 to Master 1 on port 3306, but I am not able to telnet from Master 1 to Master 2. When I check netstat on both masters, I find the following results (I can't publish the public IPs, so I've written Master 1 and Master 2 in place of their respective addresses):

    Master 1

        C:\Users\XXXXX>netstat
        Active Connections
          Proto  Local Address      Foreign Address     State
          TCP    Master 1:3306      Master 2:61566      ESTABLISHED
          TCP    Master 1:3389      My remote:56053     ESTABLISHED
          TCP    127.0.0.1:3306     Master 1:60675      ESTABLISHED
          TCP    127.0.0.1:3306     Master 1:60712      ESTABLISHED
          TCP    127.0.0.1:60675    Master 1:3306       ESTABLISHED
          TCP    127.0.0.1:60712    Master 1:3306       ESTABLISHED

    Master 2

        C:\Users\XXXX>netstat
        Active Connections
          Proto  Local Address      Foreign Address     State
          TCP    Master 2:3389      My remote:56124     ESTABLISHED
          TCP    Master 2:61566     Master 1:3306       ESTABLISHED
          TCP    Master 2:61574     bil-sc-cm02:http    ESTABLISHED
          TCP    127.0.0.1:3306     Master 2:61562      ESTABLISHED
          TCP    127.0.0.1:3306     Master 2:61563      ESTABLISHED
          TCP    127.0.0.1:61562    Master 2:3306       ESTABLISHED
          TCP    127.0.0.1:61563    Master 2:3306       ESTABLISHED
          TCP    127.0.0.1:61573    Master 2:3306       TIME_WAIT

    All of this suggests that on Master 2, port 3306 is not reachable from outside. How can I fix this? Even a small suggestion would mean a million to me. Thank you. Regards, Udhyan
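
    Two usual causes, sketched below as hedged checks to run on Master 2 (the my.ini location and firewall rule name are assumptions for a typical Windows MySQL install): either MySQL is bound only to 127.0.0.1, or the Windows firewall on Master 2 never got the inbound 3306 rule that Master 1 evidently has:

        rem is MySQL listening on 0.0.0.0:3306, or only on 127.0.0.1?
        netstat -ano | findstr :3306
        rem check my.ini for "bind-address = 127.0.0.1" or "skip-networking"; remove them and restart MySQL
        rem open the port in the Windows firewall (Server 2008 syntax):
        netsh advfirewall firewall add rule name="MySQL 3306" dir=in action=allow protocol=TCP localport=3306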

    Read the article

  • How to use most of memory available on MySQL

    - by Zilvinas
    I've got a MySQL server which has both InnoDB and MyISAM tables. The InnoDB tablespace is quite small, under 4 GB. MyISAM is big, ~250 GB in total, of which 50 GB is indexes. Our server has 32 GB of RAM but it usually uses only ~8 GB. Our key_buffer_size is only 2 GB, yet our key cache hit ratio is ~95%. I find that hard to believe. Here are our key statistics:

        | Key_blocks_not_flushed | 1868        |
        | Key_blocks_unused      | 109806      |
        | Key_blocks_used        | 1714736     |
        | Key_read_requests      | 19224818713 |
        | Key_reads              | 60742294    |
        | Key_write_requests     | 1607946768  |
        | Key_writes             | 64788819    |

    key_cache_block_size is at its default of 1024. We have 52 GB of index data; is a 2 GB key cache really enough to get a 95% hit ratio? On the other hand, the data set is 200 GB, and since MyISAM uses OS (CentOS) caching I would expect it to use a lot more memory to cache accessed MyISAM data. But at this stage I see that the key_buffer is completely used, and our InnoDB buffer pool of 4 GB is also completely used; that adds up to 6 GB. Does that mean data is being cached using just 1 GB? My question is: how can I check where all the free memory could be used? How can I check whether MyISAM hits the OS cache for data reads instead of going to disk?
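
    Two small checks, as a hedged sketch (the CentOS path and the vmtouch tool are assumptions; vmtouch may need to be installed separately). Incidentally, the counters above give a hit ratio of 1 - Key_reads/Key_read_requests ≈ 1 - 60742294/19224818713 ≈ 99.7%, so a high figure from a 2 GB key cache is plausible if the hot index pages are a small fraction of the 52 GB. For the data files, any leftover RAM simply sits in the kernel page cache:

        # how much RAM is page cache right now (the "cached" column) - that is where MyISAM .MYD reads land
        free -m
        # which MyISAM data files are actually resident in the page cache (path is a placeholder)
        vmtouch /var/lib/mysql/mydb/*.MYD
        # rough I/O picture while queries run: high %util with little cache growth means reads hit disk
        iostat -x 5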

    Read the article

  • Server taking too long to respond error

    - by DCJones
    This is my first post on Server Fault and my first attempt at web server configuration. The hardware and software:

        CPU: GenuineIntel, Intel(R) Core(TM)2 Duo CPU E7500 @ 2.93GHz
        OS: Linux 2.6.18-128.el5
        Memory: 2 GB

    Background: I am running a small database (MySQL), around 1000 records, with each record containing 44 fields. At the start of each day (00:01) the tables are cleared and populated with fresh data. There are 10 remote PCs, all running Windows XP and the Firefox browser. All remote PCs are connected to the internet using a minimum 4Gb broadband connection. Each remote PC loads a URL which displays a dynamic page of data refreshed every 20 seconds. This is a continual process, 24 hours a day. The problem I am having is that at odd times throughout the day the browsers error with "Server taking too long to respond". What I am trying to find out is whether I have the correct settings in the httpd.conf file on the server. Any help or advice anyone can provide would be very helpful. Best regards, Dereck

    Server config file: httpd.conf

        ServerRoot "/etc/httpd"
        PidFile run/httpd.pid
        Timeout 120
        KeepAlive On
        MaxKeepAliveRequests 200
        KeepAliveTimeout 5

        <IfModule prefork.c>
            StartServers         8
            MinSpareServers      5
            MaxSpareServers     20
            ServerLimit        256
            MaxClients         254
            MaxRequestsPerChild 4000
        </IfModule>

        <IfModule worker.c>
            StartServers         2
            MaxClients         150
            MinSpareThreads     25
            MaxSpareThreads    150
            ThreadsPerChild     25
            MaxRequestsPerChild  0
        </IfModule>
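
    The config itself looks generous for ten clients polling every 20 seconds; the more telling question is what the server is doing at the moment a request stalls. A hedged sketch of how to watch that (assumes the stock CentOS 5 Apache layout, and mod_status with a /server-status location enabled for the first command):

        # busy/idle worker counts at the moment of a stall
        curl -s 'http://localhost/server-status?auto'
        # how many httpd processes exist right now, versus MaxClients 254
        ps -C httpd --no-headers | wc -l
        # "MaxClients reached" or database connection errors will show up here
        tail -n 100 /var/log/httpd/error_log
        # if workers are nowhere near the limit, a slow MySQL query or DNS lookup during
        # page generation is the more likely culprit than httpd.conf itself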

    Read the article

  • Unable to add datasource in ColdFusion 9 and SQL Server 2008 R2

    - by Evik James
    I just installed SQL Server 2008 R2 and ColdFusion 9.0.1 on my Windows 7 machine, for development use only. I have ColdFusion running well and serving pages (that aren't connected to a database). I can view my databases in SQL Server Management Studio. I successfully restored a few small databases and now I am trying to set up datasources for them through the ColdFusion Administrator. On my other machine, this was super easy. Not so much this time. The database I just added is named "Test". I am getting this error:

        Connection verification failed for data source: Test
        java.sql.SQLNonTransientConnectionException: [Macromedia][SQLServer JDBC Driver]Error establishing socket to host and port: localhost:1433. Reason: Connection refused: connect
        The root cause was that: java.sql.SQLNonTransientConnectionException: [Macromedia][SQLServer JDBC Driver]Error establishing socket to host and port: localhost:1433. Reason: Connection refused: connect

    It looks like the connection between ColdFusion and SQL Server is being refused. I know, brilliant observation, right? On my other machine, I was able to create datasources with just the default settings - no server name, username, or password. Any clue as to what might be the cause and how I might fix it?
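
    "Connection refused" on localhost:1433 generally means nothing is listening there; on a fresh SQL Server 2008 R2 install the TCP/IP protocol is often disabled, and named instances are not pinned to 1433 by default. A hedged sketch of how to check (the login and instance details are placeholders):

        rem is anything listening on 1433 at all?
        netstat -ano | findstr :1433
        rem if not: SQL Server Configuration Manager -> SQL Server Network Configuration ->
        rem Protocols for the instance -> enable TCP/IP, set the port to 1433, restart the service
        rem then confirm a plain TCP login works before retrying the ColdFusion datasource:
        sqlcmd -S localhost,1433 -U testuser -P testpassword -Q "SELECT @@VERSION"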

    Read the article

  • external drive enclosure -> software RAID 5?

    - by memilanuk
    Hello all, I have two older PCs on my LAN posing as 'servers': one running FreeNAS off a USB stick, using three 500GB HDDs in a ZFS RAID-Z pool serving as storage for the LAN, and one running Debian Lenny with an 80GB drive, used as a general-purpose 'tinker' box that I can ssh into, etc. The problem is that the SMART report for one of those 500GB drives in the FreeNAS box is showing some pre-failure attributes, and the whole array is a little small anyway. Rather than simply replace one 500GB drive with another 500GB drive, and still have no backup of the file server, I'd like to upgrade all the drives to 2TB ones - but I have nowhere to store that much data in the meantime. As such, I started looking at getting a 4-bay external drive enclosure with an eSATA card for the Debian box, with the hope of creating a RAID5 + LVM setup using those drives and backing the data up to that external enclosure. After the backup is done, I'd replace the drives in the FreeNAS box, rebuild the array there and mirror the data back. Then I'd have both the primary storage (on the FreeNAS box) and a backup (which I don't have currently) using the external drive enclosure on the Debian box. My big question is: most of these external drive boxes seem to claim support for JBOD, RAID 0, 1, 10, 5, etc. - should I presume that is simply fake RAID like many commodity mobos have, and not really usable in Linux? In that case, with all the drives hanging off the one eSATA connection, will Linux (specifically Debian Squeeze, as I plan on upgrading that box shortly) see all four drives, or just the first one? Will I be able to configure them in a RAID5 array as desired? Thanks, Monte
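
    Assuming the enclosure is run in JBOD mode and the eSATA controller supports port multipliers (that is the thing to verify before buying - without it only the first disk tends to show up), the Linux side is plain md plus LVM. A minimal sketch with placeholder device and volume names:

        # are all four enclosure disks visible? (look for sdb..sde)
        cat /proc/partitions
        # software RAID5 across them, then LVM on top
        mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
        pvcreate /dev/md0
        vgcreate backupvg /dev/md0
        lvcreate -l 100%FREE -n backup backupvg
        mkfs.ext4 /dev/backupvg/backup
        # watch the initial resync finish before trusting it with the only copy of the data
        cat /proc/mdstat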

    Read the article

  • Digital Asset Management, iPhoto / Aperture server... alternative

    - by Sisyphus
    Afternoon. Clients: 10, all Apple machines running either Leopard or Snow Leopard. Server: Snow Leopard Server (and I have an old Dell PowerEdge 650 at home running Gentoo 2.6, if anybody has a Linux solution). The situation: I work in a small design company with 8 people, and at present we are looking to consolidate all our image files into one location. Currently we each use our preferred single-user DAM solution, be it Adobe Bridge or iPhoto/Aperture (some don't bother at all). The file types commonly used are .psd, .pdf, .eps, .tiff, .jpg and RAW image files. Ideally what is needed:

    - Centralised on one server, but allows us to search via Spotlight (not essential, but would be nice)
    - Includes searchable metadata such as date, location, title
    - Open source, or as low cost as possible
    - Allows simultaneous users to import files

    So far I have looked at a few open-source DAM systems, such as Razuna, Gallery (not strictly DAM), ResourceSpace and Notre-DAM. While these are brilliant and open source, they don't integrate as smoothly with the desktop as iPhoto and Aperture do. For iPhoto and Aperture, I have tried creating a shared library on the server (a tad laggy), and also using a drive with no permissions, putting a library on it and letting each client read from it; however, if they want to put images into the library, it only supports one user at a time writing to it... Any ideas what could fulfil our needs? Or is it time to bite the bullet for Final Cut Server? Thanks in advance.

    Read the article

  • Outlook slow to open attachments

    - by Alistair McMillan
    When a colleague tries to open attachments in her email (Outlook 2003 talking to an Exchange 2007 server) they take ages to open. The files are relatively small, all less than 1 MB. We've tried creating a new Windows profile for the user and tried creating new Outlook profiles, but that hasn't made any difference. And we've tried accessing her account from someone else's PC, and the attachments open immediately there. The only thing that might provide a clue is that Process Monitor shows Outlook on her PC trying to write the file to a folder within the user's "Temporary Internet Files" folder with FAST I/O DISALLOWED errors. I can't find much useful information on that message online, though. What causes the FAST I/O DISALLOWED errors? And would that make opening attachments so incredibly slow that opening a < 1MB file can take a matter of minutes? UPDATE: I've discovered that this isn't just an issue with Outlook. Other files being accessed over the network show the same FAST I/O DISALLOWED errors in Process Monitor. The problem is just more noticeable with Outlook, because although other applications take a while to open files, it isn't a matter of minutes.

    Read the article

  • How do I host multiple independent, secured SharePoint sites (WSS 3.0) without using Active Directory?

    - by Kyle Noland
    I have a SharePoint site set up on one of my networks to service Active Directory users. To be clear, this is a Windows SharePoint Services 3.0 installation running on Windows Server 2003 Standard. It is not an option to upgrade the server or the SharePoint version. Management would like to create several new sites, one for each of a handful of clients. These sites will be used like "dropboxes" or FTP sites so that my company can make large files available to outside contacts, and vice versa. Here are my requirements:

    - I do not want to have to create Active Directory accounts for each external contact.
    - If possible, I would like to store the external usernames and passwords in a database that I can write a small GUI for, so that management can handle adding their own external contacts.
    - Each client site must be sandboxed from the others and from my main company SharePoint site.
    - I would like to keep everything running on port 80 and be able to access the sites as either clientname.mycompany.com or www.mycompany.com/clientname.

    If anybody has ever done this I would really appreciate hearing about any lessons you learned and suggestions for how to set this up. Kyle

    Read the article

  • ZFS: Mirror vs. RAID-Z

    - by John Clayton
    I'm planning on building a file server using OpenSolaris and ZFS that will provide two primary services: being an iSCSI target for XenServer virtual machines, and being a general home file server. The hardware I'm looking at includes 2x 4-port SATA controllers, 2x small boot drives (one on each controller), and 4x big drives for storage. This leaves one free port per controller for upgrading the array down the road. Where I'm a little confused is how to set up the storage drives. For performance, mirroring appears to be king, and I'm having a hard time seeing what the benefit of using RAID-Z over mirroring would be. With this setup I can see two options: two mirrored pairs striped together in one pool, or RAID-Z2. Both should protect against two drive failures and/or one controller failure... the only extra benefit of RAID-Z2 would be that any two drives could fail. The usable storage should be 50% of capacity in both cases, but the first should have much better performance, right? The other thing I'm trying to wrap my mind around is the benefit of mirrored vdevs with more than two devices. For data integrity, what, if any, would be the benefit of RAID-Z over a three-way mirror? Since ZFS maintains file integrity, what does RAID-Z bring to the table... doesn't ZFS's integrity checking negate the value of RAID-Z's parity?
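
    For reference, a minimal sketch of the two layouts being compared (Solaris-style device names are placeholders):

        # two 2-way mirrors striped together ("RAID10-like"): best random I/O, and it survives a
        # whole controller failure if each mirror spans both controllers
        zpool create tank mirror c1t0d0 c2t0d0 mirror c1t1d0 c2t1d0
        # raidz2 across all four disks: survives any two disks, but every write touches every disk
        zpool create tank raidz2 c1t0d0 c2t0d0 c1t1d0 c2t1d0
        # note: ZFS checksums detect corruption in both layouts; the mirror or parity redundancy is
        # what lets ZFS repair it, so checksumming complements rather than replaces the redundancy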

    Read the article

  • Can I use this power supply + case combination without causing problems?

    - by evan
    I am putting together a computer with an Antec P180 case and a Thermaltake TR2 RX 650W power supply. The problem is that the Antec P180 case has a separate compartment for the power supply, with an opening for the on/off switch and AC connector on one side, a wall with a small hole for routing cables on top, a wall on the bottom, and on the other side a fan which pushes air from the hard drive compartment into the power supply compartment. I think the design of the case assumes the power supply's fan is on the side next to the on/off switch, but the fan on the power supply I have is on top, which makes me worry about overheating the power supply. There is about half an inch between the top of the power supply and the wall, and the other fan should keep air flowing and push out the air that the power supply pushes upwards. Do you think this setup will work, or should I get another power supply? Thanks!! PS: This computer will be running an Ubuntu server, so it will always be on, but the rest of the components shouldn't generate as much heat as they would on, say, a gaming machine.

    Read the article

  • Plesk wildcard subdomain not working

    - by avdgaag
    I'm trying to set up a wildcard subdomain on my VPS. Ultimately I want to end up with this:

        main site:  my.domain.tld
        subdomain:  sub1.my.domain.tld - should end up serving my.domain.tld/sub1

    I am using Plesk 8.6. I have created a DNS A record pointing at my VPS's IP, then restarted the DNS server and waited up to 24 hours. But trying to ping sub1.my.domain.tld results in an unknown host error. So I know there's more involved - configuring Apache, etc. - but so far I cannot even get the subdomain resolving at all, let alone serving the right content. I have also tried a CNAME record, to no effect. I have also tried creating a regular subdomain with a fixed name, which also does not work. Pre-configured subdomains DO work, like ftp.my.domain.tld or mail.my.domain.tld. I am clearly missing something here, but my hosting provider charges a small fortune for any support request not involving hardware physically burning down, so I'm hesitant to ask them. Any ideas?
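
    Before touching Apache it's worth confirming the DNS half in isolation. A hedged sketch (the record below is the usual wildcard form; the nameserver hostname is a placeholder for whatever serves the my.domain.tld zone):

        # the zone needs a wildcard A record, roughly:
        #   *.my.domain.tld.   IN   A   <VPS IP>
        # query the authoritative nameserver directly, bypassing caches and propagation delays
        dig sub1.my.domain.tld @ns1.my.domain.tld +short
        dig anything-at-all.my.domain.tld @ns1.my.domain.tld +short
        # if these return nothing, the wildcard record never made it into the zone being served;
        # if they return the VPS IP, DNS is fine and the remaining work is an Apache
        # ServerAlias *.my.domain.tld plus the rewrite to my.domain.tld/sub1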

    Read the article

  • Backing up VMs to a tape drive

    - by Aljoscha Vollmerhaus
    I've got myself one of these fancy tape drives: an HP LTO-2 with 200/400 GB cartridges. The st driver reports it like this:

        scsi 1:0:0:0: Sequential-Access HP       Ultrium 2-SCSI   T65D

    I can store and retrieve files like a charm using tar; both tar cf /dev/st0 somedirectory and tar xf /dev/st0 work flawlessly. However, what I really want to back up are LVM LVs. They contain entire virtual machines with varying partition layouts, so using mount and tar is not an option. I've tried something like

        dd if=/dev/VG/LV bs=64k of=/dev/st0

    to achieve this, but there seem to be various problems with this approach. Firstly, I would like to be able to store more than one LV on a single tape. Now, I guess I could seek to concatenate the data on the tape, but I think this would not work very well in an automated scenario with many different LVs of various sizes. Secondly, I would like to store a small XML file along with the raw data that contains some information about the VM contained in the LV. I could dump everything to a directory and tar it up - not very desirable, since I would have to set aside huge amounts of scratch space. Is there an easier way to achieve this? Thirdly, from googling around it seems like it would be wise to use something like mbuffer when writing to the tape, to prevent what Wikipedia calls "shoe-shining" the tape. However, I can't get anything useful done with mbuffer. The mbuffer man page suggests this for writing to a tape device:

        mbuffer -t -m 10M -p 80 -f -o $TAPE

    So I've tried this:

        dd if=/dev/VG/LV | mbuffer -t -m 10M -p 80 -f -d 64k -o /dev/st0

    Note the added "-d 64k" to account for the 64k block size of the tape. However, reading data back from a tape written in this way never seems to yield any useful results - dd has been running for ages now and has managed to transfer only 361M of data from the tape. What's wrong here?
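
    For the "several LVs plus a small XML file per LV on one tape" part, the classic approach is the non-rewinding device plus mt to move between tape files, since each close of /dev/nst0 writes a filemark. A hedged sketch under those assumptions (LV names, the metadata path and the buffer size are placeholders; the mbuffer options follow the man-page example above):

        # write successive tape files; /dev/nst0 does NOT rewind between commands
        mt -f /dev/nst0 rewind
        for lv in vm1 vm2; do
            tar cf /dev/nst0 /backup-meta/$lv.xml                                    # tape file: the metadata
            dd if=/dev/VG/$lv bs=64k | mbuffer -m 512M -p 80 -s 64k -o /dev/nst0     # tape file: the raw image
        done
        mt -f /dev/nst0 rewind
        # restore the second LV's image: files are 0-based, so vm2's xml is file 2 and its image is file 3
        mt -f /dev/nst0 fsf 3
        dd if=/dev/nst0 bs=64k of=/dev/VG/vm2_restore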

    Read the article

  • How to bypass firewall to connect to a proxy server?

    - by Bruce
    I am conducting a small experiment on my office network. I have set up a proxy server on my desktop machine (connected to the LAN) and I have volunteers accessing the internet via my proxy server. Everything is working well. The problem is that people cannot connect to the proxy server from their laptops. I asked my network admin and he said the wireless network has a firewall which prevents users from connecting to my proxy. He said I could tunnel the traffic or use SSH, though. I am afraid I do not fully understand what is going on. Is there a way for users connected to the wireless network to reach my desktop? I am using FreeProxy on Windows as my proxy server: http://www.handcraftedsoftware.org/index.php?page=download FreeProxy allows me to create a SOCKS 4/4a/5 proxy. Is that what I need? Part of the experiment involves logging the URL requests of the users; I am doing a measurement study, so any solution must allow me to log the users' URL requests. Also, what changes do I need to make to the browser configuration?
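
    What the admin is describing is most likely an SSH tunnel from each laptop to the proxy machine, which the firewall only sees as one outbound SSH connection. A hedged sketch, assuming an SSH server can be run on (or alongside) the proxy desktop and that FreeProxy listens on port 8080 - both of those are assumptions:

        # on each laptop: forward a local port to the proxy through SSH
        ssh -N -L 8080:localhost:8080 user@proxy-desktop
        # then point the laptop's browser at HTTP/SOCKS proxy localhost:8080
        # (in Firefox: Preferences -> Advanced -> Network -> Connection Settings -> manual proxy)
        # URL logging is unaffected, since every request still terminates at FreeProxy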

    Read the article

  • Setting up dnsmasq for a local network

    - by WishCow
    Me and a small group of developers have just moved to a new office, and I'd like to set up dnsmasq on our development server so that when we deploy web apps there, we don't have to edit our own hosts files. We have a router at 192.168.3.1 which we don't have access to. I figured I'd install a DNS server on the development box, and we'd all record its IP as a secondary DNS server. Unfortunately I'm struggling to make this work. The name of the devel server is devbox, its IP is 192.168.3.99, and it's running the latest Ubuntu Server (Karmic). My computer is running Ubuntu Desktop (Karmic). What I'd like to achieve: let's say I have three websites, website1, website2 and website3, running on the development box. I'd like to access them by the URLs:

        http://website1.devbox
        http://website2.devbox
        http://website3.devbox

    So I have configured Apache on the devel box, installed dnsmasq, and put the following lines into its hosts file:

        192.168.3.99 website1.devbox
        192.168.3.99 website2.devbox
        192.168.3.99 website3.devbox

    and edited my own resolv.conf file to include the devel box as a nameserver:

        nameserver 192.168.3.99

    It's working fine; I can access the sites. The problem is that it doesn't scale well. I'd like all the domains ending with .devbox forwarded to the development box, and this is what I'm struggling with. I have tried putting

        192.168.3.99 devbox

    into the hosts file, and editing this line in dnsmasq.conf:

        # Add local-only domains here, queries in these domains are answered
        # from /etc/hosts or DHCP only.
        local=/devbox/

    But I cannot get it working. If I try any URL that is not explicitly present in the development box's hosts file, the DNS lookup fails. Is the local directive for something else? Am I looking in the wrong place?
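
    A hedged sketch of the dnsmasq side: local=/devbox/ only marks the domain as one dnsmasq answers itself, it doesn't synthesize records; the wildcard behaviour comes from the address directive (the IP is the one from the question, the rest is a standard dnsmasq setup):

        # /etc/dnsmasq.conf on devbox - answer every *.devbox name with the devbox address
        address=/devbox/192.168.3.99
        local=/devbox/
        # then restart dnsmasq:  sudo /etc/init.d/dnsmasq restart
        # and test from a client without touching its hosts file:
        #   dig website4.devbox @192.168.3.99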

    Read the article
