Search Results

Search found 13293 results on 532 pages for 'small ticket'.


  • Concatenating gzipped Apache logs

    - by markdrayton
    We rotate and compress our Apache logs each day, but it's become apparent that this isn't frequent enough. An uncompressed log is about 6 GB, which is getting close to filling our log partition (yep, we'll make it bigger in the future!) as well as taking a lot of time and CPU to compress each day. We have to produce a gzipped log for each day for our stats processing.

    Obviously we could move our logs to a partition with more space, but I also want to spread the compression overhead throughout the day. Using Apache's rotatelogs we can rotate and compress the log more often -- hourly, say -- but how can I concatenate all the hourly compressed logs into a running compressed log for the day, without decompressing the previous logs? I don't want to uncompress 24 hours' worth of data and recompress it, because that has all the disadvantages of our current solution.

    Gzip doesn't seem to offer any append or concatenate option, but perhaps I've missed something obvious. This question suggests straight shell concatenation "works", in that the archive can be decompressed, but the fact that gzip -l doesn't work seems a bit dodgy. Alternatively, perhaps this is still a bad way to do things. Other suggestions are welcome -- our only constraints are our relatively small log partition and the need to provide a daily compressed log.
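
    For what it's worth, the gzip file format explicitly allows multiple members in one file, so plain shell concatenation does produce a valid archive (gzip -l is the known casualty: it reports sizes for the last member only, while zcat/gunzip decompress every member in sequence). A minimal sketch of an hourly cron job, assuming hypothetical file names under /var/log/apache2:

        #!/bin/sh
        # Compress the hourly log that rotatelogs just closed...
        H=$(date +%H)    # hypothetical naming: access.HH, access.YYYY-MM-DD.gz
        gzip "/var/log/apache2/access.$H"
        # ...and append it to the day's running archive. This spreads the CPU
        # cost over 24 small jobs and never re-reads earlier hours.
        cat "/var/log/apache2/access.$H.gz" >> "/var/log/apache2/access.$(date +%F).gz"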


  • Understanding Netbook Partitions & UNR Installation

    - by Wesley
    Hi all, I have a Samsung N120 netbook (with upgraded 2GB RAM). I'm just looking at Disk Management right now (in Windows XP) and I'm trying to understand which partition holds what. There is "Local Disk (C:)" which is 40GB, "RECOVERY" (no drive letter) which is 6GB, and then "TEMP_PART01 (D:)" which is 103.05GB. XP is installed on Local Disk (C:) and I've only used this drive for all my files, etc. Recovery is recovery... probably not removable anyway.

    Now, what bugs me is the TEMP_PART01 (D:) partition, which contains quite a bit of random junk, such as EULA text documents, an "external installer", UI Wrapper Resource DLLs, a "VC_RED" Windows Installer Package and a few more files. I have no clue what any of it means, but I'm assuming this is stuff that could have been on Local Disk (C:), along with the WINDOWS, Program Files, and Docs and Settings folders. So, how should I go about this? Should I have kept all my data on D: and left all OS-related files/folders on C:?

    Now, I want to install Ubuntu Netbook Remix. Question is, will this install within Windows, if I want to dual boot it? If not, should I partition D: into two smaller chunks, on one of which I would install UNR? There are basically two questions in here, but it'd be great to get answers for both! Thanks in advance.


  • Fedora Core 6 Migration

    - by Matthew Sprankle
    I am at a loss as to what I should do with this server. I need it to run PHP 5.3 and a corresponding version of MySQL. I received a client today through work who is using Fedora Core 6, running 10 very small websites on a very hodge-podge setup.

    My original idea was just to upgrade to PHP 5.3. I have yum (3.0.8) reconfigured for the Fedora archive, but the latest version of PHP it offers is 5.1.8. I am still relatively new to server setups and am nervous about wiping their server to upgrade it. Since it is about 6-8 years old, I'm not sure it will even support the newest version of Fedora.

    The server specs are: Parallels Plesk Panel version 9.5.4; operating system Linux 2.6.9-023stab048.4-smp; CPU GenuineIntel, Intel(R) Xeon(R) CPU E5335 @ 2.00GHz; 10GB disk space and 1GB of memory.

    I use Fedora for my personal server, so I am a little familiar with it, though I haven't done anything too extravagant. Is there a way I can escape this nightmare by installing PHP 5.3, or do I need to migrate these sites to a new server?


  • Locate devices within a building

    - by ams0
    The situation: our company is spread between two floors of a building. Every employee has a laptop (MacBook Air or MacBook Pro) and an iPhone. We have static DHCP mappings and DNS resolution, so every phone gets a name like employeeiphone.example.com and every laptop gets employeelaptop.example.com on the Ethernet interface (the wifi gets a dynamic IP from a small range dedicated to the purpose). We know each and every MAC address of the phones and laptops, since we do static DHCP mapping (the ISC DHCP server runs on Linux). On each floor we have a Netgear stack of two switches, connected to each other via 10Gb fiber. No VLANs so far. On every floor there are 4 AirPort Extremes providing a single SSID with WPA2 authentication.

    The request: our CTO wants to know who is present on which floor.

    My solution (so far): every switch keeps a table of MAC addresses and originating ports, and on each stack all the MAC addresses coming from the other floor are listed as arriving on port 48 (the fiber link). So I came up with:

    1) Get the table from each switch via SNMP
    2) Filter out the entries associated with port 48
    3) Grep dhcpd.conf, removing all entries not *laptop and not *iphone
    4) Match the two lists for each switch, output in JSON or XML
    5) Present the results on a dashboard for all to see

    I wrote it in bash with a lot of awk and sed. It kinda works, but for some reason I always have stale entries in the switch lookup tables, making it unreliable; some people may have put their laptop to sleep, iPhones drop their connections after a while if not woken up, and so on. I've searched left and right, and we are prepared to spend a little on the project too (RFIDs?). Does anybody do something similar? I can provide the script if needed (although it's really specific to our switches and naming scheme). Thanks!

    P.S. Perhaps this is a question for Stack Overflow? Please move it if so.
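
    Steps 1 and 2 can be done against the standard BRIDGE-MIB forwarding table; a minimal sketch, assuming net-snmp tools, SNMP v2c with community "public", and the uplink on port 48:

        #!/bin/bash
        # List the MAC addresses a switch has learned, excluding the uplink port.
        # dot1dTpFdbPort (.1.3.6.1.2.1.17.4.3.1.2) maps a MAC -- encoded as the
        # last six sub-identifiers of the OID, in decimal -- to a bridge port.
        SWITCH="$1"
        snmpwalk -v2c -c public -Oqn "$SWITCH" .1.3.6.1.2.1.17.4.3.1.2 |
        awk '$2 != 48 {
            n = split($1, o, ".")
            printf "%02x:%02x:%02x:%02x:%02x:%02x\n",
                   o[n-5], o[n-4], o[n-3], o[n-2], o[n-1], o[n]
        }'

    The output can then be matched against the *laptop / *iphone entries grepped from dhcpd.conf, as in steps 3 and 4.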


  • Does Mac OS X throttle the RATE of socket creation?

    - by pbhogan
    This may seem programming-related, but this is an OS question. I'm writing a small high-performance daemon that takes thousands of connections per second. It's working fine on Linux (specifically Ubuntu 9.10 on EC2). On Mac OS X, if I throw a few thousand connections at it (roughly 16350) in a benchmark that simply opens a connection, does its thing and closes the connection, then the benchmark program hangs for several seconds waiting for a socket to become available before continuing (or timing out in the process). I used both Apache Bench and Siege (to make sure it wasn't the benchmark application).

    So why/how is Mac OS X limiting the RATE at which sockets can be used, and can I stop it from doing this? Or is there something else going on? I know there is a file descriptor limit, but I'm not hitting that. There is no error on accepting a socket; it simply hangs for a while after the first (roughly) 16000, waiting -- I assume -- for the OS to release a socket. This shouldn't happen, since all the prior sockets are closed by that point. They're supposed to become available at the rate they're closed, and they do on Ubuntu, but there seems to be some kind of multi-second (5-10?) delay on Mac OS X. I tried tweaking ulimit every which way. Nada.
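
    One avenue worth checking (an assumption based on BSD TCP behaviour, not a confirmed diagnosis): a closed TCP socket lingers in TIME_WAIT for 2xMSL before its port can be reused, and the default ephemeral port range is 49152-65535 -- 16384 ports, suspiciously close to the ~16k connections observed before the stall. The relevant sysctls on Mac OS X:

        # Ephemeral port range (defaults: 49152 and 65535, i.e. 16384 ports):
        sysctl net.inet.ip.portrange.first net.inet.ip.portrange.last
        # Maximum segment lifetime in milliseconds; TIME_WAIT lasts 2*MSL:
        sysctl net.inet.tcp.msl
        # An experiment only (shortens TIME_WAIT system-wide):
        sudo sysctl -w net.inet.tcp.msl=1000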



  • How to make my Ubuntu an internet gateway for my Android phone

    - by yacine
    I want to use my school's internet connection on my Android. The problem is that they have a Squid proxy, and many applications on my phone don't use the proxy at all. The obvious solution is to install a transparent proxy on the Android to force all applications to connect through it. The problem is that I'd need to root the phone to make that work, and I don't want to, because it's not really my phone and rooting is a little risky.

    Another solution, which is safer, is to make my computer act as a gateway, so I put my Ubuntu machine's IP in the gateway setting of the phone. I'm running a small proxy on my Ubuntu machine (cntlm), so I redirect the Android traffic to it. I did this with iptables as follows:

        iptables -t nat -A PREROUTING -s 10.0.1.118 -p tcp -j REDIRECT --to-ports 8888
        iptables -t nat -A PREROUTING -s 10.0.1.118 -p udp -j REDIRECT --to-ports 8888

    10.0.1.118 is the IP of the phone; 8888 is the port of cntlm (the proxy on my PC). Now, on the phone: when I enter www.google.com in the browser I get nothing (website not found, Firefox's error message). But when I enter http://74.125.143.101 (Google's IP) I get an error message from the school proxy -- so it worked in some way: my PC redirected the phone's traffic to the Squid proxy. The error message is:

        The requested URL could not be retrieved
        While trying to process the request:
          GET / HTTP/1.1
          Host: 74.125.143.101
          User-Agent: ...
          ...

    I think the problem is in the request line: it should be GET http://74.125.143.101/ HTTP/1.1. But I don't understand what's happening, and I'm a certified CCNA.
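
    For reference, the distinction in play here is plain HTTP, not anything school-specific: a proxy-aware client sends the absolute URI in its request line, while a client that was transparently redirected still sends only the path -- which an explicit proxy like cntlm, or Squid in its default non-transparent mode, cannot route:

        # What a proxy-aware client sends to a proxy port:
        GET http://74.125.143.101/ HTTP/1.1
        Host: 74.125.143.101

        # What an unaware client sends after an iptables REDIRECT:
        GET / HTTP/1.1
        Host: 74.125.143.101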


  • How to prevent responses to who-has requests on a virtual eth interface?

    - by user42881
    We use small embedded x86 Linux servers, each equipped with a single physical Ethernet port, as gateways for an IP video surveillance application. Each downstream IP camera is mapped to a separate virtual IP address like this: real eth0 IP address = 192.168.1.1; camera 1 (eth0:1) = 192.168.1.61; camera 2 (eth0:2) = 192.168.1.62; etc., all on the same physical eth0 port.

    This approach works well, except that a specific third-party Windows video recording application, running on a separate PC on the same LAN, automatically pings the virtual IPs on system startup looking for unique who-has responses -- and when it gets back the same eth0 MAC address for each virtual interface, it freaks out and won't allow us to subsequently enter those addresses manually. The Windows app doesn't mind, though, if it receives no answer to the who-has ping.

    My question: how can we either (a) shut off the who-has responses just for the virtual eth0:x interfaces while keeping them for the primary physical eth0 port, or (b) spoof a valid but different MAC address for each virtual interface? Thanks!
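
    On option (b), one route worth a look (a sketch, assuming a reasonably modern kernel with iproute2): macvlan interfaces behave much like eth0:N aliases, but each carries its own MAC address, so the ARP reply differs per camera IP:

        # Replace the eth0:1 alias with a macvlan interface that has its own MAC:
        ip link add cam1 link eth0 type macvlan mode bridge
        ip addr add 192.168.1.61/24 dev cam1
        ip link set cam1 up

    One caveat to verify for this setup: by default the parent interface and its macvlans cannot talk to each other directly, though external hosts such as the recording PC can reach both.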


  • HP LaserJet 1515: Disable "refill" warning

    - by Pekka
    I have an HP LaserJet 1515 connected to a Windows 7 PC. The magenta cartridge is empty; the printer shows a warning to that effect and won't let me print even black-and-white documents any more.

    I can't turn the warning off manually using the printer's small console: when I try to enter any menu, the display says "Menu access disabled". I have no idea why. There is a setting to override the warning, but it can't be changed through the network interface in the browser (although it does appear on the status page). According to the manual, the HP printing tool is supposed to offer a switch for this, but it won't install on my Windows 7 machine. It just rumbles about for half an hour, only to magnificently exit with an "unknown error" requiring a reboot. On second look, the problem seems to be that Windows 7 just isn't supported: there is no download link for the tool when you specify Windows 7 as your OS.

    I just want to print a black-and-white document on a printer whose black cartridge is still 65% full. Is this indeed impossible?

    On second thought, I'm cross-posting this on the HP support forum. I'll update here if anything comes up.


  • Install Windows 8 on SSD and Program & Users on HDD

    - by Foe
    I have been dealing with a few problems while installing Windows 8 on my computer. In my old configuration, I had Windows 7 installed on my 60 GB SSD, with my programs and user data on my 1 TB HDD, thanks to links (NTFS junctions). Yet while installing Windows 8 on the SSD, it created a small "system related" partition on my HDD. Plus, I'm afraid using only links is a bit cheap, and I saw lots of people messing with their registry when trying to put user data on another drive.

    I read a lot about optimizing Windows 8 for SSDs, putting Users on another drive, and very similar situations that didn't quite correspond to what I was trying to achieve. Here's what I tried:

    http://www.eightforums.com/tutorials/4275-user-profiles-relocate-another-partition-disk.html

    Booting in Audit mode and using an XML answer file to relocate Users didn't work, as the version specified in the file is a test one and I don't know what to enter if I'm using the final release. Booting with the install DVD in repair mode to copy the Users folder and create a link resulted in an error on the logon screen when entering my password, saying "The profile can't be loaded" (a rough translation of the error from French to English).

    Does anyone know how to do a clean separated install of Windows 8, with the OS on one drive and the other data on a second one? Thanks.
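
    For the link-based approach, a minimal sketch of moving a single profile by hand (assumptions: example paths, the profile is not logged in, elevated command prompt; this mirrors what the linked tutorial automates and is not the official Microsoft procedure):

        robocopy C:\Users\Foe D:\Users\Foe /E /COPYALL /XJ
        rmdir /S /Q C:\Users\Foe
        mklink /J C:\Users\Foe D:\Users\Foe

    The /XJ flag matters: it stops robocopy from recursing into the junctions Windows already keeps inside each profile.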


  • Where is SQLite on the Nokia N810? How do I get it?

    - by klausnrooster
    I need SQLite3 with the Tcl interface for a simple command-line app I use on all my PCs and want to use on my new (to me) Nokia N810. But find turns up no SQLite on the N810, even though several apps supposedly use it, like the maps/GPS (poi.db) and others. I ran sudo gainroot before using find and tried with wildcards * and dot . -- no luck. I guess it's embedded in the apps that use it?

    The N810's application manager doesn't offer SQLite. apt-get has it, but then fails with the error "Unable to fetch some archives, ..."; the "update" and "--fix-missing" options do not help.

    I don't really want to get Scratchbox and compile SQLite and its Tcl extension from source. I looked at the README and it seems like it might be straightforward. However: [1] it's not clear which Scratchbox download I would need (there are dozens in the index of /download/files/sbox-releases/hathor/deb/i386); [2] I've never compiled any C sources successfully before; [3] the whole mess entails a lot of learning curve for what is a very small one-off personal project.

    http://www.gronmayer.com/ has a few packages (sqlite (v. 2.8.17-4), libsqlite0 (v. 2.8.17-4), and libsqlite0-dev (v. 2.8.17-4)), but they appear to be part of some huge package set and may or may not include the Tcl interface... and no, I don't want to port my tiny app to Python.

    Is there some option I've overlooked? Must I get Scratchbox and compile for ARM myself? Thanks for your time.


  • Cannot access Domain Controller through VPN

    - by Markus
    In our small network there is a Windows 2008 R2 domain controller that also serves as the remote access server. For years we could access this server and the resources in the network over a VPN connection without any problem. For some time now, however, I am able to connect to the VPN, but my Windows 8 client (and another one I used for testing purposes) cannot reach the domain controller afterwards. I can access any other server in the network, but there seems to be a problem regarding the trust between the client(s) and the server. If I connect the client to the network directly over a LAN cable, everything works as expected. I can also connect to another server over VPN and open an RDP session to the DC from there without a problem.

    On the client, whenever I try to access the DC, I get an access-denied message. I've tried updating the group policies both over VPN and LAN. I've also removed the client from the domain and re-added it. The client shows a message that Windows requires valid login information when connected to the VPN -- but my credentials are valid: they work when I log on to the client while not connected to the VPN, and also when connected over the LAN. Turning off the firewall on the client and the server did not change anything. DNS resolution works both on the server and the client.

    What else can I do to diagnose and solve the problem?
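
    A few standard diagnostics that may narrow down where the trust breaks while connected to the VPN (a sketch; replace MYDOMAIN with the actual domain name):

        rem State of the machine's secure channel to the domain:
        nltest /sc_query:MYDOMAIN
        rem Which domain controller the locator finds from the VPN:
        nltest /dsgetdc:MYDOMAIN
        rem Kerberos tickets held by the current logon session:
        klist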


  • How are large companies handling the storing and cataloging of software installation disks?

    - by CT
    I just started working in the IT department of a small-to-medium-sized construction company with about 200 users. One of my responsibilities is to set up and configure all new machines that come in. I would like suggestions on how to best manage the installation disks and licenses of the software that comes with them, plus any additional licensed software such as AutoCAD, Photoshop, etc., as well as peripheral driver disks for printers and scanners.

    Right now every machine is associated with and labeled with an asset ID. All asset IDs are kept in a spreadsheet with the applicable serial numbers, current user, warranty info, and software licenses. The physical disks are kept in folders in a cabinet, each folder marked with the asset ID as well as the current user.

    My problem is that the system was not maintained very thoroughly before I came to the company. There are plenty of software folders with no asset IDs labeled on them, plenty of missing folders (most likely among the unlabeled ones), and folders with names but no asset IDs. Machines get switched to different users without the folders and spreadsheet being updated. I am not saying this method is necessarily bad if implemented and managed well, but I am going to have to spend a lot of time fixing the system currently in place. I thought I would first ask the community how others manage this process, in case there are easier, more efficient ways of doing so. Thank you.


  • Virtual folder for multiple sites

    - by Cups
    I am creating a very simple flat-file CMS for small (multilingual) websites. The little file writing that goes on is handled by 4 scripts in a publicly available folder named /edit in each site. Given that I now have two websites working on that simple system:

        websiteA/index.php (etc)
        websiteA/edit/
        websiteB/index.php (etc)
        websiteB/edit/

    what is the best way of making that /edit folder "virtual", so that these and each subsequent website owner can log in to their own view of /edit, yet the code exists in only one place? I do not want the website owners to have to log in from a central website, but from their own /edit directory. I have already read about different solutions, seemingly using the <Directory> directive in my httpd.conf declaration for each website, and also using straight mod_rewrite, but admit to becoming confused by some of the terminology. Each website has its own config file, which contains path settings and so on. What, in your opinion, is the best way to handle this?

    EDIT: In light of a reply, I suppose that given a virtual host directive such as this:

        <VirtualHost 00.00.00.00:80>
            DocumentRoot /var/www/html/websitea.com
            ServerName www.websitea.com
            ServerAlias websitea.com
            DirectoryIndex index.htm index.php
            CustomLog logs/websitea combined
        </VirtualHost>

    is it possible to create an alias inside that directive for the folder websitea.com/edit?
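
    That is exactly what the Alias directive is for; a minimal sketch, assuming the shared scripts live at the hypothetical path /var/www/html/cms/edit:

        <VirtualHost 00.00.00.00:80>
            DocumentRoot /var/www/html/websitea.com
            ServerName www.websitea.com
            ServerAlias websitea.com
            DirectoryIndex index.htm index.php
            CustomLog logs/websitea combined
            # Serve the single shared copy of the code under this site's /edit URL:
            Alias /edit /var/www/html/cms/edit
            <Directory "/var/www/html/cms/edit">
                AllowOverride All
                Allow from All
            </Directory>
        </VirtualHost>

    Since the one copy of the code then serves several hosts, the scripts would have to select the right per-site config file from the request's Host header.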


  • Easiest way to find out if user has either Windows 7 or Vista (through telephone support)?

    - by Rabarberski
    If you have to provide some initial troubleshooting support by phone [or email], and you don't have access to the PC itself, what is the easiest and most foolproof question to find out whether the 'dumb' user is running Windows 7 or Windows Vista?

    For example, determining whether the user has Windows XP or Windows Vista/7 is easy: just ask whether the button in the bottom-left corner is (a) square with the word 'Start' on it, or (b) round. But how to tell the difference between Vista and 7?

    Edit: For all the existing answers, the user has to type something, and type it correctly. Sometimes even that is hard for a computer-illiterate user. My XP example just requires looking. If it exists (although I am afraid it doesn't), a solution based purely on something that looks different between Vista and 7 would stand above all others. (Which makes Dan's suggestion to "turn the box over and look at the label" not so stupid.) Perhaps the small 'show desktop' rectangle at the right side of the taskbar (was that present in Vista?)
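
    If a small amount of typing is acceptable after all, one short and hard-to-mistype option (standard on both versions) is the Run dialog -- Win+R, then:

        winver

    which opens an "About Windows" box naming the version in large print.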


  • OSX server setup suggestions

    - by Tom
    I am looking into the possibility of setting up an OS X server for my employees, and would like some input on the best approach to meet my needs, plus perhaps some suggestions if I am moving in the wrong direction. I am thinking of a Mac mini OS X server, and am not sure whether my needs will be met and what possibilities are out there. I want these capabilities:

    - Groups/users managed on the server
    - Shared folders and private folders for users/groups
    - Access to activated services
    - Server-hosted software for the users (development tools, etc.), similar to Windows Terminal Server
    - A virtual desktop environment (both locally and over internet/VPN)
    - Accessible from both Mac and Windows

    The reason I am looking at OS X Server is that my employees work almost exclusively in an OS X environment, and I want to offer the ability to log on to the server through some kind of terminal software and have full access to their OS X work environment and software from their Mac or PC, from anywhere they might be -- instead of having multiple setups and needing to spend a lot of time installing and configuring software on every client.

    This is a small business, where some work on the local network and others from the internet, preferably through VPN. A terminal server solution that is fast and easy to manage would be perfect for our needs. So if anyone has experience with a similar setup, please let me know what you did and your experiences with it.


  • Combine multiple network interfaces to connect to a dedicated server

    - by Dženis Macanovic
    This is an underpaid employee writing, who's apparently responsible for all the IT stuff in a very small (non-IT) company. Today said company got a bunch of PCs/workstations, a switch, a computer that's supposed to be used as a router, two DSL connections (each 16 MBit/s downstream and 1 MBit/s upstream), and a dedicated server which is hosted and managed professionally by a larger local company with some decent connection speed (1 GBit/s in both directions, if I'm not mistaken).

    This is what I've set up (note that I'm not making use of the second DSL connection at all):

                      ETH0                     ETH1
        [ SWITCH ]---[ LINUX DEBIAN ROUTER ]---[ DSL MODEM 1 ]---[ INTERNET ]
          |  |
        PC1  PC2 ...

    Then my boss asked me whether it was somehow possible to get 32 MBit/s downstream and 2 MBit/s upstream. At the time I replied "no" without thinking too much about it. Now I've just had the following idea:

                          ETH1
                           ,---[ DSL MODEM 1 (NON-STATIC IP) ]---,
        [ SWITCH ]---[ LINUX DEBIAN ROUTER ]                     [ INTERNET ]      [ LINUX DEBIAN SERVER ]---[ INTERNET ]
          |  |             '---[ DSL MODEM 2 (STATIC IP) ]-------'                   ETH0
        PC1  PC2 ...      ETH2

    ...but I have absolutely no clue how to implement it. Would that even be possible? What would the masquerading rules look like on the router? What about the server? I didn't find anything on the internet, mainly because I couldn't come up with any good keywords to search for to begin with. English obviously isn't my first language. Thanks in advance for your time!
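
    One building block for this (a sketch, not the whole answer -- the addresses are hypothetical, and per-flow balancing raises the aggregate to 32/2 MBit/s without speeding up any single connection): Linux can install a multipath default route that spreads outgoing connections across both modems:

        # Assuming modem 1 is 192.168.1.1 on eth1 and modem 2 is 192.168.2.1 on eth2:
        ip route replace default scope global \
            nexthop via 192.168.1.1 dev eth1 weight 1 \
            nexthop via 192.168.2.1 dev eth2 weight 1
        # NAT out of whichever link a flow was routed to:
        iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
        iptables -t nat -A POSTROUTING -o eth2 -j MASQUERADE

    Truly bonding the two lines into one logical pipe, so that a single transfer sees the combined bandwidth, would instead need a tunnel per DSL line terminated on the dedicated server, with the two tunnels aggregated on both ends -- considerably more involved.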


  • Windows 7, network transmit (send) not working

    - by user326287
    My Windows 7 machine worked for two years without problems, but now it cannot transmit (send) large amounts of data over the LAN/internet.

    I can:
    - ping anything
    - browse the internet and download files at full speed
    - send e-mails with very small attachments
    - measure a stable, full-speed download on Speedtest.net

    I can't:
    - test upload speed on Speedtest.net (the upload gets stuck)
    - save/send e-mail messages with a big (128k) attachment, independent of the e-mail provider or mailbox

    THIS IS NOT A HARDWARE/CABLE/CARD OR OTHER NETWORK DEVICE PROBLEM! When I boot from a Linux live CD, without ANY hardware change, all data sending and testing works correctly, at full speed.

    I have already tried, in Windows 7: disabling the Windows/3rd-party firewall completely; resetting the IP stack parameters (netsh int ip reset c:\resetlog.txt); System Restore; reinstalling the LAN driver.

    When I inspect the packets with Wireshark on Windows, I see lots of "TCP Retransmission" (maybe 60% of sent packets), and sometimes "TCP Dup Ack" or "TCP Out-of-Order". Linux doesn't do this. Thank you for the help.
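
    Since Linux on the same hardware is clean, one suspect worth ruling out (an assumption from the symptom pattern, not a confirmed diagnosis) is the TCP offload features in the Windows NIC driver, which Linux would not use the same way. They can be toggled for a retest from an elevated prompt:

        netsh int tcp show global
        rem Temporarily disable offload/tuning features, then retry an upload:
        netsh int tcp set global chimney=disabled
        netsh int tcp set global rss=disabled
        netsh int tcp set global autotuninglevel=disabled

    Large Send Offload itself is toggled per-adapter in Device Manager rather than through netsh.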


  • How to decouple development server from Internet?

    - by intoxicated.roamer
    I am working in a small setup with 4 developers (which might grow to 6 or 8 in a couple of years). I want to set up an environment in which developers get internet access but cannot share any company data over the internet. I have thought of the following plan:

    1) Set up a centralized git server (Debian). The server will have internet access. A developer will only have a git account on that server, and won't have any other account on it.
    2) Do not give internet access to the developers' individual machines (Windows XP/Windows 7).
    3) Run a virtual machine (any multi-user OS) on the centralized server (the same one that hosts git). Developers will have accounts on this virtual machine and can access the internet through it. Any data movement between this virtual machine and the underlying server, as well as any of the developers' machines, is prohibited.
    4) All developers require a USB port on their local machine so that they can burn their code into a microcontroller. This port will be made available only to the associated software that dumps the code into a microcontroller (MPLAB in our case); all other software will be prohibited from accessing the port.

    As more developers are added, providing internet access for them will become difficult with this plan, as it will slow down the virtual machine running on the server. Can anyone suggest an alternative? Are there any obvious flaws in the above plan?

    Some key details of the server: OS: Debian; RAM: 8GB; CPU: Intel Xeon E3-1220v2 4C/4T.
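
    For the git-only accounts in step 1, the stock tool is git-shell; a minimal sketch (assuming OpenSSH and one shared "git" account holding each developer's public key):

        # On the Debian server:
        sudo adduser --disabled-password --shell /usr/bin/git-shell git
        sudo -u git mkdir -p /home/git/.ssh
        cat dev1.pub | sudo -u git tee -a /home/git/.ssh/authorized_keys
        sudo -u git git init --bare /home/git/project.git

        # From a developer machine (LAN only):
        git clone git@gitserver:project.git

    git-shell refuses interactive logins, so the account can push and pull but cannot be used as a general foothold on the server.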


  • Shrinking Windows and recovery partitions on the Samsung New Series 9

    - by bobbaluba
    I just bought a Samsung NP900X3C, and as I was about to install Linux, I noticed that the Windows and recovery partitions occupy a major portion of the disk. The disk is a 128 GB SSD, and I want to keep the Windows partition in order to play some games once in a while, but the Windows partition is already 45 GB full (with no installed programs) and the recovery partition is 20 GB. That leaves under 60 GB for Linux, which is not optimal, since that is what I'll be using most of the time, and there would be no room for games on the Windows partition.

    There are also two small partitions I can't account for: a 100 MB one at the start of the disk that I'm guessing is some kind of boot partition, and a 5 GB one described as an "OS/2 hidden C: drive". What I'm wondering is: can I delete the recovery partition? And what about the mystical 5 GB partition?

    Here is what fdisk reports:

        ubuntu@ubuntu:~$ sudo fdisk -l

        Disk /dev/sda: 128.0 GB, 128035676160 bytes
        255 heads, 63 sectors/track, 15566 cylinders, total 250069680 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x83953ffc

           Device Boot      Start        End    Blocks  Id  System
        /dev/sda1   *        2048     206847    102400   7  HPFS/NTFS/exFAT
        /dev/sda2          206848  198273023  99033088   7  HPFS/NTFS/exFAT
        /dev/sda3       198273024  207276031   4501504  84  OS/2 hidden C: drive
        /dev/sda4       207276032  250068991  21396480  27  Hidden NTFS WinRE


  • apache subdomain configuration

    - by terrid25
    I seem to be having a small problem setting up a subdomain in Apache under CentOS. I have the following:

        <VirtualHost *:80>
            ServerName www.domain.co.uk
            ServerAlias domain.co.uk dev.domain.co.uk
            DocumentRoot "/var/www/html/domain/web"
            DirectoryIndex index.php
            Alias /sf /var/www/html/symfony14/web/sf
            <Directory "/var/www/html/domain/web">
                AllowOverride All
                Allow from All
            </Directory>
        </VirtualHost>

        <Directory "/var/www/html/symfony14/web/sf">
            AllowOverride All
            Allow from All
        </Directory>

        <VirtualHost *:80>
            ServerName test.domain.co.uk
            DocumentRoot "/var/www/html/domain_test/web"
            DirectoryIndex index.php
            Alias /sf /var/www/html/symfony14/web/sf
            <Directory "/var/www/html/domain_test/web">
                AllowOverride All
                Allow from All
            </Directory>
        </VirtualHost>

    Going to www.domain.co.uk and domain.co.uk displays the contents of /var/www/html/domain, but going to test.domain.co.uk also displays the same folder's contents. Is this because of the ServerAlias? Thanks
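
    One frequent cause on CentOS's Apache 2.2 (an assumption, since the full config isn't shown): without a NameVirtualHost *:80 directive, name-based matching is off and the first <VirtualHost *:80> block answers every request on port 80, which would produce exactly this symptom. Apache can report how it parsed the vhosts:

        # Show the parsed virtual host configuration and which one is the default:
        apachectl -S
        # (equivalently: httpd -S)

    If NameVirtualHost *:80 is missing, adding it above the first vhost and reloading should let test.domain.co.uk match its own block.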


  • ZFS Data Loss Scenarios

    - by Obtuse
    I'm looking toward building a largish ZFS pool (150TB+), and I'd like to hear people's experiences with data loss scenarios due to failed hardware -- in particular, distinguishing between instances where just some data is lost and those where the whole filesystem is lost (if there even is such a distinction in ZFS).

    For example: let's say a vdev is lost due to a failure like an external drive enclosure losing power, or a controller card failing. From what I've read, the pool should go into a faulted mode, but if the vdev is returned, the pool should recover? Or not? And if the vdev is partially damaged, does one lose the whole pool, some files, etc.? What happens if a ZIL device fails? Or just one of several ZILs?

    Truly any and all anecdotes or hypothetical scenarios backed by deep technical knowledge are appreciated! Thanks!

    Update: We're doing this on the cheap, since we are a small business (9 people or so), but we generate a fair amount of imaging data. The data is mostly smallish files -- by my count, about 500k files per TB. The data is important but not uber-critical. We are planning to use the ZFS pool to mirror a 48TB "live" data array (in use for 3 years or so) and use the rest of the storage for 'archived' data. The pool will be shared using NFS. The rack is supposedly on a building backup generator line, and we have two APC UPSes capable of powering the rack at full load for 5 minutes or so.
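
    For concreteness, a hypothetical layout (device names are examples) showing the failure domains being asked about: losing any entire top-level vdev (one raidz2 group below) is fatal to the whole pool, so each group must survive on its own redundancy, while a mirrored log device guards against a single slog failure:

        zpool create tank \
            raidz2 da0 da1 da2 da3 da4  da5 \
            raidz2 da6 da7 da8 da9 da10 da11 \
            log mirror ada0 ada1
        zpool status tank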


  • The instruction at “0x7c910a19” referenced memory at “0xffffffff”. The memory could not be “read”

    - by ClareBear
    Hello guys/girls, I have a small issue: I receive the above error when the .vbs terminates, and I don't know why it is thrown. The script does this:

        Call ImportTransactions()
        Call UpdateTransactions()

        Function ImportTransactions()
            Dim objConnection, objCommand, objRecordset, strOracle
            Dim strSQL, objRecordsetInsert
            Set objConnection = CreateObject("ADODB.Connection")
            objConnection.Open "DSN=*****;UID=*****;PWD==*****;"
            Set objCommand = CreateObject("ADODB.Command")
            Set objRecordset = CreateObject("ADODB.Recordset")
            strOracle = "SELECT query here from Oracle database"
            objCommand.CommandText = strOracle
            objCommand.CommandType = 1
            objCommand.CommandTimeout = 0
            Set objCommand.ActiveConnection = objConnection
            objRecordset.cursorType = 0
            objRecordset.cursorlocation = 3
            objRecordset.Open objCommand, , 1, 3
            If objRecordset.EOF = False Then
                Do Until objRecordset.EOF = True
                    strSQL = "INSERT query here into SQL database"
                    strSQL = Query(strSQL)
                    Call RunSQL(strSQL, objRecordsetInsert, False, conTimeOut, conServer, conDatabase, conUsername, conPassword)
                    objRecordset.MoveNext
                Loop
            End If
            objRecordset.Close()
            Set objRecordset = Nothing
            Set objRecordsetInsert = Nothing
        End Function

        Function UpdateTransactions()
            Dim strSQLUpdateVAT, strSQLUpdateCodes
            Dim objRecordsetVAT, objRecordsetUpdateCodes
            strSQLUpdateVAT = "UPDATE query here SET [value:costing output] = ([value:costing output] * -1)"
            Call RunSQL(strSQLUpdateVAT, objRecordsetVAT, False, conTimeOut, conServer, conDatabase, conUsername, conPassword)
            strSQLUpdateCodes = "UPDATE query here SET [value:costing output] = ([value:costing output] * -1) different WHERE clause"
            Call RunSQL(strSQLUpdateCodes, objRecordsetUpdateCodes, False, conTimeOut, conServer, conDatabase, conUsername, conPassword)
            Set objRecordsetVAT = Nothing
            Set objRecordsetUpdateCodes = Nothing
        End Function

    It does both the import and the update, and seems to throw the error afterwards. If I comment out ImportTransactions(), no error is thrown; however, I have produced similar code for another .vbs file and that does not throw any errors. Thanks in advance for any help, Clare
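
    One thing that stands out in the code as posted (an observation, not a confirmed fix): objConnection is opened in ImportTransactions() but never closed or released, which leaves the Oracle connection to be torn down during process exit -- a plausible source of access-violation crashes at termination. Explicit cleanup at the end of the function is worth trying:

        ' At the end of ImportTransactions(), after closing the recordset:
        objConnection.Close()
        Set objCommand = Nothing
        Set objConnection = Nothing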


  • CNAME vs A records

    - by deb
    I built a small Rails app that allows users to make a simple site. It uses subdomain accounts, e.g. deb.myapp.com. Whenever a user wanted a domain name associated with their site, they would change their NS records to point to Slicehost, where the application is hosted, and I would manage the DNS records myself. However, as more people use the application, this is no longer an option for me. I'd prefer users to keep their nameservers at GoDaddy, register.com, etc., so they can log in and manage their own MX records or whatever else they need to change.

    My question is: should I have them change the A records to point to my server's IP, or should I have them create a CNAME record? Do they need to delete the default A records to allow the CNAME record to work? Will the A record take precedence and override the CNAME record?

    Thanks in advance. Sorry if this is a very basic question; I've read other posts and can't find a definite answer.
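
    Two DNS rules frame the usual answer here: a name cannot carry a CNAME alongside any other record data (so yes, an existing A record for www must be deleted before a CNAME can be added there), and the zone apex must hold SOA/NS records, so the bare domain cannot be a CNAME at all. A sketch with hypothetical names and a documentation IP:

        ; The bare domain has to stay an A record:
        example.com.       IN  A      203.0.113.10
        ; Hostnames beneath it can alias the app, and will follow the
        ; app server's IP automatically if it ever changes:
        www.example.com.   IN  CNAME  customer.myapp.com.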


  • Methods and practices for managing a network that has no internet connection

    - by FaultyJuggler
    Originally asked on Super User, but I realized this belongs here. Long story short, I am setting up a network of 32 servers of varying specs that will be used for testing and development. We will be using Red Hat Linux; we don't have a router yet and are looking into making one of the servers act as our router/DHCP server, etc. The small cluster will be on an isolated network with no internet. I can use external hard drives and discs to transfer anything from external sources onto machines on the network, so this isn't a locked-down secure network -- it just won't have a direct connection to the outside world.

    I've worked on such setups before, but always long after they were set up, so I'm reaching out to see what everyone knows about the initial setup and maintenance of such a situation. What is the best way to get all the machines configured and up to date? What are the best ways to automate updates, network-wide installs, etc.? Given that I have large multi-terabyte external hard drives that would be used to drop whatever files are needed onto a central server, how do I then distribute those files and install their contents?

    I've done Perl scripting, and some teammates have played with Puppet, so we aren't completely in the dark; I just wanted to avoid reinventing the wheel, since this is a common challenge.
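
    For the update question specifically, a common pattern (a sketch; the paths, hostname, and repo id are hypothetical) is to carry packages in on the external drive and publish them as a local yum repository on the central server:

        # On the central server, after copying RPMs from the external drive:
        createrepo /srv/mirror/rhel/          # from the createrepo package
        # Serve /srv/mirror over HTTP (Apache, or any static web server).

        # On each client, /etc/yum.repos.d/internal.repo:
        #   [internal]
        #   name=Internal mirror
        #   baseurl=http://central.lab/rhel/
        #   enabled=1
        #   gpgcheck=0

    After that, 'yum update' on every box pulls from the mirror, and Puppet (which the team has already touched) can push the .repo file and drive the rest.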

