Search Results

Search found 18648 results on 746 pages for 'left join'.

  • How can I regain control over a "stuck" scroll bar?

    - by jonsca
    I normally have a lot of tabs open in the browser at one time, which usually doesn't cause any problems. Occasionally, though, the page will have fully rendered, the scroll bar is at the top of its track on the right, but placing the mouse pointer on the scroll bar does not allow me to get control of it (and using the arrow at the bottom of the scroll bar doesn't work either). This happens to me in both Firefox and Chrome, both latest versions. There is normally no load on the CPU, and while my memory usage can be high (especially with Chrome), I usually still have 25% left. I fear that this is one of those "for your own good" features, to prevent scrolling down a page that may still have elements loading and/or a stuck script, but I'm wondering whether there is a setting in Firefox or Chrome (or perhaps Windows in general) that will allow the scroll bar to be "released" even though the browser may be busy? It may be "2002" of me, but I wouldn't mind being able to scroll along gradually, even if the page hasn't had time to fully render all of the images, etc., yet. Does such an advanced setting exist "under the hood"?

    Read the article

  • Cannot send email outside of network using Postfix

    - by infmz
    I've set up an Ubuntu server with Request Tracker following this guide (the section about inbound mail would be relevant). However, while I'm able to send mail to other users within the network/domain, I cannot seem to reach beyond - such as my personal accounts etc. Now I have no idea what is causing this; I thought that all it takes is for the system to relay mail through our Exchange server and be able to deliver in the same way. However, that hasn't been the case. I have found another server set up in a similar fashion (CentOS 5, Request Tracker, but using Sendmail); however, it is a dated server and whoever built it has kindly left no documentation on how it works, making it a pain to use as a reference system! :) At one point, I was told I need to set up a relay between the local server's email address and our AD server, but this didn't seem to work. Sorry, I know next to nothing about mail servers, and my colleagues know nothing about Linux, so it's a hard one for me. Thank you!

    EDIT: Result of postconf -n, with details masked =)

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        append_dot_mydomain = no
        biff = no
        config_directory = /etc/postfix
        inet_interfaces = all
        mailbox_command = procmail -a "$EXTENSION"
        mailbox_size_limit = 0
        mydestination = myhost.mydomain.com, localhost.mydomain.com, , localhost
        myhostname = myhost.mydomain.com
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        myorigin = /etc/mailname
        readme_directory = no
        recipient_delimiter = +
        relayhost = EXCHANGE IP
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes

    Sample log message:

        Sep 4 12:32:05 theedgesupport postfix/smtp[9152]: 2147B200B99: to=<[email protected]>, relay= RELAY IP :25, delay=0.1, delays=0.05/0/0/0.04, dsn=5.7.1, status=bounced (host HOST IP said: 550 5.7.1 Unable to relay for [email protected] (in reply to RCPT TO command))
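
    For what it's worth, the bounce is coming from the Exchange side: "550 5.7.1 Unable to relay" means Exchange accepted the connection but refuses to forward mail for external recipients from this host. The usual fix is on Exchange (allow relaying from the RT box's IP); on the Postfix side the main piece to verify is relayhost. A minimal sketch, assuming a placeholder Exchange address (192.0.2.10 and example.com are not values from the question, and mail(1) is assumed installed):

        # Point all outbound mail at Exchange; brackets suppress the MX lookup.
        sudo postconf -e 'relayhost = [192.0.2.10]'
        sudo postfix reload

        # Send a test message to an external address and watch the log.
        echo "relay test" | mail -s "relay test" someone@example.com
        tail -f /var/log/mail.log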

    Read the article

  • Uninstall IIS on Windows 7

    - by CJM
    I've just rebuilt my development machine and installed IIS. I then installed the Web Deployment tool and used this to restore my previously-backed-up websites to the clean machine. Unfortunately the restoration didn't work correctly/fully. I couldn't easily correct the problem, so I decided to uninstall/reinstall IIS and recreate the sites manually. I uninstalled IIS and rebooted, but there was still plenty of stuff left around, such as various files in /windows/system32/inetsrv/, which I tried to delete manually (with limited success!). I rebooted again and tried to reinstall IIS - it reported an error (no meaningful message) and requested another reboot. The event log includes the following errors:

        The World Wide Web Publishing Service (WWW Service) did not register the URL prefix
        http://*:80/gallery for site 1. The site has been disabled.

    and

        Unable to bind to the underlying transport for [::]:80. The IP Listen-Only list may
        contain a reference to an interface which may not exist on this machine.

    I'd like to avoid another rebuild. Can I completely remove IIS, such that I can reinstall it from scratch? Or can I 'fix' the current setup so that IIS will reinstall over what is already there?
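
    The second event log error points at HTTP.sys rather than IIS itself, and its IP listen list survives an IIS uninstall. A hedged sketch for inspecting and clearing it from an elevated command prompt (the address shown is an example, not one taken from the question):

        netsh http show iplisten
        rem If a stale address appears, remove it:
        netsh http delete iplisten ipaddress=192.168.1.5
        rem Restart the HTTP driver and the web service:
        net stop http /y
        net start w3svc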

    Read the article

  • Ubuntu xrandr rotate issue

    - by user83544
    I've just bought a second monitor for my PC, which happens to be a pivot monitor. I've already read lots of forums related to my problem but haven't come across a solution - I have the same symptoms as dozens of posts, but no matter what I try it just doesn't work. I've already changed the xorg.conf file and added the following in the Device section, just under Driver "nvidia", for my second monitor:

        Option "RandRRotation" "on"

    When I save and reboot, I try to rotate my screen with the NVIDIA X Server Settings tool by choosing the second monitor and clicking either "left" or "right" for the rotation. It immediately exits the nvidia-settings window and does nothing. I tried within the terminal by typing:

        xrandr -o right

    I get the following error:

        X Error of failed request:  BadMatch (invalid parameter attributes)
          Major opcode of failed request:  154 (RANDR)
          Minor opcode of failed request:  2 (RRSetScreenConfig)
          Serial number of failed request:  14
          Current serial number in output stream:  14

    I actually manage to rotate it with Option "Rotate" "CCW" instead of "RandRRotation". The problem with this solution is that you get the second monitor in the right position, but any window you open on that screen is practically unchangeable: you can't resize or move it, making it useless for reading PDFs, which is the main reason why I bought this second screen to help me write my thesis. Any help is really appreciated. Output of sudo lshw -c video:

        hiram@hiram-linux:~$ sudo lshw -c video
          *-display
               description: VGA compatible controller
               product: nVidia Corporation
               vendor: nVidia Corporation
               physical id: 0
               bus info: pci@0000:01:00.0
               version: a1
               width: 64 bits
               clock: 33MHz
               capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
               configuration: driver=nvidia latency=0
               resources: irq:16 memory:f8000000-f9ffffff memory:d8000000-dfffffff memory:d4000000-d7ffffff ioport:dc00(size=12 memory:fbd80000-fbdfffff
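
    One thing worth trying, hedged because it depends on the driver and X server versions in play: "xrandr -o right" uses the old whole-screen RandR 1.1 call, while rotating a single output goes through RandR 1.2. A sketch (the output name is an assumption; take the real one from the first command):

        xrandr -q                                  # list outputs, e.g. DVI-I-1, VGA-0
        xrandr --output DVI-I-1 --rotate right     # rotate only the pivot monitor
        xrandr --output DVI-I-1 --rotate normal    # undo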

    Read the article

  • DansGuardian/Squid Traffic doesn't get back to user

    - by DKNUCKLES
    I've purchased a Squid appliance that I'm attempting to implement, however the lack of documentation has left me a bit high and dry. Forgive me if this is a silly question, but this is my first attempt at implementing Squid. From what I can ascertain from the documentation (or lack thereof), the users connect to DansGuardian first at port 8080, where the filtering is done, at which point it forwards to the Squid appliance at port 3128. The traffic is then sent to the internet. The setup I have is as follows:

        Gateway (MikroTik router) : 192.168.88.1
        Squid/DansGuardian        : 192.168.88.100
        Client                    : 192.168.88.238

        Client --- Gateway --- Proxy --- Internet

    I have set up a simple NAT rule to forward all traffic from the client machine (for testing purposes) to go to the DansGuardian. The traffic seems to get there, although I see a lot of SYN_RECV with a netstat -antp command on the virtual appliance machine. From this I gather that the traffic is NOT being routed back to the client machine.

        Proto Recv-Q Send-Q Local Address            Foreign Address          State     PID/Program name
        tcp        0      0 0.0.0.0:8080             0.0.0.0:*                LISTEN    -
        tcp        0      0 192.168.88.100:8080      192.168.88.238:55786     SYN_RECV  -
        tcp        0      0 192.168.88.100:8080      192.168.88.238:55787     SYN_RECV  -
        tcp        0      0 192.168.88.100:8080      192.168.88.238:55785     SYN_RECV  -
        tcp        0      0 192.168.88.100:8080      192.168.88.238:55788     SYN_RECV  -
        tcp        0      0 0.0.0.0:10000            0.0.0.0:*                LISTEN    -
        tcp        0      0 0.0.0.0:22               0.0.0.0:*                LISTEN    -

    Is this a routing issue or an issue with the Squid appliance?
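
    The SYN_RECV pile-up fits a routing problem rather than a Squid one: the proxy's SYN-ACK goes straight back to the client, which was expecting a reply from the original destination address and drops it. One common fix is to masquerade the redirected traffic so replies return via the router. A hedged RouterOS sketch, assuming the addresses from the question and a plain dst-nat redirect of client web traffic:

        /ip firewall nat add chain=dstnat src-address=192.168.88.238 \
            protocol=tcp dst-port=80 action=dst-nat \
            to-addresses=192.168.88.100 to-ports=8080
        /ip firewall nat add chain=srcnat dst-address=192.168.88.100 \
            protocol=tcp dst-port=8080 action=masquerade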

    Read the article

  • Should one have a separate user account for work use? [closed]

    - by Tyler Wayne
    This question examines the practice of using a separate OS-level user account to divide work use from personal use (specifically, in a creative profession and on a personal computer). I recently left my in-the-flesh job to go to school, but I'm carrying on with the work remotely. I do all of my work on my laptop, and I currently have a separate user account called "Work" where I do exactly that. However, I'm now starting to question that practice.

        - Because my hobby is the same as my job, I want to save notes of the things I learn while working.
        - Because ideas come at any moment, I often want to throw something into my personal task manager's inbox and look at it again later. That task manager is well-suited to handle both the work and personal aspects of my life.
        - Only my personal account has admin rights, but work sometimes requires me to install programs.

    My employer has no preference regarding my choice, so that is a non-issue. My work is essentially freelance web development, so advice given with that in mind will be much appreciated. Back up all opinion with some personal experience, please. Ideally, give a list of pros and cons and then name reasons for your position.

    Read the article

  • Getting started with webserver clustering.

    - by Ernie
    I work for a small ISP, and we host about 250 domains and all the stuff that goes along with that: DNS, mail, spam filtering, and backups. Currently, we have separate DNS servers (two of them) and mail servers (outgoing mail is actually on the secondary DNS server, but was previously on its own server). In the past, this was done as an insurance measure. The last thing we need is for some doofus (usually yours truly) to hose a server, taking out DNS and mail right along with it, or for spammers to jam our incoming SMTP server, preventing outgoing mail from being sent too. In the past, this was a problem, and our servers were set up the way they are now to combat it. However, clustering solutions like Sun's Cobalt RAQ (in days of olde) and Virtualmin appear to cater to an all-in-one approach, then deal with failures through redundant servers. I have avoided this thus far, but we've been using Virtualmin on our web server for a while now, and I'd like to expand into using it for a high availability cluster. Our networking partner has recently built a datacenter that has eliminated all of our other bugaboos like network, cooling, and power issues, so now the only thing left to go wrong is me hosing a server, which happened earlier this month. One of the bigger reasons we've avoided going this route is because our hardware requirements aren't particularly high. One server easily handles all the sites we host (most of them are flat sites). Also, load-balancing routers tend to be expensive and complicated. All that I'm really expecting to do is build a two-node cluster for redundancy so that when I hose a server (however rare that might be), we're not out for 8-12 hours while I rebuild it. What I need to know is how to get started, and whether I'm really in a position to bother with this kind of thing at all.
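
    Since the goal here is redundancy rather than load sharing, a floating IP via VRRP is a common way to avoid a load-balancing router entirely. A minimal keepalived sketch, assuming two nodes on the same subnet and a placeholder virtual address (nothing below comes from the question):

        vrrp_instance VI_1 {
            state MASTER            # BACKUP on the second node
            interface eth0          # adjust to the actual NIC
            virtual_router_id 51
            priority 100            # lower (e.g. 50) on the second node
            advert_int 1
            virtual_ipaddress {
                192.0.2.50/24       # the address DNS points at
            }
        }

    When the master dies, the backup claims the virtual address within a few seconds, which covers the "I hosed a server" case without any special hardware.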

    Read the article

  • Sun-JRE on CentOS-4.8 RPM error: post-install scriptlet failed, exit status 5

    - by Emyr
    I have a server with CentOS 4.8 installed. The provider is rubbish, but there's only a few months left, and they're busy being sued by Chase bank, so I doubt I can get CentOS 5. I wiped the server clean using Virtuozzo, and found that the default image is VERY empty. I even had to install yum myself. I've reached the point where I want to install Tomcat. I downloaded the Sun JRE as a .rpm.bin file, did chmod a+x and ran it. That produced a .rpm file, which I tried installing:

        [root@host java]# rpm -Uvh jre-6u20-linux-i586.rpm
        Preparing...                ########################################### [100%]
           1:jre                    ########################################### [100%]
        Unpacking JAR files...
                rt.jar...
                jsse.jar...
                charsets.jar...
                localedata.jar...
                plugin.jar...
                javaws.jar...
                deploy.jar...
        error: %post(jre-1.6.0_20-fcs.i586) scriptlet failed, exit status 5

        [root@host java]# rpm -qi jre
        Name        : jre                         Relocations: /usr/java
        Version     : 1.6.0_20                    Vendor: Sun Microsystems, Inc.
        Release     : fcs                         Build Date: Mon Apr 12 19:34:13 2010
        Install Date: Thu May 6 06:36:17 2010     Build Host: jdk-lin-1586
        Group       : Development/Tools           Source RPM: jre-1.6.0_20-fcs.src.rpm
        Size        : 50708634                    License: Sun Microsystems Binary Code License (BCL)
        Signature   : (none)
        Packager    : Java Software <[email protected]>
        URL         : http://java.sun.com/
        Summary     : Java(TM) Platform Standard Edition Runtime Environment
        Description :
        The Java Platform Standard Edition Runtime Environment (JRE) contains
        everything necessary to run applets and applications designed for the
        Java platform. This includes the Java virtual machine, plus the Java
        platform classes and supporting files. The JRE is freely redistributable,
        per the terms of the included license.
        [root@host java]#

    I couldn't find any results on Google for any parts of that error message, and I have very little experience with rpm (I usually use Debian). Is this a broken package, or am I missing something or some setting?
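
    A hedged debugging sketch: dump the package's scriptlets to see what the failing %post actually does, and if it only handles extras like browser-plugin registration, install without scriptlets and verify the JRE runs (the jre1.6.0_20 directory name is an assumption based on the version and the /usr/java relocation shown above):

        rpm -qp --scripts jre-6u20-linux-i586.rpm | less
        rpm -Uvh --noscripts jre-6u20-linux-i586.rpm
        /usr/java/jre1.6.0_20/bin/java -version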

    Read the article

  • Any e-mail client with additional grouping functionality on Mac OS X?

    - by harald
    Hello, I'm very unhappy with my mail experience. I'm receiving a lot of mails from various clients and projects and need a way to better organize them. I would really love to have additional grouping functionality in the e-mail client. Currently I'm using Mac OS X's Mail.app, but I am not bound to this. So I am open to any Mail.app plugin or independent mail application, commercial or not, for Mac OS X -- it should support IMAP, though, but I think this should not be a problem nowadays? With Mail.app I'm doing the following:

        - group by thread
        - sort by datetime descending

    What I would love to have is not only additional tagging functionality for e-mails -- I know that at least Thunderbird and Postbox support them. I would love to have some additional grouping functionality for these tags -- inside the main mail window. So maybe I can summarize the important points:

        - "native" Mac OS X mail client (no web-mailer please)
        - automatic-tagging functionality (e.g. auto-apply tags by some kind of filter)
        - easy access to tagged mails

    Easy access to tagged mails: I would really love to have some additional grouping functionality in the mail folders. The mail application should put all tagged mails in a group -- the groups should be sorted by last received e-mail. Inside the group I would still like to have the possibility to group by thread. Or it would be OK to have a list of tags (topics) on the left pane of the mail client. For example Postbox: there is the 'accounts' section, there is the 'folders' section -- but why is there no 'topics' section? Thanks very much,

    Read the article

  • IIS Manager - Connect to Another Server (Win7 to Win2008 server)

    - by Matt
    I am running Windows 7 Ultimate. If I open up IIS Manager, I see a list of "connections" on the left hand side. In previous versions, I would be able to select an option to "connect to another server" or "connect to another machine", but there is no such option visible anywhere here. The only thing in the list is my local machine. Even in the address bar, if I manually type in the server location (\\servername, even tried just servername), nothing happens (it just reverts back to my current local computer). The documentation at http://technet.microsoft.com/en-us/library/cc732466%28WS.10%29.aspx seems to imply the very same steps... but there is just no button or menu option anywhere to do this. Am I missing something? I'm not even seeing a grayed out menu option.

    EDIT: Under the "File" menu, I see 2 options:

        - Save Connections (grayed out)
        - Exit

    Under the "Connections" pane, I see 1 button, grayed out. When I hover the mouse, it simply says "Up"; it appears to be usable if I browse into an element in my local computer's IIS settings. If I right click inside the pane itself, I see:

        - Refresh
        - Add website (to the current host)
        - Start
        - Stop
        - Rename
        - Switch to Content View

    UPDATE: I downloaded and installed the Remote Server Administration Tools from http://www.microsoft.com/downloads/details.aspx?FamilyID=7D2F6AD7-656B-4313-A005-4E344E43997D&displaylang=en, and I enabled everything listed under "Remote Server Administration Tools" under "Turn Windows Features On or Off". Still nothing.

    Read the article

  • Can a hardware firewall block a server accessing its OWN UNC shares?

    - by Simon
    I need to set up a UNC share for my hosted dedicated server to access a share on itself. Unfortunately TFS requires a UNC share. I am on a Windows Server 2008 Standard SP2 64-bit dedicated server behind a PIX 501 firewall, hosted with GoDaddy. I just cannot get the server to access itself; I get this error:

        Windows cannot access \\SERVER\SHARE
        Check the spelling of the name... etc.

    I've found numerous questions about this but no answer to my problem. I am trying to access \\MYMACHINE\TFS-BUILDS by typing it in - or double clicking. Neither works. What I've checked so far:

        - Server 2008 Standard x64 SP2
        - Workgroup - not domain
        - Windows Firewall is off
        - Computer Browser service is on
        - Machine has a single network card
        - File sharing wizard said the share was OK
        - Share was showing under 'Computer Management'
        - Permissions are set to 'Everyone' full control
        - No obvious errors in the event log
        - Reboot didn't fix it

    Unfortunately I cannot try to access other shares in or out of this machine because it is a hosted dedicated server and the only machine behind a hardware firewall. The only thing left I can think of is that the hardware firewall needs to be configured. Is this possible? Does 'UNC traffic' go out of the machine and then back in again?
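
    On that last question: SMB traffic to the machine's own name normally stays on the loopback path and never reaches the firewall - unless name resolution points the name at an address routed outside. A hedged sketch for narrowing it down from a command prompt (the machine and share names are the ones quoted above):

        ping MYMACHINE
        net view \\MYMACHINE
        net use * \\127.0.0.1\TFS-BUILDS

    If the 127.0.0.1 form works but the machine name does not, the name is resolving to an address the PIX sits in front of, and pinning the name in the hosts file is one way to confirm that.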

    Read the article

  • How do I make XTerm not use bold?

    - by mike
    I like using XTerm, I like its default "fixed" font, and I like using terminal colors rather than having a monochromatic terminal. However, XTerm seems to insist on using a bold version of the font whenever it's displaying a bright color. I hate hate hate the bold version of the font, but I like the brightness. The man page seems to suggest that adding "XTerm.VT100.boldMode: false" to my ~/.Xresources would disable this "feature", but it doesn't seem to have any effect. I've had it in there for months, so it's not a rebooting issue. How can I force XTerm to always use the standard, non-bold version of the fixed font, even when it's displaying bright text?

    Edit: Some have suggested putting "XTerm*boldMode: false" in my ~/.Xresources. That didn't help either. I've confirmed that the changes have taken effect with xrdb, though:

        $ xrdb -query | grep boldMode
        XTerm*boldMode: false

    And if I run xprop and click an xterm, I get WM_CLASS(STRING) = "xterm", "XTerm"... so I'm definitely running real xterms. BTW, this is just a plain-vanilla Ubuntu Intrepid box. If anyone else here is running the same, can you try running:

        echo -e '#\e[1m#'

    ...and let me know whether the # on the right has a black pixel in the middle like the one on the left does?
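
    One workaround that sidesteps boldMode entirely, offered as a sketch since it assumes the "fixed" alias is the font in use: point XTerm's bold face at the same glyphs as the normal face, so bright colors keep the regular weight. In ~/.Xresources:

        ! use the regular fixed font for both weights
        XTerm*font:     fixed
        XTerm*boldFont: fixed

    Then reload with xrdb -merge ~/.Xresources and start a new xterm.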

    Read the article

  • Windows 7 home backup solution, with offsite provision

    - by Richard E
    I am looking for a home backup solution for my single Windows 7 (Home Premium) PC. I have about 500GB of data to backup. I would like to spend less than GBP 300 on the solution. I don't see the need to backup the whole PC, rather specific folder branches (iTunes, photos, documents, Outlook files, user folders such as desktop, favorites etc). I would like a solution that enables me to maintain backups in two separate physical locations (e.g. home and work). To facilitate this I am imagining a storage unit with slots for two removable drives, along with three separate drives. At any one time two of the drives will be being backed up to in the storage unit. The third will be located at my work. Periodically I will take one of the drives into work and leave it there, then bring the drive that was there back home, and plug it into the storage unit. It will then be backed up along with the other drive that was left in the storage unit. This approach should cover scenarios such as virus attack and fire or theft from one location. Thoughts and comments on the sanity of this approach please...

    Read the article

  • Proper Network Infrastructure Setup: DMZ, VPN, Routing Hardware Question

    - by NickToyota
    Greetings Server Fault Universe, So here's a quick background. Two weeks ago I started a new position as the systems administrator for an expanding health services company of just over 100 persons. The individual I was replacing left the company with little to no notice. Basically, I have inherited a network of one main HQ (where I am situated), which has existed for over 10 years, with five smaller offices (fewer than 20 persons each). I am trying to make sense of the current setup. The network at the HQ includes:

        - A Linksys RV082 router providing internet access for employees and site-to-site VPN connecting the smaller offices (using an RV042 each). We have both cable and DSL lines connected to balance traffic (however this does not work at all and is not my main concern right now).
        - A Cisco IronPort appliance. This is the main gateway for our incoming and outgoing emails. This also has an external IP and internal IP.
        - Lotus Domino in and out email servers connected to the mentioned Cisco gateway. These also have an external IP and internal IP.
        - Two Windows 2003 and 2008 boxes running as domain controllers with DNS, of course. These also have both an external IP and internal IP.
        - Website and web mail servers, also on both external and internal IPs.

    I am still confused as to why there are so many servers connected directly to the internet. I am seriously looking to redesign this setup with proper security practices in mind (my highest concern) and am in need of a proper firewall setup for the external/internal servers, along with a VPN solution for about 50 employees. Budget is not a concern as I have been given some flexibility to purchase necessary solutions. I have been told a Cisco ASA appliance may help. Does anyone out in the Server Fault Universe have some recommendations? Thank you all in advance.

    Read the article

  • Script to unstick VNC keys?

    - by MidnightLightning
    I've run into dropped key events when connecting via VNC clients, leading to a "stuck key" (usually a meta key like CTRL or ALT). Searching around, the common answer on how to solve it is "press and release each meta key individually until the problem resolves". However, I've found this annoying and time-consuming. Plus, on a bad connection, it sometimes misses the "key up" event for the meta key again and still keeps the key stuck. So I'm looking for an automated way to do this. From a script on the client side or the server side, is there a way to trigger "key up" events for all the meta keys (CTRL, ALT, SHIFT, and WIN/CMD, both Left and Right versions)? Or just a command to release all keys the server thinks are down at the moment? Or some scripted way to at least list which keys the server end thinks are down, so I know which key to keep pressing and releasing to try and release it? I've got a Mac on the server end, so a Mac/Linux solution would be needed for my situation.
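
    For the X11/Linux half of the question (hedged: the poster's server end is a Mac, which would need an equivalent that posts Quartz key-up events instead), xdotool can release every common modifier in one pass:

        #!/bin/sh
        # Post a key-up event for both left and right variants of each modifier.
        for key in Control_L Control_R Shift_L Shift_R \
                   Alt_L Alt_R Super_L Super_R; do
            xdotool keyup "$key"
        done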

    Read the article

  • get ubuntu terminal to send an escape sequence (control+shift+up)

    - by user62046
    This problem starts when I use emacs (with the -nw option). Let me first explain it. I tried to define a hotkey (for emacs) as follows:

        (global-set-key [(control shift up)] 'other-window)

    but it doesn't work (no error, it just doesn't work), and neither does:

        (global-set-key [(control shift down)] 'other-window)

    But:

        (global-set-key [(control shift right)] 'other-window)
        (global-set-key [(control shift left)] 'other-window)

    work! But because the last two key combinations are used by emacs (as defaults), I don't want to change them to other functions. So how could I make control-shift-up and control-shift-down work? I have googled "(control shift up)", and it seems that control-shift-up is used by other people (but with very few results). In the Stack Overflow forum, Gille answered me as follows:

        Ctrl+Shift+Up does send a signal to your computer, but your terminal emulator is
        apparently not transmitting any escape sequence for it. So your problem is in two
        parts. First you must get your terminal emulator to send an escape sequence, which
        depends on your terminal emulator, and is Super User material, or Unix.SE if you're
        using a unix system. Then you need to declare the escape sequence in Emacs, and my
        answer explains that part.

    So I come here for this question: how do I get my terminal (I use Ubuntu 10.04, and the built-in terminal) to send an escape sequence for Control+Shift+Up and Control+Shift+Down?
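
    A hedged sketch of both halves: GNOME Terminal offers no user-defined key sequences, but xterm does, via translations in ~/.Xresources; "1;6" is the conventional xterm modifier code for Ctrl+Shift, so these strings are what newer xterms send by default anyway:

        ! xterm: send CSI 1;6A / 1;6B for Ctrl+Shift+Up / Down
        XTerm*VT100.translations: #override \
            Ctrl Shift <Key>Up:   string("\033[1;6A") \n\
            Ctrl Shift <Key>Down: string("\033[1;6B")

    and then in ~/.emacs, declare the sequences and bind the keys:

        ;; teach Emacs the escape sequences, then bind the keys as planned
        (define-key input-decode-map "\e[1;6A" [(control shift up)])
        (define-key input-decode-map "\e[1;6B" [(control shift down)])
        (global-set-key [(control shift up)] 'other-window)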

    Read the article

  • CSS absolute position doesn't work in MS Word

    - by Tim
    Hello! This is a sample:

        <html>
        <head>
          <title>word test</title>
        </head>
        <body>
          <div style='position: absolute; width: 30px; height: 50px; top: 100px; left: 20px;
                      border-color: black; border-width: 1px; border-style: solid;'>
            <p>Hello!</p>
          </div>
        </body>
        </html>

    Save it as "word.doc" and open it in MS Word. Absolute positioning doesn't work! The div is rendered at the top of the document and with 100% width. Why? I can't use HTML tables. Version of MS Word: 2003.
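
    Word's HTML import simply ignores CSS positioning, so one hedged approximation - assuming the offsets only need to push the box down and right rather than overlap other content - is to trade position for margins, which Word's renderer does honor:

        <!-- margins instead of position: absolute; Word applies these -->
        <div style='margin-top: 100px; margin-left: 20px; width: 30px; height: 50px;
                    border: 1px solid black;'>
          <p>Hello!</p>
        </div>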

    Read the article

  • Win XP error 0x80041003 using GetObject/winmgmts

    - by John Lewis
    My computer is called "neil" and I want to set some values using WMI in VBScript. I adapted the script below from one supplied by Microsoft. When I run it in my browser I get:

        Error Type:
        (0x80041003)
        /dressage/30/pdf2.asp, line 8

    I suspect it is some registry/security setting. Any advice? John Lewis

    FULL SCRIPT:

        call Print_HTML_Page("http://neil/dressage/ascii.asp", "ascii")

        Sub SetPDFFile(strPDFFile)
            Const HKEY_LOCAL_MACHINE = &H80000002
            strKeyPath = "SOFTWARE\Dane Prairie Systems\Win2PDF"
            strComputer = "."
            Set objReg = GetObject( _
                "winmgmts:{impersonationLevel=impersonate}!\\" & _
                strComputer & "\root\default:StdRegProv")
            strValueName = "PDFFileName"
            objReg.SetExpandedStringValue HKEY_LOCAL_MACHINE, _
                strKeyPath, strValueName, strPDFFile
        End Sub

        Sub Print_HTML_Page(strPathToPage, strPDFFile)
            SetPDFFile(strPDFFile)
            Set objIE = CreateObject("InternetExplorer.Application")
            'From http://www.tek-tips.com/viewthread.cfm?qid=1092473&page=5
            On Error Resume Next
            strPrintStatus = objIE.QueryStatusWB(6)
            If Err.Number <> 0 Then   ' the <> was eaten by the forum software
                MsgBox "Cannot find a printer. Operation aborted."
                objIE.Quit
                Set objIE = Nothing
                Exit Sub
            End If
            With objIE
                .visible = 0
                .left = 200
                .top = 200
                .height = 400
                .width = 400
                .menubar = 0
                .toolbar = 1
                .statusBar = 0
                .navigate strPathToPage
            End With
            'Wait until IE has finished loading
            Do While objIE.busy
                WScript.Sleep 100
            Loop
            On Error Goto 0
            objIE.ExecWB 6, 2
            'Wait until IE has finished printing
            WScript.Sleep 2000
            objIE.Quit
            Set objIE = Nothing
        End Sub
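
    For reference, 0x80041003 is WMI's WBEM_E_ACCESS_DENIED, and an .asp page runs as the IIS anonymous user, which normally cannot write HKEY_LOCAL_MACHINE - so the same code can succeed from an administrator's desktop and fail in the browser. A small hedged test (run with cscript from an administrator prompt; the C:\test.pdf value is a placeholder) separates permissions from script bugs:

        ' test-reg.vbs - try the same StdRegProv write outside of IIS
        Const HKEY_LOCAL_MACHINE = &H80000002
        Set objReg = GetObject( _
            "winmgmts:{impersonationLevel=impersonate}!\\.\root\default:StdRegProv")
        rc = objReg.SetExpandedStringValue(HKEY_LOCAL_MACHINE, _
            "SOFTWARE\Dane Prairie Systems\Win2PDF", "PDFFileName", "C:\test.pdf")
        WScript.Echo "SetExpandedStringValue returned " & rc  ' 0 = success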

    Read the article

  • Basic Apache setup is not seeing my site

    - by Jakobud
    Sorry, that is a horrible thread subject, but I cannot think of a better, more descriptive one. We are running a Fedora 11 server that is currently hosting a CRM on it. I want to use a VirtualHost directive to add another site to the server. So I created this conf:

        /etc/httpd/conf.d/mysite.ourdomain.com.conf

    And here is the content:

        <VirtualHost *:80>
            ServerName mysite.ourdomain.com
            DocumentRoot /www/mysite
            ServerAdmin [email protected]
            ErrorLog /var/log/mysite.ourdomain.com-error.log
            CustomLog /var/log/mysite.ourdomain.com-access.log common
        </VirtualHost>

    I restarted Apache, getting the following warning:

        [warn] NameVirtualHost *:80 has no VirtualHosts

    From what I read, this warning is not related and I can ignore it, and my site should still be up and running, correct? (I'll troubleshoot this error later if so.) Well, I have our DNS server set up to point mysite.ourdomain.com to this server. I can ping it and it points to the correct LAN IP, etc. Now when I try to access it in the browser I get nothing. It just says Connecting... and never gets there. Neither mysite.ourdomain.com nor the IP address gets there. It's a very simple and basic Apache setup, so I'm not sure what I'm doing wrong... Like I said, the other thing that is running on this server is a CRM, and its .conf looks something like this:

        Listen x.x.x.x:443
        <VirtualHost x.x.x.x:443>
            ServerAdmin [email protected]
            ServerName crm.ourdomain.com
            ErrorLog /var/log/httpd/ourdomain/crm-error.log
            CustomLog /var/log/httpd/ourdomain/crm-access.log common
            DocumentRoot /www/ourdomain/crm
            <IfModule mod_dir.c>
                DirectoryIndex /index.php
            </IfModule>
        </VirtualHost>

    There is also some LDAP authentication stuff in that config, but I left it out because I assumed it wasn't necessary to post. Anyone have any clue where I should start, or what settings I can post from httpd.conf that would help?
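
    A hedged observation: that warning is probably not ignorable here. "NameVirtualHost *:80 has no VirtualHosts" means Apache parsed a NameVirtualHost *:80 line but loaded no matching <VirtualHost *:80> block - which suggests the new conf isn't being read, or there is no Listen for port 80 at all (the CRM conf shown only listens on 443). These commands show what Apache actually loaded:

        httpd -S                                 # dump the parsed virtual host table
        httpd -t                                 # syntax check
        grep -rn "Listen\|NameVirtualHost" /etc/httpd/
        netstat -tlnp | grep httpd               # is anything listening on :80?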

    Read the article

  • Is it possible to have a wireless in-house NAS with wireless data transfer rates equivalent to SATA speeds?

    - by techaddict
    Basically I would like to know if it is possible to set up a NAS in my house, accessed wirelessly, that can reach real-life data transfer speeds equivalent to USB 3.0 or an internal SATA hard drive. I have been wanting to do this for some time (a couple of years now). Basically, this is what I want to do:

        - Plug in a number of hard drives in an array, somewhere in my house, to be left plugged in and never have to be monitored. Ideally several terabytes.
        - Whenever I am home, have my computer and laptop configured to automatically find the NAS, as easy as plugging in an external hard drive - except completely wirelessly.
        - Data transfer needs to be as seamless and quick as having added another internal hard drive in my laptop.
        - Moreover, data should be accessible without having to copy it over - I should be able to wirelessly access the NAS, browse files, and open files directly from the NAS. For example, say I wanted to open a video - I should be able to play the video that is located on the NAS, directly from the NAS, completely wirelessly. If I wanted to open a .pdf file, I should be able to open it and read it directly from the NAS, as if it were located on my physical internal hard drive.

    Cost is important as well. Please tell me what equipment I need for this to be possible. I know there are geniuses out there who can tell me if this is possible.

    Read the article

  • simple and reliable centralized logging inside Amazon VPC

    - by Nakedible
    I need to set up centralized logging for a set of servers (10-20) in an Amazon VPC. The logging should be such as to not lose any log messages if any single server goes offline - or if an entire availability zone goes offline. It should also tolerate packet loss and other normal network conditions without losing or duplicating messages. It should store the messages durably, at minimum on two different EBS volumes in two availability zones, but S3 is a good place as well. It should also be realtime so that the messages arrive within seconds of their generation to two different availability zones. I also need to sync logfiles not generated via syslog, so a syslog-only centralized logging solution would not fulfill all the needs, although I guess that limitation could be worked around. I have already reviewed a few solutions, and I will list them here:

        - Flume to Flume to S3: I could set up two logservers as Flume hosts which would store log messages either locally or in S3, and configure all the servers with Flume to send all messages to both servers, using the end-to-end reliability options. That way the loss of a single server shouldn't cause lost messages, and all messages would arrive in two availability zones in realtime. However, there would need to be some way to join the logs of the two servers, deduplicating all the messages delivered to both. This could be done by adding a unique id on the sending side to each message and then writing some manual deduplication runs on the logfiles. I haven't found an easy solution to the duplication problem.

        - Logstash to Logstash to ElasticSearch: I could install Logstash on the servers and have them deliver to a central server via AMQP, with the durability options turned on. However, for this to work I would need to use one of the clustering-capable AMQP implementations, or fan out the delivery just as in the Flume case. AMQP seems to be yet another moving part with several implementations and no real guidance on what works best for this sort of setup. And I'm not entirely convinced that I could get actual end-to-end durability from Logstash to ElasticSearch, assuming crashing servers in between. The fan-out solutions run into the deduplication problem again. The best solution that would seem to handle all the cases would be Beetle, which seems to provide high availability and deduplication via a Redis store. However, I haven't seen any guidance on how to set this up with Logstash, and Redis is one more moving part again for something that shouldn't be terribly difficult.

        - Logstash to ElasticSearch: I could run Logstash on all the servers, have all the filtering and processing rules in the servers themselves, and just have them log directly to a remote ElasticSearch server. I think this should bring me reliable logging, and I can use the ElasticSearch clustering features to share the database transparently. However, I am not sure if the setup actually survives Logstash restarts and intermittent network problems without duplicating messages in a failover case or similar. But this approach sounds pretty promising.

        - rsync: I could just rsync all the relevant log files to two different servers. The reliability aspect should be perfect here, as the files should be identical to the source files after a sync is done. However, doing an rsync several times per second doesn't sound fun. Also, I need the logs to be untamperable after they have been sent, so the rsyncs would need to be in append-only mode. And log rotations mess things up unless I'm careful.

        - rsyslog with RELP: I could set up rsyslog to send messages to two remote hosts via RELP and have a local queue to store the messages. There is the deduplication problem again, and RELP itself might also duplicate some messages. However, this would only handle the things that log via syslog.

    None of these solutions seem terribly good, and they have many unknowns still, so I am asking for more information here from people who have set up centralized reliable logging as to what are the best tools to achieve that goal.
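
    On the rsyslog/RELP variant, a hedged sketch of what the client side might look like with disk-assisted queues, so messages survive restarts and outages (the syntax is rsyslog's RainerScript; the hostnames and port are placeholders, not values from the question):

        module(load="omrelp")
        action(type="omrelp" target="loghost-a.internal" port="2514"
               queue.type="LinkedList" queue.filename="fwd_a"
               queue.saveOnShutdown="on" action.resumeRetryCount="-1")
        action(type="omrelp" target="loghost-b.internal" port="2514"
               queue.type="LinkedList" queue.filename="fwd_b"
               queue.saveOnShutdown="on" action.resumeRetryCount="-1")

    This still has the double-delivery property described above; it guarantees at-least-once delivery to both hosts, with deduplication left to the reading side.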

    Read the article

  • Windows XP restarting repeatedly after a fresh restore

    - by DWilliams
    I don't normally deal with Windows, but someone wanted me to fix their computer with Windows XP on it. It was just restarting every time it tried to boot. It would not go into safe mode; the result was the same regardless of the selected mode. The computer is about 4 years old and has been running the same installation for that entire time, so I figured the easiest solution was just to back up their files and re-install. I loaded the computer up with a live CD and copied their files off to a USB drive, then proceeded to run HP's "factory restore" feature (which I'm not particularly fond of; I'd rather have a disk to install from than reload all the crapware HP gets paid to install for you). It restored, and I put all their files back, installed their programs, and started the full Windows update process. Everything seemed great, so I left and told them what to do once it finished. A few hours pass, and my phone rings. Apparently it started doing the exact same thing as before once the updates finished. I don't have the computer sitting in front of me now, so I can't really provide any more information than that. What could be causing this and, more importantly, how do I fix it? The fact that the same problem resurfaced after the restore makes me think it's either a hardware problem or an update breaking the computer.

    Read the article

  • Windows 7 Not Starting and System Repair Not Loading

    - by Mark
    I have a Dell Inspiron 1545 running Windows 7. When turning on my PC I keep receiving a black screen with the option to use System Repair or Start Normally. Both options lead me to the System Repair background, except no matter how long I wait the System Restore options never show up. Choosing F8 and running all of the options, including Safe Mode, gives the same result as above. I tried to use two System Recovery disks, 32-bit and 64-bit, that I downloaded, and both lead to similar results. When I choose System Repair running from the disk, the question asking me to select a language pops up, but after this, no matter how long I wait, no other options appear. Next, after restarting and selecting F8 (after hitting F12 and running from CD), I choose 'Run From Safe Mode with Command Prompt'. I am able to run all of the options from System Restore, with differing results:

        - Startup Repair: choosing this ends up in System Repair indefinitely (left running 12 hrs).
        - System Restore: does nothing. PC thinks for a second and then stops. When selecting Shutdown I see an error message stating there are no restore points.
        - System Image Recovery: service cannot be started in Safe Mode.
        - Windows Memory Diagnostic: runs the test but then leads to the System Repair background, which never loads System Repair.
        - Command Prompt:
            chkdsk /r - Cannot lock current drive... write protected.
            chkdsk /f - Cannot lock current drive... write protected.
            bootcfg - Cannot open Boot.Ini file.
            Ran all 3 bootrec options (RebuildBcd, FixMbr and FixBoot), but the PC still goes to the System Repair background with no repair options popping up upon restart (without the recovery CD).

    I'm on the verge of purchasing a boot utility disk for $50, unless there is anything else short of "take it to a computer shop" that somebody can suggest I try.
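
    A hedged recap of those recovery-console commands, since two different tools are in play: bootcfg is the XP-era tool (hence the Boot.Ini error on Windows 7), while bootrec owns the Rebuild/Fix switches. Also, in the recovery environment the offline Windows volume often isn't C:, which can explain the "write protected" chkdsk result if it ran against the recovery ramdisk - the drive letter below is therefore an assumption to verify first:

        bootrec /FixMbr
        bootrec /FixBoot
        bootrec /RebuildBcd
        rem Find the offline Windows drive letter first, e.g. with:  dir C:\  dir D:\
        chkdsk D: /r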

    Read the article

  • Webservice randomly dropping connections - possibly due to firewall nonevent data?

    - by adam
    I have a hosted webapp which requests data from a REST webservice in our office. Each page calls one (or several) webservices, which go from our host, via our firewall (a Watchguard Firebox), to a server in our office. All of a sudden, the app has dramatically slowed. We have determined that the webservice is timing out at random when called externally (it's fine when called within the office network). I'm pretty certain it's our connection which is dropping the webservice call, so I've written a quick PHP/curl script which calls the webservice over many iterations and shows the various timings. Below is an example output, showing both a failed and a successful call (with a 5 second timeout):

            http_code  namelookup_time  connect_time  pretransfer_time  starttransfer_time  total_time
        1   0          0.000096         0.0342        0.0000            0.0000              0.0342
        2   200        0.000052         0.0332        0.1327            0.1751              0.1752

    As per iteration #1 above, failed requests seem to be failing between connect and pretransfer. I'm not sure if this shows that the connection is successfully past the firewall, or could the firewall still cause an issue? Our firewall is showing a series of "nondata event" log messages for the relevant access rule. Our IT team tells me these are routine, although I can find no mention of them in Google. I'm not sure if this fits in between connect and pretransfer. Having eliminated the webservice server (by testing internally) and the live webapp (by testing different code on different external servers), I am left suspecting the connection to the office. Could the Firebox nondata events be causing a problem between connect and pretransfer?
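
    For anyone wanting to reproduce this kind of probe, a hedged sketch using plain curl rather than PHP (the URL is a placeholder): curl reports http_code 000 for a call that never completed, matching the failed row above.

        # 50 probes with a 5 second timeout, one line of timings each
        for i in $(seq 1 50); do
          curl -s -o /dev/null -m 5 -w \
            '%{http_code} %{time_namelookup} %{time_connect} %{time_pretransfer} %{time_starttransfer} %{time_total}\n' \
            http://webservice.example.com/endpoint
        done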

    Read the article
