Search Results

  • Indefinite hang when restoring SQL 2005 database on a SQL 2008 server in EC2

    - by erinloy
    I'm trying to restore a 25 GB database backup taken from a Windows 2003/SQL 2005 machine to a Windows 2008/SQL 2008 machine in the Amazon EC2 cloud, using a .bak file and SQL Management Studio. SQL Management Studio reports that the restore reaches 100% complete, and then just hangs indefinitely (24+ hours) using a lot of CPU, until I restart the SQL Server service. Upon restart, SQL again uses a lot of CPU for what seems to be an indefinite amount of time, but the DB never comes online. Here are some details:

    - I have created two EBS volumes, one for DATA and one for LOGS, and I have set the default directories in SQL Server to the \DATA and \LOG directories on these respective volumes. (I wonder if the issue could be related to this, but the DB is too big to restore on the root drive.)
    - I have given the SQL Server user group full access to these directories.
    - The server can create a new empty test DB in these directories just fine, and can back up and restore the test DB.
    - I have tried both restoring a .bak file and attaching directly to copies of the original .mdf/.ldf files; the result is the same in both cases.
    - Both the .bak restore and the .mdf/.ldf attach occur from/to the EBS volumes.
    - I've also tried the above via SQL script, and "WITH RECOVERY", with no difference in the result, just less UI.
    - The backup contains two full-text indexes.
    - I have to use "WITH MOVE" for most of the files in the backup.
    - There's nothing wrong with the backup or the .mdf/.ldf files, as this works just fine on a Windows 2003/SQL 2005 machine in Amazon EC2, but not on Windows 2008/SQL 2008.
    - The DB is NOT marked as "Restoring" in SQL Management Studio - it is listed as a normal database, but throws errors when I try to do anything with it (expand the object browser tree, view properties, etc.).

    Any ideas?
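
    For reference, a scripted restore of this shape would look something like the sketch below; the backup path, logical file names and target paths are placeholders, not the actual ones from this backup:

        -- List the logical file names contained in the backup first
        RESTORE FILELISTONLY FROM DISK = N'D:\Backups\MyDb.bak';

        -- Then restore, relocating each file onto the EBS volumes
        RESTORE DATABASE MyDb
        FROM DISK = N'D:\Backups\MyDb.bak'
        WITH MOVE N'MyDb_Data' TO N'E:\DATA\MyDb.mdf',
             MOVE N'MyDb_Log'  TO N'F:\LOG\MyDb.ldf',
             RECOVERY,
             STATS = 5;  -- progress messages help tell a slow restore from a hung one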

  • Using %v in Apache LogFormat definition matches ServerName instead of specific vhost requested

    - by Graeme Donaldson
    We have an application which uses a DNS wildcard, i.e. *.app.example.com. We're using Apache 2.2 on Ubuntu Hardy. The relevant parts of the Apache config are as follows.

    In /etc/apache2/httpd.conf:

        LogFormat "%v %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vlog

    In /etc/apache2/sites-enabled/app.example.com:

        ServerName app.example.com
        ServerAlias *.app.example.com
        ...
        CustomLog "|/usr/sbin/vlogger -s access.log /var/log/apache2/vlogger" vlog

    Clients access this application using their own URL, e.g. company1.app.example.com, company2.app.example.com, etc. Previously, the %v in the LogFormat directive would match the hostname of the client request, and we'd get several subdirectories under /var/log/apache2/vlogger corresponding to the various client URLs in use. Now, %v appears to be matching the ServerName value, so we only get one log under /var/log/apache2/vlogger/app.example.com. This breaks our logfile analysis because the log file has no indication of which client the log relates to. I can fix this easily by changing the LogFormat to this:

        LogFormat "%{Host}i %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vlog

    This will use the HTTP Host: header to tell vlogger which subdirectory to create the logs in and everything will be fine. The only concern I have is that this has worked in the past and I can't find any indication that this has changed recently. Is anyone else using a similar config, i.e. wildcard + vlogger and using %v? Is it working fine?
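
    One possibly relevant detail (an observation about Apache's log directives, not a confirmed diagnosis of this setup): %v always logs the canonical ServerName of the matched vhost, while %V honours the UseCanonicalName setting. So an alternative to the Host-header fix, assuming UseCanonicalName is (or can be set to) Off, would be:

        # With UseCanonicalName Off, %V logs the hostname the client
        # actually supplied, e.g. company1.app.example.com
        UseCanonicalName Off
        LogFormat "%V %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vlog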

  • Problem booting virtual machine after converting VMDK to VHD

    - by vg1890
    I used the VMware vCenter Converter Standalone Client to convert a physical drive on my old PC to a virtual drive. The conversion worked fine and I ended up with a valid VMDK file. Next, I wanted to convert the VMDK to a VHD for use with Microsoft Virtual PC, since that's what I use on my new box. I used WinImage for the conversion and that worked fine, too. I can access the files from the virtual drive through WinImage.

    However, when I create a new virtual machine using Virtual PC and add the existing VHD file, the machine doesn't boot. The initial boot screen flashes with the amount of RAM and then the screen goes black. If I turn off the VM and reboot in safe mode I can see the drivers being loaded until eventually it gets to crcdisk.sys and hangs indefinitely. Any ideas how to fix this? I'm not opposed to starting over from scratch if there's another method to turn my physical machine into a Virtual PC VM. Thanks!

    EDIT - I should add that the virtual drive is a system boot drive and not a secondary drive.

    EDIT - I tried booting from the install CD and doing a repair. The result was that the system could not be repaired due to a "driver error."
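
    One known cause of a hang at crcdisk.sys after a physical-to-virtual move is that the boot-critical IDE driver in the guest is not set to start at boot, because the source machine booted from a different storage controller. A hedged sketch of the usual workaround - a .reg file applied to the source machine before conversion (the exact set of services that need enabling varies by system):

        Windows Registry Editor Version 5.00

        ; Enable the generic IDE drivers at boot (Start = 0) so the
        ; converted image can boot from Virtual PC's emulated IDE controller.
        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\intelide]
        "Start"=dword:00000000

        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\pciide]
        "Start"=dword:00000000

        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\atapi]
        "Start"=dword:00000000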

  • Crazy problem with Nginx, PHP5-FPM on Ubuntu

    - by Emmanuel
    I've been trying to move a domain from shared hosting to my new VPS. Everything was working just 100% fine, and then all of a sudden rewrites stopped working and pictures that should work started returning 404s. I've got no idea why, but for some reason on my site, http://www.onlythebible.com/, only the home page works; all the other pages depend on rewrites, which were working perfectly fine at one stage but all of a sudden stopped working. Some of the pictures, like this URL, http://www.onlythebible.com/bgsPreview/Matthew-8.10.jpg, which doesn't use a rewrite, throw a 404.

    I'm almost certain it has nothing to do with the nginx configuration. I've got suspicions that it could be something to do with php5-fpm. The funny thing is, all of a sudden it started working again. And then an hour or so later it broke again and has now gone back to only displaying the home page, and all of the links (and some of the pictures) are just showing 404s. Does anyone have an idea of what the problem might be? I'm pretty new to the whole Linux VPS thing, but this just seems very strange.

    EDIT: Here's a line from the error log which might shed some light on the problem:

        2011/02/06 03:04:59 [error] 2873#0: *220 open() "/usr/local/nginx/html/bgsPreview/Matthew-8.10.jpg" failed (2: No such file or directory), client: 114.77.115.211, server: onlythebible.com, request: "GET /bgsPreview/Matthew-8.10.jpg HTTP/1.1", host: "www.onlythebible.com", referrer: "http://www.onlythebible.com/"

    I wonder why it's trying to find the file in /usr/local/nginx/html instead of the proper root, which is /var/www/ etc. Oh, and for some reason it's just started working again... for how long I don't know. Another thing that was a bit weird is that the pages on my website are pulled from a database, but when I edited the database, the pages didn't change... It's almost like they've been cached or something.
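
    The path in that error is nginx's compile-time default document root, which it falls back to when the matched server/location has no root of its own. A minimal sketch of a server block that avoids the fallback - the paths and the PHP-FPM socket here are assumptions, not the actual config:

        server {
            listen      80;
            server_name onlythebible.com www.onlythebible.com;

            # Declare the root at server level so every location,
            # including plain static files, inherits it.
            root /var/www/onlythebible.com;

            location / {
                try_files $uri $uri/ /index.php?$args;
            }

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass unix:/var/run/php5-fpm.sock;  # assumed socket path
            }
        }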

  • Mount points disappear from network share directory listing

    - by Barakando
    When browsing a network share which contains volume mount points, said mount points disappear from the directory listing. The mount points are still accessible directly by path, just not present in the directory listing.

    The machine is a Vista SP1 32-bit machine. It has a network share that contains volume mount points to the volumes of the Vista machine (created using the SetVolumeMountPoint API). When browsing the network share from another computer (either Win7 64-bit, Win7 32-bit or Vista SP1 32-bit) using Windows Explorer, the following problem occurs:

    First, both volume mount points, called C and D, appear fine. I browse into directory C and see all its contents properly. I go back to the root of the shared folder and now I only see D; C has disappeared from the directory listing. I enter D and see all its contents. Go back to the root of the shared folder and now it's empty; D has disappeared as well. If I manually go to \\<path to shared folder>\C from the address bar, then all is fine and I can browse its contents (same with D).

    The same issue does NOT occur when creating a similar share with volume mount points on Windows XP SP2 or SP3. Has anyone come across this problem? Any ideas how to work around it?

  • ASP.Net application can no longer write to DB after having run out of disk space

    - by remi.despres-smyth
    I'm a software developer troubleshooting a sticky problem on a client's production server, and I've got a bit of a problem. They have a virtual server running Windows Server 2008, SQL Server 2008 R1 and IIS7. It was provisioned with two partitions: one that has the OS (~15 GB), and the other has IIS's web sites (another ~15 GB).

    My application running on this server had been running perfectly well, up until about an hour ago, when it started throwing System.IO.IOException: "There is not enough space on disk". As soon as my client notified me, I cleared up some space on C:\, emptied the recycle bin, and restarted SQL Server and IIS. The web server came back up and the application was running, but it no longer saves information to the database. No error message is coming up; the application can get information out of the DB, but it can no longer save data back to it. I rebooted the server, to no effect.

    I spoke with a sysadmin at the hosting company, and he says SQL Server appears to have come up fine and the database is not in read-only mode. I confirmed that, as I can add records to tables from SQL Server Management Studio. I looked at the event log immediately after trying to save an edited record in the app, and no new events appear in there that I can tell. I'm assuming this is related to having run out of space, as it was all working fine prior to that, but I'm at a bit of a loss as to what exactly needs a kick in the pants to get going again. Can anyone help me out? What the heck is going on here?
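
    A few hedged diagnostics that would narrow down whether the database files themselves are the blocker (the database name is a placeholder; none of this is confirmed as the cause here):

        -- How full is each transaction log?
        DBCC SQLPERF(LOGSPACE);

        -- If a log can't be reused, what is SQL Server waiting on?
        SELECT name, log_reuse_wait_desc FROM sys.databases;

        -- Have the data/log files hit a max size, or is autogrow disabled?
        SELECT name, size, max_size, growth
        FROM MyAppDb.sys.database_files;  -- MyAppDb is a placeholder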

  • snmpd dead but subsys locked

    - by Hina NMS
    Hi folks. I have an NMS and a client machine, and I want the client to send traps to the NMS. I have been configuring the snmpd.conf file, testing whether I receive an alert when I disable a process. For the changes made in the conf file to take effect, I restarted the snmpd daemon each time. The testing was going fine.

    All of a sudden, when I restarted snmpd I received the error message "snmpd dead but subsys locked". I googled for an answer as to what it actually meant and found out that when a service is started, a lock file is created in /var/lock/subsys. Sometimes, if the service is not stopped properly or whatever, the lock file remains behind. Though I started/stopped the snmpd service properly, it didn't go away, so I removed the file manually (via the rm command). When I checked the status, the error "snmpd dead but subsys locked" was gone. On my NMS I received the alert of an snmpd coldStart. I started the snmpd service and everything went fine. BUT after 5 minutes I receive the same error message again, and this keeps on happening. What do I need to do now?
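
    The lock file only records that the init script believes snmpd is running, so the interesting question is why the daemon keeps dying a few minutes after a clean start. A hedged checklist (Red Hat-style paths assumed; adjust to your distribution):

        # Does a running process actually match the pid file?
        service snmpd status
        pgrep -l snmpd
        cat /var/run/snmpd.pid 2>/dev/null

        # Clear the stale lock and restart cleanly
        service snmpd stop
        rm -f /var/lock/subsys/snmpd
        service snmpd start

        # Then watch the logs for the reason snmpd exits
        tail -f /var/log/messages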

  • Windows Audio Issue

    - by Nikki
    This one is driving me nuts. Hoping someone can shed some light. I'm running Windows 7 using onboard audio. It's been fine for over 2 years, but lately there's a problem every time I play audio: I hear a small soft burst of static and the volume turns itself down from 50% to 23%. Once at 23%, it plays fine. No related events logged in the viewer. No reported problems with the device. Different headphones, same problem. I played around with audio settings for hours but the problem persists.

    EDIT: OK, more info.

    Motherboard: ECS G31T-M LGA775

    System info displays this:

        Name: High Definition Audio Device
        Manufacturer: Microsoft
        Status: OK
        PNP Device ID: HDAUDIO\FUNC_01&VEN_1106&DEV_E721&SUBSYS_10192683&REV_1001\4&3D4E739&0&0001
        Driver: c:\windows\system32\drivers\hdaudio.sys (6.1.7600.16385, 297.00 KB (304,128 bytes), 14/07/2009 9:51 AM)

    I'll keep adding info as I find it. The question I want resolved is: is it faulty hardware? If so, I can buy a sound card. I can't imagine software is responsible, since I haven't installed anything new for weeks. Virus scans are clear as well. The static burst is irritating to say the least. Tried 2 different headphones and separate speakers; same problem. I know it's not an easy problem, but I was hoping someone had encountered the same thing.

  • Mac OS X - User home directories shared via NFS

    - by Hugh
    I've run into some problems with how I've got user home directories set up on our system here. Our server is an Xserve, using Open Directory to manage the user accounts. The majority of our workstations are OS X, but there are a few running Linux (CentOS 5.3), and, as time goes on, we expect the proportion of Linux workstations to increase (at some point we expect to move the server side over to Linux too, but for now we're running with what we've already got).

    To ensure that the Linux and OS X workstations both see users' home directories in the same place, I shared the home directories using NFS. On the server end, the home directories are stored in:

        /Volumes/data/company_users

    This is mounted on the workstations to:

        /mount/company_users

    This works fine on the Linux workstations, but there is some weirdness under OS X. For the user who is logged in through the GUI, it all works just fine. However, if a user tries to SSH into a machine that they are not the primary user on, they often have no access to their own home directory. It looks as though OS X is trying to do something else to the user home directories mount point when you log in through the GUI.

    For example, on this machine (nv001), I (hugh) am logged into the GUI:

        Last login: Mon Mar 8 18:17:52 on ttys011
        [nv001:~] hugh% ls -al /mount/company_users
        total 40
        drwxrwxrwx   26 hugh   wheel    840 27 Jan 19:09 .
        drwxr-xr-x    6 admin  admin    204 19 Dec 18:36 ..
        drwx------+ 128 hugh   staff   4308 27 Feb 23:36 hugh
        drwx------+  26 matt   staff    840  4 Dec 14:14 matt

    So Matt's home directory is accessible to him. However, if I try to switch to him:

        [nv001:~] hugh% su - matt
        Password:
        su: no directory

    Or:

        [nv001:~] hugh% su matt
        Password:
        tcsh: Permission denied
        tcsh: Trying to start from "/mount/company_users/matt"
        tcsh: Trying to start from "/"
        [nv001:/] matt%

    Does anyone have any idea why it might be doing this? It's causing me all sorts of problems at the moment. The only machine where I can successfully switch users at the moment is the server that the user directories are stored on, where /mount/company_users is actually just a symlink to /Volumes/data/company_users.

  • OS X 10.4: old, deleted user mail account problems

    - by Chris
    Hello. A while back I tried to add a user 'david' as a mail user on my OS X 10.4 server using dscl (I only had terminal access at the time, no ability to use Workgroup Manager). I could never get this account to work properly, so I deleted it. dscl . -list /Users no longer shows 'david' as an entry.

    I have since gained access to Workgroup Manager, and I am trying to re-create the 'david' account. Workgroup Manager creates the account fine, along with an email account, which I can then log into via IMAP ('login david password' returns 'OK user logged in'). However, this mail account does not have an inbox, and I cannot create one through a mail client, IMAP or cyradm (they all say 'system I/O error').

    When I re-delete this user, I can't find any record of him in any of the mail spool locations. Creating a user with any other name works fine (inbox, mail access, everything). Any ideas on how I can get this user up and running again?

    -Chris

    P.S. To create this user in the first place, I used dscl . create, then dscl . append /Users/david "some XML I found on the 'net" to add email privileges, if this helps...
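
    Since the mail store on OS X 10.4 Server is Cyrus, one possibility (an assumption, not a confirmed diagnosis) is that Cyrus's own mailbox database still holds a stale entry for 'david', separate from the deleted directory record. A sketch of how you might check from cyradm - the admin account name is a placeholder:

        $ cyradm -u admin localhost
        localhost> listmailbox user/david*

        # If a stale mailbox shows up, grant the admin rights on it,
        # then delete and recreate it:
        localhost> setaclmailbox user/david admin all
        localhost> deletemailbox user/david
        localhost> createmailbox user/david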

  • Adobe Acrobat Pro 9.0 on Windows 7 print to network share gives error

    - by Archit Baweja
    I've recently upgraded a client's workstations to brand new computers with Windows 7 Professional. The server is still Windows Server 2003. The server has 2-3 file shares that get mapped to users' workstations as drives. The client has also upgraded from Acrobat 6.0 to 9.0 Pro.

    Since the upgrade, when the client tries to print to the Adobe PDF printer (aka convert something to PDF via the printer interface), it gives an error in the queue if the file is being saved on the network drive. If I instead provide a local path, the file "prints" fine. Additionally, if I change the Adobe PDF printer's settings to "don't spool, print directly to printer", it prints to the network share fine, but then it resets that setting every time.

    Things I've checked for:

    - Permissions on the network share. The user and the computer have full access; we even gave the "Everyone" object full access.
    - Reinstalled Adobe Acrobat Pro 9.0.
    - Ran updates to upgrade to 9.3.4.

    Has anyone else bumped into such a problem? The support fellows from Adobe are just taking me around in circles.

  • IPCop server slows down download speed

    - by noocyte
    I have an IPCop server running at home; it had been doing just fine for ~5 months, but last week I suddenly started getting timeouts and slow downloads from the 'net. I first thought that this was my ISP acting up, then I thought it might be one of my 3 switches or some of my cabling. In due order I've tested everything above and found it all to be working as it should. The only factor remaining is my IPCop server.

    Facts: I've got a 15/15 Mbit line (fiber). With the IPCop box as router (ISP router set in bridge mode) I get ~15 Mbit upload, but only 0.5 Mbit download. If I connect without the IPCop box (using the ISP router) I get ~12 Mbit upload and ~15 Mbit download. The load on the IPCop box appears to be light and it used to handle this traffic just fine 2 weeks ago. The memory usage is ~60%; I tried to restart it and test again, and the memory fell to ~50% (after 5 months of uptime).

    I'm thinking that one of my NICs is busted, but I'm sort of perplexed that this could be the outcome: slow download but full-speed upload. Anybody ever seen that happen before? Could it just be one of the NICs that needs to be replaced? Will try that as soon as I can get my hands on a couple of new ones.
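
    Full speed in one direction but a crawl in the other is the classic signature of an Ethernet duplex mismatch rather than a dead card, so it may be worth checking what the interfaces negotiated before swapping hardware. A hedged sketch, assuming the usual eth0/eth1 naming and that ethtool is available on the IPCop box:

        # Show negotiated speed/duplex for each interface
        ethtool eth0
        ethtool eth1

        # Rising error/drop counters on one interface also point at a bad
        # negotiation or cable rather than a failed card
        ifconfig eth0 | grep -E 'errors|dropped'
        ifconfig eth1 | grep -E 'errors|dropped'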

  • Virtual machine loses network connectivity on Hyper-V cluster

    - by Chris W
    We're running a number of VMs on a 6-node failover cluster of blades using Hyper-V. We have an intermittent issue (every few days, at different times - not a fixed frequency) of VMs losing network connectivity. Console access to the VM suggests all is fine, and the underlying blade has normal connectivity. To resolve the problem we either have to restart the VM or, more usually, we do a live migration to another blade, which brings connectivity back, and we then migrate it back to the original blade.

    I've had 3 instances of this happen with a specific VM running on a particular blade; however, it has happened once with a different VM running on a different blade. All VMs and blades have the same basic setup and are running Windows 2008 R2. Any ideas where I should be looking to diagnose the possible causes of this problem, as the event logs provide no help?

    Edit: I've checked that each blade is running the latest NIC drivers and all seem to be fine. Something that is confusing me: a failover or restart of the VM resolves the issue. Whilst I need to work out the underlying issue that is causing the NICs to hang, I'm also concerned that the VM didn't fail over to another node, which would have solved the outage for me. Is there a way to configure the cluster so that it can tell that the VM guest has lost connectivity and fail it over? As things stand, the cluster assumes the VM is running happily, as I presume Hyper-V reports everything is great even though there is a problem.

  • Can't connect to LAN when connected to D-Link DIR-615

    - by Senseful
    I have a D-Link DIR-615 Wireless N 300 router. I didn't use the CD it comes with to set up the network; instead, I configured it manually through the router's settings, which are accessed via a web browser. The main changes I made are:

    - Secured the router so that a password is required before clients can use the wireless internet.
    - Broadcasting 802.11n only (not b or g).

    I can connect to the router just fine and I'm able to access the internet. The only problem is that I don't see any of the other computers on my LAN. When I try connecting to another Wi-Fi router that I have (which is connected to the same network), I can see all of the computers on my LAN just fine. Therefore, I'm guessing that the inability to see the LAN is not a problem with my computer but with the router instead.

    I'm on a MacBook Air running Mac OS X 10.6.6. I tried contacting D-Link technical support, but they only try to help you if you have problems connecting to the internet; they aren't really concerned with problems accessing PCs on the same network.

  • Scheduled tasks fail to start unless I'm logged in to the server

    - by Chuck
    Tasks need to open a CMD window, pass net use commands, then do a DIR command, piping the output to a file on the server. Logged in as either me (sysadmin) or one of the service accounts, the task will only run if I'm physically logged into the server. The "Log on as a batch job" right is set in the security properties for both users (me and the service account), security is granted to all directories, etc. It almost acts as if a scheduled task, not being physically connected to a display, can't create a CMD window and pass the WinID so the commands can be sent - but I'm guessing.

    Anyone know of a document that explains how the server handles initiation of a window when it's done via scheduled task and no attached user is associated with the task? If I log onto the box and run the scheduled tasks, they run fine, produce no errors or event log entries, and then just show that they ran successfully and set the next run time. I have tried it both with the "run only when user is logged on" option on and off, and it makes no difference.

    Other tasks work fine, except that they act on local drives with no display writing or updating taking place, so I'm guessing the system either can't instantiate a window if no display is connected to a logged-on user, or it can't establish a point if it is trying to create a virtual screen. You'd think it would just create a memory map and then map it to a device to display, but that doesn't seem to be the case, and I can find no documentation on how the system handles a scheduled task or how to invoke a fake or virtual screen so that it appears a user was connected.

    Thanks. This is driving me nuts and I've tried everything I can think of, as well as our network boys' ideas, and nothing seems to work.
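
    For what it's worth, a scheduled task doesn't need an interactive window at all for this kind of job; the usual stumbling block is that drive mappings belong to a logon session, so a task only sees shares it maps itself. A minimal sketch of a self-contained wrapper (server, share, account and paths are placeholders):

        @echo off
        rem Map the share inside the task's own session; mappings made in
        rem an interactive session are not visible to the scheduled task.
        net use Z: \\fileserver\share /user:MYDOMAIN\svc_account Secret123

        rem Pipe the listing to a file on the server
        dir /s Z:\data > Z:\reports\listing.txt

        rem Clean up the mapping
        net use Z: /delete /y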

  • SQL Server Management Studio not scripting all objects

    - by Ian Boyd
    I've been attempting to script a database using SQL Server 2005 Management Studio. I cannot get it to script some objects. It scripts others, but skips some. I can provide detailed screenshots of:

    - the options being selected, including all tables
    - the folder where the script files will go
    - the folder being empty before scripting
    - the scripting process saying Success when scripting a table
    - the destination folder no longer empty, with a hundred or so script files
    - the script of some tables not being in the folder

    And earlier SSMS would not script some views. Is this a known thing, that the Generate Scripts task does not generate scripts?

    Update: Known issue on Microsoft Connect, but Microsoft couldn't repro the steps, so they closed the ticket. Fails on SQL Server 2005; also fails on SQL Server 2008.

    Update two - some basic questions:

    1. What version of SQL Server?

        Microsoft SQL Server 2000 - 8.00.194 (Intel X86)
        Microsoft SQL Server 2005 - 9.00.3042.00 (Intel X86)
        Microsoft SQL Server 2008 - 10.0.2531.0 (Intel X86)
        Microsoft SQL Server 2005 Management Studio: 9.00.4035.00
        Microsoft SQL Server 2008 Management Studio: 10.0.1600.22

    2. What OS are you running on?

        Windows Server 2000
        Windows Server 2003
        Windows Server 2008

    3. How are you logging in to SQL Server?

        sa/password
        Trusted authentication

    4. Have you verified your account has full access to all objects?

        Yes, I have access to all objects.

    5. Can you use the objects that fail to script? (e.g. SELECT TOP(10) * FROM nonScriptingTable)

        Yes, all objects work fine. SQL Server Enterprise Manager can script the objects fine.

    Update three: they fail no matter what version of SQL Server you script against. It wasn't a problem in Enterprise Manager:

        Client Tools   SQL Server 2000   SQL Server 2005   SQL Server 2008
        ============   ===============   ===============   ===============
        2000           Yes               n/a               n/a
        2005           No                No                No
        2008           No                No                No

    Update four: no errors found in the database using:

        DBCC CHECKDB
        GO
        DBCC CHECKCONSTRAINTS
        GO
        DBCC CHECKFILEGROUP
        GO
        DBCC CHECKIDENT
        GO
        DBCC CHECKCATALOG
        GO
        EXECUTE sp_msforeachtable 'DBCC CHECKTABLE (''?'')'

    Honk if you hate SSMS.

  • How does it hurt to use Linux (Ubuntu) as a guest OS for all my tasks?

    - by sauparna
    I have a machine running Windows, where the disk has two partitions, C (50 GB) and D (250 GB). I do research in Information Retrieval and need to work with a large corpus (more than 50 GB), and in Linux. So if I want to install Linux on the existing system, keeping the Windows installation intact, will it be fine to run it in a virtual machine (say QEMU, VMware, etc.)?

    An alternative is using Wubi. In that case the Linux installation has to be on drive C. Then, if I keep a small Linux installation (say 5 GB) on C, and my corpus on D (mounted in Linux), how will it affect the performance of my programs, which would be accessing the mounted Windows drive D? Is it feasible to use Linux this way? Which of the above is better, if either is a way out at all?

    Note: Since my post in July 2010, I have been using and have tried several ways of maintaining a disk image that I can mount in Linux. I had a 100 GB qcow2 disk and a 100 GB raw disk, both formatted with an ext3 file system. I was mounting and connecting to the qcow2 disk using qemu-nbd. The problem was that every now and then the connection to the disk would get lost and the running programs would throw disk I/O errors. The raw disk would mount and work fine as a loop-mounted device, but when writing data to it, the mount.ntfs program would hog the CPU and the process would take an enormous amount of time. I was in fact running make on a piece of software located on this raw disk, and after a point in time make was waiting while mount.ntfs showed 100% CPU usage.
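
    For reference, the qemu-nbd workflow described above looks something like this sketch (device node, image name and mount point are assumed names):

        # Load the nbd driver and attach the qcow2 image to a block device
        sudo modprobe nbd max_part=8
        sudo qemu-nbd --connect=/dev/nbd0 corpus.qcow2

        # Mount the ext3 file system inside it
        sudo mount /dev/nbd0 /mnt/corpus

        # ... work on /mnt/corpus ...

        # Detach cleanly when done
        sudo umount /mnt/corpus
        sudo qemu-nbd --disconnect /dev/nbd0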

  • MySQL master-master setup as a way to simplify master-slave promotion

    - by Chris Go
    I'm trying to see if the following plan is viable. The goal here is HA (uptime) and not load distribution -- writes are fine on one MySQL 5.5 server (with InnoDB), but not really possible when the database is down.

    Currently, I have a master-slave replication setup which works fine, except it doesn't have automatic promotion (obviously). What I am planning on doing is to set up master-master replication and do this "automatic promotion" using Amazon Route 53 DNS failover (health checks). What I am trying to avoid is the auto-increment trick, because the "business folks" got used to the auto-incrementing PK as consecutive numbers (yeah, I know this is bad, but the data goes back to 2004). So: set up the master-master replication WITHOUT the auto-increment collision prevention bit.

    The primary master is db1.domain.com and the secondary master is db2.domain.com. In Amazon Route 53, set up a DNS failover record for db.domain.com:

    - primary failover is db1.domain.com - with a TCP health check on the IP address, port 3306
    - secondary failover is db2.domain.com - with a TCP health check on the IP address, port 3306

    Most of the time (99%), unless tcp://db1.domain.com:3306 is dead, db1.domain.com will be served up on DNS hits to db.domain.com. In fact, hopefully this is 100%. The possible downside of this is the loss of a primary key (collision), and I think I am OK with losing one order. We are a low-data-volume B2B business and can just call our client up if this occurs (like an order disappearing).

    Does this sound like a good plan? Then I will also run another slave replication off db1.domain.com as "master" to a slave-db1.domain.com -- not sure why, maybe for heavy SELECTs?
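
    A hedged sketch of the two my.cnf files for such a setup (server IDs and names are placeholders). Without the auto-increment offset trick, the main guard against duplicate keys is making sure only one side ever accepts writes, so the standby is kept read-only:

        # db1.domain.com (primary)
        [mysqld]
        server-id         = 1
        log_bin           = mysql-bin
        log_slave_updates = 1

        # db2.domain.com (standby)
        [mysqld]
        server-id         = 2
        log_bin           = mysql-bin
        log_slave_updates = 1
        # Stray writes here could mint the same auto-increment PK as db1,
        # so stay read-only until failover (replication threads are exempt).
        read_only         = 1

    Note that Route 53 only moves the DNS name; something still has to run SET GLOBAL read_only = 0 on db2 when it is promoted.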

  • wget not working with domain on local machine

    - by user568829
    Basically, I have some PHP scripts that need to be run as cron jobs. Let's say the script needing to be run is:

        http://admin.somedomain.com/cron_jobs/get_stats

    If I run the script from the local machine it gives me a 404 Not Found error. So I entered the following into /etc/hosts:

        XX.XX.XX.45 admin.somedomain.com

    Now wget works fine from the local machine to that domain. However, when I restart Apache, that domain no longer works. Here is the config for that site in /etc/apache2/sites-available:

        NameVirtualHost XX.XX.XX.45:80
        <VirtualHost XX.XX.XX.45:80>
            ServerName admin.somedomain.com
            DocumentRoot /var/www/admin.somedomain.com/
            <Directory "/var/www/admin.somedomain.com">
                AllowOverride All
                Options Indexes
                Order deny,allow
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/admin.somedomain.com-error_log
            CustomLog /var/log/apache2/admin.somedomain.com-access_log combined
        </VirtualHost>

    It just goes to the default site config showing "It works". If I take that setting out of /etc/hosts and restart Apache, the website at that domain works fine again. Can anyone point me in the right direction here? Thanks.
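
    Falling through to the default site usually means Apache doesn't consider this <VirtualHost> a match for the request. A couple of hedged checks (nothing here is specific to this box):

        # Show how Apache 2.2 parsed the vhosts: which NameVirtualHost lines
        # it saw, and which vhost is the default for each address:port
        apache2ctl -S

        # Confirm what the hosts entry resolves to locally
        getent hosts admin.somedomain.com

        # Request the site by name while talking to the local server directly
        wget -O - --header='Host: admin.somedomain.com' http://127.0.0.1/cron_jobs/get_stats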

  • Windows 2008 RemoteApp client disconnects within a matter of minutes

    - by Jeroen Wilke
    I'm having an odd problem with Windows 2008 TS, and RemoteApp specifically. The situation is as follows:

    - TS idle timeout is disabled via GPO
    - TS terminates disconnected sessions after 1 hour (via GPO)

    My users can log on to the terminal server and get a full desktop, OR use RDP files that give access to a few remote applications. When a user connects to a full desktop, everything is fine and dandy: they will remain logged on indefinitely, and when they disconnect, the session is terminated after an hour. However, when a user connects using a RemoteApp link, the client seems to disconnect after only a few minutes of inactivity; when you click the window, the session reconnects.

    Event IDs on the TS server:

    - 4779: This event is generated when a user disconnects from an existing Terminal Services session, or when a user switches away from an existing desktop using Fast User Switching.
    - 4778: This event is generated when a user reconnects to an existing Terminal Services session, or when a user switches to an existing desktop using Fast User Switching.

    Users are connecting directly to 3389, not using a TS Gateway at the moment. This behavior is consistent across the different clients that we have: full desktop is fine, RemoteApp constantly disconnects. The .rdp file used doesn't list any interesting parameters, aside from what application to launch and where to find it.

    Can someone explain to me how there can be a difference in behaviour between full desktop and RemoteApp, since essentially they use the exact same client?

    Regards,
    Jeroen

  • Task Scheduler not able to execute .vbs successfully

    - by Django Reinhardt
    Hi there. I've got this weird problem, which will hopefully have an obvious solution for some enlightened soul. We have several daily tasks we run via a .vbs script on our server (through the Task Scheduler), and for months it has been fine, but recently we've hit a problem. The .vbs script stopped successfully executing... but oddly it worked fine when run manually! The error given in these circumstances was always "Timeout".

    We thought we'd try a little creative thinking and run the .vbs another way: via a .bat file. Again we hit weird issues, but with a little more debugging information this time around. The .bat file is nothing more than:

        CScript "C:\location\script.vbs" > Log.txt

    But the Task Scheduler fails with the following error:

        0x1: An incorrect function was called or an unknown function was called.

    The Log.txt file says:

        CScript Error: Initialization of the Windows Script Host failed.
        (Not enough storage is available to process this command.)

    But get this: the .bat file executes perfectly (vbs script and all) if it's executed with a double click! There's only a problem when it's run by Task Scheduler. What the hell?

    We're running Windows Server 2008 R2 (x64), and yes, the Task Scheduler's results are the same whether the user is logged in or not. Also, the user that can run the scripts successfully manually is the same user that runs the scripts in Task Scheduler. Thanks for any help with this weird problem!

  • One host on a network can't connect to one other host

    - by Max Williams
    I'm on a local network with a few other people. One of the hosts is a virtual machine running in VirtualBox on a Mac, which has the IP address 192.168.0.35 (the VM, that is, not the Mac host). Everyone except one guy can connect (i.e. ping, ssh, etc.) to that machine. When that one guy tries to ping it he gets:

        Request timeout for icmp_seq 0
        Request timeout for icmp_seq 1
        Request timeout for icmp_seq 2

    which I understand is just how certain Mac OSes report an unreachable connection. He can ping all the other hosts on the network, i.e. our computers, and we can all ping the VM fine and connect to it with no problems. His IP is 192.168.0.17.

    I ssh'd onto his machine (as a new user 'anon') and saw the same problems. I can ssh onto the 192.168.0.35 VM as well. From there, I can ping other users, but when I ping the problem guy, it's unreachable that way round as well. He restarted his Mac and was fine for a while. Then it just stopped working again; he's got a different IP than before.

    Any ideas, anyone? I don't know enough about this stuff to even diagnose the problem.

    Thanks, Max

  • How do I Setup Multiple Sites in HostGator Shared Hosting?

    - by cillosis
    I recently decided to consolidate all of my random projects into a single hosting account, as it was starting to get very expensive to run each on an individual hosting plan. I purchased the HostGator Baby plan, which allows hosting of multiple domains. You have to set it up with a root domain name, which is fine (I used my portfolio domain name). As far as file structure goes, I wanted a folder for each site in /public_html, so the structure looks like this:

        public_html/
            myportfolio.com/
                ... my files ...
            anothersite.com/
                ... my files ...
            thirdsite.com/
                ... my files ...

    I set up add-on domains and pointed them to their respective folders, which works fine. My problem is that the root domain, e.g. myportfolio.com, expects its files to be at the root of /public_html rather than within the folder I created. I set up a redirect to point requests for myportfolio.com to myportfolio.com/myportfolio.com/, which works initially, except (at least in my WordPress installation) it still references its root folder as public_html.

    TL;DR: What is the best way to go about setting up multiple-site hosting in a shared hosting environment (i.e. one where I can't set up vhosts)? Does anybody know of any tutorials or videos that walk through this more clearly? Thanks.
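
    One common approach on shared hosts (a sketch, not HostGator-specific advice) is to rewrite requests for the root domain into its subfolder instead of redirecting, so the browser never sees the folder name:

        # /public_html/.htaccess
        RewriteEngine On

        # Only touch requests for the root domain
        RewriteCond %{HTTP_HOST} ^(www\.)?myportfolio\.com$ [NC]

        # Skip requests already inside the subfolder
        RewriteCond %{REQUEST_URI} !^/myportfolio\.com/

        # Internally map everything into the site's folder
        RewriteRule ^(.*)$ /myportfolio.com/$1 [L]

    With an internal rewrite like this, WordPress's Site URL would also need to be set to the bare domain so it stops generating links that include the folder.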

  • D-LINK 2450U DSL router: port forwarding forwards to the modem itself, not the specified IP

    - by axk
    I found a similar question, but it has no satisfactory answers. I have a D-Link 2540U DSL router. It has a basic port forwarding configuration (under DNS - Virtual Servers) in the administration panel where you specify external port range, protocol, internal port range and server IP address, and it is supposed to forward that port to that IP address.

    When I first set it up for a RealVNC connection it worked fine, just as I expected. Then I added a DynDNS configuration entry in the router's 'Dynamic DNS' section and added an additional SSH (22) forwarding rule. The SSH forwarding also worked fine (now with the dynamic hostname, but I suppose it doesn't make any difference as far as SSH is concerned). Then I removed the SSH rule, and after that the VNC forwarding stopped working, with the VNC client failing to connect (I tried to connect with telnet and it also failed, so it wasn't a VNC problem).

    After adding a rule for port 80, it turned out it would forward on port 80, though not to the specified server IP but to the modem itself. At least that is what it looks like, because it gives me the administration panel when I connect to my external IP (both using a browser and plain telnet, in which case I can see that it is mini_httpd sitting on the port, which is obviously the modem's administration panel).

    Has anybody encountered a similar problem with port forwarding? I have tried doing a reset through the administration panel and restoring a backup of the settings made before I started playing with port forwarding, but it didn't help. Should I do a 'hard' reset with the button on the modem? Is it any different from the administration panel's reset (Restore Default)?

  • Why do weekly tasks created via PowerShell using a different user fail with error 0x41306

    - by Danny Tuppeny
    We have some scripts that create scheduled jobs using PowerShell as part of our application. When testing them recently, I noticed that some of them always failed immediately, and no output is ever produced (they don't even appear in the Get-Job list). After many days of tweaking, we've managed to isolate it to any jobs that are set to run weekly.

    Below is a script that creates two jobs that do exactly the same thing. When we run this on our domain and provide the credentials of a domain user, then force both jobs to run in the Task Scheduler GUI (right-click - Run), the daily one runs fine (0x0 result) and the weekly one fails (0x41306).

    Note: If I don't provide the -Credential param, both jobs work fine. The jobs only fail if the task is both weekly and running as this domain user.

    I can't find information on why this is happening, nor think of any reason it would behave differently for weekly jobs. The "History" tab in the Task Scheduler has almost no useful information, just "Task stopping due to user request" and "Task terminated", both of which have no useful info:

        Task Scheduler terminated "{eabba479-f8fc-4f0e-bf5e-053dfbfe9f62}" instance of the "\Microsoft\Windows\PowerShell\ScheduledJobs\Test1" task.
        Task Scheduler stopped instance "{eabba479-f8fc-4f0e-bf5e-053dfbfe9f62}" of task "\Microsoft\Windows\PowerShell\ScheduledJobs\Test1" as request by user "MyDomain\SomeUser".

    What's up with this? Why do weekly tasks run differently, and how can I diagnose this issue?

    This is PowerShell v3 on Windows Server 2008 R2. I've been unable to reproduce this locally, but I don't have a user set up in the same way as the one in our production domain (I'm working on this, but I wanted to post this ASAP in the hope someone knows what's happening!).

        Import-Module PSScheduledJob
        $Action = { "Executing job!" }
        $cred = Get-Credential "MyDomain\SomeUser"

        # Remove previous versions (to allow re-running this script)
        Get-ScheduledJob Test1 | Unregister-ScheduledJob
        Get-ScheduledJob Test2 | Unregister-ScheduledJob

        # Create two identical jobs, with different triggers
        Register-ScheduledJob "Test1" -ScriptBlock $Action -Credential $cred -Trigger (New-JobTrigger -Weekly -At 1:25am -DaysOfWeek Sunday)
        Register-ScheduledJob "Test2" -ScriptBlock $Action -Credential $cred -Trigger (New-JobTrigger -Daily -At 1:25am)
