Search Results

  • Generic/Text Printer on Windows 7 not prompting for file name

    - by FantaFan
    Guys & gals, hope someone can shed some light on this. I am downloading reports from an AIX-based system by directing them to a TT printer which the terminal emulator (MultiView 2000) intercepts and directs to the default printer on the local system. This local printer is configured as a vanilla Generic/Text printer attached to a FILE port. When I print from AIX, the output is spooled down and the local printer prompts for a file name into which to save the file... but not under Windows 7. This has worked fine for many years, on both Win2K and WinXP. However, on Windows 7 the output gets spooled as a file into spool\PRINTERS (and looks as expected) but the print job then hangs with a status of "Error - Printing" and never prompts for a file name. I have to cancel the job.

    The Generic/Text printer works as expected with other applications. I have tried setting the printer to print directly rather than spooling, but this only serves to hang the terminal session too. I've also tried running the emulator in Windows 2000 Compatibility Mode and as Administrator in case it was something like that, but with no luck. As you might expect, it does work fine in XP Mode (as long as I print to a printer defined therein and not the host's printer) but operationally this isn't going to be an option. Obviously this emulation software is a decade old (at least) and I could just cross/upgrade all the users (at a cost) but, before I do so, has anyone seen this sort of behaviour before and found some sort of fix?

        Remote OS: AIX 5
        Client OS: Windows 7 Pro (32-bit)
        Printer: Generic/Text on a FILE port
        TE Software: MultiView 2000 (32-bit)

    Thanks in advance.
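
    Not a known fix, but a cheap elimination step, sketched below with illustrative names: delete and recreate the Generic/Text printer on the FILE: port from an elevated prompt, which rules out a corrupted printer/port binding carried over by the Windows 7 spooler.

        rem Remove and recreate the printer (names are illustrative):
        rundll32 printui.dll,PrintUIEntry /dl /n "Generic / Text Only"
        rundll32 printui.dll,PrintUIEntry /if /b "Generic / Text Only" /r "FILE:" /m "Generic / Text Only" /f %windir%\inf\ntprint.inf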

  • Android SDK emulator freezes on a Mac running OS X 10.6 Snow Leopard

    - by Donald Burr
    I'm having trouble running the Android SDK on both of my Macs running OS X 10.6.2 Snow Leopard. This appears to be a 64-bit vs. 32-bit issue, as Snow Leopard now defaults to 64-bit everything, including the Java virtual machine. I found a webpage with instructions on how to get the Android tools to run in the 32-bit Java VM, and I am now able to run the Android GUI tool to download SDK files, create AVDs, etc.

    However, when I try the Hello World tutorial and get to the point where I run my application under the Android emulator, everything goes south. The emulator appears to start but it hangs (spinning beachball of death cursor) without displaying anything. (This only hangs the emulator; the rest of the system still works fine.) If I follow the exact same steps (minus the 32-bit Java hack) in a Windows virtual machine, everything works fine. Googling didn't yield anything useful (except for the 32-bit Java hack I spoke of earlier). This occurs on both my Mac Pro tower and 13" MacBook Pro. Does anyone have any suggestions?
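
    One way to get more than a beachball to work from, sketched under the assumption that the SDK lives in ~/android-sdk and the AVD is named MyAVD (both hypothetical): confirm a 32-bit JVM is really selectable, then launch the emulator from Terminal so its output is visible.

        # Verify a 32-bit JVM can actually be selected:
        java -d32 -version
        # Launch the emulator outside the GUI tools, with verbose output:
        ~/android-sdk/tools/emulator -avd MyAVD -verbose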

  • Connectivity with SQL Server Express 2008 R2 and SQL Server 2000 on the same machine

    - by Jim R
    At first glance this may seem a duplicate of "Installing both SQL Server 2000 and SQL Server 2008 on the same machine", but it is not. I have SQL Server 2000 and SQL Server 2008 R2 installed on the same machine and working fine. My problem lies with connecting to the 2008 R2 server from a remote machine. My connectivity needs to be TCP. The legacy installation of SQL 2000 uses the default port of 1433. The named instance is by default configured to use 'Shared Memory' and is working fine. When I configured the 2008 R2 server to use 1433 (I did not think that through) the service refused to start because 1433 was already in use by the legacy SQL 2000 default instance. Doh!

    What I want to do is have both servers available simultaneously via TCP. Both servers need not be on the same port, but if I cannot run them on the same port, then how do I configure the clients? Is there not some kind of proxy available that can monitor the 1433 port and pass the request through to the correct SQL instance by name? Is this capability built into SQL Server already? Thanks, Jim
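
    For what it's worth, SQL Server already ships the capability asked about at the end: the SQL Server Browser service listens on UDP 1434 and redirects clients to the right instance's port, so the two engines never have to share 1433. A sketch, with a hypothetical server name, instance name and port (the fixed port is set in SQL Server Configuration Manager):

        rem Legacy SQL 2000 default instance, left on 1433:
        sqlcmd -S tcp:myserver,1433 -Q "SELECT @@VERSION"
        rem SQL 2008 R2 named instance, pinned to 1444 in Configuration Manager:
        sqlcmd -S tcp:myserver,1444 -Q "SELECT @@VERSION"
        rem Or let the SQL Browser service (UDP 1434) resolve the instance name:
        sqlcmd -S myserver\SQL2008R2 -Q "SELECT @@VERSION"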

  • ESXi 4.1 host not recognising existing VMFS datastore

    - by Graeme Donaldson
    Existing setup: host1 and host2, ESX 4.0, 2 HBAs each; lun1 and lun2, 2 LUNs belonging to the same RAID set (my terminology might be sketchy here). This has been working just fine all along. I added host3, ESXi 4.1, 2 HBAs. If I view Configuration / Storage Adapters, I can see that both HBAs see both LUNs, but if I view Configuration / Storage, I only see 1 datastore. host1/2 can see both LUNs and I have VMs running on both too. I have rescanned, refreshed and even rebooted, but host3 refuses to acknowledge 1 of the datastores. Does anyone know what's going on?

    Update: I re-installed the host with ESX (not i) 4.0, the same version as the existing hosts, and it's still not recognising the VMFS. I think I'm going to SVMotion everything off that datastore and then format it.

    Update 2: I've created the LUN from scratch and the problem gets even weirder. I've presented the LUN to all 3 hosts, and I can see the LUN in the vSphere client's Configuration / Storage Adapters section on all 3 hosts. If I create a datastore on the LUN via the Configuration / Storage section on host1, it works fine and I can create an empty folder via the datastore browser, but the datastore is not seen by host2 and host3. I can use the Add Storage wizard on host2 and it will see the LUN; at this point the "VMFS Label" column has the label I gave with "(head)" appended. If I try the Add Storage wizard's "Keep the existing signature" option, it fails with an error "Cannot change the host configuration." and a dialog box that says 'Call "HostStorageSystem.ResolveMultipleUnresolvedVmfsVolumes" for object "storageSystem-17" on vCenter Server "vcenter.company.local" failed.' If I try the Add Storage wizard's "Assign a new signature" option on host2, it will complete and the VMFS label will have "snap-(hexnumber)-" prepended. At this point it's also visible on host3, but not host1. I have a similar setup in a different datacenter which didn't give me all this trouble.
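
    The "(head)" suffix and the ResolveMultipleUnresolvedVmfsVolumes error are what vSphere shows when a host decides the LUN is a snapshot of an already-known volume. The console-level counterpart of the wizard, useful for seeing what each host actually thinks, is sketched below; the UUID is a placeholder taken from the 'esxcfg-volume -l' output.

        # List volumes this host considers unresolved snapshots/replicas:
        esxcfg-volume -l
        # Then either mount persistently, keeping the existing signature...
        esxcfg-volume -M 4e26f26a-9fe2e905-1a59-0019b9f1ecf6
        # ...or write a new signature instead:
        esxcfg-volume -r 4e26f26a-9fe2e905-1a59-0019b9f1ecf6

    The usual root cause is the LUN being presented with a different LUN ID to different hosts, so confirming all three hosts see it at the same ID is worth doing before resignaturing.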

  • Apache2 with SSL and mod_jk on SUSE Linux Enterprise | Apache always starts SSL disabled

    - by Shaakunthala
    I have installed Apache2 (with mod_ssl enabled) on SUSE Linux Enterprise Server 11 (x86_64), patchlevel 1, using YaST. Once installed, I tested whether everything worked so far, and SSL worked fine too; just 'apache2ctl start' was enough to get everything working. Then I installed mod_jk and applied the following configuration changes to make it work.

    /etc/sysconfig/apache2 (added the JK module):

        APACHE_MODULES="... ... ... ... ...jk"

    /etc/apache2/httpd.conf (included mod_jk.conf):

        Include /etc/apache2/mod_jk.conf

    /etc/apache2/mod_jk.conf (new file):

        JkLogFile /var/log/apache2/mod_jk.log
        JkWorkersFile /etc/apache2/mod_jk/workers.properties
        JkShmFile /etc/apache2/mod_jk/mod_jk.shm
        # Set the jk log level [debug/error/info]
        JkLogLevel info
        # Select the timestamp log format
        JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "

    The mod_jk.log and mod_jk.shm files were also created.

    /etc/apache2/mod_jk/workers.properties (new file):

        worker.list=jira
        worker.jira.type=ajp13
        worker.jira.host=127.0.0.1
        worker.jira.port=8009

    Once everything was done, I restarted Apache:

        apache2ctl restart

    Then I observed that SSL was not working; checking with telnet showed that port 443 was not open. In listen.conf, if I specify port 443 bypassing the 'IfDefine' and 'IfModule' conditions, then SSL works properly, so it is likely that the 'SSL' flag is not being passed to Apache. I did not make that a persistent change, as I thought it might not be correct practice. I checked /etc/sysconfig/apache2 to see if the flag had been altered, but it is there. Although this flag is enabled, Apache won't start with SSL support:

        APACHE_SERVER_FLAGS="SSL"

    Finally, I had to start Apache using the following command:

        apache2ctl -D SSL -k start

    And my question is: why does Apache (or apache2ctl) fail to start with SSL enabled when all I have done is install and configure mod_jk, with no other configuration changes? Have I missed anything? Thanks in advance. -- Shaakunthala
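
    One plausible explanation, offered as an assumption rather than a confirmed diagnosis: on SLES, the settings in /etc/sysconfig/apache2 are translated into generated config (under /etc/apache2/sysconfig.d/) when Apache is started through the init script, and a direct apache2ctl invocation can bypass that regeneration step. A hedged sketch of the check:

        # Restart through the init script so /etc/sysconfig/apache2 is evaluated:
        sudo rcapache2 restart          # same as /etc/init.d/apache2 restart
        # See whether the SSL define made it into the generated config:
        grep -r SSL /etc/apache2/sysconfig.d/
        # Verify the listener:
        netstat -tlnp | grep ':443'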

  • Dynamic DNS updates for Linux and Mac OS X machines with a Windows DNS server

    - by DanielGibbs
    My network has a Windows machine running Server 2008 R2 which provides DHCP and DNS. I'm not particularly familiar with Windows domains, but the domain is set to home.local and that is the DNS domain name provided with DHCP leases. Everything works fine for Windows machines: they get the lease and update the server with their hostname, and the server creates a DNS record for windowshostname.home.local.

    I am having problems obtaining the same functionality on Linux (Debian) and Mac OS X (Mountain Lion) machines. They receive DHCP just fine, but DNS entries are not being created on the server for them. On the Mac OS X machine, hostname gives an output of machostname.local, and on the Linux machine hostname --fqdn also gives an output of linuxhostname.local. I'm assuming that the server is not creating DNS entries because the domain does not match that of the server (home.local). I don't want to statically configure these machines to be part of the home.local domain; I just want them to pick it up from DHCP and be able to have entries in the DNS server. How should I go about doing this?
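
    If secure-only dynamic updates are not required on the zone, the non-Windows hosts can register themselves with nsupdate, for example from a DHCP client hook. A sketch; the server address, hostname and TTL are hypothetical, and the zone must allow nonsecure dynamic updates (or the update must be signed):

        nsupdate <<'EOF'
        server 192.168.1.10
        zone home.local
        update delete linuxhostname.home.local. A
        update add linuxhostname.home.local. 3600 A 192.168.1.57
        send
        EOF

    Alternatively, enabling "Dynamically update DNS A and PTR records for DHCP clients that do not request updates" on the Windows DHCP scope makes the server register the records on the clients' behalf, with no client-side changes at all.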

  • Cannot run "Automation Anywhere" exe files from console (session 0) on Windows Server 2003 64-bit

    - by Tyler
    I have a simple exe created from an Automation Anywhere task that displays a message box saying "hello world". I created this simple exe just for debugging the following issue. When I log in to the console (session 0) and run the Automation Anywhere-created executable, it starts to run the task: it shows up in the applications and processes list in the task manager and it shows the two "loading..." windows briefly on the screen, just like normal. But after that, nothing happens... the "hello world" message does not show up. The exe is done and is removed from the application and process list in the task manager.

    The user I am logged in as has admin rights, and the machine uses autologin to automatically log in with this profile when it starts up. If I right-click on the exe and "run as" another admin user, the exe runs properly, showing the "hello world" message. Also, if I log into the server in a new session with the original user (the one that has the problems in session 0) and then run the exe, it runs properly and shows the "hello world". It works fine in any session other than the console session. There is something about the console session that is causing the exe not to run properly... even though it does appear to start running the exe.

    I should also mention that everything was working fine until Monday at midnight, after which none of the executables could be run successfully. Nothing was changed on the server and no updates were installed. I have since installed Windows updates, but that didn't change anything. Looking for some advice on how to get these executables working in the console session again. Thanks!

  • Dropouts when accessing a share by DFS name

    - by Stephen Woolhead
    I have a strange problem, aren't they all! I have a DFS root \\domain\files\vms; it has a single target on a different server than the namespace. I can copy a test file set from the target directly via \\server\vms$\testfiles and all is well; the files copy fine. I have repeated these tests many times.

    If I try and copy the files from the DFS root I get big pauses in the network traffic: about 50 seconds every couple of minutes, all the traffic just stops for the copy. If I start another copy between the same two machines during this pause, it starts copying fine, so I know it's not an issue with the disks on the server. Every once in a while the copy will fail: no errors, the progress bar will just zip all the way to 100% and the copy dialog will close. Checking the target folder shows that the copy is incomplete. I've moved the LUN to another server and had the same problem. The servers are all 2008 R2; the clients are Vista x64, Windows 7 x64 and 2008 R2, and all have the same problem. Anyone got any ideas? Cheers, Stephen

    More information: I've been running a NetMon trace on the connection when the file copy fails, and what stands out is that when opening a file on which the copy completes, the SMB command looks like this:

        SMB2: C CREATE (0x5), Name=Training\PDC2008\BB34 Live Services Notifications, Awareness, and Communications.wmv@#422082, Context=DHnQ, Context=MxAc, Context=QFid, Context=RqLs, Mid = 245376
        SMB2: R CREATE (0x5), Context=MxAc, Context=RqLs, Context=DHnQ, Context=QFid, FID=0xFFFFFFFF00000015, Mid = 245376

    But for the last file, when the copy dialog closes, it looks like this:

        SMB2: C CREATE (0x5), Name=gt\files\Media\Training\PDC2008\BB36 FAST Building Search-Driven Portals with Microsoft Office SharePoint Server 2007 and Microsoft Silverlight.wmv@#859374, Context=DHnQ, Context=MxAc, Context=QFid, Context=RqLs, Mid = 77
        SMB2: R , Mid = 77 - NT Status: System - Error, Code = (58) STATUS_OBJECT_PATH_NOT_FOUND

    The main difference seems to be in the name: one is relative to the open file share, the other has gained the gt\files\Media prefix, which is the name of the DFS target. These failures are always preceded by a logoff and logon of the SMB target. Might have to bump this one to PSS.
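
    Given the trace, one thing worth capturing while a copy is stalled is the client's DFS referral cache, to see whether the failing CREATE coincides with a stale or re-requested referral. A hedged sketch, assuming the Vista/2008-era dfsutil syntax and using the path from the post:

        rem Dump the cached DFS referrals while the copy is stalled:
        dfsutil cache referral
        rem Flush and retry, to see whether a fresh referral clears the stall:
        dfsutil cache referral flush
        rem Check how the namespace path resolves end to end:
        dfsutil diag viewdfspath \\domain\files\vms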

  • Indefinite hang when restoring SQL 2005 database on a SQL 2008 server in EC2

    - by erinloy
    I'm trying to restore a 25 GB database backup taken from a Windows 2003/SQL 2005 machine to a Windows 2008/SQL 2008 machine in the Amazon EC2 cloud, using a .bak file and SQL Management Studio. SQL Management Studio reports the restore reaches 100% complete, and then just hangs indefinitely (24+ hours) using a lot of CPU, until I restart the SQL Server service. Upon restart, SQL again uses a lot of CPU activity for what seems to be an indefinite amount of time, but the DB never comes online. Here are some details:

    - I have created two EBS volumes, one for DATA and one for LOGS, and I have set the default directories in SQL Server to the \DATA and \LOG directory on these respective volumes. (I wonder if the issue could be related to this, but the DB is too big to restore on the root drive.)
    - I have given the SQL Server user group full access to these directories.
    - The server can create a new empty test DB in these directories just fine, and can back up and restore the test DB.
    - I have tried both restoring a .bak file and attaching directly to copies of the original .mdf/.ldf files, and the result is the same in both cases.
    - Both the .bak restore and the .mdf/.ldf attach occur from/to the EBS volumes.
    - I've also tried the above via SQL script, and "WITH RECOVERY", with no difference in the result, just less UI.
    - The backup contains two full-text indexes.
    - I have to use "WITH MOVE" for most of the files in the backup.
    - There's nothing wrong with the backup or .mdf/.ldf files, as this works just fine on a Windows 2003/SQL 2005 machine in Amazon EC2, but not Windows 2008/SQL 2008.
    - The DB is NOT marked as "Restoring" in SQL Management Studio; it is just listed as a normal database, but throws errors when I try to do anything with it (expand the object browser tree, view properties, etc.).

    Any ideas?
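
    Two details may matter here: the progress SQL Server reports covers only the data-copy phase, so the post-100% hang is the redo/recovery and full-text work, and that phase can be watched rather than guessed at. A sketch (database and logical file names are hypothetical):

        rem Scripted restore with progress reporting:
        sqlcmd -S . -Q "RESTORE DATABASE MyDb FROM DISK = 'D:\backup\MyDb.bak' WITH MOVE 'MyDb_Data' TO 'D:\DATA\MyDb.mdf', MOVE 'MyDb_Log' TO 'E:\LOG\MyDb.ldf', RECOVERY, STATS = 5"
        rem From a second connection: is SQL Server still making progress?
        sqlcmd -S . -Q "SELECT command, percent_complete, wait_type, wait_time FROM sys.dm_exec_requests WHERE command LIKE '%RESTORE%'"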

  • Mouse wheel in VirtualPC (mostly) does not work on 64-bit Windows 7 RC

    - by JonStonecash
    I have recently upgraded my laptop from WinXP Pro (32-bit) to Windows 7 RC (64-bit). I have a number of VirtualPC 2007 images that I use for testing on various platforms and looking at beta software. I have installed the 64-bit version of VirtualPC. The images all work with the exception of the mouse wheel within the virtual machine. I have tried this out with WinXP Pro, Windows 7 RC, and Windows Server 2008 images. All are 32-bit and all exhibit the same behavior: a gentle rotation of the wheel does nothing; a quick rotation of the wheel sometimes gets a scroll and sometimes not. I regard this behavior as unusable as I tend to use the mouse wheel a lot. All of this worked just fine on WinXP. I have re-installed the Virtual machine additions on all of the machines. The Windows 7 RC virtual image was created after the upgrade to Windows 7 and the 64-bit version of VirtualPC (just to isolate the possibility that I had corrupted the images during the transition). I have googled, binged, and yahoo-ed. There are scattered mentions of this problem (dating back to VPC 2004) but no solutions. I am aware that I could start up one of these images and then use remote desktop connections to get access to that image. I, in fact, do just that for some development that I am doing; the mouse works just fine. This is acceptable in this case because I spend hours at a time in the development VM. These test environments are different in that I will bring up an image for just a short time: minutes rather than hours. Adding the rdc step is much more significant in these cases. Does anyone have any idea of what to do next?

  • Using %v in Apache LogFormat definition matches ServerName instead of specific vhost requested

    - by Graeme Donaldson
    We have an application which uses a DNS wildcard, i.e. *.app.example.com. We're using Apache 2.2 on Ubuntu Hardy. The relevant parts of the Apache config are as follows.

    In /etc/apache2/httpd.conf:

        LogFormat "%v %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vlog

    In /etc/apache2/sites-enabled/app.example.com:

        ServerName app.example.com
        ServerAlias *.app.example.com
        ...
        CustomLog "|/usr/sbin/vlogger -s access.log /var/log/apache2/vlogger" vlog

    Clients access this application using their own URL, e.g. company1.app.example.com, company2.app.example.com, etc. Previously, the %v in the LogFormat directive would match the hostname of the client request, and we'd get several subdirectories under /var/log/apache2/vlogger corresponding to the various client URLs in use. Now, %v appears to be matching the ServerName value, so we only get one log under /var/log/apache2/vlogger/app.example.com. This breaks our logfile analysis because the log file has no indication of which client the log relates to. I can fix this easily by changing the LogFormat to this:

        LogFormat "%{Host}i %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vlog

    This will use the HTTP Host: header to tell vlogger which subdirectory to create the logs in and everything will be fine. The only concern I have is that this has worked in the past and I can't find any indication that this has changed recently. Is anyone else using a similar config, i.e. wildcard + vlogger and using %v? Is it working fine?
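
    Per the Apache 2.2 documentation, %v always logs the canonical ServerName of the vhost serving the request, while %V logs the server name according to the UseCanonicalName setting (so with UseCanonicalName Off it follows the client-supplied hostname). So besides %{Host}i, the documented option is:

        LogFormat "%V %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vlog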

  • Problem booting virtual machine after converting VMDK to VHD

    - by vg1890
    I used the VMware vCenter Converter Standalone Client to convert a physical drive on my old PC to a virtual drive. The conversion worked fine and I ended up with a valid VMDK file. Next, I wanted to convert the VMDK to a VHD for use with Microsoft Virtual PC, since that's what I use on my new box. I used WinImage for the conversion and that worked fine, too. I can access the files from the virtual drive through WinImage. However, when I create a new virtual machine using Virtual PC and add the existing VHD file, the machine doesn't boot. The initial boot screen flashes with the amount of RAM and then the screen goes black. If I turn off the VM and reboot in safe mode I can see the drivers being loaded until eventually it gets to crcdisk.sys and hangs indefinitely. Any ideas how to fix this? I'm not opposed to starting over from scratch if there's another method to turn my physical machine into a Virtual PC VM. Thanks!

    EDIT: I should add that the virtual drive is a system boot drive and not a secondary drive.

    EDIT: I tried booting from the install CD and doing a repair. The result was that the system could not be repaired due to a "driver error."
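
    A hang right after crcdisk.sys is the classic symptom of the guest lacking an enabled boot-start driver for the emulated disk controller: Virtual PC presents an IDE controller, while the physical source machine probably booted from SATA/SCSI. A hedged sketch of the usual offline remedy, attaching the VHD to a working VM or host and editing the guest's SYSTEM hive (E: is a hypothetical drive letter for the attached VHD; Microsoft's full guidance also merges CriticalDeviceDatabase entries):

        rem Load the guest's SYSTEM hive from the attached VHD (E: hypothetical):
        reg load HKLM\GuestSys E:\Windows\System32\config\SYSTEM
        rem Enable the IDE boot drivers (Start=0 means boot-start):
        reg add HKLM\GuestSys\ControlSet001\Services\intelide /v Start /t REG_DWORD /d 0 /f
        reg add HKLM\GuestSys\ControlSet001\Services\pciide /v Start /t REG_DWORD /d 0 /f
        reg add HKLM\GuestSys\ControlSet001\Services\atapi /v Start /t REG_DWORD /d 0 /f
        reg unload HKLM\GuestSys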

  • Crazy problem with Nginx, PHP5-FPM on Ubuntu

    - by Emmanuel
    I've been trying to move a domain from shared hosting to my new VPS. Everything was working just fine, and then all of a sudden rewrites stopped working and pictures that should work started returning 404s. I've got no idea why, but for some reason on my site, http://www.onlythebible.com/, only the home page works; all the other pages depend on rewrites, which were working perfectly fine at one stage but all of a sudden stopped working. Some of the pictures, like http://www.onlythebible.com/bgsPreview/Matthew-8.10.jpg, which doesn't use a rewrite, throw a 404.

    I'm almost certain it's nothing to do with the nginx configuration. I've got suspicions that it could be something to do with php5-fpm. The funny thing is, all of a sudden it started working again, and then an hour or so later it broke again and has now gone back to only displaying the home page, and all of the links (and some of the pictures) are just showing 404s. Does anyone have an idea of what the problem might be? I'm pretty new to the whole Linux VPS thing, but this just seems very strange.

    Edit: here's a line from the error log which might shed some light on the problem:

        2011/02/06 03:04:59 [error] 2873#0: *220 open() "/usr/local/nginx/html/bgsPreview/Matthew-8.10.jpg" failed (2: No such file or directory), client: 114.77.115.211, server: onlythebible.com, request: "GET /bgsPreview/Matthew-8.10.jpg HTTP/1.1", host: "www.onlythebible.com", referrer: "http://www.onlythebible.com/"

    I wonder why it's trying to find the file in /usr/local/nginx/html instead of the proper root, which is /var/www/ etc. Oh, and for some reason it's just started working again... for how long, I don't know. Another thing that was a bit weird: the pages on my website are pulled from a database, but when I edited the database, the pages didn't change. It's almost like they've been cached or something.
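
    The error log line shows the request being served from nginx's compiled-in default root (/usr/local/nginx/html), which usually means it matched a server or location block with no root of its own. A sketch, with the document root assumed from the post; declaring root at the server level lets every location inherit it:

        server {
            listen       80;
            server_name  onlythebible.com www.onlythebible.com;
            root         /var/www/onlythebible.com;   # assumed path

            location / {
                try_files $uri $uri/ /index.php?$args;
            }
        }

    Intermittent flips between working and broken would also be consistent with two server blocks competing for the same Host name (a stray default server, for example), so grepping the whole config for duplicate server_name entries may be worthwhile.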

  • Mount points disappear from network share directory listing

    - by Barakando
    When browsing a network share which contains volume mount points, said mount points disappear from the directory listing. The mount points are still accessible directly by path, just not present in the directory listing. The machine is a Vista SP1 32-bit machine. It has a network share that contains volume mount points to the volumes of the Vista machine (created using the SetVolumeMountPoint API). When browsing the network share from another computer (either Win7 64-bit, Win7 32-bit or Vista SP1 32-bit) using Windows Explorer, the following problem occurs: at first, both volume mount points, called C and D, appear fine. I browse into directory C and see all its contents properly. I go back to the root of the shared folder and now I only see D; C has disappeared from the directory listing. I enter D and see all its contents. Go back to the root of the shared folder and now it's empty; D has disappeared as well. If I manually go to \\<path to shared folder>\C from the address bar, then all is fine and I can browse its contents (same with D). The same issue does NOT occur when creating a similar share with volume mount points on Windows XP SP2 or SP3. Has anyone come across this problem? Any ideas how to work around it?

  • ASP.Net application can no longer write to DB after having run out of disk space

    - by remi.despres-smyth
    I'm a software developer troubleshooting a sticky problem on a client's production server, and I've got a bit of a problem. They have a virtual server running Windows Server 2008, SQL Server 2008 R1 and IIS7. It was provisioned with two partitions: one that has the OS (~15 GB), and the other has IIS's web sites (another ~15 GB). The application running on this server had been running perfectly well, up until about an hour ago, when it started throwing System.IO.IOException: "There is not enough space on disk". As soon as my client notified me, I cleared up some space on C:\, emptied the recycle bin, and restarted SQL Server and IIS. The web server came back up and the application was running, but it no longer saves information to the database. No error message is coming up; the application can get information out of the DB, but it can no longer save data back to it.

    I rebooted the server, to no effect. I spoke with a sysadmin at the hosting company, and he says SQL Server appears to have come up fine and the database is not in read-only mode. I confirmed that, as I can add records to tables from SQL Server Management Studio. I looked at the event log immediately after trying to save an edited record in the app, and no new events appear in there that I can tell. I'm assuming this is related to having run out of space, as it was all working fine prior to that, but I'm at a bit of a loss as to what exactly needs a kick in the pants to get going again. Can anyone help me out? What the heck is going on here?
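
    A database that reads fine but silently refuses writes right after a disk-full episode is the classic signature of a transaction log that could not grow (SQL Server error 9002), with the application swallowing the exception. A sketch of the quick checks:

        rem How full is each transaction log?
        sqlcmd -S . -Q "DBCC SQLPERF(LOGSPACE)"
        rem Scan the SQL error log for log-full (9002) messages around the incident:
        sqlcmd -S . -Q "EXEC xp_readerrorlog 0, 1, N'9002'"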

  • snmpd dead but subsys locked

    - by Hina NMS
    Hi folks. I have an NMS and a client machine, and I want the client to send traps to the NMS. I have been configuring the snmpd.conf file, testing whether I receive an alert when I disable a process. For the changes made in the conf file to take effect, I restarted the snmpd daemon each time, and the testing was going fine. All of a sudden, when I restarted snmpd I received the error message "snmpd dead but subsys locked". I googled for an answer as to what it actually meant and found out that when a service is started, a lock file is created in /var/lock/subsys. Sometimes, if the service is not stopped properly or whatever, the lock file remains. Though I had started/stopped the snmpd service properly, it didn't go away, so I removed the file manually (via rm). When I checked the status, the error "snmpd dead but subsys locked" was gone, and on my NMS I received the snmpd coldStart alert. I started the snmpd service and everything went fine, BUT after 5 minutes I again receive the same error message, and this keeps on happening. What do I need to do now?
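
    Since the error returns about five minutes after each clean start, the lock file is almost certainly a symptom of snmpd dying rather than the cause. A sketch: clear the stale state once, then watch the logs to catch whatever is killing the daemon (log locations vary by distribution):

        service snmpd stop
        rm -f /var/lock/subsys/snmpd /var/run/snmpd.pid
        service snmpd start
        tail -f /var/log/messages    # plus /var/log/snmpd.log, if configured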

  • Windows Audio Issue

    - by Nikki
    This one is driving me nuts; hoping someone can shed some light. I'm running Windows 7 using onboard audio. It's been fine for over 2 years, but lately there's a problem every time I play audio: I hear a small, soft burst of static and the volume turns itself down from 50% to 23%. Once at 23%, it plays fine. No related events are logged in Event Viewer, and there are no reported problems with the device. Different headphones, same problem. I played around with audio settings for hours but the problem persists.

    EDIT: OK, more info. Motherboard: ECS G31T-M LGA775. System info displays this:

        Name: High Definition Audio Device
        Manufacturer: Microsoft
        Status: OK
        PNP Device ID: HDAUDIO\FUNC_01&VEN_1106&DEV_E721&SUBSYS_10192683&REV_1001\4&3D4E739&0&0001
        Driver: c:\windows\system32\drivers\hdaudio.sys (6.1.7600.16385, 297.00 KB (304,128 bytes), 14/07/2009 9:51 AM)

    I'll keep adding info as I find it. The question I want resolved is: is it faulty hardware? If so, I can buy a sound card. I can't imagine software is responsible, since I haven't installed anything new for weeks, and virus scans are clear as well. The static burst is irritating to say the least. I've tried 2 different headphones and separate speakers, same problem. I know it's not an easy problem, but I was hoping someone had encountered the same thing.

  • Mac OS X - User home directories shared via NFS

    - by Hugh
    I've run into some problems with how I've got user home directories set up on our system here. Our server is an Xserve, using Open Directory to manage the user accounts. The majority of our workstations are OS X, but there are a few running Linux (CentOS 5.3), and, as time goes on, we expect the proportion of Linux workstations to increase. (At some point we expect to move the server side over to Linux too, but for now we're running with what we've already got.)

    To ensure that the Linux and OS X workstations both see users' home directories in the same place, I shared the home directories using NFS. On the server end, the home directories are stored in /Volumes/data/company_users, which is mounted on the workstations to /mount/company_users. This works fine on the Linux workstations, but there is some weirdness under OS X. For the user who is logged in through the GUI, it all works just fine. However, if a user tries to SSH into a machine that they are not the primary user on, they often have no access to their own home directory. It looks as though OS X is trying to do something else to the user home directories mount point when you log in through the GUI. For example, on this machine (nv001), I (hugh) am logged into the GUI:

        Last login: Mon Mar 8 18:17:52 on ttys011
        [nv001:~] hugh% ls -al /mount/company_users
        total 40
        drwxrwxrwx   26 hugh  wheel   840 27 Jan 19:09 .
        drwxr-xr-x    6 admin admin   204 19 Dec 18:36 ..
        drwx------+ 128 hugh  staff  4308 27 Feb 23:36 hugh
        drwx------+  26 matt  staff   840  4 Dec 14:14 matt

    So Matt's home directory is accessible to him. However, if I try to switch to him:

        [nv001:~] hugh% su - matt
        Password:
        su: no directory

    Or:

        [nv001:~] hugh% su matt
        Password:
        tcsh: Permission denied
        tcsh: Trying to start from "/mount/company_users/matt"
        tcsh: Trying to start from "/"
        [nv001:/] matt%

    Does anyone have any idea why it might be doing this? It's causing me all sorts of problems at the moment. The only machine on which I can successfully switch users at the moment is the server that the user directories are stored on, where /mount/company_users is actually just a symlink to /Volumes/data/company_users.
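
    One explanation consistent with the transcript, offered as an assumption to verify rather than a diagnosis: su changes into the target home while still running as root, and if the NFS export squashes root to nobody, root cannot traverse a drwx------ home even though its owner can (which is why GUI logins, running as the owner's own UID, work). Two quick checks, using paths from the post:

        # Does root (squashed over NFS) get denied where the owner succeeds?
        sudo ls /mount/company_users/matt    # "Permission denied" here supports root squash
        # Confirm the directory-service record points where you think it does:
        dscl /Search -read /Users/matt NFSHomeDirectory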

  • OS X 10.4: old, deleted user mail account problems

    - by Chris
    Hello. A while back I tried to add a user 'david' as a mail user on my OS X 10.4 server using dscl (I only had terminal access at the time, no ability to use Workgroup Manager). I could never get this account to work properly, so I deleted it. 'dscl . -list /Users' no longer shows 'david' as an entry. I have since gained access via Workgroup Manager, and I am trying to re-create the 'david' account. Workgroup Manager creates the account fine, along with an email account, which I can then log into via IMAP ('login david password' returns 'OK user logged in'). However, this mail account does not have an inbox, and I cannot create one through a mail client, IMAP or cyradm (they all say "system I/O error"). When I re-delete this user, I can't find any record of him in any of the mail spool locations. Creating a user with any other name works fine (inbox, mail access, everything). Any ideas on how I can get this user up and running again? -Chris

    P.S. To create this user in the first place, I used 'dscl . create', then 'dscl . append /Users/david' with "some XML I found on the 'net" to add email privileges, if this helps...
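
    If the failed first attempt left an orphaned mailbox in the Cyrus store, that would explain the "system I/O error" on INBOX creation. A hedged sketch via cyradm; 'cyrusadmin' is a stand-in for whatever account /etc/imapd.conf lists under "admins":

        sudo cyradm -u cyrusadmin localhost
        localhost> lm user/david    # list any leftover mailbox
        localhost> dm user/david    # delete the orphan, if present
        localhost> cm user/david    # then recreate it cleanly

    If dm is refused, the admin may first need rights on the mailbox: sam user/david cyrusadmin all.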

  • Adobe Acrobat Pro 9.0 on Windows 7 print to network share gives error

    - by Archit Baweja
    I've recently upgraded a client's workstations to brand new computers with Windows 7 Professional; the server is still Windows Server 2003. The server has 2-3 file shares that get mapped to users' workstations as drives. The client has also upgraded from Acrobat 6.0 to 9.0 Pro. Since the upgrade, when the client tries to print to the Adobe PDF printer (i.e. convert something to PDF via the printer interface), the job errors out in the queue if the file is being saved on the network drive. If I instead provide a local path, the file "prints" fine. Additionally, if I change the Adobe PDF printer's settings to "don't spool, print directly to printer", it prints to the network share fine, but then it resets that setting every time. Things I've checked for:

    - Permissions on the network share. The user and the computer have full access; we even gave the "Everyone" object full access.
    - Reinstalling Adobe Acrobat Pro 9.0.
    - Running updates to upgrade to 9.3.4.

    Has anyone else bumped into such a problem? The support fellows from Adobe are just taking me around in circles.

  • IPCop server slows down download speed

    - by noocyte
    I have an IPCop server running at home which had been doing just fine for ~5 months, but last week I suddenly started getting time-outs and slow downloads from the 'net. I first thought this was my ISP acting up, then I thought it might be one of my 3 switches or some of my cabling. In due order I've tested all of the above and found them all to be working as they should. The only factor remaining is my IPCop server. Facts:

    - I've got a 15/15 Mbit line (fiber); with the IPCop box as router (ISP router set in bridge mode) I get ~15 Mbit upload but only 0.5 Mbit download.
    - If I connect without the IPCop box (using the ISP router) I get ~12 Mbit upload and ~15 Mbit download.
    - The load on the IPCop box appears to be light, and it used to handle this traffic just fine 2 weeks ago.
    - Memory usage was ~60%; I restarted it and tested again, and memory fell to ~50% (after 5 months of uptime).

    I'm thinking that one of my NICs is busted, but I'm sort of perplexed that this could be the outcome: slow download but full-speed upload. Anybody ever seen that happen before? Could it just be one of the NICs needing to be replaced?
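
    Fast one way and crippled the other is also the textbook signature of an Ethernet duplex mismatch, which is cheap to rule out before buying hardware. A sketch, assuming the WAN-facing (red) interface is eth1 and that the usual tools are present on the box:

        # Check what the WAN-facing NIC actually negotiated (interface name assumed):
        mii-tool eth1                           # e.g. "100baseTx-HD" would be bad news
        ethtool eth1 | grep -E 'Speed|Duplex'   # if ethtool is installed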

  • Virtual machine loses network connectivity on Hyper-V cluster

    - by Chris W
    We're running a number of VMs on a 6-node failover cluster of blades using Hyper-V. We have an intermittent issue (every few days at different times, not a fixed frequency) of VMs losing network connectivity. Console access to the VM suggests all is fine, and the underlying blade has normal connectivity. To resolve the problem we either have to restart the VM or, more usually, we do a live migration to another blade, which brings connectivity back, and we then migrate it back to the original blade. I've had 3 instances of this happen with a specific VM running on a particular blade, but it has happened once with a different VM running on a different blade. All VMs and blades have the same basic setup and are running Windows 2008 R2. Any ideas where I should be looking to diagnose the possible causes of this problem, as the event logs provide no help?

    Edit: I've checked that each blade is running the latest NIC drivers and all seem to be fine. Something that is confusing me: a failover or restart of the VM resolves the issue. While I need to work out the underlying issue that is causing the NICs to hang, I'm also concerned that the VM didn't fail over to another node, which would have solved the outage for me. Is there a way to configure the cluster so that it can tell that the VM guest has lost connectivity and fail it over? As things stand, the cluster assumes that the VM is running happily, as I presume Hyper-V says everything is great even though there is a problem.
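
    No definitive fix follows from the description, but one avenue worth ruling out on 2008 R2 hosts, offered as an assumption rather than a diagnosis, is the TCP offload stack on the physical NICs, which some driver combinations of that era handled badly. A sketch to run on each blade:

        rem Inspect and, as a test, disable TCP Chimney offload:
        netsh int tcp show global
        netsh int tcp set global chimney=disabled

    On the failover question: the 2008 R2 cluster heartbeat only proves the guest OS is alive (via Integration Services), so a VM with a hung NIC still looks healthy to the cluster; guest network health is not something that release probes out of the box.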

  • Can't connect to LAN when connected to D-Link DIR-615

    - by Senseful
    I have a D-Link DIR-615 Wireless N 300 router. I didn't use the CD it comes with to set up the network; instead I configured it manually through the router's settings, which are accessed via a web browser. The main changes I made are:

    - Secured the router so that a password is required before clients can use the wireless internet.
    - Broadcasting 802.11n only (not b or g).

    I can connect to the router just fine and I'm able to access the internet. The only problem is that I don't see any of the other computers on my LAN. When I try connecting to another Wi-Fi router that I have (which is connected to the same network), I can see all of the computers on my LAN just fine. Therefore, I'm guessing that the reason I can't connect to the LAN is not a problem with my computer but a problem with the router instead. I'm on a MacBook Air running Mac OS X 10.6.6. I tried contacting D-Link technical support, but they only try to help if you have problems connecting to the internet; they aren't really concerned with problems that have to do with accessing PCs on the same network.

  • Scheduled tasks fail to start unless I'm logged in to the server

    - by Chuck
    Tasks need to open a CMD window and pass net use commands, then do a DIR command, piping the output to a file on the server. Logged in as either me (sysadmin) or one of the service accounts, the task will only run if I'm physically logged into the server. The "Log on as a batch job" right is set in the security properties for both users (me and the service account), security is granted on all directories, etc. It almost acts as though the scheduled task, since it is not physically connected to a display, can't create a CMD window and pass the commands to it. I'm guessing.

    Does anyone know of a document that explains how the server handles creating a window when a job is started via scheduled task with no attached user associated with it? If I log onto the box and run the scheduled tasks, they run fine, but otherwise they produce no errors or event log entries; they just show that they ran successfully and set the next run time. I have tried with the "run only if logged on" checkbox both on and off, and it makes no difference. Other tasks work fine, except that they act on local drives with no display writing or updating taking place, so I'm guessing the system either can't instantiate a window if no display is connected to a logged-on user, or it can't establish an endpoint if it is trying to create a virtual screen. You'd think it would just create a memory map and then map it to a device for display, but that doesn't seem to be the case, and I can find no documentation on how the system handles a scheduled task or how to invoke a fake or virtual screen that it could write to so it appears that a user was connected. Thanks. This is driving me nuts, and I've tried everything I can think of, as well as our network boys' ideas, and nothing seems to work.
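
    A task started with no interactive session gets no visible desktop and no pre-mapped drive letters, so a batch file that depends on either will behave exactly as described. A sketch of a self-contained batch file (server, share and account names are hypothetical):

        rem Authenticate and reference UNC paths directly; no window or drive
        rem letter is needed when output is redirected to a file.
        net use \\fileserver\data /user:DOMAIN\svc-task P@ssw0rd
        dir /s \\fileserver\data\reports > \\fileserver\data\logs\listing.txt
        net use \\fileserver\data /delete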

  • How does it hurt to use Linux (Ubuntu) as a guest OS for all my tasks?

    - by sauparna
    I have a machine running Windows, where the disk has two partitions, C (50 GB) and D (250 GB). I do research in information retrieval and need to work with a large corpus (more than 50 GB), and in Linux. So if I want to install Linux on the existing system, keeping the Windows installation intact, will it be fine to run it in a virtual machine (say QEMU, VMware, etc.)? An alternative is using Wubi; in that case the Linux installation has to be on drive C. Then, if I keep a small Linux installation (say 5 GB) on C, and my corpus on D (mounted in Linux), how will it affect the performance of my programs, which would be accessing the mounted Windows drive D? Is it feasible to use Linux this way? Which of the above is better, if either is a way out at all?

    Note: since my post in July 2010, I have been using and have tried several ways of maintaining a disk image that I can mount in Linux. I had a 100 GB qcow2 disk and a 100 GB raw disk, both formatted with an ext3 file system. I was mounting and connecting to the qcow2 disk using qemu-nbd. The problem was that every now and then the connection to the disk would get lost and the running programs would throw disk I/O errors. The raw disk would mount and work fine as a loop-mounted device, but when writing data to it, the mount.ntfs program would hog the CPU and the process would take an enormous amount of time. I was in fact running make on a piece of software located on this raw disk, and after a point make was waiting while mount.ntfs showed 100% CPU usage.
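
    For what it's worth, the two symptoms point at different layers: the qcow2 I/O errors come from the qemu-nbd connection dropping, while mount.ntfs is ntfs-3g, the userspace NTFS driver, so any write that ultimately lands on the NTFS partition pays FUSE overhead. A sketch of the simpler loop-mount route for the raw ext3 image (paths assumed); note that if the image file itself lives on an NTFS partition, ntfs-3g remains underneath every write, so the image is best kept on a native Linux filesystem:

        # Loop-mount the raw ext3 image; there is no nbd daemon to drop.
        sudo mkdir -p /mnt/corpus
        sudo mount -o loop /data/corpus.img /mnt/corpus
        # ... work ...
        sudo umount /mnt/corpus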
