Search Results

Search found 11954 results on 479 pages for 'gets'.

Page 302 of 479

  • Generic/Text Printer on Windows 7 not prompting for file name

    - by FantaFan
    Guys & gals, Hope someone can shed some light on this. I am downloading reports from an AIX-based system by directing them to a TT printer which the terminal emulator (MultiView 2000) intercepts and directs to the default printer on the local system. This local printer is configured as a vanilla Generic/Text printer attached to a FILE port. When I print from AIX, the output is spooled down and the local printer prompts for a file name into which to save the file...but not under Windows 7. This has worked fine for many years, on both Win2K and WinXP. However, on Windows 7 the output gets spooled as a file into spool\PRINTERS (and looks as expected) but the print job then hangs with a status of "Error - Printing" and never prompts for a file name. I have to cancel the job. The Generic/Text printer works as expected with other applications. I have tried setting the printer to print directly rather than spooling but this only serves to hang the terminal session too. I've also tried to run the emulator in Windows 2000 Compatibility Mode and as Administrator in case it was something like that but with no luck. As you might expect, it does work fine in XP Mode (as long as I print to a printer defined therein and not the host's printer) but operationally this isn't going to be an option. Obviously this emulation software is a decade old (at least) and I could just cross/upgrade all the users (at a cost) but, before I do so, has anyone seen this sort of behaviour before and found some sort of fix? Remote OS: AIX 5 Client OS: Windows 7 Pro (32-bit) Printer: Generic/Text on a FILE port TE Software: MultiView 2000 (32-bit) Thanks in advance.

    Read the article

  • ESXi 4.1 host not recognising existing VMFS datastore

    - by Graeme Donaldson
    Existing setup: host1 and host2, ESX 4.0, 2 HBAs each. lun1 and lun2, 2 LUNs belonging to the same RAID set (my terminology might be sketchy here). This has been working just fine all along. I added host3, ESXi 4.1, 2 HBAs. If I view Configuration / Storage Adapters, I can see that both HBAs see both LUNs, but if I view Configuration / Storage, I only see 1 datastore. host1/2 can see both LUNs and I have VMs running on both too. I have rescanned, refreshed and even rebooted, but host3 refuses to acknowledge 1 of the datastores. Does anyone know what's going on? Update: I re-installed the host with ESX (not i) 4.0, the same version as the existing hosts, and it's still not recognising the VMFS. I think I'm going to Storage VMotion everything off that datastore and then format it. Update 2: I've created the LUN from scratch and the problem gets even weirder. I've presented the LUN to all 3 hosts, and I can see the LUN in the vSphere client's Configuration / Storage Adapters section on all 3 hosts. If I create a datastore on the LUN via the Configuration / Storage section on host1, it works fine and I can create an empty folder via the datastore browser, but the datastore is not seen by host2 and host3. I can use the Add Storage wizard on host2 and it will see the LUN. At this point the "VMFS Label" column has the label I gave with "(head)" appended. If I try the Add Storage wizard's "Keep the existing signature" option, it fails with the error "Cannot change the host configuration." and a dialog box that says 'Call "HostStorageSystem.ResolveMultipleUnresolvedVmfsVolumes" for object "storageSystem-17" on vCenter Server "vcenter.company.local" failed.' If I try the Add Storage wizard's "Assign a new signature" option on host2, it will complete and the VMFS label will have "snap-(hexnumber)-" prepended. At this point it's also visible on host3, but not host1. I have a similar setup in a different datacenter which didn't give me all this trouble.
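
    The symptoms - the "(head)" suffix in the Add Storage wizard and the "snap-(hexnumber)-" prefix after resignaturing - look like the hosts are treating the LUN as a snapshot/unresolved VMFS volume. As a hedged sketch (the datastore label below is a placeholder), ESX/ESXi 4.x exposes this through esxcfg-volume on the console or in Tech Support Mode, which can show whether each host really considers the volume unresolved and can mount or resignature it without the wizard:

        # List VMFS volumes the host considers snapshots/replicas (unresolved)
        esxcfg-volume -l

        # Persistently mount one while keeping its existing signature (label is an example)
        esxcfg-volume -M datastore1

        # ...or resignature it; the UUID changes, so any registered VMs must be re-registered
        esxcfg-volume -r datastore1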

    Read the article

  • Configure Postfix to send/relay emails to Gmail (smtp.gmail.com) via port 587

    - by tom smith
    Hi. Using CentOS 5.4 with Postfix. I can send a test message from the command line (mail [email protected], subject: blah test, ., Cc:) and the msg gets sent to Gmail, but it resides in the spam folder, which is to be expected. My goal is to be able to generate email msgs and have them appear in the regular Inbox! As I understand Postfix/Gmail, it's possible to configure Postfix to send/relay mail via an authenticated/valid user using port 587, which would no longer have the mail be seen as spam. I've tried a number of parameters based on different sites/articles from the 'net, with no luck. Some of the articles actually seem to conflict with other articles! I've also looked over the Stack Overflow postings on this, but I'm still missing something... Also talked to a few people on IRC (CentOS/Postfix) and still have questions, so I'm turning to Server Fault once again! If there's someone who's managed to accomplish this, would you mind posting your main.cf, sasl_passwd, and any other conf files that you use to get this working? If I can review your config files, I can hopefully see where I've screwed up and figure out how to correct the issue. Thanks for reading this, and any help/pointers you provide! PS: if there is a Stack Overflow posting that speaks to this that I may have missed, feel free to point it out to me! -tom
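
    In case it helps as a starting point, here is a minimal, hedged sketch of the relay-related settings (the account, password and CA-bundle path are placeholders, and on CentOS the cyrus-sasl-plain package must be installed for the PLAIN/LOGIN mechanisms to be available):

        # /etc/postfix/main.cf  (relay-related settings only)
        relayhost = [smtp.gmail.com]:587
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        smtp_tls_security_level = encrypt
        smtp_tls_CAfile = /etc/pki/tls/certs/ca-bundle.crt

        # /etc/postfix/sasl_passwd  (then run: postmap /etc/postfix/sasl_passwd && service postfix restart)
        [smtp.gmail.com]:587    [email protected]:your-password

    Note that even when relayed this way, Gmail rewrites the envelope sender to the authenticated account, so the From address should match that account or one of its configured aliases.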

    Read the article

  • BIOS interrupts, privilege levels and paging

    - by Jack
    Hi, I was learning about Intel 8086-80486 CPUs and their interactions with hardware, but I still don't understand it quite well. Please help me fill in the blank spots. First, I know that the CPU communicates with hardware using BIOS interrupts. But what really happens in the PC when I call some INT instruction? I know that according to the interrupt table some instructions begin to execute, but how, just by executing some instructions, can the BIOS recognize what I want to do? Because as far as I know, the CPU has no extra communication channel with the BIOS; it can only address memory and receive data. So how can I instruct the BIOS to do something when I can only address RAM? The next thing I don't understand is privilege levels. I know about the ring model and access rights, but how does the CPU know at which privilege level an instruction is executing? I think that these privileges apply only when an instruction is trying to address memory, but how does an application get its privilege level? I mean, I know it's level 3, but how is it set? And the last thing: I know that paging is an addressing scheme used to support application-transparent virtual memory, or swapping, but I could not find any information about how paging is tied to protected mode. Is paging a separate mode independent of protected mode, or is it somehow implemented within protected mode? And if it is implemented in protected mode, isn't it too slow to first address the application space, then the offset, and then the page directory, page table and offset once again? Thank you for every response.
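
    On the first question, a hedged sketch of what INT actually does in real mode: the CPU multiplies the vector number by 4, reads a far pointer from the Interrupt Vector Table at the bottom of memory, and transfers control there; the BIOS routine then decides what to do purely from the register values you loaded beforehand. For example (16-bit real-mode NASM):

        ; Print 'A' using the BIOS teletype service
        mov ah, 0x0E        ; AH selects the function: 0x0E = teletype output
        mov al, 'A'         ; AL = character to print
        int 0x10            ; CPU reads the far pointer at IVT entry 0x10 (physical address 0x0040)
                            ; and far-calls the BIOS video routine stored in ROM/shadow RAM

    (Briefly, and with the usual hedging, on the other two questions: in protected mode the current privilege level is carried in the low two bits of the CS selector rather than per instruction, and paging is an extra translation layer enabled by the PG bit in CR0 on top of protected-mode segmentation, with the TLB caching translations so the directory/table walk is not repeated on every memory access.)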

    Read the article

  • Windows Server 2008 / SQL 2008 Licensing for Authenticated Web Application

    - by MikeM
    Hello, I'm trying to crunch some numbers to see what the software costs involved are for hosting an application we are developing. Users will not be anonymous - they will need to log in. SQL Server 2008: SQL Server licensing is easy - it will be licensed per-processor. No real fuss there. The cost of CALs would be much higher for the number of users as compared to the processor licenses. Windows Server 2008: This is where it gets trickier. We need to license the OS for both the web servers (there will be a couple) plus the database servers (also a couple). The web servers could run on the Web Edition without a need for CALs, but if you continue reading, you will see that may not matter much because I will likely have user CALs for each user anyway. We can't use the "External Connector" for any of the Windows licenses, because that doesn't cover customers who are paying to access a hosted application. We can't use the Web Edition for the SQL Servers because that license only allows a database running on Web Edition to host data for the local web application (i.e. other web servers can't connect to it). So that leaves us with the "full" editions of Windows Server for the database server OS. I find this a little ridiculous, and I feel as though I must be missing something, but it looks to me like I will actually need to buy a CAL for every user who signs up to use our service. I feel like I'm missing something because that means that for every user, I have to shell out $40 for a CAL. That could be one or two years' worth of revenue from each user for an inexpensive service! Is there any way to serve a web application to authenticated users without paying for individual Windows Server CALs, if the web servers and SQL servers are separate boxes?

    Read the article

  • Existing open source software for wireless mesh networking?

    - by user70352
    Hello! My goal: build a wireless mesh network with some ALIX 2D2 boards (500 MHz AMD Geode LX800 x86 CPU, 256MB RAM, Atheros wireless card). Aside from working like a normal wireless mesh network, users should be able to read/write data from/to the ALIX boxes, and the ALIX boxes should be able to process data. Questions: Should I try to flash dd-wrt x86, Voyage Linux (linux.voyage.hk) or something else? What (open source) software should I look into before I start? Should I use a 'server' for data storage and processing instead of the ALIX boxes? Is it even possible to use the ALIX boxes for routing AND data storage and processing? Final notes: Data can be anything; for example, I want to set up a wireless mesh using the OLSR protocol (olsrd) so my whole town gets wifi and can access songs on the network. It's not for that, but that's the idea. I'm not afraid of programming, compiling or working with *nix. This is mostly 'for fun' rather than for practicality. Thanks in advance for any feedback.
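
    If olsrd ends up being the routing piece, a minimal, hedged olsrd.conf sketch for one ALIX node might look like the following (the interface name and timer values are assumptions; the shipped defaults are usually fine to start with):

        # /etc/olsrd/olsrd.conf - minimal sketch
        DebugLevel      1
        IpVersion       4

        Interface "wlan0"
        {
            HelloInterval       2.0
            HelloValidityTime   20.0
            TcInterval          5.0
            TcValidityTime      30.0
        }

    With 256 MB RAM and CompactFlash storage the boxes can certainly route and serve some files at the same time, but anything storage- or CPU-heavy is probably better pushed to a separate machine reachable over the mesh.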

    Read the article

  • Zscaler: certs, cookies, and port 80 traffic

    - by 54's_lol
    So I work at HQ for a large company that shall remain nameless. We use Zscaler, and I had to roll out a 2048-bit cert per Zscaler's request. People around me at work don't understand the technology and think that the certs are what is allowing internet connectivity. From my understanding (and please chime in), it is the cookie located at C:\Users\$$$$$$4$$\AppData\Roaming\Macromedia\Flash Player#SharedObjects\Q3JQJQJV\gateway.zscaler.net\zscaler.swf that gets created when you provide your creds the first time you use the browser. The certs are simply a way of inspecting the SSL traffic, as Zscaler had no way of doing this before without them. They are essentially using the classic MITM attack to parse your SSL traffic. Gmail is smart enough to recognize this, as you get a warning. My question is this: is there a product or service that I can use to verify that my web browser, when at home (i.e. off the company network), isn't still getting routed to Zscaler's cloud? If I do a tracert, that will work fine; it's the port 80 and 443 web traffic that Zscaler and my company are after. I would like to verify that when I'm off their premises, my web traffic is using only my ISP and the path to whatever content I'm searching for. Do the certs I'm pushing and browser authentication do something behind the curtain that forces web traffic to get routed to Zscaler? I searched quite a bit and would very much like to know if I'm ever off company scrutiny. I do know Zscaler offers the service to force the scenario I'm asking about. Can I prove how my web traffic is getting routed? Thanks for any insight. I've been a fan for a long time and you guys' kung fu is very strong :-)
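
    A couple of hedged checks from a home connection (the Zscaler self-check URL is an assumption based on their "are you protected" page, so verify the exact hostname). The idea is simply to compare what the far end sees with the address your ISP assigned you, and to see whether the TCP path on port 80 terminates at a Zscaler node rather than following your normal ISP route:

        # What public IP do web servers see? (should be your ISP address if Zscaler is not in the path)
        curl -s http://checkip.amazonaws.com/

        # Zscaler's own self-check page reports whether the request traversed their cloud
        curl -s http://ip.zscaler.com/

        # Trace the actual TCP path on port 80 rather than ICMP (Unix tool; Windows alternatives exist)
        tcptraceroute www.example.com 80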

    Read the article

  • Windows 7 Backup not backing up custom library?

    - by James McMahon
    I have created a custom Library under Windows 7 64-bit Professional to handle my source code. When I tried Windows Backup and Restore for the first time, I got the following error: "Backup encountered a problem while backing up file C:\Windows\System32\config\systemprofile\Source. Error: (The system cannot find the file specified. (0x80070002))" I've found a thread on the error on the Microsoft Answers site, but it now appears to be a 404 (there is a version in Google's cache) and the thread starter never gets a working answer to his issue. The official Microsoft answer on this is: This problem is due to one or more profiles under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList with a missing ProfileImagePath. To check whether you have missing profiles: open regedit and navigate to the above registry key (HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList). Expand the list and click on each of the profiles listed. The first 3 profiles should have ProfileImagePath values of %SystemRoot%\System32\Config\SystemProfile, %SystemRoot%\ServiceProfiles\LocalService, and %SystemRoot%\ServiceProfiles\NetworkService respectively. Starting from the 4th profile, the ProfileImagePath should contain the path to a user profile on your machine, such as C:\Users\Christine. If one or more of the profiles has no ProfileImagePath, then you have missing profiles. To work around this, delete the profile in question (Caution: the registry contains critical settings that are necessary for your system to function properly. Take extra caution while making changes). First, export the ProfileList key for safekeeping (right-click on the key, choose “Export”, and save it to the desktop). Then right-click on the profile in question and choose delete. Try backup again. This does not work for me. Does anyone have any idea what is going on here?
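
    To find the broken entry quickly before touching anything, a hedged pair of commands from an elevated prompt: the first lists every ProfileList subkey, the second lists only the ones that actually carry a ProfileImagePath, so any SID that appears in the first listing but not the second is the profile the backup is tripping over:

        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList"
        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList" /s /v ProfileImagePath

        rem Back up the key before deleting anything:
        reg export "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList" "%USERPROFILE%\Desktop\ProfileList.reg"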

    Read the article

  • Mouse wheel in VirtualPC (mostly) does not work on 64-bit Windows 7 RC

    - by JonStonecash
    I have recently upgraded my laptop from WinXP Pro (32-bit) to Windows 7 RC (64-bit). I have a number of VirtualPC 2007 images that I use for testing on various platforms and looking at beta software. I have installed the 64-bit version of VirtualPC. The images all work with the exception of the mouse wheel within the virtual machine. I have tried this out with WinXP Pro, Windows 7 RC, and Windows Server 2008 images. All are 32-bit and all exhibit the same behavior: a gentle rotation of the wheel does nothing; a quick rotation of the wheel sometimes gets a scroll and sometimes not. I regard this behavior as unusable as I tend to use the mouse wheel a lot. All of this worked just fine on WinXP. I have re-installed the Virtual machine additions on all of the machines. The Windows 7 RC virtual image was created after the upgrade to Windows 7 and the 64-bit version of VirtualPC (just to isolate the possibility that I had corrupted the images during the transition). I have googled, binged, and yahoo-ed. There are scattered mentions of this problem (dating back to VPC 2004) but no solutions. I am aware that I could start up one of these images and then use remote desktop connections to get access to that image. I, in fact, do just that for some development that I am doing; the mouse works just fine. This is acceptable in this case because I spend hours at a time in the development VM. These test environments are different in that I will bring up an image for just a short time: minutes rather than hours. Adding the rdc step is much more significant in these cases. Does anyone have any idea of what to do next?

    Read the article

  • Trying to reconcile a global IP address and vhosts

    - by puk
    I have been using my local machine as a web server for a while, and I have several websites set up locally on my machine, all with similar vhost files like this one (/etc/apache2/sites-available/john.smith.com):

        <VirtualHost *:80>
            RewriteEngine on
            RewriteOptions Inherit
            ServerAdmin [email protected]
            ServerName john.smith.com
            ServerAlias www.john.smith.com
            DocumentRoot /home/john/smith
            # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
            LogLevel warn
            LogFormat "%v %l %u %t \"%r\" %>s %b" comonvhost
            CustomLog /var/log/apache2/access.log comonvhost
        </VirtualHost>

    I then set up the /etc/hosts file like so for every vhost:

        192.168.1.100 www.john.smith.com john.smith.com
        192.168.1.100 www.jane.smith.com jane.smith.com
        192.168.1.100 www.joe.smith.com joe.smith.com
        192.168.1.100 www.jimbob.smith.com jimbob.smith.com

    Now I am hosting my friend's website until he gets a permanent domain. I have port forwarding set up to redirect port 80 to my machine, but I don't understand how the global IP fits into all of this. Do I, for example, use the following website addresses (assume the global IP is 12.34.56.789): 12.34.56.789.john.smith, 12.34.56.789.jane.smith, 12.34.56.789.joe.smith, 12.34.56.789.jimbob.smith?
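
    Name-based virtual hosts are selected by the Host: header the browser sends, not by anything appended to the IP, so addresses like 12.34.56.789.john.smith won't resolve or match anything; the bare public IP alone will only ever hit the first/default vhost. Until a real domain points at the public address, an outside client can fake the name resolution itself or send the header explicitly. A hedged sketch (12.34.56.789 stands in for the real global IP, as above):

        # On the friend's machine (or any external client): map the name to the public IP
        #   /etc/hosts on Linux/Mac, C:\Windows\System32\drivers\etc\hosts on Windows
        12.34.56.789   www.john.smith.com john.smith.com

        # Or test a specific vhost from outside without touching hosts files:
        curl -H "Host: john.smith.com" http://12.34.56.789/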

    Read the article

  • CopSSH SFTP -- limit users access to their home directory only

    - by bradvido
    Let me preface this by saying I've read and followed the instructions at this FAQ many times: http://www.itefix.no/i2/node/37 It does not do what the title claims... It allows every user access to every other user's home directory, as well as access to all subfolders below the CopSSH installation path. I'm only using this for SFTP access and I need my users to be sandboxed into only their home directory. If you know a fool-proof way to lock users down so they can see only their home directory and its subfolders, stop reading now and reply with the solution. The details: here is exactly what I tried as I followed the FAQ. My CopSSH installation directory is C:\Program Files\CopSSH.

        net localgroup sftp_users /ADD                          (create a user group to hold all my SFTP users)
        cacls c:\ /c /e /t /d sftp_users                        (for that group, deny access at the top level and all levels below)
        cacls "C:\Program Files\CopSSH" /c /e /t /r sftp_users  (allow my user group access to the CopSSH installation directory and its subdirectories)

    For each SFTP user, I create a new Windows user account, then I run:

        net localgroup sftp_users sftp_user_1 /add              (add my user to the group I've created)

    and open the Activate User wizard for CopSSH, choosing the user and "/bin/sftponly", with "Remove copssh home directory if it exists", "Create keys for public key authentication" and "Create link to user's real home directory" all left checked. This works; however, every user has access to every other user's home directory as well as the CopSSH root directory... So I tried denying access for all users to the user home directory:

        cacls "C:\Program Files\CopSSH\home" /c /e /t /d sftp_users

    Then I tried adding permissions on a user-by-user basis for each user's home\username folder. However, these permissions were not allowed by Windows, because the deny rule I created above at the home directory was being inherited and overriding my allow rule. The next step for me would be to remove the deny rule at the home directory and, for each user folder, add a deny rule for every user it doesn't belong to plus an allow rule for the one user it does belong to. However, as my user list gets long, this will become very cumbersome. Thanks for the help!
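
    One hedged alternative before fighting the ACL inheritance any further: if the OpenSSH bundled with your CopSSH build is 4.9 or newer, sshd itself can chroot SFTP-only users into their own home directories, which sandboxes them regardless of what the NTFS permissions above the home folder look like. A sketch of the sshd_config part (the group name matches the one created above; CopSSH keeps the file under its installation directory, and how %h maps onto the Cygwin-style paths is an assumption worth verifying):

        # <CopSSH install dir>\etc\sshd_config
        Subsystem sftp internal-sftp

        Match Group sftp_users
            ChrootDirectory %h
            ForceCommand internal-sftp
            AllowTcpForwarding no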

    Read the article

  • Problem booting virtual machine after converting VMDK to VHD

    - by vg1890
    I used the VMware vCenter Converter Standalone Client to convert a physical drive on my old PC to a virtual drive. The conversion worked fine and I ended up with a valid VMDK file. Next, I wanted to convert the VMDK to a VHD for use with Microsoft Virtual PC, since that's what I use on my new box. I used WinImage for the conversion and that worked fine, too. I can access the files from the virtual drive through WinImage. However, when I create a new virtual machine using Virtual PC and add the existing VHD file, the machine doesn't boot. The initial boot screen flashes with the amount of RAM and then the screen goes black. If I turn off the VM and reboot in safe mode, I can see the drivers being loaded until eventually it gets to crcdisk.sys and hangs indefinitely. Any ideas how to fix this? I'm not opposed to starting over from scratch if there's another method to turn my physical machine into a Virtual PC VM. Thanks! EDIT - I should add that the virtual drive is a system boot drive and not a secondary drive. EDIT - I tried booting from the install CD and doing a repair. The result was that the system could not be repaired due to a "driver error."
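
    One hedged thing to try: Virtual PC presents a plain Intel PIIX IDE controller, and a converted image whose Windows install only has the source machine's SATA/AHCI or SCSI storage driver enabled will often hang at crcdisk.sys (or blue-screen with 0x7B). Re-enabling the generic IDE boot drivers in the offline registry sometimes gets it booting; the sketch below assumes the VHD is attached to a working Windows instance with its Windows folder visible at C:\Mount and that ControlSet001 is the active control set:

        rem Load the offline SYSTEM hive from the converted disk
        reg load HKLM\OfflineSys "C:\Mount\Windows\System32\config\SYSTEM"

        rem Make the generic IDE/ATAPI boot drivers start at boot (Start = 0)
        reg add "HKLM\OfflineSys\ControlSet001\Services\intelide" /v Start /t REG_DWORD /d 0 /f
        reg add "HKLM\OfflineSys\ControlSet001\Services\pciide"   /v Start /t REG_DWORD /d 0 /f
        reg add "HKLM\OfflineSys\ControlSet001\Services\atapi"    /v Start /t REG_DWORD /d 0 /f

        reg unload HKLM\OfflineSys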

    Read the article

  • mod_perl loses STDOUT in middle of request

    - by puzzled72
    Hi, I have been having this weird issue where mod_perl seems to lose STDOUT in the middle of a request. So far I have eliminated everything I could think of. You might have seen this bug as the following errors in error_log: "Apache2 IO flush: (103)" and "Apache2::RequestIO::read: (104) Software caused connection abort". They are all the same error: it happens when the Perl script running under mod_perl loses STDOUT while trying to print the result back to Apache. I only notice this error on my servers running the following (CentOS 5.4): Perl 5.8.8-27, mod_perl 2.0.4-6, httpd 2.2.3-31, kernel 2.6.18-164.15.1.

        It's not the code - this code has been working for months.
        It's not network related - the browser gets the error response from Apache.
        It's not time related - I get the error 15 or so seconds after I restart httpd.
        It's not idle-httpd related - I have tried reducing the min/max SpareServers to 1.
        It's not load related - I get the error even if there are only 10 sessions on httpd.
        It's not related to the "fd < PERLIO_MAX_REFCOUNTABLE_FD" Perl 5.8.8 bug - I recompiled perl-5.8.8 with the patch mentioned at https://bugzilla.redhat.com/show_bug.cgi?id=559832; same error.
        It appeared sometime between December 2009 and February 2010 - sorry I cannot be more specific.

    Does anyone have any idea? Anything that I have not tested? Really puzzled!
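
    One hedged way to narrow it down: 103/104 are what mod_perl reports when the socket to the client has already been torn down by the time the script prints, so it is worth logging whether Apache thinks the connection was aborted at that moment. A minimal mod_perl 2 handler sketch (module names are the standard Apache2::* ones; adapt as needed for ModPerl::Registry scripts):

        use strict;
        use Apache2::RequestRec ();
        use Apache2::RequestIO  ();
        use Apache2::Connection ();
        use Apache2::Log        ();
        use Apache2::Const -compile => qw(OK);

        sub handler {
            my $r = shift;
            $r->content_type('text/plain');

            # print() raises an APR error when the connection has gone away mid-request
            my $ok = eval { $r->print("result payload\n"); 1 };
            unless ($ok) {
                $r->log_error("print failed: $@ (client aborted=" . ($r->connection->aborted ? 1 : 0) . ")");
            }
            return Apache2::Const::OK;
        }
        1;

    If the aborted flag is set when the error fires, the cause is usually in front of Apache (a proxy or load balancer, keepalive/timeout settings, or the client giving up) rather than in the Perl code itself.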

    Read the article

  • Zabbix Proxy not collecting data

    - by Jordan Eunson
    I have a working Zabbix 1.8.2 server collecting data for our office and our colo facility; however, the link between the colo and the office is flaky. What I'm trying to do is set up a proxy on the colo side with a 1-hour cache that relays the data to our primary server at the office. Our Zabbix server is compiled from source and uses a MySQL database. I've followed the instructions found in the Zabbix documentation to compile the proxy using a SQLite3 database, and I add the proxy to Zabbix under Administration -> DM -> Proxies. The Zabbix server "sees" the proxy, because the "last seen" field is always under 60s. However, when I assign a colo host to the proxy, I stop receiving data from it. The colo host's zabbix_agentd.log file says this:

        29343:20100622:124847 Timeout while answering request
        29343:20100622:124847 Getting list of active checks failed. Will retry after 60 seconds

    The zabbix_proxy.log says this:

        2041:20100622:123131.760 Deleted 0 records from history [0.000994 seconds]
        2028:20100622:124131.671 Error while receiving answer from server [ZBX_TCP_READ() failed

    I am also unable to receive any SNMP data, which is more important to me than the Zabbix agent data. Has anyone had this problem before? Zabbix Server OS: CentOS 5.4; Zabbix Server build: 1.8.2 from source. Zabbix Proxy OS: CentOS 5.4; Zabbix Proxy build: 1.8.2 from source. P.S. The SQLite database on the Zabbix proxy never gets any data written to it; it is identical to when I created it from the blank schema in zabbix-1.8.2/create/schema. (Yes, I've checked the permissions.)
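
    For reference, a hedged sketch of the pieces that have to line up in a 1.8.x proxy setup (hostnames and addresses below are placeholders). In particular, hosts monitored by the proxy need to be assigned to it in the frontend, and their agents' Server= line must point at the proxy, not at the central server - otherwise the agent's "getting list of active checks" request goes to a server that no longer owns the host and times out, which matches the log above:

        # zabbix_proxy.conf (on the colo proxy)
        Server=zabbix.office.example.com     # central server the proxy reports to
        Hostname=colo-proxy                  # must match the proxy name under Administration -> DM -> Proxies
        ProxyOfflineBuffer=1                 # hours of collected data to keep while the office link is down
        ConfigFrequency=300                  # how often to pull host/item configuration from the server

        # zabbix_agentd.conf (on each colo host)
        Server=10.10.10.5                    # the PROXY's address, not the central server's
        Hostname=colo-host01                 # must match the host name configured in the frontend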

    Read the article

  • Use RAID for desktop computer?

    - by NickAldwin
    I'm building a new computer over the summer. I'm fairly competent with computer hardware, and am thus building the computer from scratch. I have everything planned out, but I was wondering if I should consider RAID, and if so, which RAID level to use. I plan to purchase 2x1TB drives. Currently I'm leaning toward RAID 1 for the redundancy -- I've heard newer super-capacity drives fail more often than one would think, and I don't want to have a problem and lose all my data. What do you think? My mobo supports RAID 0/1/5/10. Is it worth it to use RAID at all, or should I just use a backup service like Mozy? Should I consider RAID 0 instead, for the performance? I'm kind of going back and forth on this one. Thanks a lot for your help. EDIT: I'd like to avoid splitting the OS and data onto separate drives, because that can get frustrating when programs like to store a lot of data in their Program Files folder. I've lived with that situation before and it gets annoying after a while.

    Read the article

  • NFS "Permission Denied" getting cached on NetApp Filer

    - by Christopher Karel
    We have a bunch of Linux boxes mounting NFS shares off a NetApp filer. From time to time, I will flub some part of the export configuration: a typo in one of the allowed hosts, an incorrect IP address, etc. No worries; this is usually done on a test system, or with brand new exports that aren't yet in production. However, I've found that once I've been denied permission to mount something from a Linux machine, the failure gets cached for as long as a day. I will correct the problem that was blocking the mount, re-export on the NetApp, and still not be able to mount the share. I'm pretty sure this caching is done on the NetApp side. It normally ages out after a day or so, but it really sucks having to wait until tomorrow to mount a share. I've tried exportfs -f on the NetApp, as well as a DNS flush (I found both suggestions via Google). However, neither one works. I would sell my soul if someone could help out with a command/pagan ritual that would clear up this cache issue. --Christopher Karel
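
    For what it's worth, a hedged sketch of the 7G-era commands I'd expect to matter here (the volume path and client address are placeholders). Checking what the access cache actually says for the specific client at least confirms whether the stale decision lives on the filer or somewhere in between, and flushing per-path is sometimes more effective than a bare -f:

        # What does the export access cache currently say for this client and path?
        exportfs -c 192.168.1.50 /vol/testvol

        # Flush the access cache for just that export (omit the path to flush everything)
        exportfs -f /vol/testvol

        # Re-export everything from /etc/exports after fixing the rule
        exportfs -a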

    Read the article

  • Audio Static/Interference regardless of audio interface?

    - by Tom
    I am currently running a media center/server on a Lubuntu machine. The machine specs: Core 2 Duo Extreme, EVGA SLI 680i motherboard, 2 GB DDR2 RAM, 3 hard drives with no RAID (WD Caviar Black, WD Caviar Green, and a Samsung Spinpoint), Galaxy GTX 220 1GB graphics card, external USB Creative X-Fi Extreme sound card, 550W power supply. This machine is hooked up through an optical cable to an Onkyo HT-R340 receiver through the X-Fi card. Whenever I play any audio, regardless of whether it is through XBMC, the default audio player, a flash video, etc., I get a horrible static sound that randomly gets louder. Here is a video of the sound: http://www.youtube.com/watch?v=SqKQkxYRVA4 This static comes in randomly, sometimes going away for short periods, but it eventually always comes back. So far I have tried everything I could think of: reinstalling the OS; installing/upgrading/repairing PulseAudio/ALSA; installing alternate OSes (straight Ubuntu, Lubuntu, Xubuntu, Arch, Mint, Windows 7); switching audio from the external card to internal optical, audio out through HDMI, and audio out through headphones; different ports on the receiver (my main desktop sounds fine on the same sound system); different optical cables; unplugging everything unnecessary from the motherboard (1 HD, 1 stick of RAM, 1 keyboard); swapping out RAM; swapping out the motherboard; replacing the graphics card (it was replaced because its fan was noisy, not specifically for this problem); different hard drives; swapping the power supply; and disabling onboard audio. Pretty much everything short of swapping the CPU. I haven't been able to narrow down the problem and it is getting frustrating. Is it possible that the CPU is faulty and might cause a problem such as this, or that the PC case is shorting out the motherboard? Any kind of suggestions will be appreciated.

    Read the article

  • RoboCopy fails with "the specified network name is no longer available"

    - by Justin Scott
    We have a scheduled task that runs robocopy periodically to mirror a rather large folder structure from one server to another (thousands of folders, 100,000+ files, 50+ GB in size). There is a share on the receiving server where the mirror gets stored. We're running the task from the origin server, connecting out to the share on the receiving end. Both servers run Windows Server 2003 and are connected to the same 100Mbps network switch. The process will sometimes complete all the way through without error. More often than not, however, at some point during the process (seemingly random as to where), robocopy will fail with the error "The specified network name is no longer available". It will wait 30 seconds, try the file again, and eventually give up after a number of retries. The process then repeats at the next scheduled interval and may complete... or not. When this occurs, I am not able to access the share at all on the destination server from anywhere on the network for up to 30 minutes. There is nothing else on the network using this share. My question is: what does this error mean, specifically? Why is the share "dropping off" and becoming inaccessible? Is there a way to prevent it and make the file mirroring more stable?
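
    The message is robocopy surfacing Win32 error 64 (ERROR_NETNAME_DELETED), i.e. the SMB session to the share was torn down mid-copy. Whatever the underlying cause turns out to be (NIC driver, switch port, the SMB server running out of resources), robocopy itself can be made more tolerant of a flaky session; a hedged example command line with placeholder paths:

        robocopy \\origin\data \\dest\mirror /MIR /Z /R:5 /W:30 /IPG:50 /LOG+:C:\logs\mirror.log
        rem /Z      restartable mode - an interrupted large file resumes rather than restarting
        rem /R /W   a handful of short retries instead of the default million
        rem /IPG    inter-packet gap (ms) to throttle the copy so it doesn't saturate the link
        rem /LOG+   append to a log so the failing file and time can be correlated with server events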

    Read the article

  • Too much memory consumed during TFS automated build

    - by Bernard Chen
    We're running TFS 2010 Standard Edition, and we've set up an automated build to run whenever someone checks in code. We run all of the automated tests (built with MSTest) as part of the build. We've configured the build to run the tests as a 64-bit process, but the QTAgent.exe that runs the tests grows in memory while the tests are running. It currently reaches 8GB for the ~650 tests we have, and the process slowed significantly when we went from 450 tests to 650 tests. When we run all of the tests in the local development environment, memory seems to be freed at least with each TestClass and never exceeds a certain level, and the time to run all of the tests has not increased significantly there. Is there a way to configure the build service to free up memory with each test or each TestClass? With the way things are currently running, the build process gets very slow when we start to run out of memory on the machine. Edit: I found the MSTest invocation in the build log, ran it manually, and saw the same runaway memory behavior. I removed the /publish, /publishbuild, /teamproject, /platform, and /flavor parameters from the invocation of MSTest, in case the test runner was holding onto results until the end, but the behavior didn't change. I ran the same command line on a dev box, separate from the build server, and the memory was freed up frequently. It seems there must be something wrong or different about the build server that is causing it to behave differently, but I'm stumped as to where to look. I've looked at qtagent.exe.config, mstest.exe.config, and the versions of both executables. What else might affect this?
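
    One hedged workaround while the build-server difference is being tracked down: split the single MSTest invocation into several smaller runs (per test container or per test list), so each QTAgent.exe process starts fresh and its working set never has to hold the whole suite. Container and results-file names below are placeholders; the build workflow or a post-build script can invoke them sequentially:

        mstest /testcontainer:Tests.Core.dll /resultsfile:results-core.trx
        mstest /testcontainer:Tests.Web.dll  /resultsfile:results-web.trx
        mstest /testcontainer:Tests.Data.dll /resultsfile:results-data.trx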

    Read the article

  • Unexplained CPU and Disk activity spikes in SQL Server 2005

    - by Philip Goh
    Before I pose my question, please allow me to describe the situation. I have a database server with a number of tables. Two of the biggest tables contain over 800k rows each. The majority of rows are less than 10k in size, though roughly 1 in 100 rows will be over 1 MB but under 4 MB. So out of the 1.6 million rows, about 16,000 of them will be these large rows. The reason they are this big is that we're storing zip files as binary blobs in the database, but I'm digressing. We have a service that runs constantly in the background, trimming 10 rows from each of these 2 tables. In the performance monitor graph above, these are the little bumps (red for CPU, green for disk queue). Once every minute we get a large spike of CPU activity together with a jump in disk activity, indicated by the red arrow in the screenshot. I've run the SQL Server Profiler, and there is nothing that jumps out as a candidate that would explain this spike. My suspicion is that this spike occurs when one of the large rows gets deleted. I've fed the results of the profiler into the tuning wizard, and I get no optimisation recommendations (i.e. I assume this means my database is indexed correctly for my current workload). I'm not overly worried, as the server is coping fine in all circumstances, even under peak load. However, I would like to know if there is anything else I can do to find out what is causing this spike. Update: After investigating this some more, the CPU and disk usage spike turned out to be SQL Server's automatic checkpoint. The database uses the simple recovery model, and this truncates the log file at each checkpoint. We can see this demonstrated in the following graph. As described on MSDN, checkpoints will occur when the transaction log becomes 70% full and we are using the simple recovery model. This has been enlightening and I've definitely learned something!
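
    For anyone wanting to confirm the same diagnosis without a Profiler trace, the checkpoint and log activity can be correlated with the spikes straight from the performance-counter DMV; a hedged T-SQL sketch (the per-second counters are cumulative, so sample twice a minute apart and diff the values to get a rate, while Percent Log Used can be read directly):

        SELECT [object_name], counter_name, cntr_value
        FROM sys.dm_os_performance_counters
        WHERE counter_name IN (N'Checkpoint pages/sec', N'Log Flushes/sec', N'Percent Log Used');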

    Read the article

  • SQL 2008 SP2 RsClientPrint ActiveX - "Unable to load client print control"

    - by Miles
    We recently updated our SQL 2008 server to SP2 and it's causing a few headaches. We use SSRS on this server, and when a client tries to print a report with the built-in print function, the RsClientPrint ActiveX control needs to be downloaded from the server, and the client gets the following error: "Unable to load client print control". We have about 700 computers that need this fixed, and I've followed the instructions found at the following URL: http://www.kodyaz.com/articles/client-side-printing-silent-deployment-of-rsclientPrint.aspx We have two issues: most of the users who will be using this ActiveX control are not local administrators, so they will not be able to install the control themselves; and since there are so many computers, this has to be done silently behind the scenes, run by a local admin account. After following the information from the link above, we're able to put the files in the C:\Windows\System32 folder and register the DLL, but we still get the same problem. The only small thing I've noticed is that in the HTML for the report page, everything that references a version is referencing version 2007.100.4000.00, while the version of the DLL that I pulled from the report server is 2007.100.1600.22. Also, some clients that are local administrators are prompted every time to install the ActiveX control when they click print. This works, but we can't have users asked whether they want to install the same control every time they need to print.
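
    The version mismatch noted above (2007.100.1600.22 registered versus 2007.100.4000.00 requested) is a plausible culprit on its own: after SP2 the report server serves a newer control, so the pre-staged DLL has to come from the SP2 server's RSClientPrint cab rather than a pre-SP2 copy. For the silent, per-machine rollout, a hedged sketch of a computer startup script (it runs as SYSTEM, so the registration succeeds even for non-admin users; the share path and file names are assumptions based on what the cab extracts):

        rem Copy the DLLs extracted from the SP2 report server's RSClientPrint cab, then register silently
        copy /y "\\fileserver\deploy\RSClientPrint\*.dll" "%windir%\system32\"
        regsvr32 /s "%windir%\system32\rsclientprint.dll"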

    Read the article

  • PHP upgrade to 5.3 from 5.2, sessions no longer get stored

    - by Damo
    Background link: http://stackoverflow.com/questions/7014945/php-upgrade-5-2-to-5-3-session-issue I have upgraded PHP on my Server 2008 Standard machine from PHP 5.2 to PHP 5.3. Following the upgrade, sessions no longer work correctly. I have copied over the applicable settings from my php.ini files and configured the new settings in line with the server's or PHP's recommendations. PHP executes correctly; however, session data does not get saved. I have session data stored in c:\temp. For each session created, I can see the session file in this folder; however, no information gets written into the session file. Permissions-wise, IUSR and Everyone have write access to this folder. If I downgrade to PHP 5.2, sessions are saved correctly and the site functions correctly. I have followed advice to ensure my code is optimised: closing session files correctly and forcing a session reset. I'm stumped.

        session
        Session Support: enabled
        Registered save handlers: files user sqlite
        Registered serializer handlers: php php_binary wddx

        Directive                        Local Value    Master Value
        session.auto_start               Off            Off
        session.bug_compat_42            On             On
        session.bug_compat_warn          On             On
        session.cache_expire             180            180
        session.cache_limiter            nocache        nocache
        session.cookie_domain            no value       no value
        session.cookie_httponly          Off            Off
        session.cookie_lifetime          0              0
        session.cookie_path              /              /
        session.cookie_secure            Off            Off
        session.entropy_file             no value       no value
        session.entropy_length           0              0
        session.gc_divisor               100            100
        session.gc_maxlifetime           1440           1440
        session.gc_probability           1              1
        session.hash_bits_per_character  4              4
        session.hash_function            0              0
        session.name                     PHPSESSID53    PHPSESSID53
        session.referer_check            no value       no value
        session.save_handler             files          files
        session.save_path                /temp          /temp
        session.serialize_handler        php            php
        session.use_cookies              On             On
        session.use_only_cookies         On             On
        session.use_trans_sid            0              0
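
    A hedged isolation test that takes the application code out of the picture: a script that only writes to $_SESSION and closes the session. If the file in c:\temp is still empty after running it, the issue is in the 5.3 configuration or filesystem permissions rather than the site's code; pointing session.save_path at an explicit "C:\temp" in php.ini (instead of the relative-looking /temp) also removes one ambiguity:

        <?php
        // session_test.php - minimal session write/read round trip
        error_reporting(E_ALL);
        ini_set('display_errors', '1');

        session_start();
        $_SESSION['probe'] = date('c');
        session_write_close();   // forces the data to be flushed to the session file now

        echo 'session id: ', session_id(), PHP_EOL;
        echo 'save path : ', session_save_path(), PHP_EOL;
        echo 'stored    : ', var_export($_SESSION['probe'], true), PHP_EOL;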

    Read the article

  • CloneZilla PXE Boot Without NFS

    - by John
    I am trying to set up Clonezilla to be bootable via PXE without using NFS. I do not have NFS running on our PXE server and would like to keep it that way. However, most of the information that I have found online indicates that you need to set up NFS in order to PXE-boot Clonezilla. I believe that I am pretty close to getting it to work, but am not sure where to go next. Listed below are the different PXE menu option configurations that I have used so far:

        LABEL Clonezilla Live
        MENU LABEL Clonezilla Live
        KERNEL utilities/clonezilla/vmlinuz
        APPEND initrd=utilities/clonezilla/initrd.img boot=live live-config noswap nolocales edd=on nomodeset ocs_live_run="ocs-live-general" ocs_live_extra_param="" ocs_live_keymap="" ocs_live_batch="no" o$

    I have also tried the following append lines, without success:

        APPEND initrd=utilities/clonezilla/initrd.img boot=live union=aufs noswap noprompt vga=788 fetch=tftp://10.130.155.23/filesystem.squashfs
        APPEND initrd=utilities/clonezilla/initrd.img boot=live union=aufs noswap noprompt vga=normal nomodeset nosplash fetch=tftp://10.130.155.23/filesystem.squashfs

    Each of them has resulted in a no-go with the following error: "Unable to find a live file system on the network". It looks like it gets to the point of trying to load the filesystem.squashfs file, hangs, and then throws the error. Any help would be greatly appreciated.
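
    One hedged thing to try before resorting to NFS: live-boot's fetch= also accepts HTTP, and a large filesystem.squashfs is exactly the kind of transfer that tends to stall over TFTP, which would match the "gets there, hangs, then errors" behaviour. Serving the squashfs from any web server on the PXE host (the URL below is an assumption about where you would publish it) often works where the tftp:// fetch fails:

        LABEL Clonezilla Live (HTTP fetch)
        MENU LABEL Clonezilla Live (HTTP fetch)
        KERNEL utilities/clonezilla/vmlinuz
        APPEND initrd=utilities/clonezilla/initrd.img boot=live union=aufs noswap noprompt vga=normal nomodeset fetch=http://10.130.155.23/clonezilla/filesystem.squashfs

    It is also worth confirming that vmlinuz, initrd.img and filesystem.squashfs were all copied from the same Clonezilla Live release, since a mismatched initrd can fail with the same "unable to find a live file system" message.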

    Read the article

  • ICMP Redirect Theory VS. Application

    - by joeqwerty
    I'm trying to watch ICMP redirects in a lab using Cisco Packet Tracer (version 5.3.2) and I'm not seeing them, which leads me to believe that either my lab configuration isn't correct, my understanding of ICMP redirects isn't correct, or Packet Tracer doesn't support/use ICMP redirects. Here's what I believe to be true regarding ICMP redirects - routers send them when all of these conditions are met:

        1. The interface on which the packet comes into the router is the same interface on which the packet gets routed out.
        2. The subnet or network of the source IP address is the same subnet or network as that of the next-hop IP address of the routed packet.
        3. The datagram is not source-routed.
        4. The router kernel is configured to send redirects.

    I have the lab set up in Cisco Packet Tracer as displayed in the image and would expect to see an ICMP redirect from Router1 when pinging from PC1 to PC3. I'm not seeing the ICMP redirect, and it looks like Router1 is actually routing all of the packets via Router2. I have IP ICMP debugging enabled on Router1 (and Router2) and I'm not seeing any ICMP redirect activity in either console. I'm also not seeing a route to the PC3 network in the routing table on PC1, which I think confirms that the ICMP redirect is not occurring. I'm using only static routing on Routers 1 and 2. Is my understanding of ICMP redirects incorrect, is there a problem with my lab configuration, or does Packet Tracer not support/use ICMP redirects?
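
    The four conditions above match Cisco's documented behaviour, so on real IOS the knobs worth double-checking are below (a hedged sketch; the interface name is a placeholder). ip redirects is on by default but can be disabled per interface, show ip interface reports whether redirects will be sent, and debug ip icmp only logs redirects the router actually generates. Whether Packet Tracer 5.3 implements redirect generation at all is a separate question, so the same topology on real hardware or another emulator is the definitive test:

        Router1# show ip interface FastEthernet0/0
        Router1# debug ip icmp
        Router1# configure terminal
        Router1(config)# interface FastEthernet0/0
        Router1(config-if)# ip redirects

    Also worth confirming in the topology itself: PC1's default gateway must be Router1, and Router1's route toward PC3's network must point at a next hop that sits on PC1's own subnet; if the better next hop is on a different interface or subnet, no redirect is ever generated regardless of configuration.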

    Read the article
