Search Results

Search found 44090 results on 1764 pages for 'working conditions'.

Page 89/1764 | < Previous Page | 85 86 87 88 89 90 91 92 93 94 95 96  | Next Page >

  • Drive stopped working on Windows Server 2003 and I receive a "controller error"

    - by Durden81
    I can access the server in safe mode. I have an HP ProLiant 360 server running Windows Server 2003 R2. The Event Viewer is completely filled with this error: "The driver detected a controller error on Device\Harddisk3\DR3". I have identified the affected drive: it is drive H, a secondary, non-mirrored drive. Whenever I access anything on that drive I get: "The request could not be performed because of an I/O device error". What should I do? Is this just a driver issue or a hard drive failure? Please help quickly, as my websites are offline because of this. Any suggestion is welcome!
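
    A minimal first-pass check, as a sketch (it assumes the affected volume really is H: and that anything important on it has been backed up or imaged first, since repairs on failing hardware can make things worse):

        rem Report the disk's basic health as Windows sees it
        wmic diskdrive get model,status
        rem Surface-scan and attempt repair; this can take hours on a large volume
        chkdsk H: /r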

    Read the article

  • Wireless Access Point stopped working

    - by Alex Pritchard
    I have a simple LAN set up at home using a Linksys WRT54GSv4 as my primary router and an Encore ENHWI-2AN3 as an access point. I connect the Encore to the Linksys by running a cable from one of the Linksys LAN ports into the Encore's WAN input. I originally configured this using the Encore setup wizard, setting the device up in AP Router Mode. It detected the input network and worked about as expected, creating a second network that used my primary network to connect to the internet. It worked fine for about two weeks, then abruptly cut out today. I checked that the network is still live on the cable going into the Encore (it provides internet when connected directly to a laptop) and that devices can still connect to the network being broadcast by the Encore. When I try to rerun the connection wizard on the Encore, I receive the message "No Services found in WAN port." The WAN settings are no longer retrieving a dynamic IP from the line. I tried providing a static IP, assigning an address within my primary router's subnet that wasn't in use and pointing the default gateway to the Linksys IP, but this did not work either. When I plug the cable into the WAN port, an internet light comes on that is not lit when a live network is not connected. I've tried a hard reset on the Encore (held down the reset button until the lights flashed, then reconfigured from scratch), but the WAN settings are still not detected. I have also tried powering the modem, the Linksys and the Encore off and on. Any suggestions would be appreciated!

    Read the article

  • Problem accessing Remote Web Workplace on my new SBS 2008 box

    - by Dabblernl
    This supposedly easy-to-install OS is starting to drive me nuts... SYMPTOMS: When trying to connect to Remote Web Workplace I get (and ignore) the security warning, because I am currently testing with the self-issued certificate. After logging in, the Remote Web Workplace main screen displays, but the images on it do not load. When I click the email link I am thrown back to the login screen. If I try to log in to Exchange directly by typing the remote.mydomain.com/owa address, I get a 403 error saying I am denied access. The problem occurs on both a Vista and a Windows 7 machine. It seems some security setting is playing tricks on me. How can I troubleshoot this?
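
    A hedged first check from the Exchange Management Shell on the SBS box (SBS 2008 ships Exchange 2007, where these cmdlets exist; the URL below is simply the one from the question):

        # Confirm the OWA virtual directory URLs and authentication settings
        Get-OwaVirtualDirectory | Format-List Name, InternalUrl, ExternalUrl, *Authentication*
        # Exercise the OWA login path end to end with a test mailbox credential
        Test-OwaConnectivity -URL "https://remote.mydomain.com/owa" -MailboxCredential (Get-Credential)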

    Read the article

  • CPU not working on a specific motherboard

    - by Shaman
    I'm building a computer for someone and I ran into a weird problem: the CPU that I have doesn't work on this motherboard. The CPU is an Intel Pentium D 925 and the motherboard is an ECS G41T-M6, which in theory should work together. The only part reused is the power supply (400 W). When I start the computer the fans spin up, and that's it; the BIOS doesn't boot. I tried my own power supply (a 600 W Corsair): nothing. Removed the RAM: no warning. In desperation I tried the last thing and swapped my own CPU (a Core 2 Duo E7200) with this one. Lo and behold, both worked: the Core 2 Duo worked on the ECS with the old power supply and the RAM I used in the first place, and the Pentium D worked on my Gigabyte G31M-ES2L. What I discovered is that the Pentium D didn't receive power on the ECS, because I tried running it without the cooler and it stayed at room temperature. On a side note, I also removed the HDDs just in case. So, in conclusion, any ideas? I can't return it, and I can still use it to upgrade another PC, but I would really prefer not to buy another CPU if possible.

    Read the article

  • Squid stale-while-revalidate not working when max-age=0

    - by Wiliam
    Squid 2.7 always reaches the backend; the expected behaviour is to reach the backend via stale-while-revalidate only when the cache expires, not whenever the client sends max-age=0. Script:

        <?php
        header('Cache-Control: public, max-age=10, stale-if-error=200, stale-while-revalidate=500');
        header("Last-Modified: " . gmdate("D, d M Y H:i:s") . " GMT");
        sleep(2);
        die("OK");

    And squid config:

        # http_port public_ip:port accel defaultsite= default hostname, if not provided
        http_port 80 accel defaultsite=mydomain.com
        # IP and port of your main application server (or multiple)
        cache_peer 127.0.0.1 parent 8000 0 no-query allow-miss originserver name=main
        # Do not tell the world which squid version we're running
        httpd_suppress_version_string on
        # Remove the Cache-Control header for upstream servers
        header_access Cache-Control deny all
        #header_access Last-Modified deny all
        # log all incoming traffic in Apache format
        logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
        access_log /usr/local/squid/var/logs/squid.log combined all
        cache_effective_user squid
        refresh_pattern . 10080 90% 999999 ignore-no-cache override-expire ignore-private
        icp_port 0
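
    If the goal is to keep a client-supplied reload (Cache-Control: no-cache / max-age=0) from forcing a trip to the backend, one hedged possibility is Squid 2.7's refresh_pattern reload options; this is a sketch, not a confirmed fix for this setup, and it may not cover max-age=0 in every case:

        # reload-into-ims turns client reload requests into If-Modified-Since revalidations;
        # ignore-reload (the blunter option) discards the client's reload headers entirely.
        refresh_pattern . 10080 90% 999999 ignore-no-cache override-expire ignore-private reload-into-ims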

    Read the article

  • Web server replica not working on another server

    - by user761076
    I have a Drupal installation (PHP + MySQL) on one server, and I'm trying to copy this installation to another server with the same configuration: same physical and virtual paths, same DB configuration, etc. The thing is, on the new server the homepage works, but the inner pages do not, so I guess it has something to do with rewriting (mod_rewrite is installed, and both .htaccess files are identical). When I access http://localhost/myweb/content/mypage I get a 404, or a "Forbidden" if I uncomment this in httpd.conf (the original httpd.conf does not have this entry):

        <Directory "path/to/docs">
            DirectoryIndex index.php index.html
            Options Indexes FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

    Any clue? Thank you
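
    One hedged thing worth checking (a sketch; the path is just the placeholder from the question): with AllowOverride None, Apache ignores .htaccess files entirely, so Drupal's clean-URL rewrite rules never run and inner pages 404. Allowing overrides, or at least FileInfo, lets the .htaccess rewrite rules take effect:

        <Directory "/path/to/docs">
            DirectoryIndex index.php index.html
            Options Indexes FollowSymLinks
            # FileInfo is the minimum needed for mod_rewrite directives in .htaccess
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>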

    Read the article

  • How can I get around the Windows 8.1 Store (& Metro apps) not working with UAC disabled

    - by Enigma
    I have UAC disabled because it is annoying and causes more problems than it could ever possibly solve, at least for me. Here is yet another problem, and it seems to be due to a recent update, as I don't remember it in the past: even with an MS account, I can't use the Store because UAC is disabled. How can I get around this? Short term I can just enable it, reboot, use the Store, disable it, reboot and be on my way, but there has to be a better way (other than MS getting their software completely right - like that will happen anytime, ever). Edit: apparently this is far more of an issue than I originally thought. Now many (every?) Metro apps require UAC. Is anyone aware of the update this got rolled in with? Thankfully Netflix isn't affected, which is the only Metro app I use at the moment. What I see in the event log:

        Activation of app winstore_cw5n1h2txyewy!Windows.Store failed with error:
        This app can't be activated when UAC is disabled.
        See the Microsoft-Windows-TWinUI/Operational log for additional information.
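
    For what it's worth, the enable/disable round trip described above corresponds to a single registry value, so it can at least be scripted; this is a sketch of the toggle only, and a reboot is still required for the change to take effect:

        rem Check the current UAC state (1 = enabled, 0 = disabled)
        reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v EnableLUA
        rem Re-enable UAC before using the Store (reboot afterwards); set /d 0 to turn it back off
        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v EnableLUA /t REG_DWORD /d 1 /f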

    Read the article

  • New-ManagedContentSettings - not working properly under Exchange 2010

    - by mfinni
    I have a client that is divesting a business unit into a new AD forest, Exchange org, etc. We're using Quest tools to migrate users and mailboxes, but I have to build the new infrastructure to match the old one. In the old environment we use Managed Folder Mailbox Policies to limit (or allow) retention; they started with Exchange 2007 and never upgraded to Retention Policies. Oh well. So, in the old environment, when you use a 2007 server to define a new Managed Content Setting, you can pick "Email" from the MessageClass dropdown. That is a display name; the actual MessageClass value is this:

        MessageClass : IPM.Note;IPM.Note.AS/400 Move Notification Form v1.0;IPM.Note.Delayed;IPM.Note.Exchange.ActiveSync.Report;IPM.Note.JournalReport.Msg;IPM.Note.JournalReport.Tnef;IPM.Note.Microsoft.Missed.Voice;IPM.Note.Rules.OofTemplate.Microsoft;IPM.Note.Rules.ReplyTemplate.Microsoft;IPM.Note.Secure.Sign;IPM.Note.SMIME;IPM.Note.SMIME.MultipartSigned;IPM.Note.StorageQuotaWarning;IPM.Note.StorageQuotaWarning.Warning;IPM.Notification.Meeting.Forward;IPM.Outlook.Recall;IPM.Recall.Report.Success;IPM.Schedule.Meeting.*;REPORT.IPM.Note.NDR

    If I take that and try to mangle it into a new cmdlet for Exchange 2010 in my new environment, here's what I get:

        New-ManagedContentSettings -Name "Delete Messages older than 90 days" -FolderName "Entire Mailbox" -RetentionEnabled $True -AgeLimitForRetention 90 -TriggerForRetention WhenDelivered -RetentionAction DeleteAndAllowRecovery -MessageClass "IPM.Note","IPM.Note.AS/400MoveNotificationFormv1.0","IPM.Note.Delayed","IPM.Note.Exchange.ActiveSync.Report","IPM.Note.JournalReport.Msg","IPM.Note.JournalReport.Tnef","IPM.Note.Microsoft.Missed.Voice","IPM.Note.Rules.OofTemplate.Microsoft","IPM.Note.Rules.ReplyTemplate.Microsoft","IPM.Note.Secure.Sign","IPM.Note.SMIME","IPM.Note.SMIME.MultipartSigned","IPM.Note.StorageQuotaWarning","IPM.Note.StorageQuotaWarning.Warning","IPM.Notification.Meeting.Forward","IPM.Outlook.Recall","IPM.Recall.Report.Success","IPM.Schedule.Meeting.*","REPORT.IPM.Note.NDR" -whatif

        Invoke-Command : Cannot bind parameter 'MessageClass' to the target. Exception setting "MessageClass": "The length of the property is too long. The maximum length is 255 and the length of the value provided is 518."
        At C:\Users\MFinnigan.sa\AppData\Roaming\Microsoft\Exchange\RemotePowerShell\pfexcas02.fve.ad.5ssl.com\pfexcas02.fve.ad.5ssl.com.psm1:28204 char:29
        + $scriptCmd = { & <<<< $script:InvokeCommand `
            + CategoryInfo : WriteError: (:) [New-ManagedContentSettings], ParameterBindingException
            + FullyQualifiedErrorId : ParameterBindingFailed,Microsoft.Exchange.Management.SystemConfigurationTasks.NewManagedContentSettings

    So the config object can store all that mess, but I can't fit it through the cmdlet to create the object. Lovely. Any ideas?

    Read the article

  • RabbitMQ Management console not working

    - by rrejc
    I have started with RabbitMQ. On a (Windows) machine I installed two RabbitMQ nodes as services; I chose the node name, port and service name for each of them. The services are running normally (I can see them listening in netstat -a). I also installed the management plugin with "rabbitmq-plugins enable rabbitmq_management" and restarted both services. But the plugin isn't running: I don't see it listening in netstat and I can't connect to the management console from a browser. Any idea what could be wrong? Is there a log to see what is going on? Update: when I run rabbitmq-plugins list I get:

        c:\RabbitMq\sbin>rabbitmq-plugins list
        [e] amqp_client                       3.0.1
        [ ] cowboy                            0.5.0-rmq3.0.1-git4b93c2d
        [ ] eldap                             3.0.1-gite309de4
        [e] mochiweb                          2.3.1-rmq3.0.1-gitd541e9a
        [ ] rabbitmq_auth_backend_ldap        3.0.1
        [ ] rabbitmq_auth_mechanism_ssl       3.0.1
        [ ] rabbitmq_consistent_hash_exchange 3.0.1
        [ ] rabbitmq_federation               3.0.1
        [ ] rabbitmq_federation_management    3.0.1
        [ ] rabbitmq_jsonrpc                  3.0.1
        [ ] rabbitmq_jsonrpc_channel          3.0.1
        [ ] rabbitmq_jsonrpc_channel_examples 3.0.1
        [E] rabbitmq_management               3.0.1
        [e] rabbitmq_management_agent         3.0.1
        [ ] rabbitmq_management_visualiser    3.0.1
        [e] rabbitmq_mochiweb                 3.0.1
        [ ] rabbitmq_mqtt                     3.0.1
        [ ] rabbitmq_old_federation           3.0.1
        [ ] rabbitmq_shovel                   3.0.1
        [ ] rabbitmq_shovel_management        3.0.1
        [ ] rabbitmq_stomp                    3.0.1
        [ ] rabbitmq_tracing                  3.0.1
        [ ] rabbitmq_web_stomp                3.0.1
        [ ] rabbitmq_web_stomp_examples       3.0.1
        [ ] rfc4627_jsonrpc                   3.0.1-git7ab174b
        [ ] sockjs                            0.3.3-rmq3.0.1-git92d4ba4
        [e] webmachine                        1.9.1-rmq3.0.1-git52e62bc
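
    One hedged detail that often matters with this particular release line (3.0.x, as shown in the list, where [E] means the plugin is explicitly enabled): the management UI listened on port 55672 by default back then, not the later 15672. A quick check, as a sketch:

        rem Is anything listening on the 3.0-era management port?
        netstat -an | findstr 55672
        rem If so, the console should answer at http://localhost:55672/ (default credentials guest/guest)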

    Read the article

  • PHP `virtual()` with Apache MultiViews not working after upgrade to Ubuntu 12.04

    - by Izzy
    I use PHP's virtual() directive quite a lot on one of my sites, including for central elements. This worked fine for the last ~10 years -- but after upgrading (or rather moving, as it is on a new machine) to Ubuntu 12.04 it somehow got broken. Example setup (simplified): to make it easier to understand, I simplify some things (contents). Say I need an HTML fragment like <P>For further instructions, please look <A HREF='foobar'>here</A></P> in multiple pages. Ten years ago I used SSI for that, so it is put into a file in a central place -- if e.g. the targeted URL changes, I only need to update it in one place. To serve multiple languages, I have Apache's MultiViews enabled -- and at $DOCUMENT_ROOT/central/ there are the files foobar.html (the English variant, and the default) and foobar.html.de (the German variant). Now in the PHP code I simply placed <? virtual("/central/foobar"); ?> and let Apache take care of delivering the correct language variant. The problem: as said, this worked fine for about 10 years; German visitors got the German variant, all others the English one (depending on their preferred language). But after upgrading to Ubuntu 12.04 it no longer works: either nothing is delivered by the virtual() command, or (in connection with framesets) it even ends up as binary gibberish. Trying to figure out what happens, I played with a lot of things. I first thought MultiViews was (somehow) no longer available -- but calling http://<server>/central/foobar showed the right variant, depending on the configured language preferences. This also proved there was nothing wrong with file permissions. The error.log gave no clues either (no error message thrown). Finally, as a "last resort", I changed the PHP command to <? virtual("central/foobar.html"); ?> -- and that very same file was in fact included. So PHP's virtual() function basically works -- but the language-dependent part obviously no longer works together with it as it did before. Of course I tried to find some change (most likely in PHP's virtual() command), using Google a lot and also searching the questions here -- unfortunately to no avail. Finally, the question: putting "design questions" aside (surely today I would design things differently -- but at the moment I lack the time to change that across quite a huge number of pages), what can be done to make it work again? I surely missed something -- but I cannot figure out what...

    Read the article

  • DNS Name lookup (was SSH) Not Working After Snow Leopard Upgrade

    - by Peter Cardona
    I think this started with the Snow Leopard update. I cleaned out the .ssh directory, but I'm still having the issue.

        ~: uname -a
        Darwin california-example-com.local 10.0.0 Darwin Kernel Version 10.0.0: Fri Jul 31 22:47:34 PDT 2009; root:xnu-1456.1.25~1/RELEASE_I386 i386
        ~: ssh -V
        OpenSSH_5.2p1, OpenSSL 0.9.8k 25 Mar 2009
        ~: ls -l ~/.ssh
        ~: nslookup nevada
        Server:    10.94.62.3
        Address:   10.94.62.3#53
        Name:      nevada.example.com
        Address:   10.94.62.3
        ~: ssh nevada
        ssh: Could not resolve hostname nevada: nodename nor servname provided, or not known
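
    A hedged way to narrow this down (a sketch; "nevada" and the domain are just the names from the transcript): nslookup talks to the DNS server directly, while ssh goes through the system resolver, so compare the two and try the fully qualified name:

        # Show the resolver configuration, including search domains, as the system sees it
        scutil --dns
        # Query through the same resolver path that ssh uses
        dscacheutil -q host -a name nevada
        dscacheutil -q host -a name nevada.example.com
        # If only the short name fails, the search domain list is the likely culprit
        ssh nevada.example.com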

    Read the article

  • Interactive mode of PSExec not working for console application

    - by Focker
    I am trying to use PsExec to kick off a console application on a remote computer in an interactive state. When I run something like this:

        PsExec.exe -s -d -i 1 \\MyServer notepad.exe

    it launches Notepad just fine. If I then run this:

        PsExec.exe -s -d -i 1 \\MyServer C:\Temp\MyConsoleApp.exe

    it launches the command window but, as far as I can tell, doesn't do anything. That is, when I run my console application locally it displays a "heartbeat" every 5 seconds, but when I run it remotely nothing is displayed in the command window. The .exe does show up as a process in Task Manager. Any ideas?
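
    Two hedged variations worth trying (a sketch, reusing the paths and session number above; neither is a confirmed fix):

        rem 1) Drop -i and -d: PsExec then waits and redirects the app's console output
        rem    back to the local window, so the heartbeat (or an error) becomes visible here.
        PsExec.exe -s \\MyServer C:\Temp\MyConsoleApp.exe
        rem 2) Keep it interactive but wrap it in cmd /k, so the remote window stays open
        rem    and shows any startup error (e.g. missing files or config under the SYSTEM account).
        PsExec.exe -s -d -i 1 \\MyServer cmd /k C:\Temp\MyConsoleApp.exe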

    Read the article

  • .htaccess redirect https to http not working

    - by Ira Rainey
    I am trying to catch any HTTPS traffic to the front of my site, so that https://www.domain.com is redirected to http://www.domain.com, while other subdomains are redirected elsewhere. For the most part this all works, apart from the HTTPS-to-HTTP redirection. Here's my .htaccess file at the moment:

        RewriteEngine On

        RewriteCond %{HTTPS} on
        RewriteRule (.*) http://%{HTTP_HOST}%{REQUEST_URI}

        RewriteCond %{HTTP_HOST} ^domain\.com [NC]
        RewriteRule ^(.*)$ http://www.domain.com/$1 [L,R=301]

        RewriteCond "%{HTTP_HOST}" !^www.* [NC]
        RewriteCond "%{HTTP_HOST}" ^([^\.]+).*$
        RewriteRule ^(.*)$ https://secure.domain.com/a/login/%1 [L,R=301]

    It would seem that this part:

        RewriteCond %{HTTPS} on
        RewriteRule (.*) http://%{HTTP_HOST}%{REQUEST_URI}

    isn't working as I would expect; in fact it doesn't seem to redirect at all. In another subdirectory I have the opposite in effect, which works fine:

        RewriteCond %{HTTPS} off
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}

    so my thinking is the opposite should have done the job, but seemingly not. Any thoughts anyone?
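
    Two hedged things to check (a sketch, not a confirmed diagnosis): make the HTTPS-to-HTTP rule explicit about redirecting and stopping so it cannot fall through to the later rules, and confirm the SSL virtual host actually serves the same DocumentRoot, otherwise this .htaccess is never consulted for https:// requests at all:

        # Explicit external, permanent redirect that stops rule processing
        RewriteCond %{HTTPS} on
        RewriteRule (.*) http://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
        # If this still never fires, the https:// site is likely served from a different
        # virtual host / DocumentRoot (or AllowOverride is off there), so these rules
        # are simply never read for HTTPS requests.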

    Read the article

  • Outlook 2010 Auto Responder Rule Not Working (Error)

    - by Obie
    In Outlook 2010 on Windows 7 I've created a template to use as an auto-responder, and I set up a rule to respond using the template if my name is in the "To" line. Upon receiving any message, the rule reports an error but gives no explanation of what the error is. My goal here is simply to make an auto-responder; if there is a simpler way or workaround, I would love any help getting this to work, as I am leaving town very shortly. Thank you!

    Read the article

  • Mac OS X DHCP stopped working

    - by Jesse James Richard
    Tethering a Raspberry Pi to a MacBook (Mavericks) via Ethernet is proving to be a real pain. This worked for about a day; then my MacBook required a rare reboot, and once it came back up the Pi won't get an address. I've confirmed it's not a problem with the Pi; it's a problem with the MacBook for sure. It has basically just stopped giving out IPs. I've read as much as I could find about how to fix this problem, but so far I've come up blank. I have tried: Internet Sharing from Wi-Fi to Ethernet enabled, and/or editing /etc/bootpd.plist as described at http://www.jacquesf.com/2011/04/mac-os-x-dhcp-server/ (this worked initially and now no longer does). The Pi connected directly to the router has no problems; my MacBook's DHCP server will simply no longer give out addresses. Any help would be much appreciated.

    Read the article

  • Windows cache not working as it should?

    - by piotrektt
    I run Windows Server 2012 Datacenter. The machine has 60 GB of RAM. I have one file shared from a VHD, and when I copy this file locally the RAM cache is fully used, but when multiple computers connect to the share the cache is not used. The network is 8 Gb. The whole network is around 200 computers that need to read that one file, but on this setup just 10 connections kill the server. Is there any way to check what is going on? What other solution can I use to manage the cache in Windows?

    Read the article

  • Plesk not working after restart?

    - by user37888
    Has anyone had a problem where, after restarting your Plesk server and its services, only one domain works out of all the ones you have? Only one domain will work out of all of those on our server, and we cannot understand why this is happening.

    Read the article

  • Form submit not working in Firefox but works fine in IE

    - by jestges
    Hi, I want to submit my parent page when I click the submit button on the child page. In my child page I've written the code as:

        string scriptString = "<script language=JavaScript> window.opener.document.forms(0).submit(); </script>";
        // ASP.NET 2.0
        if (!Page.ClientScript.IsClientScriptBlockRegistered(scriptString))
        {
            Page.ClientScript.RegisterClientScriptBlock(this.GetType(), "script", scriptString);
        }

    It works fine in IE but not in Firefox. What could be an alternative method for this? Thanks in advance
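
    One hedged point: document.forms is a collection, and calling it like a function, forms(0), is IE-only syntax; indexing with square brackets works in both browsers. A sketch of the same block with only that change (the registration key name here is arbitrary):

        // forms[0] instead of forms(0) - bracket indexing is what Firefox expects
        string scriptString = "<script type=\"text/javascript\">window.opener.document.forms[0].submit();</script>";
        if (!Page.ClientScript.IsClientScriptBlockRegistered("childSubmit"))
        {
            Page.ClientScript.RegisterClientScriptBlock(this.GetType(), "childSubmit", scriptString);
        }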

    Read the article

  • SSH not working through Double NAT

    - by d_inevitable
    I am trying to set up port forwarding for SSH through two NATs. The first router translates my internet IP to my outer network (10.1.7.0). In the outer network there's a second router that does NAT to my inner network (192.168.1.0). The target server is connected to both the outer network and the inner network. I cannot change the port-forwarding options on the outer router; it is currently configured to forward the SSH and HTTP ports to the router of the inner network. [The original question includes an ASCII diagram: Internet -> outer router (forwards SSH and HTTP) -> inner router -> SSH server, with the server attached to both the outer and the inner network, an outer workstation on the outer network and an inner workstation on the inner network.] When connecting from an outer workstation to the address of the inner router, both SSH and HTTP work fine. When connecting from the internet to my public IP over HTTP, the connection works fine as well. However, SSH just times out, most likely because the reply is not routed back properly. I suspect it's either because of SSH itself, or because the server is connected to both the inner and outer network. Any ideas how I could resolve this issue? The routes on the server are currently:

        ip route show
        default via 10.1.7.254 dev eth0 metric 100
        10.1.7.0/24 dev eth0 proto kernel scope link src 10.1.7.1
        192.168.1.0/24 dev eth1 proto kernel scope link src 192.168.1.2

    Do I have to change this? If so, how?
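
    A hedged sketch of one common fix for this shape of problem, asymmetric return routing: replies to connections that arrive on eth1 currently leave via the default route on eth0, so the forwarding router's NAT never sees them. Source-based policy routing sends those replies back out through the inner network instead. The table name and the 192.168.1.1 gateway are assumptions; substitute the inner router's actual inner-side IP.

        # Create a second routing table and route traffic sourced from the eth1 address through it
        echo "200 innernet" >> /etc/iproute2/rt_tables
        ip route add default via 192.168.1.1 dev eth1 table innernet   # assumed inner-router IP
        ip rule add from 192.168.1.2 table innernet
        ip route flush cache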

    Read the article

  • Magento 1.4 Load By Category Not Working

    - by LinuxGnut
    Hi folks. I have a Magento helper class I wrote that works wonderfully in 1.3. However, we're working on a new install of 1.4, and filtering by category won't work for some reason:

        function __construct() {
            Mage::app();
            $this->model = Mage::getModel('catalog/product');
            $this->collection = $this->model->getCollection();
            $this->collection->addAttributeToFilter('status', 1); // enabled
            $this->collection->addAttributeToSelect('*');
        }

        function filterByCategoryID($catID) {
            $this->collection->addCategoryFilter(Mage::getModel('catalog/category')->load($catID));
        }

    I can't figure out why this isn't working in 1.4. Has anyone else run into this issue?
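
    A hedged alternative sketch, not a confirmed 1.4 fix: build the collection from the category itself rather than filtering an existing product collection (and check that the catalog indexes have been rebuilt on the new install, since 1.4 leans on them):

        // Assumes $catID is a valid, indexed category ID
        $category = Mage::getModel('catalog/category')->load($catID);
        $collection = $category->getProductCollection()
            ->addAttributeToSelect('*')
            ->addAttributeToFilter('status', 1); // enabled products only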

    Read the article

  • Service Broker not working after database restore

    - by roryok
    I have a working Service Broker set up on a server. We're in the process of moving to a new server, but I can't seem to get Service Broker set up on the new box. I have done the obvious (to me) things, like enabling the broker on the database; dropping the route, services, contract, queues and even the message type and re-adding them; and setting the queues' STATUS to ON via ALTER QUEUE. SELECT * FROM sys.service_queues gives me a list of the queues, including my own two, which show activation_enabled, receive_enabled, etc. Needless to say, the queues aren't working: when I drop messages into them, nothing goes in and nothing comes out. Any ideas? I'm sure there's something really obvious I've missed...
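
    One hedged possibility that commonly bites after a backup/restore move (a sketch; the database name is a placeholder): the restored database keeps the old service_broker_guid but comes up with the broker disabled, and enabling it cleanly may require issuing a new broker identity. Note that NEW_BROKER ends any existing conversations.

        -- Check whether the broker is actually enabled and what GUID the database carries
        SELECT name, is_broker_enabled, service_broker_guid FROM sys.databases;
        -- Give the restored database a fresh broker identity and enable it
        ALTER DATABASE [MyRestoredDb] SET NEW_BROKER WITH ROLLBACK IMMEDIATE;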

    Read the article

  • DisplayPort to DVI not working on Quadro FX 580

    - by kaosvid
    I have a PNY NVIDIA Quadro FX 580 graphics card with one DVI port and two DisplayPorts. The DVI port works fine with both of my ViewSonic monitors, but I cannot get either of the DisplayPorts to work using the supplied DP-to-DVI adapter; all I get is a "no signal" on either monitor, whichever DP port it is connected to. The NVIDIA Control Panel shows that the second monitor is not connected when in fact it is. How do I get the second monitor to work? System: Windows XP Professional 32-bit, Asus P5Q motherboard, Core 2 Duo E8500 CPU, 4GB PC8500 RAM.

    Read the article

  • SSL totally stopped working in Windows

    - by Dims
    Apparently, on my notebook, I have suddenly lost the ability to use any network connection that involves SSL and/or data encryption provided by MS: 1) remote desktop connections fail with "Because of an error in data encryption, this session will end"; 2) I can't browse HTTPS pages (TLS error); 3) I can't communicate over Wi-Fi, while wired is OK. Is there any possible single, central reason for all of these problems in Windows? Third-party applications, like PuTTY, work fine. Is it possible to reset or repair the certificate store or something similar in Windows?
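
    A hedged place to start looking for one central cause (a sketch, not a diagnosis): RDP encryption, HTTPS in the browser and, if the wireless network uses PEAP/EAP-TLS, the Wi-Fi authentication all go through the Windows Schannel provider, whereas PuTTY ships its own SSL code, which would fit these symptoms. Dumping the Schannel configuration shows whether protocols or ciphers have been disabled there:

        rem Inspect the Schannel protocol/cipher configuration for anything disabled
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL" /s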

    Read the article

  • Need assistance with ASUS K55V USB ports not working

    - by Pascal Schilling
    I have an ASUS K55V laptop running Windows 7 Home (64-bit). Whenever I try to use a mouse (or any other USB device), it doesn't work because of a driver problem. The automatic update tries to update the driver and requires me to reboot right afterwards, but it doesn't work; Windows Update then wants to re-attempt the update, and I have to reboot again, and so on. After 10 tries it still doesn't work. Can someone help me out with this? No USB ports work at the moment; the laptop has two USB 3.0 ports and one USB 2.0 port.

    Read the article
