Search Results

Search found 34513 results on 1381 pages for 'end task'.


  • Lion server profile manager, device enrollment doesn't work

    - by user964406
    I am in the process of setting up Lion Server's Profile Manager to manage iPads on our local school network. I don't need to manage them while they are outside the network, and I have successfully had it working on my personal network. The school network is behind a proxy which we have no control over. I can get the iPads to view the mydevices page and install a trust cert, and I have managed to get an iPad to successfully install the remote management profile. After this, Profile Manager bugs out: it will list the active task of 'new device (sending)' but is unable to complete the task. If I click on the device in Profile Manager and try any of the actions, they all fail to complete. I am using the auto-generated certificates, and this works if I bring the server and iPad outside of the school network. Shortly after device enrollment the system log on the Lion server reports the following (I replaced the actual IP address with INTERNALIP):

        Jun 4 08:40:53 mini sandboxd[760] ([778]): applepushservice(778) deny network-outbound INTERNALIP:8080
        Jun 4 08:40:53 mini sandboxd[760] ([778]): applepushservice(778) deny network-outbound INTERNALIP:8080
        Jun 4 08:40:53 mini applepushserviced[778]: Got connection error Error Domain=NSPOSIXErrorDomain Code=1 "The operation couldn\u2019t be completed. Operation not permitted" UserInfo=0x7fa483b1a340 {NSErrorFailingURLStringKey=https://albert.apple.com/WebObjects/ALUnbrick.woa/wa/deviceActivation?device=Mac OS, NSErrorFailingURLKey=https://albert.apple.com/WebObjects/ALUnbrick.woa/wa/deviceActivation?device=Mac OS}
        Jun 4 08:40:53 mini applepushserviced[778]: Failed to get client cert on attempt 2, will retry in 15 seconds

    Does anyone have any ideas on how to get past this stage? Thanks in advance.
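
    The log shows applepushserviced being denied an outbound connection (apparently via the proxy on port 8080) and then failing to reach albert.apple.com, so a quick reachability test from the server can show which Apple endpoints the school network actually allows. A minimal sketch; the host/port list is an assumption based on the host named in the log plus Apple's published push ports, not something taken from the question itself:

        import socket

        # Hypothetical endpoints to probe: albert.apple.com appears in the log above;
        # 2195/2196 are Apple's documented APNs provider ports and 5223 the client port.
        ENDPOINTS = [
            ("albert.apple.com", 443),
            ("gateway.push.apple.com", 2195),
            ("feedback.push.apple.com", 2196),
            ("courier.push.apple.com", 5223),
        ]

        def check(host, port, timeout=5):
            """Return a short status string for a direct TCP connection attempt."""
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return "reachable"
            except OSError as exc:
                return f"blocked? ({exc})"

        for host, port in ENDPOINTS:
            print(f"{host}:{port} -> {check(host, port)}")

    If these fail from inside the school network but succeed outside it, the upstream firewall/proxy is blocking the push traffic that Profile Manager depends on.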

  • Ghost Solution Suite: Booting over PXE to WinPE for re-imaging, then booting to installed OS

    - by uberdanzik
    I have 40 networked computers that need to be re-imaged each night over a network via an automatic and unattended process. This is to reset the computers to a default state, as well as update the computers to the latest software loads. I'm using Symantec Ghost Solution Suite 2.5. My process so far is the following:

    1. Client begins in a powered-down, WakeOnLan-accepting state.
    2. Ghost Console task uses WakeOnLan and PXE to boot the client into the WinPE environment.
    3. The client connects to the Ghost Console and reimages itself successfully.
    4. The client closes WinPE and restarts.

    PROBLEM: The client boots into the WinPE environment again, instead of the newly installed OS (Win7). I need it to boot into Win7 once so that I can run a few configuration batch files, then shut down into the WakeOnLan state again. Ghost Console even reports an error on the process, that it never rebooted into the OS. Right now it seems that there is not an option to stop it from booting into the PXE server's WinPE image after re-imaging. Even if I set up a PXE boot menu with other boot options, it's pointless, because it will always boot the default option. I would expect the Ghost Console task to be able to influence the PXE boot choice somehow. What do they expect us to do, turn the PXE server on and off manually? How can I get the client to boot to the OS after re-imaging? Thank you.

  • Puppet: array in parameterized classes VS using resources

    - by Luke404
    I have some use cases where I want to define multiple similar resources that should end up in a single file (via a template). As an example I'm trying to write a puppet module that will let me manage the mapping between MAC addresses and network interface names (writing udev's persistent-net-rules file from puppet), but there are also many other similar use cases. I searched around and found that it could be done with the new parameterised classes syntax: if implemented that way it should end up being used like this:

        node { "myserver.example.com":
          class { "network::iftab":
            interfaces => {
              "eth0" => { "mac" => "ab:cd:ef:98:76:54" }
              "eth1" => { "mac" => "98:76:de:ad:be:ef" }
            }
          }
        }

    Not too bad, I agree, but it would rapidly explode when you manage more complex stuff (think network configurations like in this module, or any other multiple-complex-resources-in-a-single-config-file stuff). In a similar question on SF someone suggested using Pienaar's puppet-concat module, but I doubt it could get any better than parameterised classes. What would be really cool and clean in the configuration definition would be something like the included host type: its usage is simple, pretty and clean, and naturally maps to multiple resources that will end up being configured in a single place. Transposed to my example it would be like:

        node { "myserver.example.com":
          interface {
            "eth0":
              "mac" => "ab:cd:ef:98:76:54",
              "foo" => "bar",
              "asd" => "lol",
            "eth1":
              "mac" => "98:76:de:ad:be:ef",
              "foo" => "rab",
              "asd" => "olo",
          }
        }

    ...that looks much better to my eyes, even with three options for each resource. Should I really be passing arrays to parameterised classes, or is there a better way to do this kind of stuff? Is there some accepted consensus in the puppet [users|developers] community? By the way, I'm referring to the latest stable release of the 2.7 branch and I am not interested in compatibility with older versions.

  • Kate gives out debug messages on the console from which it is started

    - by Elan
    I am new to Linux. I use Ubuntu 11.04. Whenever I open a file with Kate from the command line, with 'kate &' (or without the ampersand), Kate starts out giving messages on the console. It continuously gives them out as I save a file or close one. They look like debug messages to me (sample below). I have used Synaptic package manager to install Kate. Uninstalling and installing the dev version did not make any change. Soon my console becomes cluttered. Is there a way to suppress these messages? There was nothing explicit in Kate's settings either. Thank you. The messages look like:

        kate(13412)/kate-filetree KateFileTreeModel::handleInsert: BEGIN!
        kate(13412)/kate-filetree KateFileTreeModel::handleInsert: creating a new root
        kate(13412)/kate-filetree ProxyItem::ProxyItem: ProxyItem(0x1796840,0x0,-1,QObject(0x0) ....
        kate(13435)/kate-filetree KateFileTreeModel::documentActivated: adding viewHistory ProxyItem(0x1eb7cf0,0x1eb6830,0,KateDocument(0x1d93ea0) , "Untitled" )
        kate(13435)/kate-filetree KateFileTreeModel::updateBackgrounds: BEGIN!
        kate(13435)/kate-filetree KateFileTreeModel::updateBackgrounds: END!
        kate(13435)/kate-filetree KateFileTreeModel::documentActivated: END!
        kate(13435)/kate-filetree KateFileTreePluginView::viewChanged: END!
        X Error: BadWindow (invalid Window parameter) 3 Major opcode: 20 (X_GetProperty) Resource id: 0x5601b42
        X Error: BadWindow (invalid Window parameter) 3 Major opcode: 20 (X_GetProperty) Resource id: 0x5601b42

  • Domain Environment + Certificate Authority + Server 2008 R2

    - by user1110302
    I have recently been delegated the task of setting up a CA in our domain environment and have a question on why Microsoft does some things the way it does. I have been trying to read up on what the best practices are for going about this task, and have decided that in an ideal CA environment you should have one "offline" Root CA, and then two subordinate CAs for redundancy/issuing the certs. That is all good, I understand how this works and why, but in messing with a sandbox I have set up, the way you go about adding certificate authorities to a domain environment seems extremely trivial and against all of their best practices…

    Does anyone know what the purpose is of an Enterprise Root CA that is integrated into Active Directory? From what I have read, once you set up an Enterprise Root CA that is integrated into Active Directory, it stays with Active Directory for the long haul and must not be turned off/renamed/touched under any circumstances. If this is true, that seems to go against the practice of setting up a standalone root CA, adding the subordinates, and then taking the root offline. Thanks for any feedback you may have to offer!

  • Firefox and Chrome keep forcing HTTPS on Rails app using nginx/Passenger

    - by Steve
    I've got a really weird problem here where every time I try to browse my Rails app in non-SSL mode, Chrome (v16) and Firefox (v7) keep forcing my website to be served in HTTPS. My Rails application is deployed on an Ubuntu VPS using Capistrano, nginx, Passenger and a wildcard SSL certificate. I have set these parameters for port 80 in the nginx.conf:

        passenger_set_cgi_param HTTP_X_FORWARDED_PROTO http;
        passenger_set_cgi_param HTTPS off;

    The long version of my nginx.conf can be found here: https://gist.github.com/2eab42666c609b015bff

    The ssl-redirect.include file contains:

        rewrite ^/sign_up https://$host$request_uri? permanent ;
        rewrite ^/login https://$host$request_uri? permanent ;
        rewrite ^/settings/password https://$host$request_uri? permanent ;

    It is there to make sure those three pages use HTTPS when coming from a non-SSL request. My production.rb file contains this line:

        # Enable HTTP and HTTPS in parallel
        config.middleware.insert_before Rack::Lock, Rack::SSL, :exclude => proc { |env| env['HTTPS'] != 'on' }

    I have tried redirecting to HTTP via nginx rewrites, Ruby on Rails redirects, and also Rails view URLs using the HTTP protocol. My application.rb file contains this method, used in a before_filter hook:

        def force_http
          if Rails.env.production?
            if request.ssl?
              redirect_to :protocol => 'http', :status => :moved_permanently
            end
          end
        end

    Every time I try to redirect to HTTP non-SSL, the browser attempts to redirect it back to HTTPS, causing an infinite redirect loop. Safari, however, works just fine. Even when I've disabled serving SSL in nginx, the browsers still try to connect to the site using HTTPS. I should also mention that when I pushed my app on to Heroku, the Rails redirect worked just fine for all browsers. The reason why I want to use non-SSL is that my homepage contains non-secure dynamic embedded objects and a non-secure CDN, and I want to prevent security warnings. I don't know what is causing the browser to keep forcing HTTPS requests.
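
    Chrome and Firefox both honour HTTP Strict Transport Security (HSTS): once a site has sent a Strict-Transport-Security header over HTTPS, the browser rewrites plain-HTTP requests to HTTPS until the policy expires, which matches the symptoms described (Rack::SSL is commonly reported to send that header by default). A minimal sketch to check what the site is currently sending; the hostname is a placeholder:

        import http.client

        HOST = "www.example.com"  # placeholder: replace with the real site hostname

        conn = http.client.HTTPSConnection(HOST, timeout=10)
        conn.request("GET", "/")
        resp = conn.getresponse()
        print("Status:", resp.status, resp.reason)
        # An HSTS header here is what tells Chrome/Firefox to keep forcing HTTPS later.
        print("Strict-Transport-Security:", resp.getheader("Strict-Transport-Security"))
        print("Location:", resp.getheader("Location"))
        conn.close()

    If the header is present, the browsers will keep upgrading requests until it stops being sent and the cached policy expires (or is cleared manually); Safari of that era did not implement HSTS, which would also be consistent with it behaving differently here.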

  • How to direct reverse proxy requests using wildcard vhosts

    - by HonoredMule
    I'm interested in running a reverse proxy with 2-3 virtual machines behind it. Each internal server will run multiple virtual hosts, and rather than manually configuring each individual vhost on the proxy (a variety of vhosts come and go too often for this to be practical), I would like to use something which can employ pattern matching in a sequential order to find the appropriate back-end server. For example:

        Server 1: *.dev.mysite.com
        Server 2: *.stage.mysite.com
        Server 3: *.mysite.com, dev.mysite.com, stage.mysite.com, mysite.com
        Server 4: *

    In the above configuration, task.dev.mysite.com would go to Server 1, dev.mysite.com would go to Server 3, yoursite.stage.mysite.com to Server 2, www.mysite.com to Server 3, and yoursite.com to Server 4. I've looked into using Squid, Varnish, and nginx so far. I have my opinions regarding their respective desirability and general suitability, but it's not readily apparent if any of them can handle dynamic server selection in this manner and not require per-vhost configuration. Apache on the other hand can do this handily and simply, but otherwise (aside from being well-known and familiar) seems very poorly suited to the partly-performance-serving task. Performance isn't actually a major concern yet, but it seems foolish to use Apache if another system will perform far better and can also handle the desired 'hands-free' configuration. But so is frequently having to adjust the gateway for all production services and risk network-wide outage...and so also is setting oneself up for longer downtime later if Apache becomes a too-small bottleneck. Which of these (or other) reverse proxies can do it/would do it best? And maybe I should post this as a separate question, but if Apache is the only practical option, how safe/reliable/predictable is apache-mpm-event in apache2.2 (Ubuntu 12.04.1), particularly for a dedicated reverse proxy? As I understand it the Event MPM was declared "safe" as of 2.4, but it's unclear whether reaching stability in 2.4 has any implications for the older (2.2) versions available in official/stable package channels of various distros.
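
    For reference, the first-match-wins wildcard lookup described above is simple to express on its own; a minimal sketch of the intended matching logic (patterns and backend names are taken from the example, and the actual routing would still have to live in whichever proxy is chosen):

        from fnmatch import fnmatch

        # Ordered (pattern, backend) pairs: first match wins, mirroring the example above.
        ROUTES = [
            ("*.dev.mysite.com", "Server 1"),
            ("*.stage.mysite.com", "Server 2"),
            ("*.mysite.com", "Server 3"),
            ("dev.mysite.com", "Server 3"),
            ("stage.mysite.com", "Server 3"),
            ("mysite.com", "Server 3"),
            ("*", "Server 4"),
        ]

        def backend_for(host):
            """Return the first backend whose pattern matches the requested host."""
            for pattern, backend in ROUTES:
                if fnmatch(host.lower(), pattern):
                    return backend
            return None

        for host in ["task.dev.mysite.com", "dev.mysite.com", "yoursite.stage.mysite.com",
                     "www.mysite.com", "yoursite.com"]:
            print(host, "->", backend_for(host))

    Running it reproduces the mapping given in the question, which is a useful check on whatever proxy configuration ends up implementing the same ordering.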

  • multiple streaming servers behind a Bastion Host

    - by Bond
    I am using the open source streaming server Red5 on multiple servers, which are running behind a bastion host. The world knows these sites as:

        http://site1.mydomain.com
        http://site2.mydomain.com
        http://site3.mydomain.com
        http://site4.mydomain.com

    To reach them, the front-end server is using an Apache reverse proxy. I also have video streaming on each of these websites using RTMP. To be able to reach the streaming server I embed JavaScript in the HTML pages as follows:

        <embed ..... var="rtmp://site1.my_domain.com" >

    The problem is that there are many sites (site1.mydomain.com, site2.mydomain.com, site3.mydomain.com, site4.mydomain.com), each on a separate physical server. Each of these four has its own Red5 installation, and the front end to all of them is the common bastion host. If I run RTMP on each of the subdomains at a different port, how will I make sure a request such as rtmp://site1.mydomain.com or rtmp://site2.mydomain.com goes to its respective server from the front-end server? What do I need to handle in this case? IPTABLES came to mind instantly, but when someone on the internet requests rtmp://site1.mydomain.com from their browser, how will I make sure this RTMP request is mapped to a port different than 1935, as there are three other streaming servers which also have to respond to their respective requests?

  • Is "DSLAM congestion" a legitimate reason for slow DSL?

    - by Jay Bazuzi
    My DSL has been extremely slow in the evenings recently. To test it, I telnet to my DSL modem and ping the gateway. This way I eliminate internet congestion and local network issues. In the mornings I get 30ms - 50ms pings. In the evenings, it bounces around a lot, but 10000ms pings are common. I complained to Qwest support, and they said it was a known issue on their end, their engineers were working on it, and wouldn't say anything else. A couple days later I complained again, and they sent out a technician. He tested my house wiring and found that one of the wires had a short. It was an unused line, so we disconnected it, and he said things looked better and left. My daytime speeds improved at this point, but evening is still bad. I complained to Qwest support again, and they said it was a problem with DSLAM congestion at their end, and that they were working on it, but no ETA. My neighbor has Qwest DSL and doesn't seem to have these problems. That seems strange. I go use her network when I absolutely must get online and mine is behaving badly. I can't tell if they're yanking my chain or not. Regardless, these speeds are crap. I'm paying for 7Mbps but am lucky if I get 1/10th that in the evenings. My kids like to watch Netflix streaming movies, and it's just impossible after 5pm or so. Should I wait it out? Will complaining again produce any results? Should I change my subscription to a lower speed until they fix their end? Or switch to cable?
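
    Since the complaint rests on the morning-versus-evening difference, a timestamped latency log makes the case much harder for the ISP to dismiss. A minimal sketch that pings the gateway once a minute and appends the result to a CSV file; the gateway address and the Linux-style ping flags are assumptions to adjust for the actual setup:

        import csv
        import re
        import subprocess
        import time
        from datetime import datetime

        GATEWAY = "192.168.0.1"  # assumption: replace with the DSL gateway address

        def ping_once(host):
            """Return the round-trip time in ms, or None if the ping failed or timed out."""
            out = subprocess.run(["ping", "-c", "1", "-W", "15", host],
                                 capture_output=True, text=True)
            match = re.search(r"time=([\d.]+) ms", out.stdout)
            return float(match.group(1)) if match else None

        with open("gateway_latency.csv", "a", newline="") as f:
            writer = csv.writer(f)
            while True:
                writer.writerow([datetime.now().isoformat(timespec="seconds"),
                                 ping_once(GATEWAY)])
                f.flush()
                time.sleep(60)

    A week of this data, plotted by hour of day, documents the "known issue on their end" far better than anecdotal evening tests.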

  • Why is Windows Update trying to install an update I don't need?

    - by Oliver Salzburg
    I have a Windows 7 system that currently has a single update pending: Windows Internet Explorer 9 for Windows 7 for x64-based Systems If I try to install the update, Windows Update will: Create a restore point Fail with the error: Code 9C48 Windows Update encountered an error. The event log for the event reads: Installation Failure: Windows failed to install the following update with error 0x80070643: Windows Internet Explorer 9 for Windows 7 for x64-based Systems. If you search the web for that error, there are many other people with the exact same issue. Sadly, I am unable to apply the proposed solutions to my case, because I just installed this system. There is nothing on it, except Windows 7. I installed the system and ran through the updates. I also did the exact same process with this machine several times over the past few days due to a long-term test we just started. I didn't have any problems with any Windows Update on the previous installation runs and I know I didn't do anything different this time because I followed the installation procedures instructions which are to be used during the test. How did this happen and how do I solve it? Further Investigation So, as I always like to do, I ran the update again while running Process Monitor and dug up further details. WindowsUpdate.log First of all, there is a Windows Update log file located at C:\Windows\WindowsUpdate.log which I didn't know about. But I fail to see any significant entry in it, maybe you're more lucky: 2012-04-10 22:46:58:017 956 728 AU AU received approval from Ux for 1 updates 2012-04-10 22:46:58:017 956 728 AU AU setting pending client directive to 'Progress Ux' 2012-04-10 22:46:58:095 956 728 AU BeginInteractiveInstall invoked for Download 2012-04-10 22:46:58:095 956 728 AU Auto-approving update for download, updateId = {B33ACEC1-3265-4D01-9C37-AC0892E95ED9}.100, ForUx=1, IsOwnerUx=1, HasDeadline=0, IsMinor=0 2012-04-10 22:46:58:095 956 728 AU Auto-approved 1 update(s) for download (for Ux) 2012-04-10 22:46:58:110 956 728 AU UpdateDownloadProperties: 0 download(s) are still in progress. 
2012-04-10 22:46:58:110 956 728 AU ############# 2012-04-10 22:46:58:110 956 728 AU ## START ## AU: Download updates 2012-04-10 22:46:58:110 956 728 AU ######### 2012-04-10 22:46:58:110 956 728 AU # Approved updates = 1 2012-04-10 22:46:58:110 956 728 AU AU initiated download, updateId = {B33ACEC1-3265-4D01-9C37-AC0892E95ED9}.100, callId = {35DF928B-B428-4BAC-8C63-55295967EFBB} 2012-04-10 22:46:58:110 956 728 AU Setting AU scheduled install time to 2012-04-11 01:00:00 2012-04-10 22:46:58:110 956 728 AU Successfully wrote event for AU health state:0 2012-04-10 22:46:58:110 956 728 AU Currently showing Progress UX client - so not launching any other client 2012-04-10 22:46:58:110 956 bb8 DnldMgr ************* 2012-04-10 22:46:58:110 956 bb8 DnldMgr ** START ** DnldMgr: Downloading updates [CallerId = AutomaticUpdatesWuApp] 2012-04-10 22:46:58:110 956 bb8 DnldMgr ********* 2012-04-10 22:46:58:110 956 bb8 DnldMgr * Call ID = {35DF928B-B428-4BAC-8C63-55295967EFBB} 2012-04-10 22:46:58:110 956 bb8 DnldMgr * Priority = 3, Interactive = 1, Owner is system = 0, Explicit proxy = 0, Proxy session id = 1, ServiceId = {9482F4B4-E343-43B6-B170-9A65BC822C77} 2012-04-10 22:46:58:110 956 bb8 DnldMgr * Updates to download = 1 2012-04-10 22:46:58:110 956 bb8 Agent * Title = Windows Internet Explorer 9 for Windows 7 for x64-based Systems 2012-04-10 22:46:58:110 956 bb8 Agent * UpdateId = {B33ACEC1-3265-4D01-9C37-AC0892E95ED9}.100 2012-04-10 22:46:58:110 956 bb8 Agent * Bundles 1 updates: 2012-04-10 22:46:58:110 956 bb8 Agent * {6D9A90B7-FAF9-4A47-9EFE-A506264873B3}.100 2012-04-10 22:46:58:110 956 bb8 DnldMgr *********** DnldMgr: New download job [UpdateId = {6D9A90B7-FAF9-4A47-9EFE-A506264873B3}.100] *********** 2012-04-10 22:46:58:110 956 728 AU Successfully wrote event for AU health state:0 2012-04-10 22:46:58:110 956 728 AU # Pending download calls = 1 2012-04-10 22:46:58:110 956 728 AU ## RESUMED ## AU: Download update [UpdateId = {B33ACEC1-3265-4D01-9C37-AC0892E95ED9}, succeeded] 2012-04-10 22:46:58:313 956 bb8 Agent ** END ** Agent: Downloading updates [CallerId = AutomaticUpdatesWuApp] 2012-04-10 22:46:58:313 956 bb8 Agent ************* 2012-04-10 22:46:58:313 956 718 AU ######### 2012-04-10 22:46:58:313 956 718 AU ## END ## AU: Download updates 2012-04-10 22:46:58:313 956 718 AU ############# 2012-04-10 22:46:58:313 956 718 AU Setting AU scheduled install time to 2012-04-11 01:00:00 2012-04-10 22:46:58:313 956 718 AU Successfully wrote event for AU health state:0 2012-04-10 22:46:58:313 956 718 AU Currently showing Progress UX client - so not launching any other client 2012-04-10 22:46:58:313 956 718 AU Successfully wrote event for AU health state:0 2012-04-10 22:46:58:313 956 aac AU Getting featured update notifications. fIncludeDismissed = true 2012-04-10 22:46:58:313 956 aac AU No featured updates available. 
2012-04-10 22:47:00:107 956 aac AU BeginInteractiveInstall invoked for Install 2012-04-10 22:47:00:107 956 aac AU Auto-approving update for install, updateId = {B33ACEC1-3265-4D01-9C37-AC0892E95ED9}.100, ForUx=1, IsOwnerUx=1, HasDeadline=0, IsMinor=0 2012-04-10 22:47:00:107 956 aac AU Auto-approved 1 update(s) for install (for Ux), installType=1 2012-04-10 22:47:00:107 956 aac AU ############# 2012-04-10 22:47:00:107 956 aac AU ## START ## AU: Install updates 2012-04-10 22:47:00:107 956 aac AU ######### 2012-04-10 22:47:00:107 956 aac AU # Initiating manual install 2012-04-10 22:47:00:107 956 aac AU # Approved updates = 1 2012-04-10 22:47:00:107 956 aac AU ## RESUMED ## AU: Installing update [UpdateId = {B33ACEC1-3265-4D01-9C37-AC0892E95ED9}] 2012-04-10 22:47:13:773 2232 9fc Handler : WARNING: Exit code = 0x8024200B 2012-04-10 22:47:13:773 956 718 AU # WARNING: Install failed, error = 0x80070643 / 0x00009C48 2012-04-10 22:47:13:773 2232 9fc Handler ::::::::: 2012-04-10 22:47:13:773 2232 9fc Handler :: END :: Handler: Command Line Install 2012-04-10 22:47:13:773 2232 9fc Handler ::::::::::::: 2012-04-10 22:47:13:851 956 a7c Agent ********* 2012-04-10 22:47:13:851 956 a7c Agent ** END ** Agent: Installing updates [CallerId = AutomaticUpdates] 2012-04-10 22:47:13:851 956 718 AU Install call completed. 2012-04-10 22:47:13:851 956 a7c Agent ************* 2012-04-10 22:47:13:851 956 718 AU # WARNING: Install call completed, reboot required = No, error = 0x00000000 2012-04-10 22:47:13:851 956 718 AU ######### 2012-04-10 22:47:13:851 956 718 AU ## END ## AU: Installing updates [CallId = {FCFF2A5C-25AB-4FB9-AB2B-35C65CCA6A9F}] 2012-04-10 22:47:13:851 956 718 AU ############# 2012-04-10 22:47:13:851 956 718 AU Install complete for all calls, reboot NOT needed 2012-04-10 22:47:13:851 956 718 AU Setting AU scheduled install time to 2012-04-11 01:00:00 2012-04-10 22:47:13:851 956 718 AU Successfully wrote event for AU health state:0 2012-04-10 22:47:13:851 956 498 AU Getting featured update notifications. fIncludeDismissed = true 2012-04-10 22:47:13:851 956 498 AU No featured updates available. 2012-04-10 22:47:14:366 956 168 AU No featured updates notifications to show 2012-04-10 22:47:14:366 956 168 AU UpdateDownloadProperties: 0 download(s) are still in progress. 
2012-04-10 22:47:14:366 956 168 AU Triggering Offline detection (non-interactive) 2012-04-10 22:47:14:366 956 168 AU AU setting pending client directive to 'Install Complete Ux' 2012-04-10 22:47:14:366 956 168 AU Changing existing AU client directive from 'Progress Ux' to 'Install Complete Ux', session id = 0x1 2012-04-10 22:47:14:366 956 168 AU Successfully wrote event for AU health state:0 2012-04-10 22:47:14:366 956 b78 AU ############# 2012-04-10 22:47:14:366 956 b78 AU ## START ## AU: Search for updates 2012-04-10 22:47:14:366 956 b78 AU ######### 2012-04-10 22:47:14:366 956 b78 AU ## RESUMED ## AU: Search for updates [CallId = {0198DD3A-D7B0-48F5-A77D-795F8A1BDCE8}] 2012-04-10 22:47:16:097 956 718 AU # 1 updates detected 2012-04-10 22:47:16:097 956 718 AU ######### 2012-04-10 22:47:16:097 956 718 AU ## END ## AU: Search for updates [CallId = {0198DD3A-D7B0-48F5-A77D-795F8A1BDCE8}] 2012-04-10 22:47:16:097 956 718 AU ############# 2012-04-10 22:47:16:097 956 718 AU No featured updates notifications to show 2012-04-10 22:47:16:097 956 718 AU Setting AU scheduled install time to 2012-04-11 01:00:00 2012-04-10 22:47:16:097 956 718 AU Successfully wrote event for AU health state:0 2012-04-10 22:47:16:097 956 718 AU Successfully wrote event for AU health state:0 2012-04-10 22:47:16:113 956 55c AU Getting featured update notifications. fIncludeDismissed = true 2012-04-10 22:47:16:113 956 55c AU No featured updates available. 2012-04-10 22:47:18:780 956 bb8 Report REPORT EVENT: {27479C66-E930-4F9C-AFF2-27EDD76DED8F} 2012-04-10 22:47:13:773+0200 1 182 101 {B33ACEC1-3265-4D01-9C37-AC0892E95ED9} 100 80070643 AutomaticUpdates Failure Content Install Installation Failure: Windows failed to install the following update with error 0x80070643: Windows Internet Explorer 9 for Windows 7 for x64-based Systems. 2012-04-10 22:47:18:780 956 bb8 Report CWERReporter::HandleEvents - WER report upload completed with status 0x8 2012-04-10 22:47:18:780 956 bb8 Report WER Report sent: 7.5.7601.17514 0x80070643 B33ACEC1-3265-4D01-9C37-AC0892E95ED9 Install 101 Unmanaged 2012-04-10 22:47:18:780 956 bb8 Report CWERReporter finishing event handling. (00000000) WU-IE9-Windows7-x64.exe The actual update that is executed is downloaded and stored at the following location: C:\Windows\SoftwareDistribution\Download\Install\WU-IE9-Windows7-x64.exe Executing that file manually, results in the following error message: IE9_main.log The IE9 installer/updater also creates an own log file located at C:\Windows\IE9_main.log For the update session in question, the installer logged: 00:00.000: ==================================================================== 00:00.016: Started: 2012/04/10 (Y/M/D) 23:10:53.897 (local) 00:00.032: Time Format in this log: MM:ss.mmm (minutes:seconds.milliseconds) 00:00.063: Command line: "C:\Windows\SoftwareDistribution\Download\Install\WU-IE9-Windows7-x64.exe" 00:00.078: INFO: Setup installer for Internet Explorer: 9.0.8112.16421 00:00.094: INFO: Previous version of Internet Explorer: 9.0.8112.16443 00:00.110: INFO: Checking if iexplore.exe's current version is between 9.0.6001.0... 00:00.125: INFO: ...and 9.1.0.0... 00:00.141: INFO: Maximum version on which to run IEAK branding is: 9.1.0.0... 00:00.156: ERROR: A newer version of Internet Explorer is already installed on the system. 00:00.188: ERROR: Internet Explorer version check failed. 01:03.789: INFO: Setup exit code: 0x00009C48 (40008) - A more recent version of Internet Explorer is installed. 
01:03.820: INFO: Scheduling upload to IE SQM server: http://sqm.microsoft.com/sqm/ie/sqmserver.dll 01:03.852: INFO: SQM Upload returned 403 01:03.867: INFO: Cleaning up temporary files in: C:\Windows\TEMP\IE978E.tmp 01:03.883: INFO: Unable to remove directory C:\Windows\TEMP\IE978E.tmp, marking for deletion on reboot. 01:03.898: INFO: Released Internet Explorer Installer Mutex Which pretty much confirms what the error message says when executing the update manually; it's simply already installed or even obsolete because a newer version is installed. So, why does it try to keep installing the update? Possible solutions? Uninstalling Windows Internet Explorer 9 and manually installing the cached C:\Windows\SoftwareDistribution\Download\Install\WU-IE9-Windows7-x64.exe will result in the same error after applying all pending updates. Applying the FixIt for the issue You receive “0x80070643” or “0x643” error codes when you try to install .NET Framework updates through Windows Update or Microsoft Updates will not resolve the issue. Applying the suggested solution for the issue Error message when you try to install updates by using the Windows Update or Microsoft Update Web site: "0x80070003" will not resolve the issue. Running the FixIt Automatically diagnose and fix common problems with Windows Update does report having resolved issues with Windows Update, but didn't resolve the issue. Running the FixIt for the issue How to troubleshoot Windows Update or Microsoft Update when you are repeatedly offered an update does not resolve the issue. Neither with normal nor with aggressive settings.

  • Disable write-protection on Micro SD

    - by Tim
    My task today is to open up and copy some files to 700 brand new micro SD cards. As I get going on this task, I am finding that some of the micro SD cards are telling me "sorry this drive is write protected". To copy the files I am using a standard SD to micro SD card adapter, and a USB SD card reader/writer. I have ensured that the lock switch is set to OFF on all of my adapters. As soon as I get a micro SD that tells me it is write protected, I can use the same adapter with another micro SD and it works fine, so I know the problem is not with my adapters. My question is: how can I disable the write protection on a micro SD card? This eHow article seems to indicate that there is also a physical switch on micro SD cards. However, I have personally never seen a micro SD with a physical switch, and none of the ones I am using today have said switch. Since these cards are brand new and thus empty, are the ones that are telling me they are write protected simply useless? Could this be caused by some sort of defect in the cards?

  • Explorer.EXE ordinal 423 not found in urlmon.dll after updates/IE8 install

    - by Zoot
    Setting up a brand new Dell Optiplex 980 with Windows XP SP3, and everything started up fine on the first boot. My first task was to install system updates, including IE8 and WGA. After the required reboot after installing updates, I now get this error message: Explorer.EXE Ordinal not found. The ordinal 423 could not be located in the dynamic link library urlmon.dll Per my cursory Google search, this forum thread places the blame squarely on IE8. The solution provided is to enter safe mode and remove IE8. Unfortunately, when I press F8 to choose to boot safe mode, I only have the option of "Windows XP SP3 Professional" and no safe mode options. Any other ideas? Thanks in advance. FYI, I can get to the Windows Task Manager by holding down Control-Alt-Delete, but programs don't seem to run properly if you select them. I tried chatting with Dell Support, and we tried to initiate the system restore at c:\windows\system32\restore\rstrui.exe, but that had a similar "ordinal 423 not found in urlmon.dll" error.

  • Regarding traffic shaping on juniper SRX550

    - by peilin
    We have implemented the Juniper SRX550 in our company. Now we have one issue: how to restrict internal users' download speed from the internet. Take one example: I want to restrict the end user with IP 192.168.1.20/32 to a download speed of up to 1M via my external port ge-0/0/6.0. Below is my setting:

        [edit firewall policer p1M]
        root@SRX550# show
        if-exceeding {
            bandwidth-limit 1m;
            burst-size-limit 15k;
        }
        then discard;

        [edit firewall family inet]
        root@SRX550# show filter limit-user
        term 10 {
            from {
                destination-address {
                    192.168.1.20/32;
                }
            }
            then policer p1M;
        }
        term else {
            then accept;
        }

        [edit interfaces ge-0/0/6]
        root@SRX550# show
        per-unit-scheduler;
        unit 0 {
            family inet {
                filter {
                    input limit-user;
                }
                address Hidden Here;
            }
        }

    As per the setting, the end user's download speed should not exceed 1m (125KB/s in Windows), but the result is that the download speed for this end user can still reach 400KB/s via HTTP/HTTPS. Please advise. Thanks.
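
    As a sanity check on the numbers, converting the policer's bit-per-second limit into the bytes-per-second figure a Windows download dialog reports makes the gap explicit; a small sketch of that arithmetic, using the values from the config above (the observed rate is the 400KB/s quoted in the question):

        bandwidth_limit = 1_000_000   # "bandwidth-limit 1m" -> 1 Mbit/s
        burst_size = 15_000           # "burst-size-limit 15k" -> bytes

        sustained_bytes = bandwidth_limit / 8          # bits/s -> bytes/s
        print(f"Configured sustained rate: {sustained_bytes / 1000:.0f} KB/s")  # ~125 KB/s
        print(f"Burst allowance: {burst_size / 1000:.0f} KB")

        observed = 400_000  # ~400 KB/s reported by the client
        print(f"Observed/allowed ratio: {observed / sustained_bytes:.1f}x")

    Seeing sustained throughput at roughly three times the configured cap suggests the filter is not actually matching this traffic in the direction being measured, rather than the policer merely being generous.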

  • Should one have a separate user account for work use? [closed]

    - by Tyler Wayne
    This question examines the practice of using a separate OS-level user account to divide work use from personal use (specifically, in a creative profession and on a personal computer). I recently left my in-the-flesh job to go to school, but I'm carrying on with the work remotely. I do all of my work on my laptop, and I currently have a separate user account called "Work" where I do exactly that. However, I'm now starting to question that practice. Because my hobby is the same as my job, I want to save notes of the things I learn while working. Because ideas come at any moment, I often want to throw something into my personal task manager's inbox and look at it again later. That task manager is well-suited to handle both the work and personal aspects of my life. Only my personal account has admin rights, but work sometimes requires me to install programs. My employer has no preference regarding my choice, so that is a non-issue. My work is essentially freelance web development, so advice given with that in mind will be much appreciated. Back up all opinion with some personal experience, please. Ideally, give a list of pros and cons and then name reasons for your position.

  • SQL Server Installation: Is it 32 or 64 bit?

    - by CapBBeard
    Recently I was performing an OS upgrade on one of our DB servers, moving from Server 2003 to Server 2008. The DBMS is SQL Server 2005. While reinstalling SQL on the new Windows installation, I went to another of our DB servers to verify a couple of settings. Now, I always thought this second server was Server 2003 x64 + SQL 2005 x64 (from what I'd been told), but I now have my doubts about this. I suspect that it is in fact only 32-bit SQL, and I'd like to verify this. Here are some details:

        The OS is definitely 64 bit.
        xp_msver shows Platform as NT INTEL X86
        SELECT @@VERSION shows Microsoft SQL Server 2005 - 9.00.4035.00 (Intel X86)...

    However, sqlservr.exe is not shown with '* 32' in taskmgr; does anyone know why this is the case, if it is in fact 32 bit as claimed? Despite this, it does seem to be running out of the x86 program files folder. If I do the same checks on a confirmed 64-bit installation, it gives back the expected 64-bit readings, which seems to confirm that the server in question is only running 32-bit SQL. Now, that being the case, the question arises about how much memory this '32 bit' install can use. Task Manager reports about 3.5GB memory usage for sqlservr.exe (the server has 16GB physical). I suspect that AWE has not been configured at all, and therefore the server will be significantly under-utilised (remembering that the OS is 64 bit) if SQL is simply using a 32-bit address space. Is this assumption correct? I feel the server should have SQL reinstalled as 64 bit in order to fully utilise the hardware platform, however it is currently heavily in production; this will be no easy task. I suspect we may just have to configure AWE correctly and let it be for the time being (unless this is a bad idea?). I apologise that this question is a little vague/lost; I'm no SQL expert, just trying to get a handle on what's going on here.
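
    A quick way to settle the bitness question from the SQL side is to ask the instance itself; a minimal sketch using the pyodbc library (the connection string is a placeholder, and the X86/X64 markers in @@VERSION are the same ones the question already relies on):

        import pyodbc

        # Placeholder connection string: point it at the instance in question.
        CONN_STR = ("DRIVER={SQL Server};SERVER=myserver;DATABASE=master;"
                    "Trusted_Connection=yes")

        with pyodbc.connect(CONN_STR) as conn:
            cur = conn.cursor()
            version = cur.execute("SELECT @@VERSION").fetchone()[0]
            edition = cur.execute("SELECT SERVERPROPERTY('Edition')").fetchone()[0]
            print(edition)
            print(version)
            if "X64" in version or "64-bit" in version:
                print("=> 64-bit SQL Server")
            elif "X86" in version:
                print("=> 32-bit SQL Server")

    The edition string on 64-bit builds typically carries a "(64-bit)" suffix as well, so the two checks should agree with the xp_msver output already gathered.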

  • Migrating 10-15 Websites Running Linux, LAMP, RoR, WordPress

    - by Michael
    The task is to move 10-15 websites running Linux to new servers hosted by Amazon. These boxes are currently on dedicated servers. Some sites are running WordPress, some have a custom CMS, and others might have RoR applications. Unfortunately, there is sparse documentation regarding each site and how services/files depend on each other, which means there is a lot of detective work that needs to take place. My goal is to properly document each site, what makes it work, etc., so future admins have at least something to work with. Currently my strategy is to download each site so I have a backup of the files, then scan through them looking for configuration files -- db connections, apache configs, etc. -- then create a nice spreadsheet with these findings and migrate these out to the new server. My question to ServerFault is this: are there things you would look out for? Easier ways to handle this task that I'm missing? Points will be awarded to answers that help with efficiency. Thanks in advance.
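
    For the detective-work step, a small script that sweeps each downloaded site and flags likely configuration files can seed the spreadsheet; a minimal sketch, where the directory layout and the filename patterns are assumptions about what a WordPress/custom-CMS/RoR mix typically contains:

        import csv
        import fnmatch
        import os

        # Hypothetical patterns for config files commonly found in WordPress, RoR and LAMP sites.
        CONFIG_PATTERNS = ["wp-config.php", "database.yml", "settings.php", "config.php",
                           ".htaccess", "*.conf", "Gemfile", "composer.json"]

        def find_configs(site_root):
            """Yield paths (relative to the site root) matching any known config pattern."""
            for dirpath, _dirnames, filenames in os.walk(site_root):
                for name in filenames:
                    if any(fnmatch.fnmatch(name, pat) for pat in CONFIG_PATTERNS):
                        yield os.path.relpath(os.path.join(dirpath, name), site_root)

        backups_dir = "site-backups"  # assumed layout: one subdirectory per downloaded site
        with open("site-inventory.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["site", "config file"])
            for site in sorted(os.listdir(backups_dir)):
                root = os.path.join(backups_dir, site)
                if os.path.isdir(root):
                    for rel in find_configs(root):
                        writer.writerow([site, rel])

    The resulting CSV is a starting point for the per-site documentation; database hosts, cron jobs, and Apache vhost files on the old servers still need to be inventoried separately.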

  • how does a computer know which IP address will route information to the internet? [closed]

    - by JohnMerlino
    Possible Duplicate: How does IPv4 Subnetting Work?

    For example, I have a computer with a Network Interface Card (NIC), an Ethernet card that is connected by an Ethernet cable to a router. There is also another computer connected by a cable to another port of the router. This is a Belkin router on an Ethernet LAN. When I connect to serverfault.com, it maps to an IP address. My computer now has the task of connecting to that IP address. But my computer itself cannot connect to the serverfault IP address; only the router can. So the task of my computer is to find the IP address associated with the node that will do the routing to the public internet. How does my computer know that a particular IP address in the local network belongs to the router, and is not another computer connected to the network? Is this information configured manually in the operating system itself? Somehow my computer must know that it must send Ethernet frames to the router with the expectation that the router will then send the packet to a public IP. How does it know to send it to the router? Is the router's IP address stored in my computer like a key/value pair, e.g. "router"="192.168.2.6", so that when I request a public IP address, my computer first knows to connect to 192.168.2.6?
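
    In short, yes: the operating system keeps a routing table (filled in manually or by DHCP) whose default entry names the gateway's IP address, and the subnet mask is what tells the host whether a destination is local or must go through that gateway. A minimal sketch of that decision; the interface address and /24 mask are assumptions, while 192.168.2.6 is the gateway named in the question:

        import ipaddress

        # Assumed local configuration: interface address with a /24 mask, plus the default gateway.
        interface = ipaddress.ip_interface("192.168.2.10/24")
        default_gateway = ipaddress.ip_address("192.168.2.6")

        def next_hop(destination):
            """Decide whether a destination is delivered on-link or sent to the default gateway."""
            dest = ipaddress.ip_address(destination)
            if dest in interface.network:
                return f"{dest} is on-link: ARP for it and send the frame directly"
            return f"{dest} is off-link: ARP for {default_gateway} and send the frame to the gateway"

        print(next_hop("192.168.2.20"))   # another machine on the LAN
        print(next_hop("203.0.113.10"))   # a public address outside the local subnet

    The real lookup lives in the kernel's routing table (visible with route print or ip route), but the logic is exactly this subnet-membership test followed by a fall-through to the default gateway.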

  • changing filesystem format from xfs to ext4 without losing data

    - by A.Rashad
    I have a fresh Lucid Lynx (Ubuntu 10.04) running on a laptop, where I defined the filesystems as:

        mount point / on ext4 (46 GB)
        mount point /home on jfs (63 GB)
        swap as 3 GB

    I left the machine overnight to do some task, without AC power supply. The next day in the morning I found it on standby, the task completed, but the filesystem was not reachable: it gave me an I/O error. It seems that there is a problem with jfs and standby. Anyway, to avoid any hassle, I want to move this mount point from jfs format to ext4. Can I do this without losing data and without the need to place the data in a temporary location until the transformation is done? Sorry to mention it, but I recall back in the Windows days we would change a FAT16 to FAT32 or a FAT32 to NTFS without having to lose the data. I hope this is available on Linux.

    Update: The /home filesystem was xfs, not jfs, and it seems there is a bug with this filesystem for some reason; I had to re-install the OS twice until I ended up with ext4 for the entire /. However, as a conclusion, it seems that there is no way to make a conversion.

  • Should I use Evernote or Org-mode for taking notes?

    - by tobeannounced
    I am looking for an app that will help me manage my notes, and after coming across Org-mode, I was wondering whether Org-mode's functionality is strong enough that it can remove the need for me to use another note-taking app such as Evernote (because Org is more of a task management app). My wishes for a note-taking app are:

    - Can be accessed offline in some form, e.g. through an iPhone app or desktop client. Org-mode and Evernote can both do this, however it seems like MobileOrg is more aimed at tasks rather than notes? If this is the case, I would probably use Evernote in addition to MobileOrg.
    - I can clip web content into it easily for research. Evernote has the browser extension; how is it with Org-mode? I know I can use C-c C-l, but how suited is it really for taking notes on stuff I am browsing in Chrome/Firefox?
    - Has voice notes on the iPhone and computer too, if possible. Org-mode cannot do this on the iPhone; on the computer, could I record audio externally and then link the files in?
    - I can add notes on my iPhone and computer while not connected to the internet. Both can do this.

    The types of notes I am likely to have include: howtos/things I have learnt, documentation on my setup/stuff, research on things I may do in the future, ideas, and task-specific notes. I have thought about where I would want to access each of these notes and will post that here if you think it would help. So, is Org-mode strong enough in note-taking and the requirements I listed that I can avoid the need to use a separate tool for taking notes?

  • Exchange DiskShadow/Robocopy backup does not purge log files

    - by Robert Allan Hennigan Leahy
    I have a series of scripts set up to back up my Exchange. The following command is executed to start the process:

        diskshadow /s C:\Backup_Scripts\exchangeserverbackupscript1.dsh

    This is exchangeserverbackupscript1.dsh:

        #DiskShadow script file
        set verbose on
        #delete shadows all
        set context persistent
        writer verify {76fe1ac4-15f7-4bcd-987e-8e1acb462fb7}
        set metadata C:\Backup_Scripts\shadowmetadata.cab
        begin backup
        add volume C: alias SH1
        create
        expose %SH1% P:
        exec C:\Backup_Scripts\exchangeserverbackupscript1.cmd
        end backup
        delete shadows exposed P:
        exit
        #End of script

    And this is exchangeserverbackupscript1.cmd:

        robocopy "P:\Program Files\Microsoft\Exchange Server\Mailbox\First Storage Group" "\\leahyfs\J$\E-Mail Backups\Day 1" /MIR /R:0 /W:0 /COPY:DT /B

    This is not causing Exchange to purge its log files. The edb file is 4.7 gigabytes, but the First Storage Group folder itself is 50+ gigabytes due to many, many log files for each day going back to 2009. Is there any way -- I've Googled and haven't found anything -- to notify Exchange when I've completed a full backup, and have it purge its log files? According to this and this, end backup should cause Exchange to "flush the transaction logs for that storage group" but only "if a successful backup of a storage group occurred", which leaves my question as: What constitutes a "successful backup", and why is what I'm doing not it?

  • Dynamic ARP Entries turning into Static ARP entries

    - by Zach
    I recently acquired a client that has a strange ARP caching issue on one of their servers. I have a server that will eventually start turning its dynamic ARP entries into static ARP entries. This causes problems because when a machine that has a static ARP entry on this server receives a new IP via DHCP, the server is not able to communicate with the clients. Clearing the ARP cache resolves the issue and the server is fine for about a week, and then it slowly starts turning ARP entries into static ARP entries. I haven't narrowed down when it starts or how many it starts with, but slowly you start seeing 1 static ARP entry, then 5, and then 10. The server in question is Windows Server 2003 SP2. It is a DC, DHCP, and DNS server. I've checked the DHCP scope options and there's nothing in there that would indicate anything to do with static ARP entries. The only thing different between this DNS server and our other DNS server is that 'Dynamically update DNS A and PTR records for DHCP clients that do not request updates' is checked on the problematic server. I've done a bit of research about this and it seems that this may happen if any PXE-type services are running, but from what I can tell, there is nothing running a PXE server. I'm a bit lost, as I have never seen dynamic ARP entries turn into static ARP entries. Right now my solution is a scheduled task that runs every 24 hours to clear the ARP cache (arp -d *). I would like to not rely on this scheduled task. Has anybody seen this before or have any suggestions on how to troubleshoot this?
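
    Until the root cause is found, it may help to watch how quickly the static entries accumulate rather than flushing blindly; a minimal sketch that parses the output of Windows arp -a and reports the static count over time (it assumes the English-language output format of arp.exe):

        import subprocess
        from datetime import datetime

        def arp_entries():
            """Parse `arp -a` output into (ip, mac, type) tuples (English Windows output assumed)."""
            out = subprocess.run(["arp", "-a"], capture_output=True, text=True).stdout
            entries = []
            for line in out.splitlines():
                parts = line.split()
                if len(parts) == 3 and parts[2].lower() in ("static", "dynamic"):
                    entries.append(tuple(parts))
            return entries

        entries = arp_entries()
        static = [e for e in entries if e[2].lower() == "static"]
        print(f"{datetime.now():%Y-%m-%d %H:%M} total={len(entries)} static={len(static)}")
        for ip, mac, _ in static:
            print(f"  static: {ip} {mac}")

    Note that multicast and broadcast addresses always show as static, so the interesting signal is unicast host addresses appearing in the static list and growing week over week; running this hourly from the same scheduled task gives a timeline to correlate against other services.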

  • Best photo management software?

    - by Niels Basjes
    Hi, what I would like is a single piece of software (or a smart combination of tools) that allows me to manage my photos in a better way than what I've found so far.

    1. Tags: Primarily I need a way of tagging the images, so I can manually tag photos the same way we tag questions here at SO/SF/SU. I want this software to place a lot of the tags automagically (obvious things like date and resolution).

    2. Face recognition: What I would really like is a feature that recognizes faces in images and places tags with the name of the person. So far I've only heard of one online photo system that can do that (Picasa) and not yet of any offline tool.

    3. Version database: I must have some way of having a central GIT/SVN/... that contains all images. I had a hard drive corruption a few years ago and it took me a long time to figure out which images had been damaged. I always want to be able to go back to what the camera produced.

    4. Website: I want to be able to generate a website (a few 'tag'-specific websites) based on the actual content.

    5. Easy bulk uploading: Many photo tools have a one-at-a-time uploading option. I prefer simply 'throwing' my images on a file server under Linux (Samba) and letting the system automagically integrate, tag, recognize, etc. all images.

    OK, I know these are a bit much. Perhaps you guys have some suggestions about existing tools that can make this possible, or even a complete system that does this.

    EDIT: To clarify on the OS: I prefer Linux for any 'server' task and Windows XP for any 'desktop' task. Thanks for all your input. Niels Basjes
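
    On wish #1, the "automagic" date and resolution tags can be read straight out of each file regardless of which manager ends up on top; a minimal sketch using the Pillow imaging library (the photos/ directory and the tag format are made-up placeholders):

        from pathlib import Path
        from PIL import Image, ExifTags

        def auto_tags(path):
            """Return hypothetical auto-generated tags (resolution, date) for one image file."""
            tags = []
            with Image.open(path) as img:
                width, height = img.size
                tags.append(f"resolution:{width}x{height}")
                exif = img.getexif()
                for tag_id, value in exif.items():
                    if ExifTags.TAGS.get(tag_id) == "DateTime":
                        # EXIF dates look like "YYYY:MM:DD HH:MM:SS"
                        tags.append(f"date:{str(value).split(' ')[0].replace(':', '-')}")
            return tags

        for image_path in Path("photos").rglob("*.jpg"):   # assumed photo directory
            print(image_path, auto_tags(image_path))

    Most of the dedicated managers (and tools like digiKam or Picasa) do this extraction themselves, but a script like this is handy for bulk-tagging files dropped onto the Samba share before they are imported.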

  • Campus VLAN Segmentation - By OS?

    - by Moduspwnens
    We've been thinking through re-arranging our network and VLAN configuration. Here's the situation. We already have our servers, VoIP phones, and printers on their own VLANs, but our problem lies with end user devices. There are just too many to lump on the same VLAN without being hammered with broadcasts! Our current segmentation strategy has them split into VLANs like this:

        Student iPads
        Staff iPads
        Student Macbooks
        Staff Macbooks
        Gaming devices
        Staff (Other)
        Student (Other)

    *Note that our network has many more iPads and MacBooks than most. Since the primary reason we're splitting them is just to put them in smaller groups, this has been working for us (for the most part). However, this required our staff to maintain access control lists (MAC addresses) of all devices belonging in these groups. It also has the unfortunate side effect of illogically grouping broadcast traffic. For example, using this setup, students on opposite ends of campus using iPads will share broadcasts, but two devices belonging to the same user (in the same room) will likely be on completely separate VLANs. I feel like there must be a better way of doing this. I've done a lot of research and I'm having trouble finding instances of this kind of segmentation being recommended. The feedback on the most relevant SO question seems to point toward VLAN segmentation by building/physical location. I feel like that makes sense because logically, at least among miscellaneous end users, broadcasts will typically be intended for nearby devices. Are there other campuses/large-scale networks out there segmenting VLANs based on end-system OS? Is this a typical configuration? Would VLAN segmentation based on physical location (or some other criteria) be more effective? EDIT: I've been told that we will soon be able to dynamically determine device OS without maintaining access lists, although I'm not sure how much that affects the answers to the questions.

  • Kickstart CentOS 6 prompting for TCP/IP with network set to DHCP

    - by Andy Shinn
    I am trying to stop my kickstart CentOS install prompting me for TCP/IP information. After I click through this prompt (keeping IPv4 and IPv6 at their defaults) the installation continues and completes just fine. Below is my kickstart file:

        # Andy's super awesome VM kickstart file
        install
        url --url=http://mirrors.kernel.org/centos/6/os/x86_64
        lang en_US.UTF-8
        keyboard us
        text
        %include /tmp/network.ks
        rootpw --iscrypted $6$RA8DyrNTsVJkGIgY$ohZ62HHiOjNnn1yDMZlIu3lQ63D3plGPcbVZtPKE8Oq6Z.IGUgN.kNLkxs/ZymZuluRDWsW2eey5zLOl2G3mp.
        firewall --service=ssh
        authconfig --enableshadow --passalgo=sha512
        selinux --disabled
        timezone America/Los_Angeles
        bootloader --location=mbr --driveorder=vda --append="crashkernel=auto rhgb quiet"
        # The following is the partition information you requested
        # Note that any partitions you deleted are not expressed
        # here so unless you clear all partitions first, this is
        # not guaranteed to work
        zerombr
        clearpart --all --drives=vda --initlabel
        part /boot --fstype=ext4 --size=500
        part pv.253002 --grow --size=1
        volgroup vg1 --pesize=4096 pv.253002
        logvol / --fstype=ext4 --name=lv_root --vgname=vg1 --grow --size=1024 --maxsize=51200
        logvol swap --name=lv_swap --vgname=vg1 --grow --size=4032 --maxsize=4032
        repo --name="CentOS" --baseurl=http://mirrors.kernel.org/centos/6/os/x86_64 --cost=100
        repo --name="Puppet Labs Products" --baseurl=http://yum.puppetlabs.com/el/6/products/x86_64
        repo --name="Puppet Labs Dependencies" --baseurl=http://yum.puppetlabs.com/el/6/dependencies/x86_64
        repo --name="EyeFi" --baseurl=http://flexo.eye.fi/6/eye-fi-api

        %packages
        @core
        @server-policy
        puppet
        facter
        %end

        %pre --erroronfail
        #!/bin/bash
        for x in `cat /proc/cmdline`; do
          case $x in
            SERVERNAME*)
              eval $x
              echo "network --onboot yes --device eth0 --bootproto dhcp --hostname ${SERVERNAME}.eye.fi" > /tmp/network.ks
              ;;
          esac;
        done
        %end

        %post
        puppet agent --waitforcert 10 --onetime --no-daemon --pluginsync --server puppet.eye.fi
        %end

        reboot

    My kernel arguments are in the following virt-install command that I use to start the install:

        virt-install -n zabbix -r 2048 --vcpus=2 -l http://mirrors.kernel.org/centos/6/os/x86_64 --disk /dev/vg_inf1/zabbix --network bridge=br85 --initrd-inject=/home/ashinn/vm_kickstart --extra-args "ks=file:/vm_kickstart SERVERNAME=zabbix" --autostart

    During the install, I can pull up a console on the second terminal and verify the contents of /tmp/network.ks are:

        network --onboot=yes --bootproto=dhcp --ipv6=auto --hostname=jenkins2.mydomain.com

    Why might Anaconda be prompting for the TCP/IP settings when they are already set to DHCP?

  • How to avoid ugly dithering when running KDE over VNC?

    - by Chris Jester-Young
    I'm currently setting up a new Xen paravirt domain running KDE (4.2.2, from Kubuntu 9.04). As I have been unable to get the virtual framebuffer working in it, I've decided to set up VNC (from the vnc4server package), and run KDE over Xvnc. This is all fine and good, and KDE starts up okay. However, all the colours look dithered, especially on the task bar and title bar, making them impossible to see. From my web searches, it appears to be because these items are drawn using Porter-Duff. This is especially the case when using the Oxygen style, and Oxygen and Ozone window titlebars (selecting these styles generates messages about Porter-Duff being unavailable); not using those styles at least makes most of the UI widgets and window titles usable again. But this doesn't solve the problem for the task bar, nor for the desktop, where the only theme available to me is Oxygen (this is under the "Desktop Settings - Plasma Workspace" window, just for reference). So, unless I have a way to use a non-Porter-Duff theme for those, it seems that KDE would still be unusable under VNC. So if someone experienced with KDE can advise on how to work around, or even fix, these issues, I'd appreciate it very much. :-)
