Search Results

Search found 33468 results on 1339 pages for 'behaviour change'.


  • Problem with creation of scheduled task from IIS6 on Server 2003

    - by Morten Louw Nielsen
    Hi, I have also posted this question on Stack Overflow, but will try here as well, since it might be more system-related. I am writing a web application using .NET. The web app creates scheduled tasks using the System.Diagnostics.Process class, calling SCHTASKS.EXE with parameters. I have changed the identity on the app pool to a specific domain user. The domain user is a local administrator on all four web servers. From webserver01 I am creating tasks on webserver01 through webserver04. It works perfectly for 3-5 days, but then it breaks, giving the following error message in a message box: "The application failed to initialize properly (0xc0000142). Click on OK to terminate the application." If the system is in the broken state and I change the identity of the app pool to domain administrator, it works. As soon as I change it back to my domain user, it breaks again. If I reboot the server, it works again for the same number of days, but then breaks once more. It seems like a permission-related problem; I just don't understand why it sometimes works and sometimes doesn't. I hope someone out there has seen this problem! Looking forward to hearing from you! Kind regards, Morten, Denmark
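
    For what it's worth, 0xc0000142 on repeated non-interactive process launches is a classic symptom of desktop heap exhaustion rather than plain permissions, which would fit the works-for-days-then-breaks-until-reboot pattern. One way to take the app-pool token out of the equation is to pass explicit run-as credentials to SCHTASKS (a sketch; the server name, task, and credentials are placeholders, not from the original post):

        SCHTASKS /Create /S webserver02 /RU "EXAMPLE\taskuser" /RP "secret-here" /TN "NightlyJob" /TR "C:\jobs\nightly.cmd" /SC DAILY /ST 02:00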

    Read the article

  • EEE PC Keyboard malfunctioning - Ctrl key "sticks" after 10 seconds

    - by DWilliams
    I was given an EEE PC belonging to a friend of a friend to fix. The keyboard did not appear to work at all. I spent a while testing various things, blowing the keyboard out, checking for damage, and so on. Nothing appeared to be physically wrong. At first I noticed that the keyboard appeared to work just fine for 10 seconds (on average; sometimes more, sometimes less) after being powered on. It had been restored to the factory default Xandros installation with no user set up, so I couldn't get in to mess with things, since I couldn't type to make a user. I made an Ubuntu live USB to boot it from, and managed to get the boot order changed to boot from USB in the ~10 seconds of working keyboard I had (I don't think I've ever had to rush around BIOS menus that quickly). After I got Ubuntu up on it, I played around a bit more and determined that apparently the Ctrl key is stuck down (not literally, but it's on all the time). If I open gedit, pressing the "o" key brings up the open dialog, "s" opens the save dialog, and so on for all the other behaviour you would expect to see if you were holding down the Control key. The only exception I noticed is the "9" and "0" keys: they function normally. Having figured that out, I made a Xandros user with a name/password consisting of 9's and 0's. I couldn't find any options in Xandros that could potentially be helpful. I'm not familiar with EEE PCs. Is it safe to assume that the keyboard is simply dead, or could there be another problem? I don't want to purchase another keyboard for him if that isn't going to fix the problem. The netbook doesn't show any obvious signs of damage, but the owner is a biker and very often has it with him on the road, so it's been subjected to a good bit of vibration.
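
    If it turns out to be a hardware fault in the keyboard matrix, a software stopgap can still be tested from the Ubuntu live session (a sketch; note that unmapping Control also disables genuine Ctrl shortcuts):

        # Watch raw key events to see whether phantom Control presses arrive
        xev
        # As a workaround, remove Control from the X modifier map entirely
        xmodmap -e "clear control"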

    Read the article

  • Google Chrome mouse wheel takes keyboard focus

    - by Steve Crane
    I recently switched the default browser on all my machines from Firefox to Google Chrome. In general I’m loving Chrome but there is one behaviour that is driving me nuts. Scrolling with the mouse wheel takes keyboard focus away from the document. Here’s what happens. I’ve just opened a page or switched to a new tab and the page has keyboard focus as I can use the keyboard up/down and PgUp/PgDn keys to scroll it; no problem there. But if I then use the wheel on my mouse to scroll the page, it loses the keyboard focus and no longer responds to up/down, PgUp/PgDn, or in fact any other keyboard keys. I have to click on the page background to restore the keyboard focus. This is a minor inconvenience for scrolling but where it really drives me nuts is in Google Reader and Gmail where I use keyboard shortcuts a lot. Here I find that I scroll the article or e-mail I’m reading with the mouse wheel then get no response when I press j/k to move to the next or previous article or e-mail. I am using Windows 7 and the Chrome dev channel (version 4.0.249.43).

    Read the article

  • Hiera can't find puppet environment

    - by quickshiftin
    I'm testing out Hiera and hitting a snag on the hierarchy configuration. What I have is extremely simple; the part that isn't working is the specification of the Hiera datadir files based on environment. Here's the config file (/etc/hiera.yaml) I'm trying:

        ---
        :backends:
          - yaml
        :logger: console
        :hierarchy:
          - "%{::environment}"
        :yaml:
          :datadir: /var/lib/hiera

    Now, I have a file /var/lib/hiera/development.yaml:

        blah: meh

    When I run hiera it's not finding the file or the value:

        $ hiera -d blah
        DEBUG: Fri Oct 25 15:50:52 -0600 2013: Hiera YAML backend starting
        DEBUG: Fri Oct 25 15:50:52 -0600 2013: Looking up blah in YAML backend
        nil

    I've verified this agent is configured for development:

        $ sudo puppet agent --configprint environment
        development

    Now let me prove Hiera is capable of finding something, with a change to the hiera.yaml file:

        :hierarchy:
          - development

    And now hiera finds the file and the value:

        $ hiera -d blah
        DEBUG: Fri Oct 25 15:53:25 -0600 2013: Hiera YAML backend starting
        DEBUG: Fri Oct 25 15:53:25 -0600 2013: Looking up blah in YAML backend
        DEBUG: Fri Oct 25 15:53:25 -0600 2013: Looking for data source development
        DEBUG: Fri Oct 25 15:53:25 -0600 2013: Found blah in development
        meh

    So why isn't it working with the dynamic environment configuration? I got that straight from the documentation. Note that I have tried running the hiera command via sudo, with no change in the result.
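
    A likely explanation (an inference, not from the original post): the standalone hiera CLI has no Puppet scope, so %{::environment} interpolates to an empty string when run from the shell. Variables can be supplied on the command line instead, e.g.:

        $ hiera -d blah ::environment=development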

    Read the article

  • Networking "chokes" on Windows 7 64 bit

    - by Rohit Nair
    I've been having this problem for some months now, and I have been unable to figure out a solution, or even the cause. At random points throughout the day, my internet connectivity "freezes". I don't get disconnected from my local wireless network. My router doesn't get disconnected from the world. However, for some reason, my computer stops receiving packets. If I'm playing an MMO ( World of Warcraft, in this case, but it has happened with Eve Online as well ) all activity just freezes. If I try to browse, Opera, Firefox and IE all stall at "Waiting for google.com..." or whatever the hostname may be. Inspection with a packet sniffer seems to reveal that there are no incoming packets. Here's the interesting part. Disconnecting from my wireless network and reconnecting fixes the issue. Obviously this led me to conclude that it was a problem with my router or wireless card. However, I have tweaked all the settings on my router that I could think of, including things like QoS, AP Isolation, etc. with no change. My wireless card doesn't really have that many options, and I have uninstalled and reinstalled drivers a few times without any change. Windows Firewall on/off doesn't make a difference. Anyone have any suggestions for debugging this? It's becoming an annoyance.
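
    Two generic Windows-side resets that sometimes clear intermittent stalls like this (the commands are standard; whether they help here is only a guess - run from an elevated prompt and reboot afterwards):

        netsh winsock reset
        netsh int ip reset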

    Read the article

  • Basic IP address structure

    - by dannymcc
    We currently have a few servers, around 30-40 workstations, and 16 phones. Each device has a static IP address. As an example, the standard settings for a new workstation are:

        IP:      192.168.1.XXX
        Subnet:  255.255.255.0
        Gateway: 192.168.1.99
        DNS:     192.168.1.50

    As I am slowly exploring new server OSes, virtualisation, etc., I am getting close to wanting a wider range of IP addresses. What I would like to do is separate the devices by IP as follows:

        Servers       192.168.1.XXX
        Workstations  192.168.2.XXX
        Printers      192.168.3.XXX
        Phones        192.168.4.XXX
        VMs           192.168.5.XXX

    Is this a bad idea, or is this a common way of doing things? My biggest concern is the phones and subnet masks. The phones are managed by our provider, although I have access to the server that runs them. Would I need to change the subnet mask to 255.255.0.0 on all devices, or only on those that change? For example, the phones don't need to connect to any devices other than the other phones and the phone server. So if I kept the phones on 192.168.1.XXX with a subnet mask of 255.255.255.0, and then moved everything I have complete ownership/control of to 192.168.X.XXX with a new subnet mask of 255.255.0.0 - would that work?
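
    For reference, a quick illustration of what the mask change means (standard subnet arithmetic):

        With mask 255.255.255.0 (/24):
          192.168.1.10 and 192.168.2.10 are on different networks
          (192.168.1.0/24 vs 192.168.2.0/24) and need a router between them.
        With mask 255.255.0.0 (/16):
          both addresses fall inside 192.168.0.0/16 and can talk directly.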

    Read the article

  • Multiple PHP versions on a single Apache installation

    - by getmizanur
    Hello, I have some old PHP scripts which run on PHP 5.2.x, and the current server has PHP 5.3.x. To get around this problem I have two options: downgrade to PHP 5.2.x, or install PHP 5.2.x and PHP 5.3.x at the same time, with PHP 5.2.x served as CGI. I have decided to go for the second option. I have followed this tutorial and can get most of it working, except for the execution of the shell script which selects the php-cgi version. I cannot get Apache to execute this script:

        #!/bin/sh
        # you can change the PHP version here.
        version="5.2.6"

        # php.ini file location, */php-5.2.6/lib equals */php-5.2.6/lib/php.ini.
        PHPRC=/etc/php/phpfarm/inst/php-${version}/lib/php.ini
        export PHPRC

        PHP_FCGI_CHILDREN=3
        export PHP_FCGI_CHILDREN

        PHP_FCGI_MAX_REQUESTS=5000
        export PHP_FCGI_MAX_REQUESTS

        # which php-cgi binary to execute
        exec /etc/php/phpfarm/inst/php-${version}/bin/php-cgi

    My Apache vhost.conf:

        <VirtualHost *:80>
            ServerName 526.localhost
            DocumentRoot /home/getmizanur/public_html/www
            <Directory "/home/getmizanur/public_html/www">
                AddHandler php-cgi .php
                Action php-cgi /php-fcgi/php-cgi-5.2.6
            </Directory>
        </VirtualHost>

    Can someone tell me what I am doing wrong? Thanks in advance.

    Solution: if I did a2dismod php5, the above configuration worked. When php5 was re-enabled with a2enmod php5, Apache was executing PHP 5.3 instead of PHP 5.2, even after being told to execute the PHP 5.2 shell script. To solve my problem, I had to change my VirtualHost configuration:

        <VirtualHost *:80>
            ServerName 526.localhost
            DocumentRoot /home/getmizanur/public_html/www
            DirectoryIndex index.php
            <Directory "/home/getmizanur/public_html/www">
                AddHandler php-cgi .php
                Action php-cgi /php-fcgi/php-cgi-5.2.6
                <FilesMatch "\.php">
                    SetHandler php-cgi
                </FilesMatch>
            </Directory>
        </VirtualHost>

    Presto, it started working.

    Read the article

  • How to remove request blocking on Apache reverse proxy after failure of backend, before asking the backend again

    - by matnagel
    I am working on an apache2 reverse proxy vhost. When the server behind Apache is down, the first request to Apache shows the error page, of course. But on subsequent requests it seems Apache delays for some time before asking the backend server again. During all this time (which is short, but in development I don't want a delay at all) only the Apache error page is shown to the browser, although the backend server is already up. Where is this setting in Apache, what is this behaviour, and how can I set the delay time to zero?

    Edit: I am not trying to change the timeout for a single request. I want to change the blocking time. It is my experience that Apache blocks further requests for a certain time before asking a backend server again that has failed once.

    Edit 2: This is what Apache delivers:

        Service Temporarily Unavailable
        The server is temporarily unable to service your request due to
        maintenance downtime or capacity problems. Please try again later.
        Apache/2.2.8 (Ubuntu) PHP/5.2.4-2ubuntu5.7 with Suhosin-Patch
        proxy_html/3.0.0 Server at localhost Port 80

    After hitting Ctrl-R in Firefox for 60 seconds, the page finally appears.
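
    The behaviour described matches mod_proxy's per-worker retry timeout, which defaults to 60 seconds after a backend failure. A sketch of the usual fix (the parameter is real mod_proxy syntax; the backend URL is a placeholder):

        # retry=0 makes mod_proxy retry a failed backend immediately
        ProxyPass / http://backend.example.com/ retry=0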

    Read the article

  • How do I stop MSYS from transforming my compiler options?

    - by Carl Norum
    Is there a way to stop MSYS/MinGW from transforming what it thinks are paths on my command lines? I have a project that's using nmake & Microsoft Visual Studio 2003 (yeecccch). I have the build system all ported and ready to go for GNU make (and tested with Cygwin). Something weird is happening to my compiler flags when I try to compile in an MSYS environment, though. Here's a simplified example:

        $ cl /nologo
        Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 13.10.6030 for 80x86
        Copyright (C) Microsoft Corporation.  All rights reserved.

        /out:nologo.exe
        C:/msys/1.0/nologo
        LINK : fatal error LNK1181: cannot open input file 'C:/msys/1.0/nologo.obj'

    As you can see, MSYS is transforming the /nologo compiler switch into a Windows path and then sending that to the compiler. I really don't want this to happen - in fact, I'd be happy if MSYS never transformed any paths; my build system had to take care of all that when I first ported to Cygwin. Is there a way to make that happen? It does work to change the command to

        $ cl -nologo

    which produces the expected results, but this build system is very large and very painful to update. I really don't want to have to go in and change every use of a / for a flag to a -. In particular, there may be tools that don't support the use of - at all, and then I'll really be stuck. Thanks for any suggestions!
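
    Two workarounds that avoid editing every flag (sketches; the first is a long-standing MSYS convention, the second exists only in the newer MSYS2):

        # Doubling the slash usually suppresses the path conversion;
        # the compiler still receives a single /nologo
        $ cl //nologo

        # Under MSYS2, argument conversion can be disabled per command
        $ MSYS2_ARG_CONV_EXCL="*" cl /nologo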

    Read the article

  • vsftpd login errors: 530 login incorrect

    - by mcktimo
    Using Ubuntu 10.04 on an AWS EC2 instance. I was happy just using SSH, but then a WordPress plugin needs FTP access... I just need FTP access for one site, www.sitebuilt.net, which is in /home/sitebuil. I installed vsftpd and PAM and followed suggestions that got me to the following state.

    /etc/vsftpd.conf:

        listen=YES
        anonymous_enable=NO
        local_enable=YES
        write_enable=YES
        dirmessage_enable=YES
        use_localtime=YES
        xferlog_enable=YES
        connect_from_port_20=YES
        xferlog_file=/var/log/vsftpd.log
        secure_chroot_dir=/var/run/vsftpd/empty
        pam_service_name=vsftpd
        rsa_cert_file=/etc/ssl/private/vsftpd.pem
        guest_enable=YES
        user_sub_token=$USER
        local_root=/home/$USER
        chroot_local_user=YES
        hide_ids=YES
        check_shell=NO
        userlist_file=/etc/vsftpd_users

    /etc/pam.d/vsftpd:

        # Standard behaviour for ftpd(8).
        auth    required pam_listfile.so item=user sense=deny file=/etc/ftpusers onerr=succeed
        # Note: vsftpd handles anonymous logins on its own. Do not enable pam_ftp.so.
        # Standard pam includes
        @include common-account
        @include common-session
        @include common-auth
        auth    required pam_shells.so
        # Customized login using htpasswd file
        auth    required pam_pwdfile.so pwdfile /etc/vsftpd/passwd
        account required pam_permit.so
        session optional pam_keyinit.so force revoke
        auth    include system-auth
        account include system-auth
        session include system-auth
        session required pam_loginuid.so

    /etc/vsftpd_users:

        sitebuil
        tim

    /etc/passwd:

        ...
        sitebuil:x:1002:100:sitebuilt systems:/home/sitebuil:/bin/sh
        ftp:x:108:113:ftp daemon,,,:/srv/ftp:/sbin/nologin

    /etc/vsftpd/passwd:

        sitebuil:Kzencryptedpwd

    /var/log/vsftpd.log:

        Wed Feb 29 15:15:48 2012 [pid 20084] CONNECT: Client "98.217.196.12"
        Wed Feb 29 15:16:02 2012 [pid 20083] [sitebuil] FAIL LOGIN: Client "98.217.196.12"
        Wed Feb 29 16:12:33 2012 [pid 20652] CONNECT: Client "98.217.196.12"
        Wed Feb 29 16:12:45 2012 [pid 20651] [sitebuil] FAIL LOGIN: Client "98.217.196.12"
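
    One thing that stands out (an observation, not from the original post): the PAM stack mixes Ubuntu's @include common-* lines with Red Hat-style system-auth includes, which don't exist on Ubuntu, and pam_pwdfile generally wants crypt(3)-style hashes. If the hash format is the culprit, the password file can be regenerated in a compatible format (a sketch; -c overwrites the existing file):

        # -d forces old-style crypt() hashing, which pam_pwdfile understands
        htpasswd -d -c /etc/vsftpd/passwd sitebuil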

    Read the article

  • How to remove that extra text from URL titles in Firefox browser?

    - by amar
    I am using Firefox with Vimperator. On Mac I use Shiori to bookmark. I also use Pinboard's default add-on (on both Mac and Windows, but mainly on Windows, as there's no Shiori there). When I used to bookmark, I had this behaviour [A] (before changing Vimperator options): Example: URL: xyz.com, TITLE: xyz website. The Pinboard add-on would fetch the title as "xyz website", which was fine (I reckon it directly fetches TITLE from the URL). But after installing and starting to use Shiori, the title I was getting in its "title" field was "xyz website - Vimperator", exactly what I could see in the tabs (it seems to be getting whatever the browser feeds it). So I ran :set titlestring= in Vimperator to remove that extra "Vimperator" from the title. Now, this is what I am getting [B]: when I try to bookmark using the Pinboard add-on, the title is "undefined"; using Shiori it results in "xyz website - undefined". I tried to change it back to the original [C] with :set titlestring=Mozilla Firefox (assuming this was the original, before Vimperator), but the results are still the same as [B]. How do I get rid of that extra "- undefined", or that extra "- Vimperator" or "- Mozilla Firefox" for that matter, while bookmarking pages either with the Pinboard add-on or Shiori? One workaround is, instead of no string, to set a whitespace title in Firefox via Vimperator, but that results in a "-" appended to titles.

    Read the article

  • sorry, the maximum allowed clients from your host (10) are already connected" FTP error

    - by Sejanus
    Hello, I keep getting the "sorry, the maximum allowed clients from your host (10) are already connected" error whenever I try to transfer a large number of files. At first I thought it was a FileZilla bug; however, I get the same error with basically every FTP client I've tried, including Total Commander under Wine. I do not get that error using Windows. I did try to limit the maximum allowed connections for FileZilla, both in the server settings and in the global settings; it didn't change anything. I did try to switch between passive and active modes (not sure if it's related at all, just a last desperate attempt), and it didn't change anything either. When I try to use the native FTP client (not sure what it's called - the one in Places > Connect to a server), I get an abstract "connection refused" error every time I transfer a large number of files. The connection is refused for separate particular files; if I click "Ignore" each time, the rest of the files are transferred perfectly well, so I assume it's the very same error. Anything I could do? This really drives me mad; transferring large numbers of files is a part of my everyday job... P.S. This happens with many different FTP servers. Also, I don't get this error in Windows, so I assume it's not a server problem. P.P.S. I am aware of a similar question here; the answer provided just didn't solve it for me.

    Read the article

  • Conditionally changing MIME type in nginx

    - by Peter
    I'm using nginx as a frontend to Rails. All pages are cached as .html files on disk, and nginx serves these files if they exist. I want to send the correct MIME type for feeds (application/rss+xml), but the way I have so far is quite ugly, and I'm wondering if there is a cleaner way. Here is my config:

        location ~ /feed/$ {
            types {}
            default_type application/rss+xml;
            root /var/www/cache/;
            if (-f $request_filename/index.html) { rewrite (.*) $1/index.html break; }
            if (-f $request_filename.html)       { rewrite (.*) $1.html break; }
            if (-f $request_filename)            { break; }
            if (!-f $request_filename)           { proxy_pass http://mongrel; break; }
        }

        location / {
            root /var/www/cache/;
            if (-f $request_filename/index.html) { rewrite (.*) $1/index.html break; }
            if (-f $request_filename.html)       { rewrite (.*) $1.html break; }
            if (-f $request_filename)            { break; }
            if (!-f $request_filename)           { proxy_pass http://mongrel; break; }
        }

    My questions:

    1. Is there a better way to change the MIME type? All cached files have .html extensions, and I cannot change this.
    2. Is there a way to factor out the if conditions in /feed/$ and /? I understand that I can use include, but I'm hoping for a better way; putting part of the config in a different file is not that readable.
    3. Can you spot any bugs in the if conditions?

    I'm using nginx 0.6.32 (Debian Lenny). I prefer to use the version in APT. Thanks.
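
    On question 2, the stock answer is still include: move the shared if-chain into its own file (the file name below is a placeholder) and pull it into both locations:

        # /etc/nginx/cache-fallback.conf (hypothetical name)
        if (-f $request_filename/index.html) { rewrite (.*) $1/index.html break; }
        if (-f $request_filename.html)       { rewrite (.*) $1.html break; }
        if (-f $request_filename)            { break; }
        if (!-f $request_filename)           { proxy_pass http://mongrel; break; }

    and then:

        location ~ /feed/$ {
            types {}
            default_type application/rss+xml;
            root /var/www/cache/;
            include /etc/nginx/cache-fallback.conf;
        }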

    Read the article

  • Failed to open the Parallels networking module

    - by user49204
    I'm running Parallels Desktop 7 on OS X Lion and encounter the issue in the subject every time I try to launch a Parallels VM. The error message contains a hint to restore the network configuration defaults, which does not help. As advised on some forums, I ran the following script:

        sudo kextutil "/Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_hypervisor.kext"
        sudo kextutil "/Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_hid_hook.kext"
        sudo kextutil "/Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_usb_connect.kext"
        sudo kextutil "/Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_netbridge.kext"
        sudo kextutil "/Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_vnic.kext"

    The output of

        sudo kextutil "/Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_netbridge.kext"

    is:

        Diagnostics for /Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_netbridge.kext:
        Warnings:
            The booter does not recognize symbolic links; confirm these files/directories aren't needed for startup:
                /Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_netbridge.kext/Contents/CodeDirectory
                /Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_netbridge.kext/Contents/CodeRequirements
                /Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_netbridge.kext/Contents/CodeResources
                /Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_netbridge.kext/Contents/CodeSignature
        Dependency Resolution Failures:
            No kexts found for these libraries:
                com.parallels.kext.prl_hypervisor

    I've noticed that prl_netbridge is not being loaded (when I try to unload it, I'm notified that it is not loaded). Am I doing something wrong? What can be the reason for such behaviour?
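
    The dependency failure suggests prl_hypervisor has to be resident before prl_netbridge can link against it. A sketch of checking and loading in order (kextstat and kextload are standard OS X tools):

        # Is the hypervisor kext actually loaded?
        kextstat | grep -i parallels
        # If not, load it first, then the kexts that depend on it
        sudo kextload "/Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_hypervisor.kext"
        sudo kextload "/Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_netbridge.kext"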

    Read the article

  • Update all the servers through one virtual server using a storage area network virtual machine

    - by Mr.Calm
    Using Ubuntu and VirtualBox by Oracle, and using this script to start nginx in a VirtualBox guest, placed inside ~/init.d:

        #!/bin/bash
        ### BEGIN INIT INFO
        # Provides:          Testinit
        # Required-Start:
        # Required-Stop:
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: Start daemon at boot time
        # Description:       Enable service provided by daemon.
        ### END INIT INFO
        # RETVAL=0;

        start() {
            CurrentTime=$(date +%d/%m/%Y"-"%I:%M:%S)
            ./usr/local/nginx/sbin/nginx
            echo "Current Time:"$CurrentTime>>/home/server/Desktop/NginxLogs.txt
            echo "!Starting nginx!" >>/home/server/Desktop/NginxLogs.txt
        }

    Like this, I want to write an auto script (a setup.sh file) and place it in all the virtual machines on my system - for example 8 virtual machines, all with nginx installed. The problem I am facing: when I want to change something in setup.sh, I have to go to each and every virtual machine, or talk to each one over SSH from my main machine. I am thinking of writing another script (e.g. update.sh) that takes the path of a file saved and recently edited on the main machine (e.g. DummySetup.sh). As soon as I run that script, every setup.sh saved in the virtual machines should be updated, replacing its contents with DummySetup.sh's contents. I hope this is possible; help would be appreciated. Thank you.
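
    A minimal sketch of such an update.sh, assuming key-based SSH access to each guest (host names, user, and paths are placeholders):

        #!/bin/bash
        # Push the master copy out to every VM, replacing its local setup.sh
        MASTER=/home/server/DummySetup.sh
        for host in vm1 vm2 vm3 vm4 vm5 vm6 vm7 vm8; do
            scp "$MASTER" "user@$host:~/init.d/setup.sh" && \
            ssh "user@$host" "chmod +x ~/init.d/setup.sh"
        done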

    Read the article

  • Squid, authentication, Outlook Anywhere, Windows 7 and HTTP 1.1 = NIGHTMARE

    - by Massimo
    I'm running a Squid proxy (latest version, 3.1.4) on Linux CentOS 5.4 with Samba 3.5.4, in order to allow authenticated web access for domain users; everything works fine, and even Windows 7 clients are fully supported. Authentication is transparent for domain users, while it is explicitly requested for non-domain ones, and it works if the user can provide valid domain credentials. All nice and good. Then, Outlook Anywhere kicks in and pain and suffering ensue. When Outlook (be it 2007 or 2010, it doesn't matter) runs on Windows XP clients, it connects gracefully through the Squid proxy to its remote Exchange server. When it runs on Windows 7, it doesn't. If the authentication requirement is lifted from the proxy, everything works on Windows 7 too, so the problem is obviously related to NTLM authentication with Squid. Digging more deeply (WireShark), I discovered Outlook Anywhere uses HTTP 1.1 when it runs on Windows 7, while it uses HTTP 1.0 when on Windows XP. And it looks like Squid, even in its latest incarnation, still has some serious troubles handling HTTP 1.1 properly, particularly when SSL and proxy authentication are thrown in the mix. While waiting for Squid to fully and officially support HTTP 1.1 (and it looks like this could take quite a long time), I'm looking for one of the following solutions: Make Squid handle this correctly, if it is at all possible. Identify Outlook Anywhere connections and have Squid not require authentication for them. But it isn't easy: again, the behaviour of Outlook differs when running on Windows XP and Windows 7, and while on Windows XP Outlook sends a really nice user-agent string of "MSRPC", on Windows 7 it doesn't send any (why? WHY?!?). Force Outlook Anywhere to use HTTP 1.0 even when running on Windows 7. And no, this is not as simple as deselecting "use HTTP 1.1" in Internet Explorer, looks like Outlook ignores that setting and chooses on its own which protocol to use. Any other feasible solution which doesn't involve whitelisting specific destination Exchange servers, which is the last-resort solution I'm trying to avoid.
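
    On option 2, one angle worth testing (an assumption, not a confirmed fix): Outlook Anywhere tunnels RPC over the non-standard HTTP methods RPC_IN_DATA and RPC_OUT_DATA, which Squid can match regardless of the user-agent string. Placed before the auth-requiring rules in squid.conf, something like:

        # Let RPC-over-HTTP connections through without NTLM authentication
        acl rpc_over_http method RPC_IN_DATA RPC_OUT_DATA
        http_access allow rpc_over_http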

    Read the article

  • Changing Domain Name DNS to Redirect web traffic to one server, and leave mail to original server

    - by David S
    Hi there. OK, quite the idiot with DNS, apart from the basics. I have a domain name hosted with a domain registrar. It seems to have full DNS control (i.e. the ability to view/edit A records, mail, etc.). We have recently set up a server at Rackspace which hosts the new website. The original/existing server (where the old website and mail still are) is on another shared hosting company's server. I went to the domain name registrar and checked out the DNS management (screenshot omitted). So obviously the A record is pointing to the actual server where the website/mail is, I figure, and the CNAME is pointing (alias?) to the website URL. So my question is this: if I want the web traffic portion to go to the Rackspace/new server, but keep the mail going to where it is now, what do I have to change? Also, should I even change this info at the domain registrar? The Rackspace server account has full DNS, which seems to suggest I can point to their nameservers and then redirect the MX (mail) traffic to where the mail server is? Sorry if that was a bit confusing... obviously in need of DNS training ;) Any help very appreciated. David.
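
    A sketch of the usual split, with illustrative values only (1.1.1.1 standing in for the old shared host, 2.2.2.2 for the Rackspace server):

        example.com.       A      2.2.2.2            ; web -> new Rackspace server
        www.example.com.   CNAME  example.com.
        mail.example.com.  A      1.1.1.1            ; mail stays on the old server
        example.com.       MX 10  mail.example.com.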

    Read the article

  • LDAP ACLs with ldapmodify & .ldif file: grant user access only

    - by plaetzchen
    I want to change the settings of my new LDAP server so that only users of the server, and not anonymous clients, can read entries. Currently my olcAccess looks like this:

        olcAccess: {0} to attrs=userPassword,shadowLastChange
          by self write
          by anonymous auth
          by dn="cn=admin,dc=example,dc=com" write
          by * none
        olcAccess: {1} to *
          by self write
          by dn="cn=admin,dc=example,dc=com" write
          by * read

    I tried to change it like so:

        olcAccess: {0} to attrs=userPassword,shadowLastChange
          by self write
          by anonymous auth
          by dn="cn=admin,dc=example,dc=com" write
          by * none
        olcAccess: {1} to *
          by self write
          by dn="cn=admin,dc=exampme,dc=com" write
          by users read

    But that gives me no access at all. Can someone help me on this? Thanks.

    UPDATE: This is the log read after the changes mentioned by userxxx:

        Sep 30 10:47:21 j16354 slapd[11805]: conn=1437 fd=28 ACCEPT from IP=87.149.169.6:64121 (IP=0.0.0.0:389)
        Sep 30 10:47:21 j16354 slapd[11805]: conn=1437 op=0 do_bind: invalid dn (pbrechler)
        Sep 30 10:47:21 j16354 slapd[11805]: conn=1437 op=0 RESULT tag=97 err=34 text=invalid DN
        Sep 30 10:47:21 j16354 slapd[11805]: conn=1437 op=1 UNBIND
        Sep 30 10:47:21 j16354 slapd[11805]: conn=1437 fd=28 closed
        Sep 30 10:47:21 j16354 slapd[11805]: conn=1438 fd=28 ACCEPT from IP=87.149.169.6:64122 (IP=0.0.0.0:389)
        Sep 30 10:47:21 j16354 slapd[11805]: conn=1438 op=0 do_bind: invalid dn (pbrechler)
        Sep 30 10:47:21 j16354 slapd[11805]: conn=1438 op=0 RESULT tag=97 err=34 text=invalid DN
        Sep 30 10:47:21 j16354 slapd[11805]: conn=1438 op=1 UNBIND
        Sep 30 10:47:21 j16354 slapd[11805]: conn=1438 fd=28 closed

    pbrechler should be a valid user but has no system user (we don't need one). admin doesn't work either.
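
    The "invalid DN" entries in the log suggest the client is binding with a bare username instead of a full DN. A quick way to test the ACLs directly (a sketch; the DN assumes a typical ou=people layout, adjust to the real tree):

        # Bind as the user with a complete DN, then ask the server who we are
        ldapwhoami -x -H ldap://localhost \
          -D "uid=pbrechler,ou=people,dc=example,dc=com" -W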

    Read the article

  • What should I do to make sure that IIS does not recycle my application?

    - by AngryHacker
    I have a WCF service app hosted in IIS. On startup, it goes and fetches a really expensive (in terms of time and CPU) resource to use as a local cache. Unfortunately, IIS seems to recycle the process on a fairly regular basis, so I am trying to change the settings on the application pool to make sure that IIS does not recycle the application. So far, I've changed the following:

    - Limit Interval under CPU from 5 to 0.
    - Idle Time-out under Process Model from 20 to 0.
    - Regular Time Interval under Recycling from 1740 to 0.

    Will this be enough? I also have specific questions about the items I changed:

    1. What specifically does the Limit Interval setting under CPU mean? Does it mean that if a certain CPU usage is exceeded, the application pool will be recycled?
    2. What exactly does "recycled" mean? Is the application completely torn down and started up again?
    3. What is the difference between "worker process shutdown" and "application pool recycling"? The documentation for Idle Time-out under Process Model talks about shutting down the worker process, while the docs for Regular Time Interval under Recycling talk about application pool recycling. I don't quite grok the difference between the two; I thought w3wp.exe is the worker process which runs the application pool. Can someone explain the difference?

    The reason for having both IIS7 and IIS7.5 tags is that the app will run on both, and I hope the answers are the same between the versions.
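
    For scripting the same settings, appcmd can set them directly (a sketch; the pool name is a placeholder):

        %windir%\system32\inetsrv\appcmd set apppool "MyAppPool" ^
          /cpu.limit:0 ^
          /processModel.idleTimeout:00:00:00 ^
          /recycling.periodicRestart.time:00:00:00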

    Read the article

  • Sort order in Windows Explorer

    - by Haim H.
    The behaviour described below occurs on Windows 7 systems and on Windows XP. We operate in a dual-language environment - English and Hebrew. When we sort files by name in Windows Explorer, the order in which they are listed is not what we would expect. Here is a list of file names as sorted by Windows Explorer (all of the files have a .pdf suffix):

        1G110033H-PP
        19C050G-PP-ORB
        19C050H-PPRM
        19C100H-PPRM
        19C-MBPS-PP
        19C-MBPS-PP-1
        29AAC050-PP
        29AAC100-PP
        29AAC100-PPUL
        29B004064-PP
        101AC050-PP
        101AC100-PP
        101B100-PPE
        1091003G-PPFSUL
        10108033G-PPSA
        10125033H-PPM

    It looks to me like the items are sorted first according to the position of the first alphabetic character in the name, and then, within those groups, in "normal" alphanumeric order. That is, all the files with an alpha character in the first position are at the top of the list, followed by those with the first alpha character in the second position, followed by those with the first alpha character in the third position, and so on. An alternate way of looking at this: in a file name composed of numbers and letters, the sort treats the first group of numbers in the name as the major sort node, with the rest of the name being the secondary sort node. Now that I understand the sequencing logic it's not a big problem, but I was wondering why this happens?
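
    This matches Explorer's "numerical sorting", which compares the leading digit run as a number (1 < 19 < 29 < 101 < 1091003 < 10108033 < 10125033) - exactly the order above. It can be switched to literal, character-by-character sorting via a per-user policy (takes effect after signing out or restarting Explorer):

        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer" ^
          /v NoStrCmpLogical /t REG_DWORD /d 1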

    Read the article

  • Server 2008 R2 DNS Lockup / Stops Resolving Internet Names

    - by Richard Maynard
    We've deployed our first 2008 R2 server on a client site, replacing their existing 2003 DC. This server provides DNS resolution services to all client machines on that site for general internet usage. Since using the 2008 R2 DNS services, we have noticed that every couple of days the DNS server starts timing out when requests to certain sites are made (Google is the only example I can provide at this time, although it seems to be larger sites that have problems rather than small ones - a CDN compatibility issue?). When you restart the DNS Server service, resolution returns to normal... but only for a day or so. Is anybody aware of any significant changes to the DNS server architecture or out-of-the-box configuration in R2 that may explain this intermittent behaviour? I have already tried the fix listed here, to no avail: http://weblogs.asp.net/owscott/archive/2009/09/15/windows-server-2008-r2-dns-issues.aspx

    The following PS command prompt info illustrates the issue:

        PS C:\Users\Administrator.UK> nslookup
        Default Server:  s8209001.uk.kingdomfaith.com
        Address:  10.1.3.4

        > www.google.com
        Server:  s8209001.uk.kingdomfaith.com
        Address:  10.1.3.4

        Non-authoritative answer:
        Name:    www.l.google.com
        Addresses:  66.102.9.99, 66.102.9.104, 66.102.9.105, 66.102.9.103, 66.102.9.147
        Aliases:  www.google.com

        > www.google.co.uk
        Server:  s8209001.uk.kingdomfaith.com
        Address:  10.1.3.4

        *** s8209001.uk.kingdomfaith.com can't find www.google.co.uk: Server failed
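
    The classic culprit for exactly this pattern on 2008 R2 is EDNS: the R2 resolver sends EDNS probes by default, and some upstream firewalls drop the larger UDP replies. Disabling the probes is a common, reversible test:

        dnscmd /config /EnableEDNSProbes 0
        net stop dns && net start dns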

    Read the article

  • DHCP forwarding behind access list on a Cisco Catalyst

    - by Ásgeir Bjarnason
    I'm having some trouble with forwarding DHCP from a subnet behind an access list on a Cisco Catalyst 4500 switch. I'm hoping somebody can see the mistake I'm making. The subnet is defined like this (the first three octets of IP addresses and the VRF name are anonymized):

        interface Vlan40
         ip vrf forwarding vrf_name
         ip address 10.10.10.126 255.255.255.0 secondary
         ip address 10.10.10.254 255.255.255.0
         ip access-group 100 out
         ip helper-address 10.10.20.36
         no ip redirects

    I tried turning on a VMware machine on this subnet that was configured to use DHCP, but I never got a DHCP response and the DHCP server didn't receive a request. I tried putting the following in the access list:

        access-list 100 permit udp host 10.10.10.254 host 10.10.20.36 eq bootps
        access-list 100 permit udp host 10.10.10.254 host 10.10.20.36 eq bootpc
        access-list 100 permit udp host 10.10.20.36 host 10.10.10.254 eq bootps
        access-list 100 permit udp host 10.10.20.36 host 10.10.10.254 eq bootpc

    That didn't help. Can anybody see what the problem is?

    - I know that the DHCP server works; our whole network is running off of this DHCP server
    - I also know that the subnet works, because we have active servers running on it
    - The DHCP scope is already defined on the DHCP server
    - The subnet is correctly defined on the VMware server (there are already servers running on this subnet in VMware)

    Edit 2012-10-19: This is solved! The subnet had formerly been defined as a /25 network but was then expanded into a /24 network. When the DHCP scope was altered after this change, it was done incorrectly: the gateway was moved to .254 and the leasable IP range was in the lower half of the /24 subnet, but we forgot to change the CIDR prefix from /25 to /24. This happened some 2 years ago, and we didn't need to use DHCP on this server network again until this week. Thank you MDMarra and Jason Seemann for looking at the question and trying to troubleshoot. Now I'm wondering if I should mark Jason's answer as the accepted answer (I am new to the Stack Exchange network, so I don't know the etiquette for what to do if I misstated the question, as in this case).

    Read the article

  • Computer loses all installed programs and appears to return to an OS-only state

    - by Jake
    This is a story regarding 3 laptops of different brands and models. On separate occasions, I configured each of these Windows 7 / Vista computers with the necessary configuration and applications (which are supposedly the same), e.g. joining the office domain, the same Windows updates, Microsoft Office, etc. These machines were configured in our office in Singapore and then taken to India for use. Someday in India, when booting up the laptop, all went fine until it reached the login screen, where it was no longer possible to log in with domain credentials. Logging into the laptop's local admin account leads to the discovery that the machine has returned to an "OS-only state": all the configurations and applications are gone. The actual user profiles are still on the C: drive, so files can still be retrieved, but under Control Panel > Uninstall Programs it is evident that at least the registry is corrupted. The above scenario happened to the first 2 laptops. For the third, the system reports "Operating System Not Found" on boot up. I cannot think of any reason except to suspect a power fluctuation issue. The question is: will a power issue create this behaviour? What else can cause this issue?

    Read the article

  • Can't get Mac mini to turn on - no screen, no beep, only the fan, power light, and optical drive noise

    - by pibyers
    I have an Intel-based Mac mini, mid-2007 (model A1176). This is the computer my kids use, so I don't use it regularly. It had been working fine until one day my kids told me that it no longer works: the computer will not boot up. When I turn it on, the fan turns, the white power light on the front comes on, and there is a sound that appears to come from the optical drive (rather than the hard drive). I don't get anything on the monitor, nor do I get any chimes or other startup sounds from the computer. Here is what I've tried so far, to no avail:

    1. Swapped out the monitors early on, since I figured that was my weak link - no change
    2. Reset the PMU - no change
    3. Tried to boot up from the system disk - the mini loaded the DVD into the drive, but nothing else (I can't eject the disk so I can put it back)
    4. Started up the computer in target mode connected to another Mac - I tried this too, but I never got a chime or saw the disk show up on the other Mac

    I'm about out of ideas, apart from scrapping the computer. Does anyone have any ideas that I can try? Again, nothing has been done to the computer in at least 6 months, when I upgraded the RAM. I'm also still on Leopard. Thanks.

    Read the article

  • Linux udev persistent net rule

    - by Anonymous
    I have a Linux system (Slackware Linux 13.0) with two network interfaces; let's call them NIC0 and NIC1. My goal is to make NIC0 appear as eth0 in the system. I know this can be achieved via udev rules that map network aliases to the MAC addresses of network interfaces. In Slackware Linux, the file /etc/udev/rules.d/70-persistent-net.rules contains such rules. The trickiest part of my problem is that I need to fake the MAC address of NIC0. I know I can dynamically change the MAC address of a network interface with the command:

        ifconfig eth0 hw ether <new MAC address>

    Do you see the problem? This assumes the network interfaces are already set up. So my question is: if I had a udev rule for NIC1 (the one that shall come up as eth1, with its original MAC address), would that be enough for the system to bring the other network interface (NIC0) up as eth0 by default? That way I could change its MAC address later, after the udev machinery completes and the network aliases are brought up.
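
    For reference, the rule format in 70-persistent-net.rules looks like this (a sketch with a placeholder MAC address; one line per interface to pin):

        SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:55", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"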

    Read the article
