Search Results

Search found 14745 results on 590 pages for 'setting'.


  • Problem Disabling Roaming Profiles on Grouped Users

    - by user43207
    I'm having some serious issues getting a group of users to stop using roaming profiles. As expected, I have roaming profiles enabled accross the domain. - But am doing GPO filtering, limiting the scope. I originally had it set to authenticated users for Roaming, but as the domain has branched out to multiple locations, I've limited the scope to only people that are near the central office. The GPO that I have linked filtered to a group I have created that include users that I don't want to have roaming profiles. This GPO is sitting at the root of the domain, with the "Forced" setting enabled, so it should override any setting below it. *On a side note, it is the ONLY GPO that I have set to "Forced" right now. I know the GPO is working, since I can see the original registy settings on a user that logged in under roaming profiles - and then that same user logging in after I made the Group Policy changes, the registry reflects a local profile. But unfortunately, even after making those settings - the user is given a roaming profile on one of the servers. A gpresult of that same user account (after the updated gpo) is listed in the code block below. You can see right at the top of that output, that it is infact dealing with a roaming profile. - And sure enough, on the server that's hosting the file share for roaming profiles, it creates a folder for the user once they log in. For testing purposes, I've deleted all copies of the user's profile, roaming and local. But the problem is still here. - So I'm aparently missing something in the group policy settings on a wider scale. Would anybody be able to point me in the direction of what I'm missing here? *gpresult /r*** Microsoft (R) Windows (R) Operating System Group Policy Result tool v2.0 Copyright (C) Microsoft Corp. 
1981-2001 Created On 5/15/2010 at 8:59:00 AM RSOP data for ** on * : Logging Mode OS Configuration: Member Workstation OS Version: 6.1.7600 Site Name: N/A Roaming Profile: \\profiles$** Local Profile: C:\Users*** Connected over a slow link?: No USER SETTINGS CN=*****,OU=*****,OU=*****,OU=*****,DC=*****,DC=***** Last time Group Policy was applied: 5/15/2010 at 8:52:02 AM Group Policy was applied from: *****.*****.com Group Policy slow link threshold: 500 kbps Domain Name: USSLINDSTROM Domain Type: Windows 2000 Applied Group Policy Objects ----------------------------- ForceLocalProfilesOnly InternetExplorer_***** GlobalPasswordPolicy The following GPOs were not applied because they were filtered out ------------------------------------------------------------------- DAgentFirewallExceptions Filtering: Denied (Security) WSAdmin_***** Filtering: Denied (Security) NetlogonFirewallExceptions Filtering: Not Applied (Empty) NetLogon_***** Filtering: Denied (Security) WSUSUpdateScheduleManualInstall Filtering: Denied (Security) WSUSUpdateScheduleDaily_0300 Filtering: Denied (Security) WSUSUpdateScheduleThu_0100 Filtering: Denied (Security) AlternateSSLFirewallExceptions Filtering: Denied (Security) SNMPFirewallExceptions Filtering: Denied (Security) WSUSUpdateScheduleSun_0100 Filtering: Denied (Security) SQLServerFirewallExceptions Filtering: Denied (Security) WSUSUpdateScheduleTue_0100 Filtering: Denied (Security) WSUSUpdateScheduleSat_0100 Filtering: Denied (Security) DisableUAC Filtering: Denied (Security) ICMPFirewallExceptions Filtering: Denied (Security) AdminShareFirewallExceptions Filtering: Denied (Security) GPRefreshInterval Filtering: Denied (Security) ServeRAIDFirewallExceptions Filtering: Denied (Security) WSUSUpdateScheduleFri_0100 Filtering: Denied (Security) BlockFirewallExceptions(8400-8410) Filtering: Denied (Security) WSUSUpdateScheduleWed_0100 Filtering: Denied (Security) Local Group Policy Filtering: Not Applied (Empty) WSUS_***** Filtering: Denied (Security) LogonAsService_Idaho Filtering: Denied (Security) ReportServerFirewallExceptions Filtering: Denied (Security) WSUSUpdateScheduleMon_0100 Filtering: Denied (Security) TFSFirewallExceptions Filtering: Denied (Security) Default Domain Policy Filtering: Not Applied (Empty) DenyServerSideRoamingProfiles Filtering: Denied (Security) ShareConnectionsRemainAlive Filtering: Denied (Security) The user is a part of the following security groups --------------------------------------------------- Domain Users Everyone BUILTIN\Users BUILTIN\Administrators NT AUTHORITY\INTERACTIVE CONSOLE LOGON NT AUTHORITY\Authenticated Users This Organization LOCAL *****Users VPNAccess_***** NetAdmin_***** SiteAdmin_***** WSAdmin_***** VPNAccess_***** LocalProfileOnly_***** NetworkAdmin_***** LocalProfileOnly_***** VPNAccess_***** NetAdmin_***** Domain Admins WSAdmin_***** WSAdmin_***** ***** ***** Schema Admins ***** Enterprise Admins Denied RODC Password Replication Group High Mandatory Level
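
    Since the gpresult output above still reports a roaming profile path, two quick checks narrow down where that path comes from: the user object's own profile path in AD (which no profile GPO will remove), and whether the force-local-profiles policy actually landed in the workstation's registry. This is only a sketch: "jsmith" is a placeholder account, the dsget -profile switch assumes the 2003-era AD command-line tools are installed, and LocalProfile is my guess at the value a ForceLocalProfilesOnly-style policy writes, so verify it against the policy's explain text.

        rem On a DC or admin workstation: does the account itself carry a roaming profile path?
        dsquery user -samid jsmith | dsget user -profile

        rem On the affected client: did the forced policy write anything under the System policy key?
        rem (LocalProfile is an assumed value name - check your policy template)
        reg query "HKLM\SOFTWARE\Policies\Microsoft\Windows\System" /v LocalProfile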


  • Need help with yum, Python and PHP in CentOS (I made a complete mess!)

    - by pek
    a while back I wanted to install some plugins for Trac but it required python 2.5 I tried installing it (I don't remember how) and the only thing I managed was to have two versions of python (2.4 and 2.5). Trac still uses the old version but the console uses 2.5 (python -V = Python 2.5.2). Anyway, the problem is not python, the problem is yum (which uses python). I am trying to upgrade my PHP version from 5.1.x to 5.2.x. I tried following this tutorial but when I reach the step with yum I get this error: >[root@XXX]# yum update Loading "installonlyn" plugin Setting up Update Process Setting up repositories Reading repository metadata in from local files Traceback (most recent call last): File "/usr/bin/yum", line 29, in ? yummain.main(sys.argv[1:]) File "/usr/share/yum-cli/yummain.py", line 94, in main result, resultmsgs = base.doCommands() File "/usr/share/yum-cli/cli.py", line 381, in doCommands return self.yum_cli_commands[self.basecmd].doCommand(self, self.basecmd, self.extcmds) File "/usr/share/yum-cli/yumcommands.py", line 150, in doCommand return base.updatePkgs(extcmds) File "/usr/share/yum-cli/cli.py", line 672, in updatePkgs self.doRepoSetup() File "/usr/share/yum-cli/cli.py", line 109, in doRepoSetup self.doSackSetup(thisrepo=thisrepo) File "/usr/lib/python2.4/site-packages/yum/__init__.py", line 338, in doSackSetup self.repos.populateSack(which=repos) File "/usr/lib/python2.4/site-packages/yum/repos.py", line 200, in populateSack sack.populate(repo, with, callback, cacheonly) File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 91, in populate dobj = repo.cacheHandler.getPrimary(xml, csum) File "/usr/lib/python2.4/site-packages/yum/sqlitecache.py", line 100, in getPrimary return self._getbase(location, checksum, 'primary') File "/usr/lib/python2.4/site-packages/yum/sqlitecache.py", line 86, in _getbase (db, dbchecksum) = self.getDatabase(location, metadatatype) File "/usr/lib/python2.4/site-packages/yum/sqlitecache.py", line 82, in getDatabase db = self.makeSqliteCacheFile(filename,cachetype) File "/usr/lib/python2.4/site-packages/yum/sqlitecache.py", line 245, in makeSqliteCacheFile self.createTablesPrimary(db) File "/usr/lib/python2.4/site-packages/yum/sqlitecache.py", line 165, in createTablesPrimary cur.execute(q) File "/usr/lib/python2.4/site-packages/sqlite/main.py", line 244, in execute self.rs = self.con.db.execute(SQL) _sqlite.DatabaseError: near "release": syntax error Any help? Thank you. Update OK, so I've managed to update yum hoping it would solve my problems but now I get a slightly different version of the same error: [root@XXX]# yum -y update Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * addons: mirror.skiplink.com * base: www.gtlib.gatech.edu * epel: mirrors.tummy.com * extras: yum.singlehop.com * updates: centos-distro.cavecreek.net (process:30840): GLib-CRITICAL **: g_timer_stop: assertion `timer != NULL' failed (process:30840): GLib-CRITICAL **: g_timer_destroy: assertion `timer != NULL' failed Traceback (most recent call last): File "/usr/bin/yum", line 29, in ? 
yummain.user_main(sys.argv[1:], exit_code=True) File "/usr/share/yum-cli/yummain.py", line 309, in user_main errcode = main(args) File "/usr/share/yum-cli/yummain.py", line 178, in main result, resultmsgs = base.doCommands() File "/usr/share/yum-cli/cli.py", line 345, in doCommands self._getTs(needTsRemove) File "/usr/lib/python2.4/site-packages/yum/depsolve.py", line 101, in _getTs self._getTsInfo(remove_only) File "/usr/lib/python2.4/site-packages/yum/depsolve.py", line 112, in _getTsInfo pkgSack = self.pkgSack File "/usr/lib/python2.4/site-packages/yum/__init__.py", line 661, in <lambda> pkgSack = property(fget=lambda self: self._getSacks(), File "/usr/lib/python2.4/site-packages/yum/__init__.py", line 501, in _getSacks self.repos.populateSack(which=repos) File "/usr/lib/python2.4/site-packages/yum/repos.py", line 260, in populateSack sack.populate(repo, mdtype, callback, cacheonly) File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 190, in populate dobj = repo_cache_function(xml, csum) File "/usr/lib/python2.4/site-packages/sqlitecachec.py", line 42, in getPrimary self.repoid)) TypeError: Can not create packages table: near "release": syntax error I'm guessing that this "release" thing has something to do with a repository, but I didn't find anything... I went to the sqlitecachec.py at line 42 which writes (line numbers added for convenience): 39: return self.open_database(_sqlitecache.update_primary(location, 40: checksum, 41: self.callback, 42: self.repoid)) Update 2 I think I found the problem. This post suggests that the problem is sqlite and not yum. The version of sqlite I have installed is 3.6.10 but I have no idea which version does python 2.4 uses. ld.so.config contains the following: include ld.so.conf.d/*.conf /usr/local/lib In folder /usr/local/lib I find a symbolic link named libsqlite3.so that points to libsqlite3.so.0.8.6 WHAT IS HAPPENING??????? :S
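
    The Update 2 hunch can be tested directly: see which libsqlite the yum metadata C extension actually loads, and whether the copy in /usr/local/lib is shadowing the distribution's library. A sketch, with the extension filename left to find because the exact name on this box is an assumption taken from the traceback (sqlitecachec.py imports _sqlitecache):

        # locate the C extension the traceback goes through, then see which libsqlite3 it resolves to
        SO=$(find /usr/lib/python2.4/site-packages -name '_sqlitecache*.so' | head -1)
        echo "$SO"
        ldd "$SO" | grep -i sqlite
        # which libsqlite3 the dynamic loader prefers system-wide
        ldconfig -p | grep libsqlite3

    If ldd shows the extension picking up /usr/local/lib/libsqlite3.so.0 instead of the library the CentOS packages expect, that mismatch could explain the "near release: syntax error" coming out of the metadata cache code.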


  • Dual head setup for Ubuntu 10.04.1 and Windows XP Pro with same hardware configuration

    - by mejpark
    Hello. I have a Dell OptiPlex 360 workstation at work, with 2 x ATI RV280 [Radeon 9200 PRO] graphics cards installed, which are attached to two identical 19" HII flat panel monitors. I'm using the open source Radeon driver with Ubuntu, and the proprietary drivers with Windows. The good news is that dual head configuration works for both OSes. The bad news is, I have to use a different hardware configuration for each OS to achieve this. Hardware config #1: Dual monitors work for Windows XP Pro like this: First display -> external VGA port Second display -> DVI input on gfx card Hardware config #2: Dual monitors work for Ubuntu 10.04.1 like this: First display -> VGA port on gfx card Second display -> DVI input on gfx card I connected up the displays according to Config #2 and booted up Windows, which resulted in a mirror image on both screens. I was unable to login, as the login box was not visible. I unplugged the VGA lead from gfx card and plugged it into the external VGA port (Config #1) - Windows dual head works again, but the VGA-connected screen is not recognised by Ubuntu and remains in standby mode. Is it possible to configure a dual head setup for Ubuntu using Config #1, or am I missing something? I tried setting up dual monitors using Config #1, this morning which didn't work. By default, there is no xorg.conf file in Ubuntu 10.04.1, so I generated one using: $ sudo X :2 -configure X.Org X Server 1.7.6 Release Date: 2010-03-17 X Protocol Version 11, Revision 0 Build Operating System: Linux 2.6.24-27-server i686 Ubuntu Current Operating System: Linux harrier 2.6.32-24-generic #42-Ubuntu SMP Fri Aug 20 14:24:04 UTC 2010 i686 Kernel command line: BOOT_IMAGE=/boot/vmlinuz-2.6.32-24-generic root=UUID=a34c1931-98d4-4a34-880c-c227a2936c4a ro quiet splash Build Date: 21 July 2010 12:47:34PM xorg-server 2:1.7.6-2ubuntu7.3 (For technical support please see http://www.ubuntu.com/support) Current version of pixman: 0.16.4 Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. (==) Log file: "/var/log/Xorg.2.log", Time: Mon Sep 13 10:02:02 2010 List of video drivers: apm ark intel mach64 s3virge trident mga tseng ati nouveau neomagic i740 openchrome voodoo s3 i128 radeon siliconmotion nv ztv vmware v4l chips rendition savage sisusb tdfx geode sis r128 cirrus fbdev vesa (++) Using config file: "/home/michael/xorg.conf.new" (==) Using config directory: "/usr/lib/X11/xorg.conf.d" (II) [KMS] No DRICreatePCIBusID symbol, no kernel modesetting. Xorg detected your mouse at device /dev/input/mice. Please check your config if the mouse is still not operational, as by default Xorg tries to autodetect the protocol. Xorg has configured a multihead system, please check your config. Your xorg.conf file is /home/michael/xorg.conf.new To test the server, run 'X -config /home/michael/xorg.conf.new' ddxSigGiveUp: Closing log $ sudo X -config /home/michael/xorg.conf.new Fatal server error: Server is already active for display 0 If this server is no longer running, remove /tmp/.X0-lock and start again. Please consult the The X.Org Foundation support at http://wiki.x.org for help. ddxSigGiveUp: Closing log I then booted Ubuntu in failsafe mode, dropped into root shell, and executed $ X -config /home/michael/xorg.conf.new again. 
The screen went blank and turned off, so I reset the machine. There must be a way round this. Any help to set up a dual head config for Ubuntu using Config #1 would be hugely appreciated. TIA, Mike
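
    With two separate Radeon cards, the X server has to know about both devices before the second VGA head can light up, so a useful first step is to confirm what the kernel and the running server actually see. A sketch of the basic checks; output names such as VGA-0/DVI-0 vary by driver and are placeholders:

        # list both GPUs and note their PCI bus IDs (needed for per-card Device sections in xorg.conf)
        lspci | grep -i vga
        # inside the running session: which connectors the active X screen exposes
        xrandr -q

    If xrandr only lists the DVI output, the VGA connector on the second card is simply not part of the active screen, which would match the monitor staying in standby under Config #1.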


  • Does Apache ever give incorrect "out of threads" errors?

    - by Eli Courtwright
    Lately our Apache web server has been giving us this error multiple times per day: [Tue Apr 06 01:07:10 2010] [error] Server ran out of threads to serve requests. Consider raising the ThreadsPerChild setting We raised our ThreadsPerChild setting from 50 to 100, but we still get the error. Our access logs indicate that these errors never even happen at periods of high load. For example, here's an excerpt from our access log (ip addresses and some urls are edited for privacy). As you can see, the above error happened at 1:07 and only a small handful of requests occurred in the several minutes leading up to the error: 99.88.77.66 - - [06/Apr/2010:00:59:33 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/css/smoothness/images/ui-icons_222222_256x240.png HTTP/1.1" 304 - 99.88.77.66 - - [06/Apr/2010:00:59:34 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/css/smoothness/images/ui-bg_glass_75_dadada_1x400.png HTTP/1.1" 200 111 99.88.77.66 - - [06/Apr/2010:00:59:34 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/css/smoothness/images/ui-bg_glass_75_dadada_1x400.png HTTP/1.1" 200 111 99.88.77.66 - mpeu [06/Apr/2010:00:59:40 -0400] "GET /some/dynamic/content HTTP/1.1" 200 145049 55.44.33.22 - mpeu [06/Apr/2010:01:06:56 -0400] "GET /other/dynamic/content HTTP/1.1" 200 12311 55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/css/smoothness/jquery-ui-1.7.1.custom.css HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/js/jquery-1.3.2.min.js HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/js/jquery-ui-1.7.1.custom.min.js HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/jquery.tablesorter.min.js HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/date.js HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/pdfs/image1.gif HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/pdfs/image2.png HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/pdfs/image3.png HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/pdfs/image4.png HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/pdfs/image5.png HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/pdfs/image6.png HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/pdfs/image7.png HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:57 -0400] "GET /WebRepository/pdfs/image8.png HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:57 -0400] "GET /WebRepository/pdfs/image9.png HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:57 -0400] "GET /WebRepository/pdfs/imageA.png HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:57 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/css/smoothness/images/ui-bg_flat_75_ffffff_40x100.png HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:59 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/css/smoothness/images/ui-bg_highlight-soft_75_cccccc_1x100.png HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:59 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/css/smoothness/images/ui-bg_glass_75_e6e6e6_1x400.png HTTP/1.1" 200 110 55.44.33.22 - - [06/Apr/2010:01:06:59 -0400] "GET 
/WebRepository/jquery/jquery-ui-1.7.1.custom/css/smoothness/images/ui-bg_glass_75_e6e6e6_1x400.png HTTP/1.1" 200 110 11.22.33.44 - mpeu [06/Apr/2010:01:18:03 -0400] "GET /other/dynamic/content HTTP/1.1" 200 12311 11.22.33.44 - - [06/Apr/2010:01:18:03 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/js/jquery-1.3.2.min.js HTTP/1.1" 304 - 11.22.33.44 - - [06/Apr/2010:01:18:04 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/css/smoothness/jquery-ui-1.7.1.custom.css HTTP/1.1" 200 27374 11.22.33.44 - - [06/Apr/2010:01:18:04 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/js/jquery-ui-1.7.1.custom.min.js HTTP/1.1" 304 - 11.22.33.44 - - [06/Apr/2010:01:18:04 -0400] "GET /WebRepository/jquery.tablesorter.min.js HTTP/1.1" 200 12795 11.22.33.44 - - [06/Apr/2010:01:18:04 -0400] "GET /WebRepository/date.js HTTP/1.1" 200 25809 For what it's worth, we're running the version of Apache that ships with Oracle 10g (some 2.0 version), and we're using mod_plsql to generate our dynamic content. Since the Apache server runs as a separate process and the database doesn't record any problems when this error occurs, I'm doubtful that Oracle is the problem. Unfortunately, the errors are freaking out our sysadmins, who are inclined to blame any and all problems which occur with the server on this error. Is this a known bug in Apache that I simply haven't been able to find any reference to through Google?
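
    One way to test whether the worker pool is genuinely exhausted, rather than the error being spurious, is to sample the scoreboard around the times the message appears. This assumes mod_status is compiled in and enabled on the Oracle-bundled Apache and exposed at the conventional /server-status URL, neither of which is confirmed by the question:

        # append a timestamped, machine-readable scoreboard snapshot once a minute
        while true; do date; curl -s "http://localhost/server-status?auto"; sleep 60; done >> /tmp/scoreboard.log

    Comparing BusyWorkers/IdleWorkers in those snapshots against the error timestamps should show whether threads really run out, or whether something else (a stuck backend, for example) is going on.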


  • TS-7800 Hangs on bootup

    - by Reid
    I have a TS-7800, and it typically boots from the SD card inserted in it. When I tried to boot it up today, it hung on the syslog line. I am now having "Read only file system" problems. What has gone wrong? Bootup console: >> Copyright (c) 2008, Technologic Systems >> Booting from SD card... . . . . >> Booting to SD Card... INIT: version 2.86 booting Starting the hotplug events dispatcher: udevd. Synthesizing the initial hotplug events...done. Waiting for /dev to be fully populated...done. mount: can't find / in /etc/fstab or /etc/mtab Cleaning up ifupdown...rm: cannot remove `/etc/network/run/ifstate': Read-only file system Loading kernel modules...done. Checking all file systems... fsck 1.37 (21-Mar-2005) ... done. none on /dev/pts type devpts (rw,gid=5,mode=620) /etc/init.d/rcS: line 39: /tmp/.clean: Read-only file system Setting up networking...done. Setting up IP spoofing protection: rp_filter. Enabling packet forwarding...done. Configuring network interfaces...ifup: failed to open statefile /etc/network/run/ifstate: Read-only file system done. Starting portmap daemon: portmap. /etc/init.d/rcS: line 39: /tmp/.clean: Read-only file system /etc/init.d/rcS: line 24: /var/run/utmp: Read-only file system rm: cannot remove `/var/lib/urandom/random-seed': Read-only file system urandom start: failed. Recovering nvi editor sessions... done. INIT: Entering runlevel: 3 Starting system log daemon: syslogd . Starting kernel log daemon: klogd. Starting MTA: open: Read-only file system touch: cannot touch `/var/lib/exim4/config.autogenerated.tmp': Read-only file system chown: cannot access `/var/lib/exim4/config.autogenerated.tmp': No such file or directory chmod: cannot access `/var/lib/exim4/config.autogenerated.tmp': No such file or directory chmod: changing permissions of `/var/lib/exim4/config.autogenerated': Read-only file system /usr/sbin/update-exim4.conf: line 260: cannot create temp file for here document: Read-only file system /usr/sbin/update-exim4.conf: line 387: /var/lib/exim4/config.autogenerated.tmp: Read-only file system 2002-01-01 01:31:36 Cannot open main log file "/var/log/exim4/mainlog": Read-only file system: euid=0 egid=0 2002-01-01 01:31:36 non-existent configuration file(s): /var/lib/exim4/config.autogenerated.tmp 2002-01-01 01:31:36 Cannot open main log file "/var/log/exim4/mainlog": Read-only file system: euid=0 egid=0 exim: could not open panic log - aborting: see message(s) above Invalid new configfile /var/lib/exim4/config.autogenerated.tmp not installing /var/lib/exim4/config.autogenerated.tmp to /var/lib/exim4/config.autogenerated Starting internet superserver: inetd. Starting OpenBSD Secure Shell server: sshd. Starting NFS common utilities: statdStarting periodic command scheduler: cron/usr/sbin/cron: can't open or create /var/run/crond.pid: Read-only file system . Starting web server (apache2)...(30)Read-only file system: apache2: could not open error log file /var/log/apache2/error.log. Unable to open logs failed! Debian GNU/Linux 3.1 ts7800 ttyS0 ts7800 login:
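
    Those "Read-only file system" messages usually mean the kernel hit an I/O or filesystem error on the SD card during boot and remounted root read-only, so the card needs checking from outside the normal boot. A sketch, assuming the card appears as /dev/sdb1 when plugged into another Linux box and carries ext3 (both assumptions; check dmesg and your partition table first):

        # on the TS-7800: look for the error that triggered the read-only remount
        dmesg | grep -i -E 'error|read-only|ext3'
        # on another machine, with the card in a reader and the filesystem unmounted: repair it
        fsck.ext3 -f /dev/sdb1

    If fsck keeps finding damage after every boot, the card itself is likely failing and reimaging onto a fresh card is the safer fix.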


  • Huge performance difference between two web servers, odd behavior seen using Process Monitor

    - by Francis Gagnon
    We have two Coldfusion servers that have a huge performance difference running the exact same code on the exact same input data. The code in questions instantiates a large amount of CFCs (Coldfusion Components, which are similar to objects in OOP languages). I compared the two servers by running Process Monitor and then calling the problematic code on both machines. I learned two things. First, Coldfusion opens CFC files every time it instantiates an object. Both servers do this, so it cannot be the cause of the performance difference. Second, the fast server opens the CFC files directly while the server with the performance problem seems to navigate its way through the path until it reaches the desired CFC file. It does this for every file, even the ones it has previously loaded, and because the code instantiates so many CFCs it becomes very slow. See below the partial Promon traces that show this behavior. It can take over 60 seconds for the slow server to do what the fast one does in 2 seconds. Can anyone tell me what causes this behavior? Is it a Coldfusion setting? Since Coldfusion runs on top of Java, is it a Java setting? Is it an OS option? The fast server is running Windows XP and I think the slow server is a Windows Server 2003. Bonus question: Coldfusion doesn't seem to perform any READ FILE operations on any of the CFC or CFM files. How can this be? Sample of the fast server opening CFC files: 11:25:14.5588975 jrun.exe QueryOpen C:\CF\wwwroot\APP\com\HtmlUtils.cfc 11:25:14.5592758 jrun.exe CreateFile C:\CF\wwwroot\APP\com\HtmlUtils.cfc 11:25:14.5595024 jrun.exe QueryBasicInformationFile C:\CF\wwwroot\APP\com\HtmlUtils.cfc 11:25:14.5595940 jrun.exe CloseFile C:\CF\wwwroot\APP\com\HtmlUtils.cfc 11:25:14.5599628 jrun.exe CreateFile C:\CF\wwwroot\APP\com\HtmlUtils.cfc 11:25:14.5601600 jrun.exe QueryBasicInformationFile C:\CF\wwwroot\APP\com\HtmlUtils.cfc 11:25:14.5602463 jrun.exe CloseFile C:\CF\wwwroot\APP\com\HtmlUtils.cfc Equivalent sample of the slow server opening CFC files: 11:15:08.1249230 jrun.exe CreateFile D:\ 11:15:08.1250100 jrun.exe QueryDirectory D:\org 11:15:08.1252852 jrun.exe CloseFile D:\ 11:15:08.1259670 jrun.exe CreateFile D:\org 11:15:08.1260319 jrun.exe QueryDirectory D:\org\cli 11:15:08.1260769 jrun.exe CloseFile D:\org 11:15:08.1269451 jrun.exe CreateFile D:\org\cli 11:15:08.1270613 jrun.exe QueryDirectory D:\org\cli\cpn 11:15:08.1271140 jrun.exe CloseFile D:\org\cli 11:15:08.1279312 jrun.exe CreateFile D:\org\cli\cpn 11:15:08.1280086 jrun.exe QueryDirectory D:\org\cli\cpn\APP 11:15:08.1280789 jrun.exe CloseFile D:\org\cli\cpn 11:15:08.1291034 jrun.exe CreateFile D:\org\cli\cpn\APP 11:15:08.1291709 jrun.exe QueryDirectory D:\org\cli\cpn\APP\com 11:15:08.1292224 jrun.exe CloseFile D:\org\cli\cpn\APP 11:15:08.1300568 jrun.exe CreateFile D:\org\cli\cpn\APP\com 11:15:08.1301321 jrun.exe QueryDirectory D:\org\cli\cpn\APP\com\HtmlUtils.cfc 11:15:08.1301843 jrun.exe CloseFile D:\org\cli\cpn\APP\com 11:15:08.1312049 jrun.exe CreateFile D:\org\cli\cpn\APP\com\HtmlUtils.cfc 11:15:08.1314409 jrun.exe QueryBasicInformationFile D:\org\cli\cpn\APP\com\HtmlUtils.cfc 11:15:08.1314633 jrun.exe CloseFile D:\org\cli\cpn\APP\com\HtmlUtils.cfc 11:15:08.1315881 jrun.exe CreateFile D:\ 11:15:08.1316379 jrun.exe QueryDirectory D:\org 11:15:08.1316926 jrun.exe CloseFile D:\ 11:15:08.1330951 jrun.exe CreateFile D:\org 11:15:08.1338656 jrun.exe QueryDirectory D:\org\cli 11:15:08.1339118 jrun.exe CloseFile D:\org 11:15:08.1526468 jrun.exe CreateFile D:\org\cli 11:15:08.1527295 
jrun.exe QueryDirectory D:\org\cli\cpn 11:15:08.1527989 jrun.exe CloseFile D:\org\cli 11:15:08.1531977 jrun.exe CreateFile D:\org\cli\cpn 11:15:08.1532589 jrun.exe QueryDirectory D:\org\cli\cpn\APP 11:15:08.1533575 jrun.exe CloseFile D:\org\cli\cpn 11:15:08.1538457 jrun.exe CreateFile D:\org\cli\cpn\APP 11:15:08.1539083 jrun.exe QueryDirectory D:\org\cli\cpn\APP\com 11:15:08.1539553 jrun.exe CloseFile D:\org\cli\cpn\APP 11:15:08.1544126 jrun.exe CreateFile D:\org\cli\cpn\APP\com 11:15:08.1544980 jrun.exe QueryDirectory D:\org\cli\cpn\APP\com\HtmlUtils.cfc 11:15:08.1545482 jrun.exe CloseFile D:\org\cli\cpn\APP\com 11:15:08.1551034 jrun.exe CreateFile D:\org\cli\cpn\APP\com\HtmlUtils.cfc 11:15:08.1552878 jrun.exe QueryBasicInformationFile D:\org\cli\cpn\APP\com\HtmlUtils.cfc 11:15:08.1553044 jrun.exe CloseFile D:\org\cli\cpn\APP\com\HtmlUtils.cfc Thanks


  • Apache + mod_wsgi configuration for Django project(s) on a quad core

    - by Stefano
    I've been experiment quite some time with a "typical" django setting upon nginx+apache2+mod_wsgi+memcached(+postgresql) (reading the doc and some questions on SO and SF, see comments) Since I'm still unsatisfied with the behavior (definitely because of some bad misconfiguration on my part) I would like to know what a good configuration would look like with these hypotesis: Quad-Core Xeon 2.8GHz 8 gigs memory several django projects (anything special related to this?) These are excerpts form my current confs: apache2 SetEnv VHOST null #WSGIPythonOptimize 2 <VirtualHost *:8082> ServerName subdomain.domain.com ServerAlias www.domain.com SetEnv VHOST subdomain.domain AddDefaultCharset UTF-8 ServerSignature Off LogFormat "%{X-Real-IP}i %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\"" custom ErrorLog /home/project1/var/logs/apache_error.log CustomLog /home/project1/var/logs/apache_access.log custom AllowEncodedSlashes On WSGIDaemonProcess subdomain.domain user=www-data group=www-data threads=25 WSGIScriptAlias / /home/project1/project/wsgi.py WSGIProcessGroup %{ENV:VHOST} </VirtualHost> wsgi.py import os import sys # setting all the right paths.... _realpath = os.path.realpath(os.path.dirname(__file__)) _public_html = os.path.normpath(os.path.join(_realpath, '../')) sys.path.append(_realpath) sys.path.append(os.path.normpath(os.path.join(_realpath, 'apps'))) sys.path.append(os.path.normpath(_public_html)) sys.path.append(os.path.normpath(os.path.join(_public_html, 'libs'))) sys.path.append(os.path.normpath(os.path.join(_public_html, 'django'))) os.environ['DJANGO_SETTINGS_MODULE'] = 'settings' import django.core.handlers.wsgi _application = django.core.handlers.wsgi.WSGIHandler() def application(environ, start_response): """ Launches django passing over some environment (domain name) settings """ application_group = environ['mod_wsgi.application_group'] """ wsgi application group is required. It's also used to generate the HOST.DOMAIN.TLD:PORT parameters to pass over """ assert application_group fields = application_group.replace('|', '').split(':') server_name = fields[0] os.environ['WSGI_APPLICATION_GROUP'] = application_group os.environ['WSGI_SERVER_NAME'] = server_name if len(fields) > 1 : os.environ['WSGI_PORT'] = fields[1] splitted = server_name.rsplit('.', 2) assert splitted >= 2 splited.reverse() if len(splitted) > 0 : os.environ['WSGI_TLD'] = splitted[0] if len(splitted) > 1 : os.environ['WSGI_DOMAIN'] = splitted[1] if len(splitted) > 2 : os.environ['WSGI_HOST'] = splitted[2] return _application(environ, start_response)` folder structure in case it matters (slightly shortened actually) /home/www-data/projectN/var/logs /project (contains manage.py, wsgi.py, settings.py) /project/apps (all the project ups are here) /django /libs Please forgive me in advance if I overlooked something obvious. My main question is about the apache2 wsgi settings. Are those fine? Is 25 threads an /ok/ number with a quad core for one only django project? Is it still ok with several django projects on different virtual hosts? Should I specify 'process'? Any other directive which I should add? Is there anything really bad in the wsgi.py file? I've been reading about potential issues with the standard wsgi.py file, should I switch to that? Or.. should this conf just be running fine, and I should look for issues somewhere else? So, what do I mean by "unsatisfied": well, I often get quite high CPU WAIT; but what is worse, is that relatively often apache2 gets stuck. 
It just does not answer anymore, and has to be restarted. I have setup a monit to take care of that, but it ain't a real solution. I have been wondering if it's an issue with the database access (postgresql) under heavy load, but even if it was, why would the apache2 processes get stuck? Beside these two issues, performance is overall great. I even tried New Relic and got very good average results.
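
    For a quad core, the usual starting point with daemon mode is a few processes with a moderate thread count each, rather than one process with many threads, so that one stuck request (for example a hung PostgreSQL call) only ties up part of the pool. A hedged sketch of just the directive, with illustrative numbers rather than values tuned to this site:

        WSGIDaemonProcess subdomain.domain user=www-data group=www-data processes=4 threads=15 maximum-requests=1000 display-name=%{GROUP}

    maximum-requests recycles workers periodically, which can paper over slow leaks but will not fix the hangs themselves; display-name simply makes the daemon processes easy to spot in ps while investigating them.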


  • Nginx + PHP-FPM = "Random" 502 Bad Gateway

    - by david
    I am running Nginx and proxying php requests via FastCGI to PHP-FPM for processing. I will randomly receive 502 Bad Gateway error pages - I can reproduce this issue by clicking around my PHP websites very rapidly/refreshing a page for a minute or two. When I get the 502 error page all I have to do is refresh the browser and the page refreshes properly. Here is my setup: nginx/0.7.64 PHP 5.3.2 (fpm-fcgi) (built: Apr 1 2010 06:42:04) Ubuntu 9.10 (Latest 2.6 Paravirt) I compiled PHP-FPM using this ./configure directive ./configure --enable-fpm --sysconfdir=/etc/php5/conf.d --with-config-file-path=/etc/php5/conf.d/php.ini --with-zlib --with-openssl --enable-zip --enable-exif --enable-ftp --enable-mbstring --enable-mbregex --enable-soap --enable-sockets --disable-cgi --with-curl --with-curlwrappers --with-gd --with-mcrypt --enable-memcache --with-mhash --with-jpeg-dir=/usr/local/lib --with-mysql=/usr/bin/mysql --with-mysqli=/usr/bin/mysql_config --enable-pdo --with-pdo-mysql=/usr/bin/mysql --with-pdo-sqlite --with-pspell --with-snmp --with-sqlite --with-tidy --with-xmlrpc --with-xsl My php-fpm.conf looks like this (the relevant parts): ... <value name="pm"> <value name="max_children">3</value> ... <value name="request_terminate_timeout">60s</value> <value name="request_slowlog_timeout">30s</value> <value name="slowlog">/var/log/php-fpm.log.slow</value> <value name="rlimit_files">1024</value> <value name="rlimit_core">0</value> <value name="chroot"></value> <value name="chdir"></value> <value name="catch_workers_output">yes</value> <value name="max_requests">500</value> ... I've tried increasing the max_children to 10 and it makes no difference. I've also tried setting it to 'dynamic' and setting max_children to 50, and start_server to '5' without any difference. I have tried using both 1 and 5 nginx worker processes. 
My fastcgi_params conf looks like: fastcgi_connect_timeout 60; fastcgi_send_timeout 180; fastcgi_read_timeout 180; fastcgi_buffer_size 128k; fastcgi_buffers 4 256k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; fastcgi_intercept_errors on; fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; fastcgi_param REDIRECT_STATUS 200; Nginx logs the error as: [error] 3947#0: *10530 connect() failed (111: Connection refused) while connecting to upstream, client: 68.40.xxx.xxx, server: www.domain.com, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.domain.com" PHP-FPM logs the follow at the time of the error: [NOTICE] pid 17161, fpm_unix_init_main(), line 255: getrlimit(nofile): max:1024, cur:1024 [NOTICE] pid 17161, fpm_event_init_main(), line 93: libevent: using epoll [NOTICE] pid 17161, fpm_init(), line 50: fpm is running, pid 17161 [DEBUG] pid 17161, fpm_children_make(), line 403: [pool default] child 17162 started [DEBUG] pid 17161, fpm_children_make(), line 403: [pool default] child 17163 started [DEBUG] pid 17161, fpm_children_make(), line 403: [pool default] child 17164 started [NOTICE] pid 17161, fpm_event_loop(), line 111: ready to handle connections My CPU usage maxes out around 10-15% when I recreate the issue. My Free mem (free -m) is 130MB I had this intermittent 502 Bad Gateway issue when in was using php5-cgi to service my php requests as well. Does anyone know how to fix this?
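
    A "connection refused" on 127.0.0.1:9000 means nothing was accepting on that port at that instant, which with a static pool of 3 children and max_requests 500 is consistent with all children being busy or respawning at the same moment. Watching the pool while reproducing the 502s narrows it down; a sketch (the main FPM log path is a guess based on the slowlog path in the config above):

        # is anything listening on 9000 right now?
        netstat -lnt | grep :9000
        # how many fpm children exist while you hammer the site?
        watch -n 1 'ps -C php-fpm -o pid= | wc -l'
        # and what fpm itself says around the time of the 502s
        tail -f /var/log/php-fpm.log

    If the child count sits pinned at the maximum, raising max_children (or moving to a unix socket with a larger listen backlog) is the direction to look; if children are dying and restarting, the FPM log should say why.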


  • Disk operations freeze Debian

    - by Grzenio
    Hi, I have just installed Debian testing on my new desktop and I am not very happy with performance - when I perform a disk intensive operation, e.g. upgrade packages in the system, everything seems to freeze, e.g. changing tabs in Iceweasel takes 3 seconds. I run the Debian on my 3 year old Thinkpad X60 ultra-portable, and I don't have these issues. (every single parameter of the laptop is much worse than the desktop). I am using the default packaged kernel and scripts. I run hdparm -t /dev/sda1 And I got around 96GB/s, which is expected. What else can I try to make it work better? EDIT: grzes:/home/ga# hdparm -i /dev/sda /dev/sda: Model=WDC WD15EARS-00Z5B1, FwRev=80.00A80, SerialNo=WD-WMAVU1362357 Config={ HardSect NotMFM HdSw>15uSec SpinMotCtl Fixed DTR>5Mbs FmtGapReq } RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=50 BuffType=unknown, BuffSize=unknown, MaxMultSect=16, MultSect=16 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=2930277168 IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120} PIO modes: pio0 pio3 pio4 DMA modes: mdma0 mdma1 mdma2 UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6 AdvancedPM=no WriteCache=enabled Drive conforms to: Unspecified: ATA/ATAPI-1,2,3,4,5,6,7 * signifies the current active mode EDIT2: Even my wife said "on this new computer I can't do anything when I copy the photos from the camera and its much worse than on the old one". So it must be serious. EDIT3: Updated to 2.6.32, but still no improvement EDIT4: I forgot to mention that the new disk is ext4, the old was ext3. EDIT5: Still not solved. I have a P43 ASUS P5QL-E board. Lines from dmesg that seem relevant: [ 0.370850] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 253) [ 0.370852] io scheduler noop registered [ 0.370853] io scheduler anticipatory registered [ 0.370854] io scheduler deadline registered [ 0.370876] io scheduler cfq registered (default) ... [ 0.908233] ata_piix 0000:00:1f.2: version 2.13 [ 0.908243] ata_piix 0000:00:1f.2: PCI INT B -> GSI 19 (level, low) -> IRQ 19 [ 0.908246] ata_piix 0000:00:1f.2: MAP [ P0 P2 P1 P3 ] [ 0.908275] ata_piix 0000:00:1f.2: setting latency timer to 64 [ 0.908316] scsi0 : ata_piix [ 0.908374] scsi1 : ata_piix [ 0.909180] ata1: SATA max UDMA/133 cmd 0xa000 ctl 0x9c00 bmdma 0x9480 irq 19 [ 0.909183] ata2: SATA max UDMA/133 cmd 0x9880 ctl 0x9800 bmdma 0x9488 irq 19 [ 0.909199] ata_piix 0000:00:1f.5: PCI INT B -> GSI 19 (level, low) -> IRQ 19 [ 0.909202] ata_piix 0000:00:1f.5: MAP [ P0 -- P1 -- ] [ 0.909228] ata_piix 0000:00:1f.5: setting latency timer to 64 [ 0.909279] scsi2 : ata_piix [ 0.909326] scsi3 : ata_piix [ 0.910021] ata3: SATA max UDMA/133 cmd 0xb000 ctl 0xac00 bmdma 0xa480 irq 19 [ 0.910024] ata4: SATA max UDMA/133 cmd 0xa880 ctl 0xa800 bmdma 0xa488 irq 19 [ 0.915575] FDC 0 is a post-1991 82077 ... [ 1.716062] ata1.00: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [ 1.716074] ata1.01: SATA link down (SStatus 0 SControl 300) [ 1.724318] ata1.00: ATA-8: WDC WD15EARS-00Z5B1, 80.00A80, max UDMA/133 [ 1.724322] ata1.00: 2930277168 sectors, multi 16: LBA48 NCQ (depth 0/32) [ 1.740339] ata1.00: configured for UDMA/133 [ 1.740428] scsi 0:0:0:0: Direct-Access ATA WDC WD15EARS-00Z 80.0 PQ: 0 ANSI: 5 [ 1.746788] scsi 6:0:0:0: CD-ROM ASUS DRW-1608P 1.17 PQ: 0 ANSI: 5 ... 
[ 1.925981] sd 0:0:0:0: [sda] 2930277168 512-byte logical blocks: (1.50 TB/1.36 TiB) [ 1.926005] sd 0:0:0:0: [sda] Write Protect is off [ 1.926007] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 [ 1.926020] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [ 1.926092] sda:sr0: scsi3-mmc drive: 40x/40x writer cd/rw xa/form2 cdda tray [ 1.931106] Uniform CD-ROM driver Revision: 3.20 [ 1.931191] sr 6:0:0:0: Attached scsi CD-ROM sr0 ... [ 1.941936] sda1 sda2 sda3 sda4 < sda5 sda6 > [ 1.967691] sd 0:0:0:0: [sda] Attached SCSI disk [ 1.970938] sd 0:0:0:0: Attached scsi generic sg0 type 0 [ 1.970959] sr 6:0:0:0: Attached scsi generic sg1 type 5 ... [ 2.500086] EXT4-fs (sda3): mounted filesystem with ordered data mode ... [ 7.150468] EXT4-fs (sda6): mounted filesystem with ordered data mode
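
    One thing worth ruling out: the WD15EARS is a 4 KiB-sector ("Advanced Format") drive, and a partition that does not start on a 4 KiB boundary turns many writes into read-modify-write cycles, which produces exactly this kind of whole-system stall under write load while sequential hdparm reads still look fine. Checking alignment is cheap:

        # print partition start sectors; on a 4K-sector drive each start should be divisible by 8
        fdisk -lu /dev/sda

    If the data partitions start at the classic 63-sector offset, recreating them aligned (for example starting at sector 2048) is the usual remedy.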


  • Unable to connect to OpenVPN server

    - by Incognito
    I'm trying to get a working setup of OpenVPN on my VM and authenticate into it from a client. I'm not sure but it looks to me like it's socket related, as it's not set to LISTEN, and localhost seems wrong. I've never set up VPN before. # netstat -tulpn | grep vpn Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name udp 0 0 127.0.0.1:1194 0.0.0.0:* 24059/openvpn I don't think this is set up correctly. Here's some detail into what I've done. I have a VPS from MediaTemple: These are my interfaces before starting openvpn: lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:39482 errors:0 dropped:0 overruns:0 frame:0 TX packets:39482 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:3237452 (3.2 MB) TX bytes:3237452 (3.2 MB) venet0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:127.0.0.1 P-t-P:127.0.0.1 Bcast:0.0.0.0 Mask:255.255.255.255 UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1 RX packets:4885284 errors:0 dropped:0 overruns:0 frame:0 TX packets:4679884 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:835278537 (835.2 MB) TX bytes:1989289617 (1.9 GB) venet0:0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:205.[redacted] P-t-P:205.186.148.82 Bcast:0.0.0.0 Mask:255.255.255.255 UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1 I've followed this guide on setting up a basic server and getting a .p12 file, however, I was receiving an error that stated /dev/net/tun was missing, so I created it mkdir -p /dev/net mknod /dev/net/tun c 10 200 chmod 600 /dev/net/tun This resolved the error preventing the service from launching, however, I am unable to connect. On the server I've set up the myserver.conf file (as per the tutorial) to indicate local 127.0.0.1 (I've also attempted with the public IP address, perhaps I don't understand what they mean by local IP?). The server launches without error, this is what the log looks like when it starts: Sun Apr 1 17:21:27 2012 OpenVPN 2.1.3 x86_64-pc-linux-gnu [SSL] [LZO2] [EPOLL] [PKCS11] [MH] [PF_INET6] [eurephia] built on Mar 11 2011 Sun Apr 1 17:21:27 2012 IMPORTANT: OpenVPN's default port number is now 1194, based on an official port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000 as the default port. Sun Apr 1 17:21:27 2012 NOTE: the current --script-security setting may allow this configuration to call user-defined scripts Sun Apr 1 17:21:27 2012 /usr/bin/openssl-vulnkey -q -b 1024 -m <modulus omitted> Sun Apr 1 17:21:27 2012 TUN/TAP device tun0 opened Sun Apr 1 17:21:27 2012 /sbin/ifconfig tun0 10.8.0.1 pointopoint 10.8.0.2 mtu 1500 Sun Apr 1 17:21:27 2012 GID set to openvpn Sun Apr 1 17:21:27 2012 UID set to openvpn Sun Apr 1 17:21:27 2012 UDPv4 link local (bound): [AF_INET]127.0.0.1:1194 Sun Apr 1 17:21:27 2012 UDPv4 link remote: [undef] Sun Apr 1 17:21:27 2012 Initialization Sequence Completed This creates a tun0 interface that looks like this: tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:10.8.0.1 P-t-P:10.8.0.2 Mask:255.255.255.255 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) And the netstat command still indicates the state is not set to LISTEN. 
On the client-side I've installed the p12 certs onto two devices (one is an android tablet, the other is an Ubuntu desktop). I don't see port 1194 as open either. Both clients install the cert files and then ask me for the L2TP secret (which was set on the file), but then they oddly ask me for a username and a password, which I don't know where I could possibly get those from. I attempted all of my logins, and some whacky guesses that were frantically pulling at straws. If there's any more information I could provide let me know.
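
    Two details in the output point at the likely causes. First, UDP sockets never show a LISTEN state in netstat, so that part is normal; the real problem is that the server is bound to 127.0.0.1, which follows from the `local 127.0.0.1` line in the config and means nothing outside the VPS can ever reach it. Second, clients prompting for an L2TP secret and a username/password sound like they were configured as L2TP/IPsec clients rather than OpenVPN ones. A sketch of the server-side part, assuming the config lives at /etc/openvpn/myserver.conf as in the tutorial's naming (the path is an assumption):

        # stop binding to loopback: comment the "local" line out (or set it to the public address)
        sed -i 's/^local 127\.0\.0\.1/;local 127.0.0.1/' /etc/openvpn/myserver.conf
        /etc/init.d/openvpn restart
        # should now show 0.0.0.0:1194 or the venet0:0 public address
        netstat -anu | grep 1194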


  • Secure method of changing a user's password via Python script/non-interactively

    - by Matthew Rankin
    I've created a Python script using Fabric to configure a freshly built Slicehost Ubuntu slice. In case you're not familiar with Fabric, it uses Paramiko, a Python SSH2 client, to provide remote access "for application deployment or systems administration tasks." One of the first things I have the Fabric script do is to create a new admin user and set their password. Unlike Pexpect, Fabric cannot handle interactive commands on the remote system, so I need to set the user's password non-interactively. At present, I'm using the chpasswd command to change the password. This transmits the password as clear text over SSH to the remote system. Questions Is my current method of setting the password a security concern? Currently, the drawback I see is that Fabric shows the password as clear text on my local system as follows: [xxx.xx.xx.xxx] run: echo "johnsmith:supersecretpassw0rd" | chpasswd. Since I only run the Fabric script from my laptop, I don't think this is a security issue, but I'm interested in others' input. Is there a better method for setting the user's password non-interactively? Another option, would be to use Pexpect from within the Fabric script to set the password. Current Code # Fabric imports and host configuration excluded for brevity root_password = getpass.getpass("Root's password given by SliceManager: ") admin_username = prompt("Enter a username for the admin user to create: ") admin_password = getpass.getpass("Enter a password for the admin user: ") env.user = 'root' env.password = root_password # Create the admin group and add it to the sudoers file admin_group = 'admin' run('addgroup {group}'.format(group=admin_group)) run('echo "%{group} ALL=(ALL) ALL" >> /etc/sudoers'.format( group=admin_group) ) # Create the new admin user (default group=username); add to admin group run('adduser {username} --disabled-password --gecos ""'.format( username=admin_username) ) run('adduser {username} {group}'.format( username=admin_username, group=admin_group) ) # Set the password for the new admin user run('echo "{username}:{password}" | chpasswd'.format( username=admin_username, password=admin_password) ) Local System Terminal I/O $ fab config_rebuilt_slice Root's password given by SliceManager: Enter a username for the admin user to create: johnsmith Enter a password for the admin user: [xxx.xx.xx.xxx] run: addgroup admin [xxx.xx.xx.xxx] out: Adding group `admin' (GID 1000) ... [xxx.xx.xx.xxx] out: Done. [xxx.xx.xx.xxx] run: echo "%admin ALL=(ALL) ALL" >> /etc/sudoers [xxx.xx.xx.xxx] run: adduser johnsmith --disabled-password --gecos "" [xxx.xx.xx.xxx] out: Adding user `johnsmith' ... [xxx.xx.xx.xxx] out: Adding new group `johnsmith' (1001) ... [xxx.xx.xx.xxx] out: Adding new user `johnsmith' (1000) with group `johnsmith' ... [xxx.xx.xx.xxx] out: Creating home directory `/home/johnsmith' ... [xxx.xx.xx.xxx] out: Copying files from `/etc/skel' ... [xxx.xx.xx.xxx] run: adduser johnsmith admin [xxx.xx.xx.xxx] out: Adding user `johnsmith' to group `admin' ... [xxx.xx.xx.xxx] out: Adding user johnsmith to group admin [xxx.xx.xx.xxx] out: Done. [xxx.xx.xx.xxx] run: echo "johnsmith:supersecretpassw0rd" | chpasswd [xxx.xx.xx.xxx] run: passwd --lock root [xxx.xx.xx.xxx] out: passwd: password expiry information changed. Done. Disconnecting from [email protected]... done.
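
    One way to avoid putting the clear-text password on the command line at all is to hash it locally and hand only the hash to `chpasswd -e`, which expects pre-encrypted passwords. A minimal sketch of the idea in shell form (the Fabric run() call would wrap the same pipeline); `openssl passwd -1` produces an MD5 crypt hash, which Ubuntu of that era accepts:

        # locally: turn the password into a crypt hash (openssl prompts for it, so it never hits argv)
        HASH=$(openssl passwd -1)
        # remotely: feed the hash, not the password, to chpasswd
        echo "johnsmith:$HASH" | chpasswd -e

    The hash is then what shows up in Fabric's echoed command instead of the password; it is still worth treating as sensitive, but it is no longer a directly reusable clear-text secret.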


  • How can I tell Firefox to ignore unprintable characters?

    - by BrianH
    Edit: Summary Apparently the intended character to display in this case is an "en-dash". This page has a table half way down that shows that for the &ndash;, some software will convert the correct hex code of 2013 to 0096. (look at the first row in the table). This answer on Stackoverflow explains that somehow this is a mixup between Windows-1252 and UTF-8 This blog article enforces this: Character 150 (0x96) is the unicode character "START OF GUARDED AREA" in the non-displayed C1 control character range, but in the Windows-1252 encoding it's mapped to to the displayable character 0x2013 "en-dash" (a short dash). Others have struggled with this when producing content, as this answer on Stackoverflow shows how to replace 0x0096 with 0x2013. Google must realize this, because as stated in my original question below, Google's cached version of the Amazon page has &ndash; so it seems they are automatically correcting these mistakes on pages they cache. I have tried setting my encoding to Windows-1252 but that does not help. So now I guess my question is, how can I tell Firefox to ignore unprintable characters like these? Original content below: (Firefox 3.6.13 on Windows XP) Every once in a while I notice an odd character on certain web pages when browsing the web. It is a outline of a box with a 4-digit number inside. And example of a page that has these characters is: http://aws.amazon.com/ec2/#highlights After each section heading (Elastic, Completely Controlled, ...) I see a box with the number "0096" inside. I looked at the cached version on Google, and google has &ndash; in it's place, so I'm guessing I should be seeing a dash there instead of the box with the numbers in it. I have tried changing the character encoding in Firefox but haven't been able to find one that shows these characters correctly. Is there a way to allow Firefox to view these characters? Thanks in advance! Edit - adding a screen shot of the "special" characters: Edit #2 - tried in Ubuntu - new screenshots I logged into my Ubuntu desktop and browsed to the amazon page in Chrome and Firefox. Chrome completely ignores character, even if I inspect or view page source. Firefox in Unbutu displays the character exactly like Firefox on my Windows XP box. I copied the character and played around with it at the command line - here is a screenshot of the results: It looks like I can paste the character into this post as well: `` It is definitely not isolated to Windows XP. I tried setting the character encoding for my terminal to Windows 1252 (from Dennis' comment below), but then it just displays this character as a question mark. I pulled the webpage down with wget and with curl, and both outputs show this characters as: <96> It makes me wonder if this character renders correctly for anyone? It appears webkit just ignores it, my IE6 ignores it, Firefox displays the box with the numbers in it. I would have to imagine the design team at Amazon can see it correctly? It's not a huge deal to get these characters displaying correctly, but it would be nice to know if there is a solution to this.
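
    The 0096 box is Firefox rendering the raw byte 0x96, which is an en-dash only under Windows-1252 and is an unprintable control character in the encoding the page is actually served as. You can demonstrate that the byte itself is at fault, rather than your fonts or settings, by re-decoding the fetched page as Windows-1252; a sketch using the Amazon URL from the question:

        # reinterpret the page as Windows-1252; the 0x96 bytes come out as proper en-dashes
        curl -s http://aws.amazon.com/ec2/ | iconv -f WINDOWS-1252 -t UTF-8 | grep -o 'Elastic.\{0,30\}'

    If the grep output shows a normal dash, the page's bytes (not your setup) are the problem.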


  • UFW as an active service on Ubuntu

    - by lamcro
    Every time I restart my computer, and check the status of the UFW firewall (sudo ufw status), it is disabled, even if I then enable and restart it. I tried putting sudo ufw enable as one of the startup applications but it asks for the sudo password every time I log on, and I'm guessing it does not protect anyone else who logs on my computer. How can I setup ufw so it is activated when I turn on my computer, and protects all accounts? Update I just tried /etc/init.d/ufw start, and it activated the firewall. Then I restarted the computer, and again it was disabled. content of /etc/ufw/ufw.conf # /etc/ufw/ufw.conf # # set to yes to start on boot ENABLED=yes # set to one of 'off', 'low', 'medium', 'high' LOGLEVEL=full content of /etc/default/ufw # /etc/default/ufw # # Set to yes to apply rules to support IPv6 (no means only IPv6 on loopback # accepted). You will need to 'disable' and then 'enable' the firewall for # the changes to take affect. IPV6=no # Set the default input policy to ACCEPT, ACCEPT_NO_TRACK, DROP, or REJECT. # ACCEPT enables connection tracking for NEW inbound packets on the INPUT # chain, whereas ACCEPT_NO_TRACK does not use connection tracking. Please note # that if you change this you will most likely want to adjust your rules. DEFAULT_INPUT_POLICY="DROP" # Set the default output policy to ACCEPT, ACCEPT_NO_TRACK, DROP, or REJECT. # ACCEPT enables connection tracking for NEW outbound packets on the OUTPUT # chain, whereas ACCEPT_NO_TRACK does not use connection tracking. Please note # that if you change this you will most likely want to adjust your rules. DEFAULT_OUTPUT_POLICY="ACCEPT" # Set the default forward policy to ACCEPT, DROP or REJECT. Please note that # if you change this you will most likely want to adjust your rules DEFAULT_FORWARD_POLICY="DROP" # Set the default application policy to ACCEPT, DROP, REJECT or SKIP. Please # note that setting this to ACCEPT may be a security risk. See 'man ufw' for # details DEFAULT_APPLICATION_POLICY="SKIP" # By default, ufw only touches its own chains. Set this to 'yes' to have ufw # manage the built-in chains too. Warning: setting this to 'yes' will break # non-ufw managed firewall rules MANAGE_BUILTINS=no # # IPT backend # # only enable if using iptables backend IPT_SYSCTL=/etc/ufw/sysctl.conf # extra connection tracking modules to load IPT_MODULES="nf_conntrack_ftp nf_nat_ftp nf_conntrack_irc nf_nat_irc" Update Followed your advise and ran update-rc.d with no luck. lester@mcgrath-pc:~$ sudo update-rc.d ufw defaults update-rc.d: warning: /etc/init.d/ufw missing LSB information update-rc.d: see <http://wiki.debian.org/LSBInitScripts> Adding system startup for /etc/init.d/ufw ... /etc/rc0.d/K20ufw -> ../init.d/ufw /etc/rc1.d/K20ufw -> ../init.d/ufw /etc/rc6.d/K20ufw -> ../init.d/ufw /etc/rc2.d/S20ufw -> ../init.d/ufw /etc/rc3.d/S20ufw -> ../init.d/ufw /etc/rc4.d/S20ufw -> ../init.d/ufw /etc/rc5.d/S20ufw -> ../init.d/ufw lester@mcgrath-pc:~$ ls -l /etc/rc?.d/*ufw lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc0.d/K20ufw -> ../init.d/ufw lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc1.d/K20ufw -> ../init.d/ufw lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc2.d/S20ufw -> ../init.d/ufw lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc3.d/S20ufw -> ../init.d/ufw lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc4.d/S20ufw -> ../init.d/ufw lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc5.d/S20ufw -> ../init.d/ufw lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc6.d/K20ufw -> ../init.d/ufw


  • Virtualmin Webmin does not respond

    - by Miranda
    I have installed Virtualmin on a CentOS remote server, but it dose not seem to work https://115.146.95.118:10000/ at least the Webmin page dose not work. I have opened those ports http ALLOW 80:80 from 0.0.0.0/0 ALLOW 443:443 from 0.0.0.0/0 ssh ALLOW 22:22 from 0.0.0.0/0 virtualmin ALLOW 20000:20000 from 0.0.0.0/0 ALLOW 10000:10009 from 0.0.0.0/0 And restarting Webmin dose not solve it: /etc/rc.d/init.d/webmin restart Stopping Webmin server in /usr/libexec/webmin Starting Webmin server in /usr/libexec/webmin And I have tried to use Amazon EC2 this time, still couldn't get it to work. http://ec2-67-202-21-21.compute-1.amazonaws.com:10000/ [ec2-user@ip-10-118-239-13 ~]$ netstat -an | grep :10000 tcp 0 0 0.0.0.0:10000 0.0.0.0:* LISTEN udp 0 0 0.0.0.0:10000 0.0.0.0:* [ec2-user@ip-10-118-239-13 ~]$ sudo iptables -L -n Chain INPUT (policy ACCEPT) target prot opt source destination ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:20 ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:21 ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:53 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:20000 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:10000 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:443 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:993 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:143 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:995 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:110 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:20 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:21 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:53 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:587 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:25 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination Since I need more than 10 reputation to post image, you can find the screenshots of the security group setting at the Webmin Support Forum. I have tried: sudo iptables -A INPUT -p tcp -m tcp --dport 10000 -j ACCEPT It did not change anything. [ec2-user@ip-10-118-239-13 ~]$ sudo yum install openssl perl-Net-SSLeay perl-Crypt-SSLeay Loaded plugins: fastestmirror, priorities, security, update-motd Loading mirror speeds from cached hostfile * amzn-main: packages.us-east-1.amazonaws.com * amzn-updates: packages.us-east-1.amazonaws.com amzn-main | 2.1 kB 00:00 amzn-updates | 2.3 kB 00:00 Setting up Install Process Package openssl-1.0.0j-1.43.amzn1.i686 already installed and latest version Package perl-Net-SSLeay-1.35-9.4.amzn1.i686 already installed and latest version Package perl-Crypt-SSLeay-0.57-16.4.amzn1.i686 already installed and latest version Nothing to do [ec2-user@ip-10-118-239-13 ~]$ nano /etc/webmin/miniserv.conf GNU nano 2.0.9 File: /etc/webmin/miniserv.conf port=10000 root=/usr/libexec/webmin mimetypes=/usr/libexec/webmin/mime.types addtype_cgi=internal/cgi realm=Webmin Server logfile=/var/webmin/miniserv.log errorlog=/var/webmin/miniserv.error pidfile=/var/webmin/miniserv.pid logtime=168 ppath= ssl=1 env_WEBMIN_CONFIG=/etc/webmin env_WEBMIN_VAR=/var/webmin atboot=1 logout=/etc/webmin/logout-flag listen=10000 denyfile=\.pl$ log=1 blockhost_failures=5 blockhost_time=60 syslog=1 session=1 server=MiniServ/1.585 userfile=/etc/webmin/miniserv.users keyfile=/etc/webmin/miniserv.pem passwd_file=/etc/shadow passwd_uindex=0 passwd_pindex=1 passwd_cindex=2 passwd_mindex=4 passwd_mode=0 preroot=virtual-server-theme passdelay=1 sessiononly=/virtual-server/remote.cgi preload= mobile_preroot=virtual-server-mobile mobile_prefixes=m. 
mobile. anonymous=/virtualmin-mailman/unauthenticated=anonymous ssl_cipher_list=ECDHE-RSA-AES256-SHA384:AES256-SHA256:AES256-SHA256:RC4:HIGH:MEDIUM:+TLSv1:!MD5:!SSLv2:+SSLv3:!ADH:!aNULL:!eNULL:!NULL:!DH:!ADH:!EDH:!AESGCM
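
    One detail worth noting from the miniserv.conf dump above: ssl=1, so Webmin on port 10000 expects HTTPS. The EC2 URL quoted above uses plain http, which typically returns only an SSL notice or a browser error rather than the login page. A few quick checks (a sketch; the hostname is simply the one quoted above, and the EC2 security group must also allow inbound TCP 10000):
        # From your workstation: request the page over TLS and ignore the self-signed certificate
        curl -vk https://ec2-67-202-21-21.compute-1.amazonaws.com:10000/
        # On the instance: confirm the listener, and that no earlier iptables rule rejects port 10000
        sudo netstat -tlnp | grep :10000
        sudo iptables -L INPUT -n --line-numbers | grep 10000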

    Read the article

  • Debian Lenny to Debian Squeeze upgrade problems

    - by Roland Soós
    Hi! Yesterday I made a dist-upgrade on my Debian Lenny server. I thought it would be as easy as a usual upgrade, but it wasn't. I got a lot of problems after the update:
    # apt-get upgrade Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these. The following packages have unmet dependencies: linux-image-2.6-amd64 : Depends: linux-image-2.6.32-5-amd64 but it is not installed E: Unmet dependencies. Try using -f.
    Then I tried the suggestion:
    # apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following packages were automatically installed and are no longer required: libio-compress-base-perl libatk1.0-0 libts-0.0-0 libmime-types-perl libc-client2007b libgtk2.0-common libxfixes3 libgsf-1-common hicolor-icon-theme libfile-remove-perl libxcomposite1 libltdl3-dev libneon27 libmd5-perl libwmf0.2-7 libilmbase6 libatk1.0-data djvulibre-desktop libdirectfb-1.0-0 fam libxinerama1 libcroco3 libopenexr6 libgsf-1-114 libmail-box-perl libdjvulibre21 openssl-blacklist librsvg2-2 libio-compress-zlib-perl libsysfs2 libbeecrypt6 libxdamage1 libobject-realize-later-perl libuser-identity-perl libgtk2.0-bin libxi6 libxcursor1 portmap libxrandr2 libgtk2.0-0 Use 'apt-get autoremove' to remove them. The following extra packages will be installed: linux-image-2.6.32-5-amd64 Suggested packages: linux-doc-2.6.32 The following NEW packages will be installed: linux-image-2.6.32-5-amd64 0 upgraded, 1 newly installed, 0 to remove and 121 not upgraded. 98 not fully installed or removed. Need to get 0 B/28.6 MB of archives. After this operation, 103 MB of additional disk space will be used. Do you want to continue [Y/n]? y perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "hu_HU.UTF-8" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: Nincs ilyen fájl vagy könyvtár Preconfiguring packages ... (Reading database ... 37915 files and directories currently installed.) Unpacking linux-image-2.6.32-5-amd64 (from .../linux-image-2.6.32-5-amd64_2.6.32-30_amd64.deb) ... locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: Nincs ilyen fájl vagy könyvtár dpkg: error processing /var/cache/apt/archives/linux-image-2.6.32-5-amd64_2.6.32-30_amd64.deb (--unpack): failed in write on buffer copy for backend dpkg-deb during `./lib/modules/2.6.32-5-amd64/kernel/sound/pci/hda/snd-hda-codec-realtek.ko': No space left on device configured to not write apport reports dpkg-deb: subprocess paste killed by signal (Broken pipe) locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: Nincs ilyen fájl vagy könyvtár Running postrm hook script /sbin/update-grub. Searching for GRUB installation directory ... found: /boot/grub Searching for default file ... found: /boot/grub/default Testing for an existing GRUB menu.lst file ... found: /boot/grub/menu.lst Searching for splash image ... none found, skipping ... Found kernel: /boot/vmlinuz-2.6.26-2-amd64 Updating /boot/grub/menu.lst ... done Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 2.6.32-5-amd64 /boot/vmlinuz-2.6.32-5-amd64 Errors were encountered while processing: /var/cache/apt/archives/linux-image-2.6.32-5-amd64_2.6.32-30_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1)
    # dpkg-reconfigure locales perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "hu_HU.UTF-8" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: Nincs ilyen fájl vagy könyvtár /usr/sbin/dpkg-reconfigure: locales is broken or not fully installed
    Then I got stuck. Do you have any idea how I could solve this?
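
    The two fatal messages above are "No space left on device" while unpacking the new kernel, and a hu_HU.UTF-8 locale that is not generated. A minimal recovery sketch, assuming those are the only problems (all commands run as root):
        # See which filesystem is full; the unpack failed while writing under /lib/modules
        df -h /
        # Free space: clear the apt cache and remove the packages apt already listed as no longer required
        apt-get clean
        apt-get autoremove
        # Retry the interrupted kernel installation and finish any half-configured packages
        apt-get -f install
        dpkg --configure -a
        # Repair the locale once the locales package is configured again
        apt-get install --reinstall locales
        sed -i 's/^# *hu_HU.UTF-8/hu_HU.UTF-8/' /etc/locale.gen
        locale-gen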

    Read the article

  • Random Slow Response

    - by ARehman
    We have an ASP.NET MVC 1.0 application running on Windows Server 2008 – Standard (32-bit), Dual Core Xeon (3.0 GHz), 2 GB RAM. Most of the time the application renders a response in 3-4 seconds, but sometimes users get a very late response, with a delay of up to 40 seconds or more than a minute. It happens in the following way: a user browses a page, sits idle for 5, 10 or 15 minutes, then tries to browse the same page or some other. Now there is a chance that he will see a late response even though the app pool is still up and running. This can happen with any arbitrary page. We have tried the following and observed: Moved the application to a standalone web server. App Pool idle shutdown time is 60 minutes. There are no abrupt shut downs/restarts. CPU or memory doesn’t spike. No delays in SQL queries. Modified the App Pool setting to run in classic mode. It didn’t help. Plugged in a custom module to log all those requests which took more than 5 seconds to complete. It didn’t pick up any request of interest. Enabled ‘Failed Request Tracing’ to log all those requests which take 20 or more seconds to complete. It didn’t log anything. Event Viewer, HTTPERR log, W3SVC logs or WAS logs don’t indicate anything. HTTPERR only has ‘_ _ Timer_ConnectionIdle _ _’ entries. There is not much traffic to the server. This can happen even if only two users are active. Next we captured TCP/IP traffic on both the user and server ends with Wireshark, and below are brief details of this slowness: The browser sends a request for ~/User/Home/ (GET Request) by setting up a receiving end point using port 'wlbs(port-2504)'. I'm not sure if this could be a problem in some way, in that the browser didn't hand-shake with the server first and assumed the last connection was still open, whereas I browsed the same page 4 minutes ago and didn't perform any activity with the site after that. If I look at the HTTPERR log, it has a ‘_ _ Timer_ConnectionIdle _ _ _’ entry for my last activity with the server. The browser (I was using Chrome) waits for any response from the server, doesn’t find any, then starts retransmitting the same request using the same end point after increasing wait intervals, e.g. after 8, 18, 29, 40, 62, and 92 seconds. All these GET requests were received by the server as well. But the server didn’t send any packet to the client. The browser, seeing no response on the end point it set up in point 1, opened a new end point 'optiwave-lm (port-2524)', did a handshake with the server and transmitted the same request again. The server received it, processed it, and returned a successful response. What happened to the earlier 6-7 requests? Were they passed on to HTTP.SYS or not? As to why Failed Request Tracing logged nothing, we haven't found any clue yet. The server served the same page successfully just 4 minutes ago. Looking forward to more suggestions/solutions. -- Thanks

    Read the article

  • Why is IIS Anonymous authentication being used with administrative UNC drive access?

    - by Mark Lindell
    My account is local administrator on my machine. If I try to browse to a non-existent drive letter on my own box using a UNC path name: \mymachine\x$ my account would get locked out. I would also get the following warning (Event ID 100, Type “Warning”) 5 times under the “System” group in Event Viewer on my box: The server was unable to logon the Windows NT account 'ourdomain\myaccount' due to the following error: Logon failure: unknown user name or bad password. I would also get the following warning 3 times: The server was unable to logon the Windows NT account 'ourdomain\myaccount' due to the following error: The referenced account is currently locked out and may not be logged on to. On the domain controller, Event ID 680 of type “Failure Audit” would appear 4 times under the “Security” group in Event Viewer: Logon attempt by: MICROSOFT_AUTHENTICATION_PACKAGE_V1_0 Logon account: myaccount Followed by Event ID 644: User Account Locked Out: Target Account Name: myaccount Target Account ID: OURDOMAIN\myaccount Caller Machine Name: MYMACHINE Caller User Name: STAN$ Caller Domain: OURDOMAIN Caller Logon ID: (0x0,0x3E7) Followed by another 4 errors having Event ID 680. Strangely, every time I tried to browse to the UNC path I would be prompted for a user name and password, the above errors would be written to the log, and my account would be locked out. When I hit “Cancel” in response to the user name/password prompt, the following message box would display: Windows cannot find \mymachine\x$. Check the spelling and try again, or try searching for the item by clicking the Start button and then clicking Search. I checked with others in the group using XP and they only got the above message box when browsing to a “bad” drive letter on their box. No one else was prompted for a user name/password and then locked out. So, every time I tried to browse to the “bad” drive letter, behind the scenes XP was trying to login 8 times using bad credentials (or, at least a bad password as the login was correct), causing my account to get locked out on the 4th try. Interestingly, If I tried browsing to a “good” drive such as “c$” it would work fine. As a test, I tried logging on to my box as a different login and browsing the “bad” UNC path. Strangely, my “ourdomain\myaccount” account was getting locked out – not the one I was logged in as! I was totally confused as to why the credentials for the other login were being passed. After much Googling, I found a link referring to some IIS settings I was vaguely familiar with from the past but could not see how they would affect this issue. It was related to the IIS directory security setting “Anonymous access and authentication control” located under: Control Panel/Administrative Tools/Computer Management/Services and Applications/Internet Information Services/Web Sites/Default Web Site/Properties/Directory Security/Anonymous access and authentication control/Edit/Password I found no indication while scouring the Internet that this property was related to my UNC problem. But, I did notice that this property was set to my domain user name and password. And, my password did age recently but I had not reset the password accordingly for this property. Sure enough, keying in the new password corrected the problem. I was no longer prompted for a user name/password when browsing the UNC path and the account lock-outs ceased. Now, a couple of questions: Why would an IIS setting affect the browsing of a UNC path on a local box? 
Why had I not encountered this problem before? My password has aged several times and I’ve never encountered this problem. And, I can’t remember the last time I updated the “Anonymous access” IIS password it’s been so long. I’ve run the script after a password reset before and never had my account locked-out due to the UNC problem (the script accesses UNC paths as a normal part of its processing). Windows Update did install “Cumulative Security Update for Internet Explorer 7 for Windows XP (KB972260)” on my box on 7/29/2009. I wonder if this is responsible.

    Read the article

  • Installation error on Ubuntu 11.10

    - by Abhishek Chanda
    I upgraded to Ubuntu 11.10 and now, when I try to install or uninstall software, I get this error: installArchives() failed: (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 158945 files and directories currently installed.) Removing aisleriot ... Processing triggers for gconf2 ... Processing triggers for man-db ... Processing triggers for hicolor-icon-theme ... Processing triggers for libglib2.0-0 ... Processing triggers for gnome-menus ... Processing triggers for desktop-file-utils ... Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Setting up flashplugin-downloader (11.0.1.152ubuntu1) ... Downloading... --2012-05-02 18:47:29-- http://archive.canonical.com/pool/partner/a/adobe-flashplugin/adobe-flashplugin_11.0.1.152.orig.tar.gz Resolving archive.canonical.com... 91.189.92.150, 91.189.92.191 Connecting to archive.canonical.com|91.189.92.150|:80... connected. HTTP request sent, awaiting response... 404 Not Found 2012-05-02 18:47:29 ERROR 404: Not Found. download failed The Flash plugin is NOT installed. dpkg: error processing flashplugin-downloader (--configure): subprocess installed post-installation script returned error exit status 1 dpkg: dependency problems prevent configuration of flashplugin-installer: flashplugin-installer depends on flashplugin-downloader (>= 11.0.1.152ubuntu1); however: Package flashplugin-downloader is not configured yet. dpkg: error processing flashplugin-installer (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. Errors were encountered while processing: flashplugin-downloader flashplugin-installer Error in function: SystemError: E:Sub-process /usr/bin/dpkg returned an error code (1) Setting up flashplugin-downloader (11.0.1.152ubuntu1) ... Downloading... --2012-05-02 18:47:33-- http://archive.canonical.com/pool/partner/a/adobe-flashplugin/adobe-flashplugin_11.0.1.152.orig.tar.gz Resolving archive.canonical.com... 91.189.92.191, 91.189.92.150 Connecting to archive.canonical.com|91.189.92.191|:80... connected. HTTP request sent, awaiting response... 404 Not Found 2012-05-02 18:47:34 ERROR 404: Not Found. download failed The Flash plugin is NOT installed. dpkg: error processing flashplugin-downloader (--configure): subprocess installed post-installation script returned error exit status 1 dpkg: dependency problems prevent configuration of flashplugin-installer: flashplugin-installer depends on flashplugin-downloader (>= 11.0.1.152ubuntu1); however: Package flashplugin-downloader is not configured yet. dpkg: error processing flashplugin-installer (--configure): dependency problems - leaving unconfigured This seems to be a bug that has already been reported. Does anyone know a workaround?
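
    The underlying failure is the 404 on the adobe-flashplugin tarball: every dpkg run re-triggers the flashplugin-downloader post-install script, which fails and aborts the whole operation. One commonly used workaround (a sketch, not the only option) is to take the two flash packages out of the picture so dpkg can finish, and reinstall them later:
        # Purge the packages whose post-install script keeps failing on the 404
        sudo apt-get remove --purge flashplugin-downloader flashplugin-installer
        # Let apt/dpkg finish everything that was left half-configured
        sudo apt-get -f install
        sudo dpkg --configure -a
        # Later, once the archive URL resolves again:
        # sudo apt-get update && sudo apt-get install flashplugin-installer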

    Read the article

  • Cannot Start MySQL Server on Fresh MAMP Install

    - by alexpelan
    I'm using Mac OS X 10.6.2 on my Macbook Pro. I can get the apache server to start, but not the mysql server, on both the default apache and default MAMP ports. When I try to go to my start page, I get the message "Error: Could not connect to MySQL server!" . Here's what's in my mysql error log: 00513 02:00:07 mysqld_safe mysqld from pid file /Applications/MAMP/tmp/mysql/mysql.pid ended 100513 02:00:16 mysqld_safe Starting mysqld daemon with databases from /Applications/MAMP/db/mysql 100513 2:00:16 [Warning] The syntax '--log_slow_queries' is deprecated and will be removed in a future release. Please use '--slow_query_log'/'--slow_query_log_file' instead. 100513 2:00:16 [Warning] You have forced lower_case_table_names to 0 through a command-line option, even though your file system '/Applications/MAMP/db/mysql/' is case insensitive. This means that you can corrupt a MyISAM table by accessing it with different cases. You should consider changing lower_case_table_names to 1 or 2 100513 2:00:16 [Warning] One can only use the --user switch if running as root 100513 2:00:16 [Note] Plugin 'FEDERATED' is disabled. 100513 2:00:16 [Note] Plugin 'ndbcluster' is disabled. InnoDB: Error: log file /usr/local/mysql/data/ib_logfile0 is of different size 0 5242880 bytes InnoDB: than specified in the .cnf file 0 16777216 bytes! 100513 2:00:16 [ERROR] Plugin 'InnoDB' init function returned error. 100513 2:00:16 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed. 100513 2:00:16 [ERROR] /Applications/MAMP/Library/libexec/mysqld: unknown option '--skip-bdb' 100513 2:00:16 [ERROR] Aborting 100513 2:00:16 [Note] /Applications/MAMP/Library/libexec/mysqld: Shutdown complete 100513 02:00:16 mysqld_safe mysqld from pid file /Applications/MAMP/tmp/mysql/mysql.pid ended A couple of things: 1) There are a bunch of different .cnf files that come with MAMP (my-huge, my-medium, etc.)...how can I tell which one is actually being used? 2) I deleted the ib_logfile0 and ib_logfile1 as recommended by another post on serverfault, and then ended up with more errors: 100519 16:01:30 InnoDB: Log file /usr/local/mysql/data/ib_logfile0 did not exist: new to be created InnoDB: Setting log file /usr/local/mysql/data/ib_logfile0 size to 16 MB InnoDB: Database physically writes the file full: wait... 100519 16:01:30 InnoDB: Log file /usr/local/mysql/data/ib_logfile1 did not exist: new to be created InnoDB: Setting log file /usr/local/mysql/data/ib_logfile1 size to 16 MB InnoDB: Database physically writes the file full: wait... InnoDB: The log sequence number in ibdata files does not match InnoDB: the log sequence number in the ib_logfiles! 100519 16:01:31 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... 100519 16:01:31 InnoDB: Started; log sequence number 0 44556 100519 16:01:31 [ERROR] /Applications/MAMP/Library/libexec/mysqld: unknown option '--skip-bdb' 100519 16:01:31 [ERROR] Aborting And then I got this the next time I tried to run it: InnoDB: Unable to lock /usr/local/mysql/data/ibdata1, error: 35 InnoDB: Check that you do not already have another mysqld process InnoDB: using the same InnoDB data or log files. Sorry that this is a lot of information, but I don't want to leave anything out. Thanks.
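
    Both questions can be answered from the log itself: mysqld will report which option files it reads, and the two fatal lines are the obsolete skip-bdb option and an InnoDB log-file size that no longer matches the configured value. A sketch, assuming MAMP's usual paths:
        # Ask MAMP's mysqld which .cnf files it reads, and in what order
        /Applications/MAMP/Library/libexec/mysqld --verbose --help | grep -A 1 "Default options"
        # In the file it reports: comment out the 'skip-bdb' line (the BDB engine was dropped in
        # MySQL 5.1, so the option is now unknown), and reconcile innodb_log_file_size with the
        # existing files: either set it to 5M to match the old ib_logfile0/ib_logfile1, or stop
        # mysqld, delete both ib_logfiles, and let InnoDB recreate them at the configured 16M.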

    Read the article

  • Dual head setup for Ubuntu 10.04.1 and Windows XP Pro with same hardware configuration

    - by mejpark
    I have a Dell OptiPlex 360 workstation at work, with 2 x ATI RV280 [Radeon 9200 PRO] graphics cards installed, which are attached to two identical 19" HII flat panel monitors. I'm using the open source Radeon driver with Ubuntu, and the proprietary drivers with Windows. The good news is that dual head configuration works for both OSes. The bad news is, I have to use a different hardware configuration for each OS to achieve this. Hardware config #1: Dual monitors work for Windows XP Pro like this: First display -> external VGA port Second display -> DVI input on gfx card Hardware config #2: Dual monitors work for Ubuntu 10.04.1 like this: First display -> VGA port on gfx card Second display -> DVI input on gfx card I connected up the displays according to Config #2 and booted up Windows, which resulted in a mirror image on both screens. I was unable to login, as the login box was not visible. I unplugged the VGA lead from gfx card and plugged it into the external VGA port (Config #1) - Windows dual head works again, but the VGA-connected screen is not recognised by Ubuntu and remains in standby mode. Is it possible to configure a dual head setup for Ubuntu using Config #1, or am I missing something? I tried setting up dual monitors using Config #1, this morning which didn't work. By default, there is no xorg.conf file in Ubuntu 10.04.1, so I generated one using: $ sudo X :2 -configure X.Org X Server 1.7.6 Release Date: 2010-03-17 X Protocol Version 11, Revision 0 Build Operating System: Linux 2.6.24-27-server i686 Ubuntu Current Operating System: Linux harrier 2.6.32-24-generic #42-Ubuntu SMP Fri Aug 20 14:24:04 UTC 2010 i686 Kernel command line: BOOT_IMAGE=/boot/vmlinuz-2.6.32-24-generic root=UUID=a34c1931-98d4-4a34-880c-c227a2936c4a ro quiet splash Build Date: 21 July 2010 12:47:34PM xorg-server 2:1.7.6-2ubuntu7.3 (For technical support please see http://www.ubuntu.com/support) Current version of pixman: 0.16.4 Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. (==) Log file: "/var/log/Xorg.2.log", Time: Mon Sep 13 10:02:02 2010 List of video drivers: apm ark intel mach64 s3virge trident mga tseng ati nouveau neomagic i740 openchrome voodoo s3 i128 radeon siliconmotion nv ztv vmware v4l chips rendition savage sisusb tdfx geode sis r128 cirrus fbdev vesa (++) Using config file: "/home/michael/xorg.conf.new" (==) Using config directory: "/usr/lib/X11/xorg.conf.d" (II) [KMS] No DRICreatePCIBusID symbol, no kernel modesetting. Xorg detected your mouse at device /dev/input/mice. Please check your config if the mouse is still not operational, as by default Xorg tries to autodetect the protocol. Xorg has configured a multihead system, please check your config. Your xorg.conf file is /home/michael/xorg.conf.new To test the server, run 'X -config /home/michael/xorg.conf.new' ddxSigGiveUp: Closing log $ sudo X -config /home/michael/xorg.conf.new Fatal server error: Server is already active for display 0 If this server is no longer running, remove /tmp/.X0-lock and start again. Please consult the The X.Org Foundation support at http://wiki.x.org for help. ddxSigGiveUp: Closing log I then booted Ubuntu in failsafe mode, dropped into root shell, and executed $ X -config /home/michael/xorg.conf.new again. The screen went blank and turned off, so
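
    A note on the "Server is already active for display 0" error above: it only means another X server is still running; it says nothing about whether the generated xorg.conf is usable. A hedged way to test the new configuration on 10.04 without the failsafe boot:
        # Stop the running display manager (GDM on Ubuntu 10.04), then test the generated config
        sudo service gdm stop
        sudo X -config /home/michael/xorg.conf.new
        # When done testing, restart the desktop
        sudo service gdm start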

    Read the article

  • How can I work around problems with certificate configuration in Remote Desktop Services?

    - by Michael Steele
    I am setting up a Remote Desktop Services farm, and am having trouble configuring certificates for it to use. A demonstration of the problem I'm seeing can be found in Step #4. At this point I am convinced that there are problems with the user interface, and am looking for ways around them. Is there any way to configure certificates in Remote Desktop Services so that the settings hold and are reflected in the GUI? If not, is there any way for me to verify that the settings are correct? Step #1 - Create certificate to be used. I've configured a certificate to use with RD Web Access. The certificate is stored with in the Certificates MMC on my RD Connection Broker, and I am configuring the farm from that computer. I found by letting RD Web Access generate its own certificate that the following properties are required: Enhanced Key Usage Server Authentication Client Authentication This may not be required, but the self-signed certificate includes it. Key Usage Digital Signature Key Agreement Subject Alternative Name DNS Name=domain.com Detour about self-signed certificate generation As a quick detour, I was able to work around a problem with creating self-signed certificates using powershell. The documentation for the New-RDCertificate cmdlet gives the following example: PS C:\> $password = ConvertTo-SecureString -string "password" -asplaintext -force New-RDCertificate -Role RDWebAccess -DnsName "test-rdwa.contoso.com" -Password $password -ConnectionBroker rdcb.contoso.com -ExportPath "c:\test-rdwa.pfx" Typing this into the shell will result in an error message claiming that a function, Get-Server cannot be found. Prior to using New-RDCertificate, you must import the RemoteDesktop Module with Import-Module RemoteDesktop. Step #2 - Observe out-of-box behavior The first time you visit the Deployment Properties dialog box by navigating to Server Manager - Remote Desktop Services - Collections and selecting "Edit Deployment Properties" from the "TASKS" dropdown list in the "COLLECTIONS" grouping, you will see the following screen: This window is misleading because the level field is listed as "Not Configured". If I understand correctly all three of the role services are using a self-signed certificate. For the RD Web Access role this can be verified by visiting the website: The certificate being used also appears in the Certificates MMC: Step #3 - Assign new certificate The Deployment Properties dialog box will allow me to select my existing certificate. The certificate must be placed within the local computers Certificates MMC in the "Personal" certificate store. The private key will need to be exportable, and you will need to provide the password. I temporarily exported my certificate to a file named temp.pfx with a password, and then imported it into Remote Desktop Services from there. Once this is done the GUI will indicate that it is ready to accept the new configuration. Once I click the "Apply" button, the GUI indicates success. This can be verified by visiting the RD Web Access web site a second time. There is no certificate error. Step #4 - The GUI fails to maintain its state If the GUI is closed and reopened, all of these settings appear to be lost. Actually, the certificate I configured is still being used. I am able to continue accessing the RD Web Access site without any certificate errors. Oddly, if I use the "Create new certificate..." button to generate a self-signed certificate this window will update to an "Untrusted" level. 
This setting will then be maintained through the opening and closing of the Deployment Properties dialog box. Is there anything I can do to have my settings appear to stick? I feel like something is wrong when the GUI claims I haven't fully configured certificates.

    Read the article

  • Possible to give one connection to each IP?

    - by Alice
    I am having overload problems: too many connections, and some IPs have more than 20 connections at once. I run this command to get the connection count per IP: netstat -anp |grep 'tcp\|udp' | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -n This is the output: 1 106.3.98.81 1 106.3.98.82 1 108.171.251.2 1 110.85.103.207 1 111.161.30.217 1 113.53.103.55 1 119.235.237.20 1 124.106.19.34 1 157.55.32.166 1 157.55.33.49 1 157.55.34.28 1 175.141.103.239 1 180.76.5.59 1 180.76.5.61 1 188.235.165.216 1 205.213.195.70 1 216.157.222.25 1 218.93.205.100 1 222.77.209.105 1 27.153.148.109 1 27.159.194.242 1 27.159.253.71 1 54.242.122.201 1 61.172.50.99 1 65.55.24.239 1 71.179.78.5 1 74.125.136.27 1 74.125.182.30 1 74.125.182.36 1 79.112.225.39 1 93.190.139.208 2 124.227.191.67 2 157.55.33.84 2 157.55.35.34 2 190.66.3.107 2 203.87.153.38 2 220.161.119.3 2 221.6.15.156 2 27.153.148.116 2 27.159.197.0 2 96.47.224.42 3 202.14.70.1 3 218.6.15.42 3 222.77.218.226 3 222.77.224.187 3 37.59.66.100 3 46.4.181.244 3 87.98.254.192 3 91.207.8.62 4 188.143.233.222 4 218.108.168.166 4 221.12.154.18 4 93.182.157.8 4 94.142.128.183 5 180.246.170.187 5 8.21.6.226 6 178.137.94.87 6 218.93.205.112 7 199.15.234.222 9 9 125.253.97.6 10 178.137.17.196 11 46.118.192.179 12 212.79.14.14 21 72.201.187.135 27 0.0.0.0 Can anyone give me some direction? My server crashed a few times this week because of this. Thanks. EDIT: Alright, my error log says: [Thu Oct 18 12:17:39 2012] [error] could not make child process 4842 exit, attempting to continue anyway [Thu Oct 18 12:17:39 2012] [error] could not make child process 4843 exit, attempting to continue anyway [Thu Oct 18 12:17:39 2012] [error] could not make child process 4855 exit, attempting to continue anyway [Thu Oct 18 12:17:39 2012] [error] could not make child process 4856 exit, attempting to continue anyway [Thu Oct 18 12:17:39 2012] [error] could not make child process 4861 exit, attempting to continue anyway [Thu Oct 18 12:17:39 2012] [error] could not make child process 4869 exit, attempting to continue anyway [Thu Oct 18 12:17:39 2012] [error] could not make child process 4872 exit, attempting to continue anyway [Thu Oct 18 12:17:39 2012] [error] could not make child process 4873 exit, attempting to continue anyway [Thu Oct 18 12:17:39 2012] [error] could not make child process 4874 exit, attempting to continue anyway [Thu Oct 18 12:17:39 2012] [error] could not make child process 4875 exit, attempting to continue anyway [Thu Oct 18 12:17:39 2012] [error] could not make child process 4876 exit, attempting to continue anyway [Thu Oct 18 12:17:39 2012] [error] could not make child process 4880 exit, attempting to continue anyway [Thu Oct 18 12:17:39 2012] [error] could not make child process 4882 exit, attempting to continue anyway [Thu Oct 18 12:17:39 2012] [error] could not make child process 4885 exit, attempting to continue anyway [Thu Oct 18 12:17:39 2012] [error] could not make child process 4897 exit, attempting to continue anyway [Thu Oct 18 12:17:39 2012] [error] could not make child process 4900 exit, attempting to continue anyway [Thu Oct 18 12:17:39 2012] [error] could not make child process 4901 exit, attempting to continue anyway [Thu Oct 18 12:17:39 2012] [error] could not make child process 4906 exit, attempting to continue anyway [Thu Oct 18 12:17:39 2012] [error] could not make child process 4907 exit, attempting to continue anyway [Thu Oct 18 12:17:39 2012] [error] could not make child process 4925 exit, attempting to continue anyway [Thu Oct 18 12:17:39 2012] [error] could not make child process 4926 exit, attempting to continue anyway [Thu Oct 18 12:17:39 2012] [error] could not make child process 4927 exit, attempting to continue anyway [Thu Oct 18 12:17:39 2012] [error] could not make child process 4931 exit, attempting to continue anyway [Thu Oct 18 12:17:40 2012] [notice] caught SIGTERM, shutting down PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20060613+lfs/curl.iso' - /usr/lib/php5/20060613+lfs/curl.iso: cannot open shared object file: No such file or directory in Unknown on line 0 [Thu Oct 18 12:17:45 2012] [notice] Apache/2.2.9 (Debian) PHP/5.2.6-1+lenny10 with Suhosin-Patch configured -- resuming normal operations And I have thousands of lines saying (each with a different process ID): [Thu Oct 18 12:17:38 2012] [error] child process 4906 still did not exit, sending a SIGKILL And I also have a line saying: [Wed Oct 17 09:44:58 2012] [error] server reached MaxClients setting, consider raising the MaxClients setting <IfModule prefork.c> StartServers 8 MinSpareServers 5 MaxSpareServers 50 MaxClients 300 MaxRequestsPerChild 5000 </IfModule>
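
    Two separate things are visible in that log: Apache hit its MaxClients ceiling, and individual IPs are holding many slots. Apache's stock prefork MPM has no per-IP connection cap, but the kernel firewall does; the iptables connlimit match answers the question in the title. A sketch (the threshold of 20 is an assumption to tune, and raising MaxClients is a separate decision):
        # Reject new connections from any single IP that already holds more than 20 to the web ports
        sudo iptables -I INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 20 -j REJECT --reject-with tcp-reset
        sudo iptables -I INPUT -p tcp --syn --dport 443 -m connlimit --connlimit-above 20 -j REJECT --reject-with tcp-reset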

    Read the article

  • vagrant fails to bring up additional adapter for centos vm using virtual box provider

    - by Anadi Misra
    This is a continuation of the question asked here about a host-only adapter on DHCP. I upgraded to Vagrant 1.6.3 and updated the Vagrantfile to the following setting for multiple adapters: # add additional adapter for inter machine networking dev.vm.network :private_network, :type => "dhcp", :adapter => "2", :netmask => "255.255.255.0" It goes through creating the adapters but then fails to bring up the NIC on the VM: Anadis-MacBook-Pro:full-stack-env anadi$ vagrant up Bringing machine 'full-stack-env' up with 'virtualbox' provider... ==> full-stack-env: Clearing any previously set forwarded ports... ==> full-stack-env: Clearing any previously set network interfaces... ==> full-stack-env: Preparing network interfaces based on configuration... full-stack-env: Adapter 1: nat full-stack-env: Adapter 2: hostonly ==> full-stack-env: Forwarding ports... full-stack-env: 22 => 4223 (adapter 1) full-stack-env: 8080 => 8090 (adapter 1) ==> full-stack-env: Running 'pre-boot' VM customizations... ==> full-stack-env: Booting VM... ==> full-stack-env: Waiting for machine to boot. This may take a few minutes... full-stack-env: SSH address: 127.0.0.1:4223 full-stack-env: SSH username: vagrant full-stack-env: SSH auth method: private key full-stack-env: Warning: Connection timeout. Retrying... full-stack-env: Warning: Connection timeout. Retrying... full-stack-env: Warning: Remote connection disconnect. Retrying... ==> full-stack-env: Machine booted and ready! ==> full-stack-env: Checking for guest additions in VM... ==> full-stack-env: Setting hostname... ==> full-stack-env: Configuring and enabling network interfaces... The following SSH command responded with a non-zero exit status. Vagrant assumes that this means the command failed! ARPCHECK=no /sbin/ifup eth 2> /dev/null Stdout from the command: Device eth does not seem to be present, delaying initialization. Stderr from the command: However, when I log in to the environment I see two network interfaces as expected: Anadis-MacBook-Pro:full-stack-env anadi$ vagrant ssh Last login: Wed Jun 4 12:54:47 2014 from 10.0.2.2 [vagrant@full-stack-env ~]$ ifconfig eth0 Link encap:Ethernet HWaddr 08:00:27:BD:39:57 inet addr:10.0.2.15 Bcast:10.0.2.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:febd:3957/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:511 errors:0 dropped:0 overruns:0 frame:0 TX packets:360 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:54574 (53.2 KiB) TX bytes:46675 (45.5 KiB) eth1 Link encap:Ethernet HWaddr 08:00:27:A3:86:C9 inet addr:172.28.128.3 Bcast:172.28.128.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:fea3:86c9/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:5 errors:0 dropped:0 overruns:0 frame:0 TX packets:9 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1360 (1.3 KiB) TX bytes:894 (894.0 b) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) I am a bit confused here about why it is trying to add another NIC (eth2). In the VM I used for creating this Vagrant box, I had added two NICs already.
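
    Two things are worth checking here, both guesses rather than confirmed causes. First, the failed command was literally `ifup eth` with no interface number, so it may help to pass the adapter index as an integer (:adapter => 2) instead of the string "2" in the Vagrantfile. Second, CentOS base boxes built from a VM that had extra NICs often carry stale MAC-pinned udev rules, which shifts new adapters to unexpected names; the usual guest-side cleanup is:
        # Inside the guest: drop the persistent-net rules and inspect what Vagrant wrote
        sudo rm -f /etc/udev/rules.d/70-persistent-net.rules
        ls /etc/sysconfig/network-scripts/ifcfg-eth*
        exit
        # Back on the host, recreate the interfaces cleanly
        vagrant reload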

    Read the article

  • IPv6: Should I have private addresses?

    - by AlReece45
    Right now, we have a rack of servers. Every server has at least 2 IP addresses, one for the public interface, another for the private. The servers that have SSL websites on them have more IP addresses. We also have virtual servers that are configured similarly.
    Private Network The private range is currently just used for backups and monitoring. It's a gigabit port, and the interface usage does not usually get very high. There are other technologies we're considering that would use this port: iSCSI (implementations usually recommend dedicating an interface to it, which would be yet another IP network), VPN to get access to the private range (something I'd rather avoid), dedicated database servers, LDAP, centralized configuration (like puppet), centralized logging. We don't have any private addresses in our DNS records (only public addresses). For our servers to utilize the correct IP address for the right interface (and not hard code the IP address) probably requires setting up a private DNS server (so now we add 2 different DNS entries to 2 different systems).
    Public Network Our public range has a variety of services including web, email, and ftp. There is a hardware firewall between our network and the "public" network. We have a (relatively secure) method to instruct the firewall to open and close administrative access (web interfaces, ssh, etc) for our current IP address. With either solution discussed, the host-based firewalls will be configured as well. The public network currently runs over a dedicated 20Mbps link. There are a couple of legacy servers with fast-ethernet ports, but they are scheduled for decommissioning. All of the other production boxes have at least 2 Gigabit Ethernet ports. The more traffic-heavy servers have 4-6 available (none is using more than the 2 Gigabit ports right now).
    IPv6 I want to get an IPv6 prefix from our ISP, so that every "server" has at least one IPv6 interface. We'll still need to keep the IPv4 addresses up and available for legacy clients (web servers and email at the very least). We have two IP networks right now. Adding the public IPv6 addresses would make it three.
    Just use IPv6? I'm thinking about just dumping the private IPv4 range and using the IPv6 range as the primary means of all communications. If an interface starts reaching its capacity, utilize the newly free interfaces to create a trunk. This has the advantage of leaving headroom if either the public or private traffic needs to exceed 1Gbps. The traffic for each interface is already analyzed on a regular basis to predict future bandwidth use. In the rare instances where bandwidth unexpectedly peaks, utilize QoS to ensure traffic (like our limited SSH access) is prioritized correctly so the problem can be corrected (if possible; our WAN is the bottleneck right now). It also has the advantage of not needing to make an entry for every private address. We may have private DNS (or just LDAP), but it'll be much more limited in scope, with fewer entries to duplicate.
    Summary I'm trying to make this network as "simple" as possible. At the same time, I want to make sure it's reliable, upgradeable, scalable, and (eventually) redundant. Having one IPv6 network and a legacy IPv4 network seems to be the best solution to me. Regarding using assigned IPv6 addresses for both networks, sharing the available bandwidth on one (more trunked if needed): Are there any technical disadvantages (limitations, buffers, scalability)? Are there any other security considerations (aside from the firewalls mentioned above)? Are there regulations or other security requirements (like PCI-DSS) that this doesn't meet? Is there typical software for setting up a Linux network that doesn't have IPv6 support yet? (logging, ldap, puppet) Is there anything else I didn't consider?
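
    On the interface side, going single-network dual-stack is mostly just assigning both address families to the same (possibly trunked/bonded) link. A sketch with ip(8); the prefix and addresses are documentation placeholders rather than recommended values, and bond0 stands in for whatever trunk you build:
        # Public IPv6 for all services, plus the legacy IPv4 kept for web and mail clients
        ip -6 addr add 2001:db8:10::25/64 dev bond0
        ip -6 route add default via 2001:db8:10::1 dev bond0
        ip addr add 203.0.113.25/28 dev bond0
        ip route add default via 203.0.113.1 dev bond0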

    Read the article

  • Embedding ADF UI Components into OAF regions

    - by Juan Camilo Ruiz
    Having finished the 2 Webcast on ADF integration with Oracle E-Business Suite, Sara Woodhull, Principal Product Manager on the Oracle E-Business Suite Applications Technology team and I are going to continue adding entries to the series on this topic, trying to cover as many use cases as possible. In this entry, Sara created an overview on how Oracle ADF pages can be embedded into an Oracle Application Framework region. This is a very interesting approach that will enable those of you who are exploring ADF as a technology stack to enhanced some of the Oracle E-Business Suite flows and leverage your skill on Oracle Applications Framework (OAF). In upcoming entries we will start unveiling the internals needed to achieve session sharing between the regions. Stay tuned for more entries and enjoy this new post.   Document Scope This document only covers information that is specific to embedding an Oracle ADF page in an Oracle Application Framework–based page. It assumes knowledge of Oracle ADF and Oracle Application Framework development. It also assumes knowledge of the material in My Oracle Support Note 974949.1, “Oracle E-Business Suite SDK for Java” and My Oracle Support Note 1296491.1, "FAQ for Integration of Oracle E-Business Suite and Oracle Application Development Framework (ADF) Applications". Prerequisite Patch Download Patch 12726556:R12.FND.B from My Oracle Support and install it. The implementation described below requires Patch 12726556:R12.FND.B to provide the accessors for the ADF page. This patch is required in addition to the Oracle E-Business Suite SDK for Java patch described in My Oracle Support Note 974949.1. Development Environments You need two different JDeveloper environments: Oracle ADF and OA Framework. Oracle ADF Development Environment You build your Oracle ADF page using JDeveloper 11g. You should use JDeveloper 11g R1 (the latest is 11.1.1.6.0) if you need to use other products in the Oracle Fusion Middleware Stack, such as Oracle WebCenter, Oracle SOA Suite, or BI. You should use JDeveloper 11g R2 (the latest is 11.1.2.3.0) if you do not need other Oracle Fusion Middleware products. JDeveloper 11g R2 is an Oracle ADF-specific release that supports the latest Java EE standards and has various core improvements. Oracle Application Framework Development Environment Build your OA Framework page using a development environment corresponding to your Oracle E-Business Suite version. You must use Release 12.1.2 or later because the rich content container was introduced in Release 12.1.2. See “OA Framework - How to find the correct version of JDeveloper to use with eBusiness Suite 11i or Release 12.x” (My Oracle Support Doc ID 416708.1). Building your Oracle ADF Page Typically you build your ADF page using the session management feature of the Oracle E-Business Suite SDK for Java as described in My Oracle Support Note 974949.1. Also see My Oracle Support Note 1296491.1, "FAQ for Integration of Oracle E-Business Suite and Oracle Application Development Framework (ADF) Applications". Building an ADF Page with the Hierarchy Viewer If you are using the ADF hierarchy viewer, you should set up the structure and settings of the ADF page as follows or the hierarchy viewer may not fill the entire area it is supposed to fill (especially a problem in Firefox). Create a stretchable component as the parent component for the hierarchy viewer, such as af:panelStretchLayout (underneath the af:form component in the structure). 
    Use af:panelStretchLayout for Oracle ADF 11.1.1.6 and earlier. For later versions of Oracle ADF, use af:panelGridLayout. Create your hierarchy viewer component inside the stretchable component.
    Create Function in Oracle E-Business Suite Instance In your Oracle E-Business Suite instance, create a function for your ADF page with the following parameters. You can use either the Functions window in the System Administrator responsibility or the Functions page in the Functional Administrator responsibility: Function Name, Type=External ADF Function (ADFX), HTML Call=GWY.jsp?targetPage=faces/<your ADF page>. You must also add your function to an Oracle E-Business Suite menu or permission set and set up function security or role-based access control (RBAC) so that the user has authorization to access the function. If you do not want the function to appear on the navigation menu, add the function without a menu prompt. See the Oracle E-Business Suite System Administrator's Guide Documentation Set for more information.
    Testing the Function from the Oracle E-Business Suite Home Page It's a good idea to test launching your ADF page from the Oracle E-Business Suite Home Page. Add your function to the navigation menu for your responsibility with a prompt and try launching it. If your ADF page expects parameters from the surrounding page, those might not be available, however.
    Setting up the Oracle Application Framework Rich Container Once you have built your Oracle ADF 11g page, you need to embed it in your Oracle Application Framework page.
    Create Rich Content Container in your OA Framework JDeveloper environment In the OA Extension Structure pane for your OAF page, select the region where you want to add the rich content, and add a richContainer item to the region. Set the following properties on the richContainer item: id, Content Type=Others (for Release 12.1.3; this property value may change in a future release), Destination Function=[function code], Width (in pixels or percent, such as 100%), Height (in pixels), Parameters=[any parameters your Oracle ADF page is expecting to receive from the Oracle Application Framework page].
    Parameters In the Parameters property, specify parameters that will be passed to the embedded content as a list of comma-separated, name-value pairs. Dynamic parameters may be specified as paramName={@viewAttr}.
    Dynamic Rich Content Container Properties If you want your rich content container to display a different Oracle ADF page depending on other information, you would set up a different function for each different Oracle ADF page. You would then set the Destination Function and Parameters properties programmatically, instead of setting them in the Property Inspector.
    In the processRequest() method of your Oracle Application Framework page controller, where OAFRichContentPage is the ID of your richContainer item and the parameters are whatever parameters your ADF page expects, your code might look similar to this code fragment:
    OARichContainerBean richBean = (OARichContainerBean) webBean.findChildRecursive("OAFRichContentPage");
    if (richBean != null) {
        if (isFirstCondition) {
            richBean.setFunctionName("ADF_EXAMPLE_EMBEDDED");
            richBean.setParameters("ParamLoginPersonId=" + loginPersonId
                + "&ParamPersonId=" + personId + "&ParamUserId=" + userId
                + "&ParamRespId=" + respId + "&ParamRespApplId=" + respApplId
                + "&ParamFromOA=Y" + "&ParamSecurityGroupId=" + securityGroupId);
        } else if (isSecondCondition) {
            richBean.setFunctionName("ADF_EXAMPLE_OTHER_FUNCTION");
            richBean.setParameters("ParamLoginPersonId=" + loginPersonId
                + "&ParamPersonId=" + personId + "&ParamUserId=" + userId
                + "&ParamRespId=" + respId + "&ParamRespApplId=" + respApplId
                + "&ParamFromOA=Y" + "&ParamSecurityGroupId=" + securityGroupId);
        }
    }

    Read the article

< Previous Page | 243 244 245 246 247 248 249 250 251 252 253 254  | Next Page >