Search Results

Search found 17924 results on 717 pages for 'z order'.


  • How to setup ssh's umask for all type of connections

    - by Unode
    I've been searching for a way to set OpenSSH's umask to 0027 in a consistent way across all connection types. By connection types I'm referring to:

    1. sftp
    2. scp
    3. ssh hostname
    4. ssh hostname program

    The difference between 3. and 4. is that the former starts a shell, which usually reads /etc/profile, while the latter doesn't. In addition, by reading this post I became aware of the -u option that is present in newer versions of OpenSSH. However, this doesn't work. I must also add that /etc/profile now includes umask 0027.

    Going point by point:

    sftp - Setting -u 0027 in sshd_config, as mentioned here, is not enough. If I don't set this parameter, sftp uses umask 0022 by default. This means that if I have the two files:

      -rwxrwxrwx 1 user user 0 2011-01-29 02:04 execute
      -rw-rw-rw- 1 user user 0 2011-01-29 02:04 read-write

    when I use sftp to put them on the destination machine I actually get:

      -rwxr-xr-x 1 user user 0 2011-01-29 02:04 execute
      -rw-r--r-- 1 user user 0 2011-01-29 02:04 read-write

    However, when I set -u 0027 in sshd_config on the destination machine I get:

      -rwxr--r-- 1 user user 0 2011-01-29 02:04 execute
      -rw-r--r-- 1 user user 0 2011-01-29 02:04 read-write

    which is not expected, since it should actually be:

      -rwxr-x--- 1 user user 0 2011-01-29 02:04 execute
      -rw-r----- 1 user user 0 2011-01-29 02:04 read-write

    Does anyone understand why this happens?

    scp - Independently of what is set up for sftp, permissions always follow umask 0022. I currently have no idea how to alter this.

    ssh hostname - No problem here, since the shell reads /etc/profile by default, which means umask 0027 in the current setup.

    ssh hostname program - Same situation as scp.

    In sum: setting the umask for sftp alters the result but not as it should, ssh hostname works as expected by reading /etc/profile, and both scp and ssh hostname program seem to have umask 0022 hardcoded somewhere. Any insight on any of the above points is welcome.

    EDIT: I would like to avoid patches that require manually compiling OpenSSH. The system is running Ubuntu Server 10.04.01 (lucid) LTS with OpenSSH packages from maverick.

    Answer: As indicated by poige, using pam_umask did the trick. The exact change was these lines added to /etc/pam.d/sshd:

      # Setting UMASK for all ssh based connections (ssh, sftp, scp)
      session    optional    pam_umask.so umask=0027

    Also, in order to affect all login shells regardless of whether they source /etc/profile or not, the same lines were added to /etc/pam.d/login.

    EDIT: After some of the comments I retested this issue. At least on Ubuntu (where I tested), if the user has a different umask set in their shell's init files (.bashrc, .zshrc, ...), the PAM umask is ignored and the user-defined umask is used instead. Changes in /etc/profile didn't affect the outcome unless the user explicitly sourced those changes in the init files. It is unclear at this point whether this behavior happens in all distros.
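    A quick way to sanity-check the expected results above: a newly created regular file starts from mode 0666 (0777 for executables) and is ANDed with the complement of the umask, so 0022 yields rw-r--r--/rwxr-xr-x and 0027 should yield rw-r-----/rwxr-x---. A minimal Python sketch of that arithmetic (illustrative only, not part of the setup above):

      # Illustrative: what file modes a given umask should produce.
      # Assumes the usual creation modes 0666 (regular files) and 0777 (executables).
      def apply_umask(create_mode: int, umask: int) -> str:
          mode = create_mode & ~umask
          bits = "rwxrwxrwx"
          return "".join(b if mode & (1 << (8 - i)) else "-" for i, b in enumerate(bits))

      for umask in (0o022, 0o027):
          print(f"umask {umask:04o}: file -> {apply_umask(0o666, umask)}, "
                f"executable -> {apply_umask(0o777, umask)}")

      # umask 0022: file -> rw-r--r--, executable -> rwxr-xr-x
      # umask 0027: file -> rw-r-----, executable -> rwxr-x---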


  • How Do I Restrict Repository Access via WebSVN?

    - by kaybenleroll
    I have multiple subversion repositories which are served up through Apache 2.2 and WebDAV. They are all located in a central place, and I used this debian-administration.org article as the basis (I dropped the use of the database authentication for a simple htpasswd file though). Since then, I have also started using WebSVN. My issue is that not all users on the system should be able to access the different repositories, and the default setup of WebSVN is to allow anyone who can authenticate. According to the WebSVN documentation, the best way around this is to use subversion's path access system, so I looked to create this, using the AuthzSVNAccessFile directive. When I do this though, I keep getting "403 Forbidden" messages. My files look like the following: I have default policy settings in a file: <Location /svn/> DAV svn SVNParentPath /var/lib/svn/repository Order deny,allow Deny from all </Location> Each repository gets a policy file like below: <Location /svn/sysadmin/> Include /var/lib/svn/conf/default_auth.conf AuthName "Repository for sysadmin" require user joebloggs jimsmith mickmurphy </Location> The default_auth.conf file contains this: SVNParentPath /var/lib/svn/repository AuthType basic AuthUserFile /var/lib/svn/conf/.dav_svn.passwd AuthzSVNAccessFile /var/lib/svn/conf/svnaccess.conf I am not fully sure why I need the second SVNParentPath in default_auth.conf, but I just added that today as I was getting error messages as a result of adding the AuthzSVNAccessFile directive. With a totally permissive access file [/] joebloggs = rw the system worked fine (and was essentially unchanged), but as I soon as I start trying to add any kind of restrictions such as [sysadmin:/] joebloggs = rw instead, I get the 'Permission denied' errors again. The log file entries are: [Thu May 28 10:40:17 2009] [error] [client 89.100.219.180] Access denied: 'joebloggs' GET websvn:/ [Thu May 28 10:40:20 2009] [error] [client 89.100.219.180] Access denied: 'joebloggs' GET svn:/sysadmin What do I need to do to get this to work? Have configured apache wrong, or is my understanding of the svnaccess.conf file incorrect? If I am going about this the wrong way, I have no particular attachment to my overall approach, so feel free to offer alternatives as well. UPDATE (20090528-1600): I attempted to implement this answer, but I still cannot get it to work properly. I know most of the configuration is correct, as I have added [/] joebloggs = rw at the start and 'joebloggs' then has all the correct access. When I try to go repository-specific though, doing something like [/] joebloggs = rw [sysadmin:/] mickmurphy = rw then I got a permission denied error for mickmurphy (joebloggs still works), with an error similar to what I already had previously [Thu May 28 10:40:20 2009] [error] [client 89.100.219.180] Access denied: 'mickmurphy' GET svn:/sysadmin Also, I forgot to explain previously that all my repositories are underneath /var/lib/svn/repository UPDATE (20090529-1245): Still no luck getting this to work, but all the signs seem to be pointing to the issue being with path-access control in subversion not working properly. My assumption is that I have not conf


  • My mail goes to spam when sent from my SMTP server

    - by user1767434
    I am trying to send a registration confirmation mail from my site to user who are registering from my site. my code is:- $drg_name = addslashes(trim($_POST['drg_name'])); $drg_surname = addslashes(trim($_POST['drg_surname'])); $drg_email = addslashes(trim($_POST['drg_email'])); $drg_username = addslashes(trim($_POST['drg_username'])); $drg_pass = addslashes(base64_encode($_POST['drg_pass'])); $drg_addr1 = addslashes(trim($_POST['drg_addr1'])); $drg_addr2 = addslashes(trim($_POST['drg_addr2'])); $drg_addr3 = addslashes(trim($_POST['drg_addr3'])); $drg_town = addslashes(trim($_POST['drg_town'])); $drg_county = addslashes(trim($_POST['drg_county'])); $drg_zip = addslashes(trim($_POST['drg_zip'])); $drg_country = addslashes(trim($_POST['drg_country'])); $drg_phone = addslashes(trim($_POST['drg_phone'])); $drg_gender = addslashes(trim($_POST['drg_gender'])); $drg_pstatus = addslashes(trim($_POST['drg_pstatus'])); $drg_dod = addslashes(trim($_POST['drg_dod'])); $drg_dom = addslashes(trim($_POST['drg_dom'])); $drg_doy = addslashes(trim($_POST['drg_doy'])); $drg_dob=$drg_dod.'/'.$drg_dom.'/'.$drg_doy; $drg_question = addslashes(trim($_POST['drg_question'])); $drg_answer = addslashes(trim($_POST['drg_answer'])); //send confirmation email to user to activate his/her acc $encoded_usr_id=base64_encode($usr_id); $en_id=base64_encode($insert_id); $subject = "Confirmation From dragonsnet.biz" ; $message = "Thank you to register with dragonsnet.biz<br>\n In order to >activate your account please click here: http://My SITE URL/registration_success.php?envar=".$encoded_usr_id."&euid=".$en_id."' Activate\n Thank you for taking the time to register to the dragonsnet.biz Website. "; $this->_globalObj->send_email('support@ MY-Site', $drg_email, $subject, $message, 'Site Name'); $cnf=base64_encode("confirmation"); die($this->_globalObj->redirect("registration_confirmation.php?eml=$cnf")); } my mail is going in user mail ID but in Spam not in inbox. Please help Thanks In Advance.


  • How to Reinstall MS SQL Server 2008 with SP1? (Windows 7)

    - by user23884
    I am using Windows 7 Ultimate x64. I had earlier installed SQL server 2008 with SP1 with Visual Studio 2008 Team System with sp1. Now that VS2010 is out I wanted to install it so I uninstalled visual studio then MSSLQ Server 2008 SP1 and then SQL Server 2008 as suggested here: h**p://mark.michaelis.net/Blog/SQLServer2008InstallNightmare.aspx But now when I try to reinstall it I am unable to get it right I am getting the ERROR: “Attempted to perform an unauthorized operation.” (Following is part of the log file): 2010-04-16 04:54:57 Slp: Sco: Attempting to replace account with sid in security descriptor D:(A;CI;KR;;;S-1-5-21-2213424280-2581054173-1939225444-1027) 2010-04-16 04:54:57 Slp: ReplaceAccountWithSidInSddl -- SDDL to be processed: D:(A;CI;KR;;;S-1-5-21-2213424280-2581054173-1939225444-1027) 2010-04-16 04:54:57 Slp: ReplaceAccountWithSidInSddl -- SDDL to be returned: D:(A;CI;KR;;;S-1-5-21-2213424280-2581054173-1939225444-1027) 2010-04-16 04:54:57 Slp: Prompting user if they want to retry this action due to the following failure: 2010-04-16 04:54:57 Slp: ---------------------------------------- 2010-04-16 04:54:57 Slp: The following is an exception stack listing the exceptions in outermost to innermost order 2010-04-16 04:54:57 Slp: Inner exceptions are being indented 2010-04-16 04:54:57 Slp: 2010-04-16 04:54:57 Slp: Exception type: Microsoft.SqlServer.Configuration.Sco.ScoException 2010-04-16 04:54:57 Slp: Message: 2010-04-16 04:54:57 Slp: Attempted to perform an unauthorized operation. 2010-04-16 04:54:57 Slp: Data: 2010-04-16 04:54:57 Slp: WatsonData = Microsoft SQL Server 2010-04-16 04:54:57 Slp: DisableRetry = true 2010-04-16 04:54:57 Slp: Inner exception type: System.UnauthorizedAccessException 2010-04-16 04:54:57 Slp: Message: 2010-04-16 04:54:57 Slp: Attempted to perform an unauthorized operation. 
2010-04-16 04:54:57 Slp: Stack: 2010-04-16 04:54:57 Slp: at System.Security.AccessControl.Win32.GetSecurityInfo(ResourceType resourceType, String name, SafeHandle handle, AccessControlSections accessControlSections, RawSecurityDescriptor& resultSd) 2010-04-16 04:54:57 Slp: at System.Security.AccessControl.NativeObjectSecurity.CreateInternal(ResourceType resourceType, Boolean isContainer, String name, SafeHandle handle, AccessControlSections includeSections, Boolean createByName, ExceptionFromErrorCode exceptionFromErrorCode, Object exceptionContext) 2010-04-16 04:54:57 Slp: at Microsoft.SqlServer.Configuration.Sco.SqlRegistrySecurity..ctor(ResourceType resourceType, SafeRegistryHandle handle, AccessControlSections includeSections) 2010-04-16 04:54:57 Slp: at Microsoft.SqlServer.Configuration.Sco.SqlRegistrySecurity.Create(InternalRegistryKey key) 2010-04-16 04:54:57 Slp: at Microsoft.SqlServer.Configuration.Sco.InternalRegistryKey.SetSecurityDescriptor(String sddl, Boolean overwrite) 2010-04-16 04:54:57 Slp: ---------------------------------------- 2010-04-16 10:37:19 Slp: User has chosen to cancel this action 2010-04-16 10:37:19 Slp: Watson Bucket 2 Original Parameter Values 2010-04-16 10:37:19 Slp: Parameter 0 : SQL2008@RTM@ 2010-04-16 10:37:19 Slp: Parameter 2 : System.Security.AccessControl.Win32.GetSecurityInfo 2010-04-16 10:37:19 Slp: Parameter 3 : Microsoft.SqlServer.Configuration.Sco.ScoException@1211@1 2010-04-16 10:37:19 Slp: Parameter 4 : System.UnauthorizedAccessException@-2147024891 2010-04-16 10:37:19 Slp: Parameter 5 : SqlBrowserConfigAction_install_ConfigNonRC 2010-04-16 10:37:19 Slp: Parameter 7 : Microsoft SQL Server 2010-04-16 10:37:19 Slp: Parameter 8 : Microsoft SQL Server 2010-04-16 10:37:19 Slp: Final Parameter Values I have googled around for the error given error but all I could find is to regedit and reset permissions on certain reg keys but I don’t see any reg keys with access problem in the log file the log file can be download here: http://www.mediafire.com/?dznizytjznn. Please guys help me out here I am a developer and I cannot afford an OS reinstallation! Thanks in advance…


  • Cygwin's RSYNC for large data transfer

    - by Tim Brigham
    I'm using rsync from Cygwin to do a large-scale data transfer from an aging HP MSA 1000 to a new DAS attached to a different server. I have a daemon running on the remote server in read-only mode and a local copy writing the files to disk.

    One of my servers is an image repository with over a million files spread across about 300 directories. Each file averages only a couple hundred kilobytes. More so than any other box, this one is proving problematic. The rsync process will work for a while - sometimes 20 minutes, sometimes an hour - and then it simply quits and sits idle at a given file name. I have verified that the file isn't corrupt on the remote server and that the file is successfully created on the local drive. I ran the rsync client in -vv mode, which returns nothing. I checked the logs created by the daemon. I looked at the network utilization on the interface, which is sitting idle. I looked at the AV settings to see if anything could pose a problem there. I even updated to the latest release of Cygwin. What do I need to do in order to keep this connection up?

    EDIT: The client system is using the command:

      rsync.exe server::Drives/f/Repo/ /cygdrive/T/Repo --archive -P -vv

    The server is using the command:

      rsync.exe --daemon --no-detach --config "rsyncd.conf"

    The contents of rsyncd.conf:

      use chroot = false
      strict modes = false
      hosts allow = 192.168.100.9
      log file = c:/rsyncd.log
      uid=0
      gid=0
      [Drives]
      path = /cygdrive
      read only = yes

    EDIT: The file server is 2003, the disk type on the array is GPT, and the size of the array is about 4 TB.

    EDIT: Stranger... It looks like the process is reliably erroring out at about 175,000 files. Rsync runs fine when I pick the directories it has problems with one at a time.

    EDIT: rsync version 3.0.9, protocol version 30. Copyright (C) 1996-2011 by Andrew Tridgell, Wayne Davison, and others. Web site: http://rsync.samba.org/ Capabilities: 64-bit files, 64-bit inums, 32-bit timestamps, 64-bit long ints, no socketpairs, hardlinks, symlinks, IPv6, batchfiles, inplace, append, ACLs, xattrs, iconv, symtimes

    A similar failure occurred when going from the same set of files with Cygwin to a Linux install. It didn't happen until several hours later than normal, however.
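    Since the transfer reliably stalls around 175,000 files but individual directories complete fine, one pragmatic workaround is to drive rsync once per top-level directory instead of in a single huge run. A rough Python sketch of that idea (the module path and destination come from the commands above; the listing/retry logic is an assumption, not part of the original setup):

      # Sketch: run rsync per top-level directory so no single run reaches the
      # file count where it stalls. Source/destination come from the commands above.
      import subprocess
      from pathlib import Path

      SRC = "server::Drives/f/Repo"        # rsync daemon module/path from the post
      DEST = Path("/cygdrive/T/Repo")

      def remote_top_dirs(src):
          # rsync with only a source argument prints a listing; directory entries
          # start with 'd' and the name is the last column (names containing
          # spaces would need extra care).
          out = subprocess.run(["rsync", src + "/"], capture_output=True, text=True, check=True)
          for line in out.stdout.splitlines():
              name = line.split()[-1]
              if line.startswith("d") and name != ".":
                  yield name

      for d in remote_top_dirs(SRC):
          print("syncing", d)
          subprocess.run(["rsync", "--archive", "-P", f"{SRC}/{d}/", str(DEST / d)],
                         check=False)   # log failures and keep going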


  • VirtualBox - multiple guests, each with a single bridged adapter?

    - by Martin
    I am running a dedicated server (located at Hetzner, Germany) that runs VirtualBox in order to virtualize several services accross multiple virtual guests. Those guests are supposed to communicate with each other (for instance, a virtual web server has to access a virtual database server); to be reachable from the dedicated server (for instance, SSH access); and to access the Internet via the dedicated server (for instance, to download security updates) Currently, this is achieved by having host-only adapter vboxnet0 on the dedicated server and two virtual interfaces on each guest. There, virtual adapter eth0 is attached to vboxnet0 (to achieve (1) and (2)), virtual adapter eth1 is attached to VirtualBox' NAT (to achieve (3)). Via eth0, the guests have access to a DHCP and a DNS server, both running on the dedicated server (there, bound to vboxnet0). This allows me to assign custom IP addresses and names. Via eth1, VirtualBox pushes a proper route that enables each guest to access the Internet (via eth0 on the dedicated server). This setup with two virtual adapters frequently leads to problems and at leasts complicates many things. For instance, on the dedicated server there is OpenVPN which allows to access the virtual machines via the Internet; futhermore, there is Shorwall that controls the incoming and outgoing network traffic between the Internet, the dedicated server, and the individual virtual machines. Not to mention automatic installation of servers via PXE... Therefore, I would prefer to have only one single virtual adapter on each guest which would be used for both incoming and outgoing connections. As far as I understand, one would basically use a bridged interface for that very purpose. Now the question arises: Which interface on the dedicated server would the bridge use? eth0 on the host server is not an option, as this is prohibited by the provider. A virtual interface eth0:0 would not make any sense, as a bridge always uses a physical interface (eth0 in this case). Would it be possible to create a bridged interface in each virtual machine that would "dangle in the air"? Thus, without a complement on the dedicated server? How would I have to set up the routing on the host server? Please note that the host / dedicated server has only one network adapter (eth0) which is connected to the provider's network. Regards, Martin


  • Windows startup Powershell script not closing after Start-Process

    - by Matthew Phipps
    I've got a Powershell V2.0 startup script for my work computer (XP Professional 64-bit), as follows: start "C:\Program Files (x86)\Microsoft Office\Office12\OUTLOOK.EXE" -ArgumentList "/recycle" sleep -S 2 start "C:\Program Files (x86)\Mozilla Firefox\firefox.exe" -ArgumentList "https://mail.google.com" sleep -S 2 start "C:\Program Files (x86)\Mozilla Firefox\firefox.exe" -ArgumentList "-new-window https://www.google.com/calendar" sleep -S 2 start "C:\Program Files (x86)\Skype\Phone\Skype.exe" The sleeps are to ensure that the windows appear on the taskbar in the correct order. I run this from a shortcut on my Quick Launch with the following Target: C:\WINDOWS\system32\WindowsPowerShell\v1.0\powershell.exe C:\scripts\initialize.ps1 (Yes, this is 2.0: powershell -Version 2.0 works, as does -Version 1.0, but not -Version 3.0) Problem is, the command window stays open until the Firefox windows are closed, which is not what I want. Looking at Process Explorer when I run the script, here's what happens: powershell.exe appears under explorer.exe and the Powershell window appears (with a black background, oddly. But it's not cmd.exe, since when I was debugging the script error messages would appear in red). outlook.exe appears under powershell.exe and the Outlook window appears. firefox.exe appears under powershell.exe and a Firefox window appears. A second firefox.exe appears under powershell.exe and another Firefox window appears. The second Firefox process then exits, as expected, since Firefox only uses one process. skype.exe appears under powershell.exe and the Skype window appears. The powershell.exe process inexplicably sticks around, as does the Powershell window. If I close both Firefox windows, the powershell.exe process exits and the Powershell window closes, and the outlook.exe and skype.exe processes appear under explorer.exe as expected. I suspect this has something to do with Firefox's standard input, output and error: I wouldn't expect Outlook or Skype to ever output anything to the console, but Firefox has command-line options that allow it to do so. I've looked over my about:config's user set values and didn't find anything suspicious. Finally, if I have a firefox.exe instance already running (started from the desktop shortcut) the problem doesn't occur (the powershell.exe process exits as it ought to). So what's going on here? I'm going to try adding -WindowStyle hidden to the shortcut next (gotta close this Firefox to test it), but I want to get to the bottom of this, if only to improve my understanding of how Windows consoles work.


  • Bypassing "Found New Hardware Wizard" / Setting Windows to Install Drivers Automatically

    - by Synetech inc.
    Hi, My motherboard finally died after the better part of a decade, so I bought a used system. I put my old hard-drive and sound-card in the new system, and connected my old keyboard and mouse (the rest of the components—CPU, RAM, mobo, video card—are from the new system). I knew beforehand that it would be a challenge to get Windows to boot and install drivers for the new hardware (particularly since the foundational components are new), but I am completely unable to even attempt to get through the work of installing drivers for things like the video card because the keyboard and mouse won't work (they do work, in the BIOS screen, in DOS mode, in Windows 7, in XP's boot menu, etc., just not in Windows XP itself). Whenever I try to boot XP (in normal or safe mode), I get a bunch of balloons popping up for all the new hardware detected, and a New Hardware Found Wizard for Processor (obviously it has to install drivers for the lowest-level components on up). Unfortunately I cannot click Next since the keyboard and mouse won't work yet because the motherboard drivers (for the PS/2 or USB ports) are not yet installed. I even tried a serial mouse, but to no avail—again, it does work in DOS, 7, etc., but not XP because it doesn't have the serial port driver installed. I tried mounting the SOFTWARE and SYSTEM hives under Windows 7 in order to manually set the "unsigned drivers warning" to ignore (using both of the driver-signing policy settings that I found references to). That didn't work; I still get the wizard. They are not even fancy, proprietary, third-party, or unsigned drivers. They are drivers that come with Windows—as the drivers for CPU, RAM, IDE controller, etc. tend to be. And the keyboard and mouse drivers are the generic ones at that (but like I said, those are irrelevant since the drivers for the ports that they are connected to are not yet installed). Obviously at some point in time over the past several years, a setting got changed to make Windows always prompt me when it detects new hardware. (It was also configured to show the Shutdown Event Tracker on abnormal shutdowns, so I had to turn that off so that I could even see the desktop.) Oh, and I tried deleting all of the PNF files so that they get regenerated, but that too did not help. Does anyone know how I can reset Windows to at least try to automatically install drivers for new hardware before prompting me if it fails? Conversely, does anyone know how exactly one turns off automatic driver installation (and prompt with the wizard)? Thanks a lot.


  • How to get wireless working (properly) with Sitecom Wireless USB micro adapter 300N on Windows 7?

    - by Timo
    The question says it all, but more detail follows ;) I've got a new computer that runs Windows 7 64-bits (Home Edition) and I'd like to connect it to my wireless home network (Sitecom wireless gigabit router 300N wl-352 v1 002) with a Sitecom wireless USB micro adaptapter 300 wl-352 V2 001. After installing the router (i.e. connected to the modem and power) and ensuring that wireless is indeed enabled, I've installed the driver of the USB adapter on the new computer described above. After the installation (drivers and utility on CD) completes successfull I rebooted my computer and inserted the USB adapter. After discovering the right network and connecting to it using the network key, a connection is succesfully made. (Using the Sitecom 300N USB Wireless LAN utility). In the LAN utility I can see that the signal strength is approximately 50% and connection quality is approximately 80%. Judging from these numbers I assumed that all was fine and started to use the connection (reading news on nu.nl, a dutch news site), but noticed that the connection was lost several times in a very short time span, but each time the connections was resumed, resulting in the 50/80 percent numbers described above. However, the website was not loaded completely and often a timeout would be reported. When inspecting the drivers through Device Management (Windows' Apparaatbeheer in dutch) there were no errors/warnings; everything seemed to be in order. In an attempt to solve this, I downloaded the latest drivers for the USB adapter, but the problems remained. Finally I tried to connect the computer with a Siemens Gigaset USB Adapter 108. This process was a troublesome since I had to download a driver (from the site above) and tell Windows (7) to use the Windows Vista driver when installing the new hardware, since there is (was) no Windows 7 driver available. This resulted in a usable connection, although not very stable when reconfiguring the router. Which took the form of selecting a different wireless channel on the router, even using the Sitecom utility mentioned above to check if there were other networks communicating on that channel (and thus picking a channel that was not used by other networks). Again no result when changing back to the Sitecom USB adapter. Note that this means (I think) that I could use the internet connection with the Siemens adapter, meaning the problem was not in the router. So: How to get wireless working (properly) with Sitecom Wireless USB micro adapter 300N on Windows 7? PS Sorry, but should be able to post one link, while I had links in place for the USB adapter, router and the siemens adapter in place as well, but I'm not (yet) allowed to post these... (The site says I can post one link, but only when no links are present will it allow me to post the question...)


  • ISA 2004 - banned site rule causes slow internet

    - by Holian
    Hi Gods, We have Windows Server 2003 with ISA 2004. Our clients use the internet through the proxy. We have two ISA rules:

      order  name    action  protocols     from/listener                  to            condition
      1.     trafic  ALLOW   all outbound  all networks                   all networks  all users
      2.     FTP     ALLOW   FTP Server    EXTERNAL/INTERNAL/Local host   10.1.1.1

    We have to ban a few webpages (like Facebook, YouTube, etc.), so we made a new rule:

      0.     banned  DENY    HTTP          internal                       denied pages  all users

    In the denied pages we have the *.facebook.com domain set. After we enable this rule, the entire internet slows down. The banning rule works well and redirects to an internal site, but the other sites... If I open a page it normally takes 3-10 seconds to load, but after this rule it takes 2-4 minutes.

    In the monitor / logging menu we see a few FAILED CONNECTION ATTEMPT entries like:

      Log type: Web Proxy (Forward)
      Status: 304 Not Modified
      Rule: All local traffic
      Source: Internal ( 10.1.1.1:0 )
      Destination: External ( 172.24.28.22:3128 )
      Request: GET http://www.konyvelozona.hu/wp-content/uploads/nyugdijas-holgy-2.jpg
      Filter information: Req ID: 17270b72
      Protocol: http
      User: anonymous
      Additional information
      Client agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.3072...
      Object source: Verified Cache
      Processing time: 9047
      Cache info: 0x18801002
      MIME type: -

    In the event log we get a few entries like:

      Description: The Web Proxy filter failed to bind its socket to 10.1.1.1 port 80. This may have been caused by another service that is already using the same port or by a network adapter that is not functional. To resolve this issue, restart the Microsoft Firewall service. The error code specified in the data area of the event properties indicates the cause of the failure. The failure is due to error: 0x8007271d

      The Web Proxy filter failed to bind its socket to 127.0.0.1 port 80. This may have been caused by another service that is already using the same port or by a network adapter that is not functional. To resolve this issue, restart the Microsoft Firewall service. The error code specified in the data area of the event properties indicates the cause of the failure. The failure is due to error: 0x8007271d

    If I type: netstat -o -n -a | findstr 0.0:80 then I get:

      tcp 0.0.0.0:80    0.0.0.0:0  LISTEN  4
      udp 0.0.0.0:8031  *.*                2780
      udp 0.0.0.0:8082  *.*                2780

    Some months ago we installed XAMPP, but now we only use MySQL; the Apache service is stopped. In the XAMPP port check menu I see:

      Service        Port  Status
      Apache (http)  80    Process: System

    Maybe this is the problem? I don't know what I should do now... Thank you, folks.
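    The netstat output already points at the conflict: PID 4 is the System process, which is how a kernel-mode listener such as http.sys shows up, and something there is still holding port 80, which is why the Web Proxy filter cannot bind to it. A small hedged check that lists which processes own port 80 (requires the third-party psutil package - an assumption, not something installed on the original server):

      # Sketch: list the processes bound to TCP port 80, to confirm the conflict
      # between the ISA Web Proxy filter and whatever XAMPP/http.sys left listening.
      import psutil

      for conn in psutil.net_connections(kind="tcp"):
          if conn.laddr and conn.laddr.port == 80:
              try:
                  name = psutil.Process(conn.pid).name() if conn.pid else "unknown"
              except psutil.NoSuchProcess:
                  name = "exited"
              print(f"{conn.laddr.ip}:{conn.laddr.port}  {conn.status:12}  pid={conn.pid}  {name}")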


  • UCARP: prevent the original master from taking over the VIP when it comes back after failure?

    - by quanta
    Keepalived can do this by combining the nopreempt option and the BACKUP state on the both nodes: Prevent VRRP Master from becoming Master once it has failed Prevent master to fall back to master after failure How about the UCARP? Name : ucarp Arch : x86_64 Version : 1.5.2 Release : 1.el5.rf Size : 81 k Repo : installed Summary : Common Address Redundancy Protocol (CARP) for Unix URL : http://www.ucarp.org/ License : BSD Description: UCARP allows a couple of hosts to share common virtual IP addresses in order : to provide automatic failover. It is a portable userland implementation of the : secure and patent-free Common Address Redundancy Protocol (CARP, OpenBSD's : alternative to the patents-bloated VRRP). : Strong points of the CARP protocol are: very low overhead, cryptographically : signed messages, interoperability between different operating systems and no : need for any dedicated extra network link between redundant hosts. If I don't use the --preempt option and set the --advskew to the same value, both nodes become master. /etc/sysconfig/carp/vip-010.conf # Virtual IP configuration file for UCARP # The number (from 001 to 255) in the name of the file is the identifier # $Id: vip-001.conf.example 1527 2004-07-09 15:23:54Z dude $ # Set the same password on all mamchines sharing the same virtual IP PASSWORD="pa$$w0rd" # You are required to have an IPADDR= line in the configuration file for # this interface (so no DHCP allowed) BIND_INTERFACE="eth0" # Do *NOT* use a main interface for the virtual IP, use an ethX:Y alias # with the corresponding /etc/sysconfig/network-scripts/ifcfg-ethX:Y file # already configured and ith ONBOOT=no VIP_INTERFACE="eth0:0" # If you have extra options to add, see "ucarp --help" output # (the lower the "-k <val>" the higher priority and "-P" to become master ASAP) OPTIONS="-z -k 255" /etc/sysconfig/network-scripts/ifcfg-eth0:0 DEVICE=eth0:0 ONBOOT=no BOOTPROTO= IPADDR=192.168.6.8 NETMASK=255.255.255.0 USERCTL=yes IPV6INIT=no node 1: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000 link/ether c6:9b:8e:af:a7:69 brd ff:ff:ff:ff:ff:ff inet 192.168.6.192/24 brd 192.168.6.255 scope global eth0 inet 192.168.6.8/24 brd 192.168.6.255 scope global secondary eth0:0 inet6 fe80::c49b:8eff:feaf:a769/64 scope link valid_lft forever preferred_lft forever node 2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000 link/ether 00:30:48:f7:0f:81 brd ff:ff:ff:ff:ff:ff inet 192.168.6.38/24 brd 192.168.6.255 scope global eth1 inet 192.168.6.8/24 brd 192.168.6.255 scope global secondary eth1:0 inet6 fe80::230:48ff:fef7:f81/64 scope link valid_lft forever preferred_lft forever


  • Operating systems on SD cards

    - by HisDudeness
    I was getting some wild ideas the last few days, like putting some operating systems onto SD cards rather than on my hard drive. I'll go into the details now and explain what led me to consider this probably abominable decision.

    I am on a laptop (that means I have a native SD-card reader) which is currently running a cross-distro setup, with a bunch of Linux systems (placed in dedicated ext4 logical partitions inside a huge extended one) managed by a single GRUB. As of today, my laptop hasn't even seen any Windows system with binoculars.

    I was thinking about placing the whole OS part of my setup on a Secure Digital card, saving my 500 GB hard drive for documents, music, videos and so on, and being able to just remove the SD and boot my system on another computer too, as well as having the possibility of booting other systems on mine by just plugging in another SD, without having to keep it permanently in my PC. Also, in the remote case that in the near future I want to boot Windows 8 on it, I read that it causes major boot incompatibility issues with other systems by needing a digital signature in order for them to start. By having it on a removable drive, I could just set it aside when I don't need it and swap its card with the Linux one, and so have no obstacles to their boot.

    Now, my questions are:

    I know that, unlike traditional rotating disk drives, integrated-circuit ones have a limited lifespan in terms of cluster rewriting. Is that an obstacle to this kind of usage? I mean, some ultrabooks are using SSDs now - is it the same issue, or are there differences between Solid State Drives and Secure Digital cards in that sense? Maybe having them store system files which sit in fixed positions (making the even-usage of cluster technology useless), constantly being re-read and updated and the like, just wears them out quickly - does it?

    Second question: are all motherboards and BIOSes able to boot from SD cards just like they are from USB pen drives (I mean, provided the card reader is USB-connected, isn't it)? Or can bootloaders like GRUB not work when installed on SDs? If they can't, is it a solution to install GRUB to the MBR and have the boot option point to the SD? Will it work? Are there any other problems with installing OSs on a Secure Digital card?
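    On the wear question, a rough back-of-the-envelope estimate shows that an OS-on-SD setup is limited by write volume, not reads. The numbers below are illustrative assumptions, not the specs of any particular card:

      # Back-of-the-envelope flash endurance estimate (illustrative numbers only).
      card_size_gb    = 32       # assumed card capacity
      erase_cycles    = 3_000    # assumed program/erase cycles for consumer flash
      daily_writes_gb = 2        # assumed OS writes per day (logs, updates, no swap)

      # With ideal wear levelling, total writable volume ~= capacity * cycles.
      # Cheap SD controllers level wear far less evenly, so treat this as an upper bound.
      total_writes_gb = card_size_gb * erase_cycles
      lifetime_years = total_writes_gb / daily_writes_gb / 365
      print(f"~{total_writes_gb:,} GB writable -> roughly {lifetime_years:.0f} years at {daily_writes_gb} GB/day")
      # ~96,000 GB writable -> roughly 132 years at 2 GB/day; reads cost essentially nothing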


  • Force caching of handler output which actively resists caching

    - by deceze
    I'm trying to force caching of a very obnoxious piece of PHP script which actively tries to resist caching for no good reason by actively setting all the anti-cache headers: Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Content-Type: text/html; charset=UTF-8 Date: Thu, 22 May 2014 08:43:53 GMT Expires: Thu, 19 Nov 1981 08:52:00 GMT Last-Modified: Pragma: no-cache Set-Cookie: ECSESSID=...; path=/ Vary: User-Agent,Accept-Encoding Server: Apache/2.4.6 (Ubuntu) X-Powered-By: PHP/5.5.3-1ubuntu2.3 If at all avoidable I do not want to have to modify this 3rd party piece of code at all and instead just get Apache to cache the page for a while. I'm doing this very selectively to only very specific pages which have no real impact on session cookies or the like, i.e. which do not contain any personalised information. CacheDefaultExpire 600 CacheMinExpire 600 CacheMaxExpire 1800 CacheHeader On CacheDetailHeader On CacheIgnoreHeaders Set-Cookie CacheIgnoreCacheControl On CacheIgnoreNoLastMod On CacheStoreExpired On CacheStoreNoStore On CacheLock On CacheEnable disk /the/script.php Apache is caching the page alright: [cache:debug] AH00698: cache: Key for entity /the/script.php?(null) is http://example.com:80/the/script.php? [cache_disk:debug] AH00709: Recalled cached URL info header http://example.com:80/the/script.php? [cache_disk:debug] AH00720: Recalled headers for URL http://example.com:80/the/script.php? [cache:debug] AH00695: Cached response for /the/script.php isn't fresh. Adding conditional request headers. [cache:debug] AH00750: Adding CACHE_SAVE filter for /the/script.php [cache:debug] AH00751: Adding CACHE_REMOVE_URL filter for /the/script.php [cache:debug] AH00769: cache: Caching url: /the/script.php [cache:debug] AH00770: cache: Removing CACHE_REMOVE_URL filter. [cache_disk:debug] AH00737: commit_entity: Headers and body for URL http://example.com:80/the/script.php? cached. However, it is always insisting that the "cached response isn't fresh" and is never serving the cached version. I guess this has to do with the Expires header, which marks the document as expired (but I don't know whether that's the correct assumption). I've tried to overwrite and unset headers using mod_headers, but this doesn't help; whatever combination I try the cache is not impressed at all. I'm guessing that the order of operation is wrong, and headers are being rewritten after the cache sees them. early header processing doesn't help either. I've experimented with CacheQuickHandler Off and trying to set explicit filter chains, but nothing is helping. But I'm really mostly poking in the dark, as I do not have a lot of experience with configuring Apache filter chains. Is there a straight forward solution for how to cache this obnoxious piece of code?
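    One way to see whether those directives are actually taking effect is to request the page twice and compare the response headers: with CacheHeader On and CacheDetailHeader On, a disk-cache hit shows up as an X-Cache HIT header and a non-zero Age. A small sketch (the URL matches the one in the logs above; adjust as needed):

      # Sketch: fetch the page twice and print the headers that reveal whether
      # mod_cache served from disk (Age > 0, X-Cache: HIT) or hit the backend again.
      import urllib.request

      URL = "http://example.com/the/script.php"

      for attempt in (1, 2):
          with urllib.request.urlopen(URL) as resp:
              print(f"--- request {attempt} ---")
              for name in ("Age", "X-Cache", "X-Cache-Detail", "Cache-Control", "Expires"):
                  value = resp.headers.get(name)
                  if value is not None:
                      print(f"{name}: {value}")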


  • Sony VGN-NR260E "External Device Boot"

    - by user72158
    [A LITTLE BACKGROUND] On all modern Dell computers pushing the F12 on bios boot will allow for a screen that lets you choose what boot option you need. For example if I want to boot off of a USB flash drive to boot into a live Linux distribution in order to clean virus's on netbooks that do not have CD drives to boot from I would push F12 and choose USB device from the list of options. If this does not show up then I can always go to the F2 bios setup and choose flash drive to be the first option. When I restart the computer it will boot into the flash device. I understand that I can purchase an external USB CD drive and then boot from that. I do not want to use that option. The reason for using a flash device instead of a CD is: A: This USB flash device has several different boot OS's on it that are used. B: The antivirus disks are updated often and burning cd's and throwing away others is wasteful compared to simply updating a flash drive. There is nothing wrong with the flash drive. It works perfect on many other PC's. [PROBLEM] Booting this flashdrive has been working for years on hundreds of computers... I just have this ONE computer that I cannot figure out how to get it to boot on... I have a Sony Vaio that will not boot to this device. I've tried pushing every key combo I can think of (F12, Esc, Del, F10...) and none of these key combinations will bring up the boot menu. I chose F2 and went into the bios and changed the first boot device to USB flash device. This did not work either. There is an astrix next to the device and the note states: "This Drive is available when External Device Boot is Enable." [WHAT I NEED] I need to know How to enable External Device Boot on the Sony Vaio VGN-NR260E laptop. OR How to bring up the Boot Menu to allow me to boot off a flash device. Thanks for anyone that can help!


  • BIND DNS server (Windows) - Unable to access my local domain from other computers on LAN

    - by Ricardo Saraiva
    I have a BIND DNS server running on my Windows 7 development machine and I'm serving pages with WAMPSERVER. My ideia is to develop some tools (in PHP) for my intranet at work and I want them to be accessible via LAN in this format: http://tools.mycompany.com I've already placed BIND and I can access http://tools.mycompany.com on the machine that holds BIND server, but I cannot access it from other LAN computers. I've done the following on my router: defined static IP's for all LAN computers set Port Forwarding to my server (remember: it serves DNS and Web pages) set DNS server configuration to point to my LAN server On LAN computers, I went to Local Area Network properties and also changed the DNS server IP in order to point to my local DNS server. If it helps, here is my named.conf file: options { directory "c:\windows\SysWOW64\dns\etc"; forwarders {127.0.0.1; 8.8.8.8; 8.8.4.4;}; pid-file "run\named.pid"; allow-transfer { none; }; recursion no; }; logging{ channel my_log{ file "log\named.log" versions 3 size 2m; severity info; print-time yes; print-severity yes; print-category yes; }; category default{ my_log; }; }; zone "mycompany.com" IN { type master; file "zones\db.mycompany.com.txt"; allow-transfer { none; }; }; key "rndc-key" { algorithm hmac-md5; secret "qfApxn0NxXiaacFHpI86Rg=="; }; controls { inet 127.0.0.1 port 953 allow { 127.0.0.1; } keys { "rndc-key"; }; }; ...and a single zone I've defined - file db.mycompany.com.txt: $TTL 6h @ IN SOA tools.mycompany.com. hostmaster.mycompany.com. ( 2014042601 10800 3600 604800 86400 ) @ NS tools.mycompany.com. tools IN A 192.168.1.4 www IN A 192.168.1.4 On the file above 192.168.1.4 is the IP of the local machine inside my LAN. Can someone help me here? I need my web pages to be accessible from other computers inside my LAN using my custom domain name. I've tried on other computers and they can access my server via http://192.168.1.4/, but no able when using http://tools.mycompany.com . Please, consider the following: I'm completely new to BIND I have basic knowledge in Apache configuration Thanks a lot for your help.
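    To separate "BIND isn't answering on the LAN interface" from "the other machines aren't actually using it as their resolver", it helps to query the server's IP directly from a client instead of going through whatever resolver the OS picked. A hedged sketch using the third-party dnspython package (version 2.x assumed; running nslookup tools.mycompany.com 192.168.1.4 on a client is the no-install equivalent):

      # Sketch: query the BIND box directly from another LAN machine, bypassing
      # the client's configured resolver. Requires dnspython >= 2.0 (assumption).
      import dns.exception
      import dns.resolver

      resolver = dns.resolver.Resolver(configure=False)
      resolver.nameservers = ["192.168.1.4"]        # the BIND/WAMP machine

      try:
          answer = resolver.resolve("tools.mycompany.com", "A")
          for record in answer:
              print("A record:", record.address)
      except dns.resolver.NXDOMAIN:
          print("Server answered, but the name does not exist (zone problem)")
      except dns.exception.Timeout:
          print("No answer: BIND unreachable or not listening on 192.168.1.4 (listen-on? firewall?)")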


  • Mac + Parallels and HTTPS site test = router restarts

    - by Erik
    Ok I have an interesting & very frustrating problem happening. I'm going to explain it the best I can. I work as a graphic designer & web designer on a mac and have a Comcast internet connection that comes through a Comcast branded router (SMC8014) which then ties into an Airport Extreme Base Station which runs my office network. I run OS 10.5.7 and also run Parallels 4.0.3 (running Windows XP) for testing websites in Internet Explorer and so on. Ok, so that the basic background. Here's my issue. I've been collaborating an ecommerce website with another designer/developer and when testing the site on the PC side we have started to run into some sort of network problem. The site is https if that matters at all I suspect it may. Basically when I run parallels for testing I am constantly having to restart the router in order to connect to the test site (it's hosted). Funny thing is I can access the rest of the internet fine, just not this site I'm working on until I restart the router (It's sorta like the site is timing out). This never happens when just running the Mac side of things. It only becomes an issue when Parallels is open and I am doing page refreshes while making css or HTML edits via something like Coda or CSS edit (connected to the hosting server via ftp). The real problem is that once the problem starts I only get about 2 or 3 page loads before I have to restart the router again. It's absolutely crippling. I cannot get any work done when I have to restart the router every couple of minutes. So, if you think this problem is isolated to me, the answer is no. The designer/developer I'm collaborating with has an office a couple miles away and experiences very similar problems under slightly different setup. He also has Comcast as his internet provider and connects his router to an Airport and primarily works on a mac. The main difference is that rather than using a visualizer like parallels to test the website on the PC he uses a real live PC that is on his network. Once he fire up the PC to do testing he runs into the same issue described above. After a couple of page refreshes in Internet Explorer or other browser on the PC the site becomes unresponsive and the router has to get restarted. Any thoughts on what is going on here would be greatly appreciated. Thanks in advance.


  • Start a software company offshore

    - by Mascarpone
    Hello Everybody, I own a small, very young, EU based (Italy) company, and among other things, we sell IT solutions. I have a degree in applied mathematics, and I mainly deal with user interfaces, embedded systems, automation and web applications. You can say that I'm an enlightened entrepreneur because I work only with open source software (OS, IDE, I release under BSD , ... everything is free as in freedom), I give high importance to post sales services and customer satisfaction, plus I think I'm the best boss someone could desire (LOL), as I have google in mind when I think about IT workers rights. But the most beautiful thing is that, although everybody advised us not to use open source, is that we are quite profitable!!! (for the sixth trimester in a row). Now I offshore most of the work to an Indian company. I divide the work in modules and I outsource the longer or more trivial ones. I spend a lot of time defining the specifications and I leave the hard work to them. Using productivity bonuses, a lot of prototypes and third-party audits I think that my software has reached a very good quality level. I would like to start my own software development company, in order to improve control over process and cut costs. Obviously I can't afford the cost of labor in the EU, so I thought about opening a company in Asia. What I need Is: 1) Cheap labor - I can afford to give productivity bonuses and higher than average wages and stay profitable just because labor is cheap. 2) Many talents - I need a good level of tertiary education, and a good number of graduates, so I can hire junior developers and train and teach them according to my needs and philosophies (e.g.: open source mind) 3) Good infrastructure - buildings, transport, internet, .... everything that a company might need. I thought about 3 possible candidates: 1) India - I already work with indian people, I know that they are realiable and speak a good english. Big cities are too expensive, but maybe a small city like lucknow http://en.wikipedia.org/wiki/Lucknow could suits my needs. 2) China - They say it's cheaper than India, but I everytime I worked with a chineese company the language was a big barrier. They work hard, are somewhat skilled and cheap but maybe it's a risky path. Plus I feel a little uncofortable with their lack of human rights. 3) Philippines - Same as china: cheaper than india, but maybe less educated. Where do you think it's the best place to start a software company? Any reading or book to advise? thank you very much


  • Random Computer Crashes

    - by Josh W.
    Ok, here's a wierd one for you all. Occasionally my PC here at work will crash in a very peculiar way. My dual monitors will suddenly go blank as if there is no longer a video signal, the USB mouse light will go dark and mouse stays unresponsive, the keyboard lights will not change status when the appropriate keys are pressed (Num/Caps/Scoll Lock). The CD Tray WILL open and close. But the computer will not respond to a ping request. For all intents & purposes it's as if the computer is off, except it wasn't intentional by me. The power light and internal fans are still on and I've now lost any unsaved work. Now here's where it gets wierd. This PC is part of a batch of PC's we got from a local vendor who does our initial system builds. Mine, and 6 other co-workers PCs all have the same issue. Originally we thought it was a bad combination of hardware, but through trial and error the only thing we haven't eliminated are the OS, Mobo & CPU. The problem was so bad for some of them that they ended up going back to their 5 year old dinosaurs in order to get some work done, for me the problem isn't as bad, maybe once every other day or so, but still enough to bite me in the ass if I've forgotten my ritualistic pressing of CTRL-S every 1-5 minutes. In this case we've tried two different video cards, two different power supplies, two different memory configurations, running on a UPS/not on UPS, updating/rolling back video drivers, three different bios revisions. The only things we haven't swapped are the mobo & cpu, mainly because a new mobo means a new Hardware Abstraction Layer, ie re-install of windows and there's alot of other software on this PC that takes forever to reload by hand. There was a base image that our systems team created with all the drivers installed and the basic setup of software our company uses, but they then must customize the setup for us programmers so it takes a while to get a new configuration up and going. I'm a programmer by day and am usually pretty good at diagnosing computer problems whether through trial and error or not. We've pretty much exhausted all the ideas we can think of here, short of a new mobo/cpu. Was hoping someone out there might have anything else we can try.. Relevant Parts: OS: XP Pro 32-bit Motherboard: Intel DG41RQ CPU: Intel Core-2 Quad Q9400 @ 2.66GHz Current BIOS Version/Date Intel Corp. RQG4110H.86A.0014.2010.0306.1151, 3/6/2010 Dual LCD's, Viewsonic VG930m & Samsung SyncMaster 910v (other people have different models, but listed in case there's some very wierd problem with the signals being sent/received) PS/2 Keyboard USB Microsoft Intellimouse BIOS Versions Tried: R 0013 12/23/2009 R 0014 3/6/2010 Video cards Tried GeForce 8400 GS Radeon HD 4350 - ASUS EAH4350 Two Different Power Supplies a 380W & 550W Ram Configurations 2GB - 1 x 2GB 4GB - 2 x 2GB


  • Proxy Error 502 "Reason: Error reading from remote server" with Apache 2.2.3 (Debian) mod_proxy and Jetty 6.1.18

    - by Martin
    Apache is receiving requests at port :80 and proxying them to Jetty at port :8080 The proxy server received an invalid response from an upstream server The proxy server could not handle the request GET /. My dilemma: Everything works fine normally (fast requests, few seconds or few tens of seconds long requests are processed ok). Problems occur when request processing takes long (few minutes?). If I issue request instead directly to Jetty at port :8080 the request is processed OK. So problem is likely to sit somewhere between Apache and Jetty where I am using mod_proxy. How to solve this? I have already tried some "tricks" related to KeepAlive settings, without luck. Here is my current configuration, any suggestions? #keepalive Off ## I have tried this, does not help #SetEnv force-proxy-request-1.0 1 ## I have tried this, does not help #SetEnv proxy-nokeepalive 1 ## I have tried this, does not help #SetEnv proxy-initial-not-pooled 1 ## I have tried this, does not help KeepAlive 20 ## I have tried this, does not help KeepAliveTimeout 600 ## I have tried this, does not help ProxyTimeout 600 ## I have tried this, does not help NameVirtualHost *:80 <VirtualHost _default_:80> ServerAdmin [email protected] ServerName www.mydomain.fi ServerAlias mydomain.fi mydomain.com mydomain www.mydomain.com ProxyRequests On ProxyVia On <Proxy *> Order deny,allow Allow from all </Proxy> ProxyRequests Off ProxyPass / http://www.mydomain.fi:8080/ retry=1 acquire=3000 timeout=600 ProxyPassReverse / http://www.mydomain.fi:8080/ RewriteEngine On RewriteCond %{SERVER_NAME} !^www\.mydomain\.fi RewriteRule /(.*) http://www.mydomain.fi/$1 [redirect=301L] ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined ServerSignature On </VirtualHost> Here is also the debug log from a failing request: 74.125.43.99 - - [29/Sep/2010:20:15:40 +0300] "GET /?wicket:bookmarkablePage=newWindow:com.mydomain.view.application.reports.SaveReportPage HTTP/1.1" 502 355 "https://www.mydomain.fi/?wicket:interface=:0:2:::" "Mozilla/5.0 (Windows; U; Windows NT 6.1; fi; rv:1.9.2.10) Gecko/20100914 Firefox/3.6.10" [Wed Sep 29 20:20:40 2010] [error] [client 74.125.43.99] proxy: error reading status line from remote server www.mydomain.fi, referer: https://www.mydomain.fi/?wicket:interface=:0:2::: [Wed Sep 29 20:20:40 2010] [error] [client 74.125.43.99] proxy: Error reading from remote server returned by /, referer: https://www.mydomain.fi/?wicket:interface=:0:2:::
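    The symptom (short requests fine, multi-minute ones failing with "error reading from remote server") is easy to reproduce against a deliberately slow backend, which helps show whether the 502 comes from Apache's proxy timeout or from Jetty dropping the connection first. A throwaway stand-in backend, sketched in Python (purely for testing - an assumption, not part of the real stack):

      # Sketch: a backend that takes DELAY_SECONDS to answer. Point ProxyPass at it
      # on port 8080 and raise the delay past ProxyTimeout to see which side fails.
      import time
      from http.server import BaseHTTPRequestHandler, HTTPServer

      DELAY_SECONDS = 700   # longer than the ProxyTimeout 600 configured above

      class SlowHandler(BaseHTTPRequestHandler):
          def do_GET(self):
              time.sleep(DELAY_SECONDS)
              body = b"finally done\n"
              self.send_response(200)
              self.send_header("Content-Length", str(len(body)))
              self.end_headers()
              self.wfile.write(body)

      HTTPServer(("127.0.0.1", 8080), SlowHandler).serve_forever()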


  • How to get the best LINPACK result and conquer the Top500?

    - by knweiss
    Given a large Linux HPC cluster with hundreds or thousands of nodes: what are your best practices for getting the best possible LINPACK benchmark (HPL) result to submit for the Top500 supercomputer list? To give you an idea what kind of answers I would appreciate, here are some sub-questions (with links):

    - How do you tune the parameters (N, NB, P, Q, memory alignment, etc.) for the HPL.dat file, without spending too much time trying each possible permutation - especially with large problem sizes N? (A starting-point calculation is sketched below.)
    - Are there any Top500 submission rules to be aware of? What is allowed, what isn't?
    - Which MPI product, which version? Does it make a difference?
    - Any special host order in your MPI machine file? Do you use CPU pinning?
    - How do you configure your interconnect? Which interconnect?
    - Which BLAS package do you use for which CPU model? (Intel MKL, AMD ACML, GotoBLAS2, etc.)
    - How do you prepare for the big run (on all nodes)? Start with small runs on a subset of nodes and then scale up? Is it really necessary to run LINPACK with a big run on all of the nodes, or is extrapolation allowed?
    - How do you optimize for the latest Intel/AMD CPUs? Hyperthreading? NUMA?
    - Is it worth it to recompile the software stack, or do you use precompiled binaries? Which settings? Which compiler optimizations, which compiler? (What about profile-based compilation?)
    - How do you get the best result given only a limited amount of time to do the benchmark run? (You can block a huge cluster forever.)
    - How do you prepare the individual nodes (stopping system daemons, freeing memory, etc.)?
    - How do you deal with hardware faults (ruining a huge run)?
    - Are there any must-read documents or websites about this topic? E.g. I would love to hear background stories of some of the current Top500 systems and how they did their LINPACK benchmark.

    I deliberately don't want to mention concrete hardware details or discuss hardware recommendations, because I don't want to limit the answers. However, feel free to mention hints, e.g. for specific CPU models.
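    For the first sub-question, the usual starting point is arithmetic rather than trial and error: pick N so the matrix fills roughly 80% of aggregate memory (each element is an 8-byte double), round it down to a multiple of NB, and pick P x Q equal to the total core count with P <= Q and the grid as close to square as possible. A sketch of that rule of thumb (the 80% figure is the common convention and the example cluster is an assumption, not a hard rule):

      # Rule-of-thumb starting values for HPL.dat - a starting point, not a
      # substitute for tuning N, NB, P and Q on the real machine.
      import math

      nodes, cores_per_node, mem_gb_per_node = 512, 16, 64   # example cluster

      total_cores = nodes * cores_per_node
      total_mem_bytes = nodes * mem_gb_per_node * 1024**3

      # N: fill ~80% of aggregate memory with the 8-byte-per-element matrix,
      # rounded down to a multiple of a typical block size NB.
      NB = 192
      N = int(math.sqrt(0.80 * total_mem_bytes / 8)) // NB * NB

      # P x Q: factor the core count with P <= Q, as close to square as possible.
      P = next(p for p in range(int(math.sqrt(total_cores)), 0, -1) if total_cores % p == 0)
      Q = total_cores // P

      print(f"N ~ {N:,}   NB = {NB}   P x Q = {P} x {Q}")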


  • 2nd Year College - Learning - Microsoft Server Products

    - by Ryan
    As the title says, I just finished my first year of college (majoring in Software Engineering). Fortunately my school likes Microsoft enough, and I can get pretty much anything I want that Microsoft sells. I also can get IBM Websphere and the like for free as well. Earlier this year, I set up an oldish computer (2.6 Pentium D, x64) to run ubuntu server headless. I'm predominately a Java developer, so Apache, Maven, Nexus, Sonar, SVN, etc made it onto the machine. It worked really well for personal and school projects, especially team projects (quick ramp up). Anyways, I started to pick up C# to complement my Java knowledge (don't judge me :P), and am interested in working with some of the associated Microsoft equivalents. The machine currently has the Ubuntu install, as well as Windows 7 Ultimate. I do all of my actual development work off my laptop, also running Windows 7 Ultimate. I was wondering what software you would recommend putting on the machine. I’m not actually serving anything off the machine itself, but in Ubuntu I had it doing integration tests with Hudson on every commit, and profiling my applications, etc, etc. The machine would be running headless, and I would remote into it. Here is what I am currently leaning towards / wondering about: Windows 7 Ultimate vs Windows Server 2008 (R2) (no one is really clear why I should go with one over the other) Windows Team Foundation Sharepoint (Never used it before, kind of meh about it) IBM Websphere or Glassfish (Some Java EE web server) SQL Server 2008 A DVCS In order to better control product conflicts / limit resource use, I’m wondering if I should install things into virtual machines (I can get VmWare or Microsoft Virtualization Products) I also plan on installing everything I had running under Linux (it’s almost entirely Java based development software, so it’ll run on both, only reason I went with ubuntu during the year was because the apache build seemed better). I’m primarily looking to become familiar with enterprise software development tools, as well as get something functional that will help my development process. (IE, I’ll still use project and assign tasks even though I might be the only one to assign tasks to, just to practice doing so). Is there any other software / configuration details I should explore? Opinions on my current list? I primarily use C#, Java, and PHP. I'm familiar with ruby, and python as well. Thanks!


  • Apache won't follow Symlink

    - by Marvin Dickhaus
    I have a LAMP server (Ubuntu 12.10) setup on my development machine. It is a T60 modified with an SSD. The server base is in /var/www. Apache has the following config: DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews SymLinksIfOwnerMatch AllowOverride all Order allow,deny allow from all </Directory> I'm currently developing a SilverStripe CMS featured site. The folder for the server is /var/www/sfk/. The framework and all cms relavant features are in their respective folders. The only folder that need to be modified would be the /var/www/sfk/mysite folder. Because of that I want to keep the mysite folder under my home directory and symlink it into the server folder. So here is what I've done: ln -s ~/sfk/mysite/ /var/www/sfk/ sudo chgrp www-data /var/www/sfk/mysite -R ls tells me the following: /var/www/sfk (exerpt) drwxr-xr-x 3 marvin www-data 4096 Nov 16 16:53 assets drwxr-xr-x 12 marvin www-data 4096 Nov 16 16:53 cms drwxr-xr-x 29 marvin www-data 4096 Nov 16 16:53 framework -rw-r--r-- 1 marvin www-data 2410 Nov 16 16:53 index.php lrwxrwxrwx 1 marvin www-data 24 Nov 20 17:45 mysite -> /home/marvin/sfk/mysite/ -rw-rw-r-- 1 marvin www-data 514 Nov 16 16:55 _ss_environment.php drwxr-xr-x 4 marvin www-data 4096 Nov 16 16:53 themes and ls /var/www/sfk/mysite/ drwxrwxr-x 6 marvin www-data 4096 Nov 16 00:15 code drwxrwxr-x 2 marvin www-data 4096 Nov 16 11:51 _config -rwxrwxr-x 1 marvin www-data 2685 Nov 16 15:39 _config.php drwxrwxr-x 2 marvin www-data 4096 Nov 16 00:15 css drwxrwxr-x 2 marvin www-data 4096 Nov 16 00:15 images drwxrwxr-x 2 marvin www-data 4096 Nov 16 00:15 javascript drwxrwxr-x 5 marvin www-data 4096 Nov 16 00:15 templates This is literally the same setup I have on my desktop machine. The problem I have is that the mysite/ folder is just not recognized. I'm thankful for every advice I get. I'm frustrated because I'm stuck with this issue for hours.
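    Since the link target lives under /home/marvin, the usual culprits are either that www-data cannot traverse one of the parent directories (every component on the way to the target needs the execute bit for the Apache user) or that SymLinksIfOwnerMatch rejects the link because link and target owners differ. A small diagnostic sketch that prints owner, group and mode of every component on the path (assumes a Unix host; purely for inspection):

      # Sketch: show owner/group/mode for the target and each parent directory,
      # to spot the component the www-data user cannot traverse.
      import grp
      import os
      import pwd
      import stat
      from pathlib import Path

      target = Path("/home/marvin/sfk/mysite")

      for path in [target, *target.parents][::-1]:      # "/", "/home", ..., target
          st = os.stat(path)
          owner = pwd.getpwuid(st.st_uid).pw_name
          group = grp.getgrgid(st.st_gid).gr_name
          print(f"{stat.filemode(st.st_mode)}  {owner}:{group}  {path}")
      # Any directory in the chain that lacks the execute bit for www-data
      # (via "others" or its group) stops Apache from following the symlink.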

    Read the article

  • Nginx fastcgi problems with django

    - by wizard
    I'm deploying my first Django app. I'm familiar with nginx and FastCGI from deploying php-fpm, but I can't get Python to recognize the URLs, and I'm at a loss on how to debug this further. I'd welcome solutions to this problem and tips on debugging FastCGI problems. Currently I get a 404 page regardless of the URL, and for some reason a double slash. For http://www.site.com/admin/ I get:

        Page not found (404)
        Request Method: GET
        Request URL: http://www.site.com/admin//

    My urls.py, from the debug output (these patterns work in the dev server):

        Using the URLconf defined in ahrlty.urls, Django tried these URL patterns, in this order:
        ^listings/
        ^admin/
        ^accounts/login/$
        ^accounts/logout/$

    My nginx config:

        server {
            listen 80;
            server_name beta.ahrlty.com;
            access_log /home/ahrlty/ahrlty/logs/access.log;
            error_log /home/ahrlty/ahrlty/logs/error.log;
            location /static/ {
                alias /home/ahrlty/ahrlty/ahrlty/static/;
                break;
            }
            location /media/ {
                alias /usr/lib/python2.6/dist-packages/django/contrib/admin/media/;
                break;
            }
            location / {
                include /etc/nginx/fastcgi_params;
                fastcgi_pass 127.0.0.1:8001;
                break;
            }
        }

    and my fastcgi_params:

        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        fastcgi_param PATH_INFO $fastcgi_script_name;
        # PHP only, required if PHP was built with --enable-force-cgi-redirect
        fastcgi_param REDIRECT_STATUS 200;

    Lastly, I'm running FastCGI from the command line with Django's manage.py:

        python manage.py runfcgi method=threaded host=127.0.0.1 port=8080 pidfile=mysite.pid minspare=4 maxspare=30 daemonize=false

    I'm having a hard time debugging this one. Does anything jump out at anybody?
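    Two details worth flagging as hedged observations rather than a confirmed diagnosis: the nginx location proxies to 127.0.0.1:8001 while the quoted runfcgi command binds port 8080, and the doubled slash in the request URL is a common symptom of SCRIPT_NAME and PATH_INFO both carrying the full script name. A minimal shell sketch for checking the process/port side, reusing the ports and paths from the question:

        # start the FastCGI server on the port nginx actually proxies to
        python manage.py runfcgi method=threaded host=127.0.0.1 port=8001 pidfile=mysite.pid daemonize=false
        # confirm something is listening on that port before testing through nginx
        netstat -ltn | grep 8001
        # request a known URL and watch the nginx error log for connection or upstream failures
        curl -i http://beta.ahrlty.com/admin/
        tail -f /home/ahrlty/ahrlty/logs/error.log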

    Read the article

  • "log on as a batch job" user rights removed by what GPO?

    - by LarsH
    I am not much of a server administrator, but I get my feet wet when I have to. Right now I'm running some COTS software on a Windows Server 2008 machine. The software installer creates a few user accounts for running its processes and then gives those users the right to "log on as a batch job". Every so often (e.g. yesterday at 2:52pm and this morning at 7:50am) those rights disappear, and the software then stops working. I can verify that the user rights are gone by using:

        secedit /export /cfg e:\temp\uraExp.inf /areas USER_RIGHTS

    I have a script that does this every 30 seconds and logs the results with a timestamp, so I know when the rights disappear. What I see from the export is that in the "good" state, i.e. after I install the software and it's working correctly, the line for SeBatchLogonRight includes the user accounts created by the software. But every few hours (sometimes more often), those user accounts are removed from that line. The same thing can be seen with the GUI tool under Local Security Policy > Security Settings > Local Policies > User Rights Assignment > Log on as a batch job: in the "good" state that policy includes the needed user accounts, and in the bad state it does not.

    Based on the logging script and the timestamps at which the user rights are being removed, I can see clearly that some GPOs are causing the change. The GPO Operational log shows GPOs being processed at exactly the right times, e.g.:

        Starting Registry Extension Processing. List of applicable GPOs: (Changes were detected.) Local Group Policy

    I have run GPOs on demand using gpupdate /force and was able to verify that this caused the user rights to be removed. We have looked over local group policies till our eyes are crossed, trying to figure out which one might be stripping these user rights to "log on as a batch job". We have not configured any local group policies on this machine that we know of; is there a default local group policy that might typically do such a thing? Are there typical domain policies that would do this?

    I have been working with our IT staff colleagues to troubleshoot the problem, but none of them are really GPO experts; they wear many hats, and they do what they need to do in order to keep most things running. Any suggestions would be greatly appreciated!

    Read the article

  • Rebasing a branch which is public

    - by Dror
    I'm failing to understand how to use git-rebase, so consider the following example. Let's start a repository in ~/tmp/repo:

        $ git init

    Then add a file foo:

        $ echo "hello world" > foo

    which is then added and committed:

        $ git add foo
        $ git commit -m "Added foo"

    Next, I started a remote repository. In ~/tmp/bare.git I ran:

        $ git init --bare

    In order to link repo to bare.git I ran:

        $ git remote add origin ../bare.git/
        $ git push --set-upstream origin master

    Next, let's branch, add a file, and set an upstream for the new branch b1:

        $ git checkout -b b1
        $ echo "bar" > foo2
        $ git add foo2
        $ git commit -m "add foo2 in b1"
        $ git push --set-upstream origin b1

    Now it is time to switch back to master and change something there:

        $ echo "change foo" > foo
        $ git commit -a -m "changed foo in master"
        $ git push

    At this point, in master the file foo contains "change foo", while in b1 it is still "hello world". Finally, I want to sync b1 with the progress made in master:

        $ git checkout b1
        $ git fetch origin
        $ git rebase origin/master

    At this point git status returns:

        # On branch b1
        # Your branch and 'origin/b1' have diverged,
        # and have 2 and 1 different commit each, respectively.
        #   (use "git pull" to merge the remote branch into yours)
        # nothing to commit, working directory clean

    At this point the content of foo in the branch b1 is "change foo" as well. So what does this warning mean? I expected that I should do a git push, but git suggests doing a git pull... According to this answer, this is more or less it, and in his comment @FrerichRaabe explicitly says that I don't need to do a pull. What's going on here? What is the danger, and how should one proceed? How should the history be kept consistent? What is the interplay between the case described above and the following citation, taken from the Pro Git book?

        Do not rebase commits that you have pushed to a public repository.

    I guess it is somehow related, and if not I would love to know why. What's the relation between the above scenario and the procedure I described in this post?
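    A minimal sketch of one common resolution for the state described above, assuming origin/b1 is not being consumed by anyone else; that assumption matters, because the commands below rewrite the published branch:

        # After the rebase, local b1 no longer matches the b1 that was pushed earlier, so a plain
        # push is rejected and git suggests pulling, which would merge the old tip back in.
        # If you are the only consumer of origin/b1, replacing it with the rewritten history is the usual step:
        $ git push --force-with-lease origin b1
        # sanity check: both refs should now point at the same commit
        $ git log --oneline -1 b1
        $ git log --oneline -1 origin/b1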

    Read the article

< Previous Page | 634 635 636 637 638 639 640 641 642 643 644 645  | Next Page >