Search Results

Search found 18251 results on 731 pages for 'rc local'.


  • Database Rebuild

    - by Robert May
    I promised I’d have a simpler mechanism for rebuilding the database. Below is a complete MSBuild targets file for rebuilding the database from scratch. I don’t know if I’ve explained the rationale for this. The reason why you’d WANT to do this is so that each developer has a clean version of the database on their local machine. This also includes the continuous integration environment. Basically, you can do whatever you want to the database without fear, and in a minute or two, have a completely rebuilt database structure. DBDeploy (including the KTSC build task for dbdeploy) is used in this script to do change tracking on the database itself. The MSBuild ExtensionPack is used in this target file. You can get an MSBuild DBDeploy task here.

    There are two database scripts that you’ll see below. The first creates an admin (dbo) user in the system. This script looks like the following:

        USE [master]
        GO
        If not Exists (select Name from sys.sql_logins where name = '$(User)')
        BEGIN
            CREATE LOGIN [$(User)] WITH PASSWORD=N'$(Password)', DEFAULT_DATABASE=[$(DatabaseName)], CHECK_EXPIRATION=OFF, CHECK_POLICY=OFF
        END
        GO
        EXEC master..sp_addsrvrolemember @loginame = N'$(User)', @rolename = N'sysadmin'
        GO
        USE [$(DatabaseName)]
        GO
        CREATE USER [$(User)] FOR LOGIN [$(User)]
        GO
        ALTER USER [$(User)] WITH DEFAULT_SCHEMA=[dbo]
        GO
        EXEC sp_addrolemember N'db_owner', N'$(User)'
        GO

    The second creates the changelog table. This script can also be found in the dbdeploy.net install\scripts directory.

        CREATE TABLE changelog (
            change_number INTEGER NOT NULL,
            delta_set VARCHAR(10) NOT NULL,
            start_dt DATETIME NOT NULL,
            complete_dt DATETIME NULL,
            applied_by VARCHAR(100) NOT NULL,
            description VARCHAR(500) NOT NULL
        )
        GO
        ALTER TABLE changelog ADD CONSTRAINT Pkchangelog PRIMARY KEY (change_number, delta_set)
        GO

    Finally, here’s the targets file.

        <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="4.0" DefaultTargets="Update">
          <PropertyGroup>
            <DatabaseName>TestDatabase</DatabaseName>
            <Server>localhost</Server>
            <ScriptDirectory>.\Scripts</ScriptDirectory>
            <RebuildDirectory>.\Rebuild</RebuildDirectory>
            <TestDataDirectory>.\TestData</TestDataDirectory>
            <DbDeploy>.\DBDeploy</DbDeploy>
            <User>TestUser</User>
            <Password>TestPassword</Password>
            <BCP>bcp</BCP>
            <BCPOptions>-S$(Server) -U$(User) -P$(Password) -N -E -k</BCPOptions>
            <OutputFileName>dbDeploy-output.sql</OutputFileName>
            <UndoFileName>dbDeploy-output-undo.sql</UndoFileName>
            <LastChangeToApply>99999</LastChangeToApply>
          </PropertyGroup>

          <Import Project="$(MSBuildExtensionsPath)\ExtensionPack\4.0\MSBuild.ExtensionPack.tasks"/>
          <UsingTask TaskName="Ktsc.Build.DBDeploy" AssemblyFile="$(DbDeploy)\Ktsc.Build.dll"/>

          <ItemGroup>
            <Variable Include="DatabaseName">
              <Value>$(DatabaseName)</Value>
            </Variable>
            <Variable Include="Server">
              <Value>$(Server)</Value>
            </Variable>
            <Variable Include="User">
              <Value>$(User)</Value>
            </Variable>
            <Variable Include="Password">
              <Value>$(Password)</Value>
            </Variable>
          </ItemGroup>

          <Target Name="Rebuild">
            <!-- Take the database offline to disconnect any users. Requires that the current user is an admin of the SQL Server machine. -->
            <MSBuild.ExtensionPack.SqlServer.SqlCmd Variables="@(Variable)" Database="$(DatabaseName)" TaskAction="Execute" CommandLineQuery="ALTER DATABASE $(DatabaseName) SET OFFLINE WITH ROLLBACK IMMEDIATE"/>

            <!-- Bring it back online. If you don't, the database files won't be deleted. -->
            <MSBuild.ExtensionPack.Sql2008.Database TaskAction="SetOnline" DatabaseItem="$(DatabaseName)"/>

            <!-- Delete the database, removing the existing files. -->
            <MSBuild.ExtensionPack.Sql2008.Database TaskAction="Delete" DatabaseItem="$(DatabaseName)"/>

            <!-- Create the new database in the default database path location. -->
            <MSBuild.ExtensionPack.Sql2008.Database TaskAction="Create" DatabaseItem="$(DatabaseName)" Force="True"/>

            <!-- Create admin user -->
            <MSBuild.ExtensionPack.SqlServer.SqlCmd TaskAction="Execute" Server="(local)" Database="$(DatabaseName)" InputFiles="$(RebuildDirectory)\0002 Create Admin User.sql" Variables="@(Variable)"/>

            <!-- Create the dbdeploy changelog. -->
            <MSBuild.ExtensionPack.SqlServer.SqlCmd TaskAction="Execute" Server="(local)" Database="$(DatabaseName)" LogOn="$(User)" Password="$(Password)" InputFiles="$(RebuildDirectory)\0003 Create Changelog.sql" Variables="@(Variable)"/>

            <CallTarget Targets="Update;ImportData"/>
          </Target>

          <Target Name="Update" DependsOnTargets="CreateUpdateScript">
            <MSBuild.ExtensionPack.SqlServer.SqlCmd TaskAction="Execute" Server="(local)" Database="$(DatabaseName)" LogOn="$(User)" Password="$(Password)" InputFiles="$(OutputFileName)" Variables="@(Variable)"/>
          </Target>

          <Target Name="CreateUpdateScript">
            <Ktsc.Build.DBDeploy DbType="mssql"
                                 DbConnection="User=$(User);Password=$(Password);Data Source=$(Server);Initial Catalog=$(DatabaseName);"
                                 Dir="$(ScriptDirectory)"
                                 OutputFile="..\$(OutputFileName)"
                                 UndoOutputFile="..\$(UndoFileName)"
                                 LastChangeToApply="$(LastChangeToApply)"/>
          </Target>

          <Target Name="ImportData">
            <ItemGroup>
              <TestData Include="$(TestDataDirectory)\*.dat"/>
            </ItemGroup>
            <Exec Command="$(BCP) $(DatabaseName).dbo.%(TestData.Filename) in &quot;%(TestData.Identity)&quot; $(BCPOptions)"/>
          </Target>
        </Project>
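    With the targets file saved, a full rebuild is one command. A quick usage sketch from a developer command prompt (the file name Database.targets is just an example, not from the original post):

        msbuild Database.targets /t:Rebuild      # drop, recreate and repopulate the whole database
        msbuild Database.targets /t:Update       # only apply pending dbdeploy change scripts
        msbuild Database.targets /t:ImportData   # re-import the BCP test data files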

    Read the article

  • Having Trouble Granting Access Via Squid

    - by Muhnamana
    I'm by far no expert at this, but how do I grant access to Squid? I'm currently using 2.7.STABLE9. I've read that you need to add a couple of lines: an acl line and an http_access line. So here's what I added and where. I highly doubt this is right, since I'm trying to connect via my laptop and Firefox is yelling at me saying the proxy server is refusing connections.

    ACL part:

        # Example rule allowing access from your local networks.
        # Adapt to list your (internal) IP networks from where browsing
        # should be allowed
        **acl all_computers scr 192.168.1.0/255.255.255.0**
        acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
        acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
        acl localnet src 192.168.0.0/16 # RFC1918 possible internal network

    http_access part:

        # Example rule allowing access from your local networks.
        # Adapt localnet in the ACL section to list your (internal) IP networks
        # from where browsing should be allowed
        #http_access allow localnet
        http_access allow localhost
        http_access allow all_computers

    Any suggestions on what I'm doing wrong?
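    A quick way to check the added lines (a sketch, assuming the stock config path of the Squid 2.7 package on Ubuntu/Debian): scr is not a valid Squid ACL type, the directive is src, and the parser should flag that line.

        squid -k parse -f /etc/squid/squid.conf    # reports syntax errors such as an unrecognised ACL type
        sudo /etc/init.d/squid restart             # restart once squid.conf parses cleanly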

    Read the article

  • Drive Innovation from Data with Oracle Business Analytics

    - by Mike.Hallett(at)Oracle-BI&EPM
    Oracle is doing a big marketing push on the transformational value of Business Analytics to our customers, and we hope you as partners can get excited, get involved, and win more business from this campaign. Work with your local in-country BI business development manager and your partner channel manager: if you want to contribute and are struggling to make contact, then let me know ([email protected]) and I will facilitate introductions.

    Oracle Day Business Analytics Track
    Invite your customers to register for their local Oracle Day to get the latest news from OpenWorld and learn about Oracle's Big Data strategy and solution. There is a dedicated Business Analytics track.

    Business Analytics Facebook Hub
    Encourage your customers to "Like" the Business Analytics Facebook Page @ www.facebook.com/OracleBusinessAnalytics so they can receive useful and interesting information on their Facebook wall.

    Read the article

  • C# - How to detect all IP addresses from a LAN?

    - by SAMIR BHOGAYTA
        string strHostName = string.Empty;
        cmbIPAddress.Items.Clear();

        // Getting the IP address of the local machine...
        // First get the host name of the local machine.
        strHostName = Dns.GetHostName();

        // Then, using the host name, get the IP address list.
        IPHostEntry ipEntry = Dns.GetHostByName(strHostName);
        IPAddress[] iparrAddr = ipEntry.AddressList;

        if (iparrAddr.Length > 0)
        {
            for (int intLoop = 0; intLoop < iparrAddr.Length; intLoop++)
            {
                cmbIPAddress.Items.Add(iparrAddr[intLoop].ToString());
            }
        }

    Read the article

  • ASP.NET MVC 2 and Windows Azure

    If you upgrade an Azure web instance to use ASP.NET MVC 2, make sure you mark the System.Web.Mvc reference as Copy Local = true. Otherwise, your deployment will fail. And you won't get any good feedback from Windows Azure as to the cause of the problem. So you'll start searching the web for help, and perhaps you'll stumble on this post, and you'll realize that you didn't set Copy Local = true on your System.Web.Mvc assembly reference in your ASP.NET MVC 2 web instance. And you'll leave happy (or at least slightly happier) than when you came. That is all.
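    A quick way to confirm the setting outside Visual Studio (a sketch; the project file name here is made up): Copy Local = true is stored in the .csproj as a <Private>True</Private> element on the assembly reference.

        grep -A 3 'Reference Include="System.Web.Mvc' MyAzureWebRole.csproj | grep -i '<Private>'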

    Read the article

  • Setting APPMENU_DISPLAY_BOTH for gnome applications does not work on Ubuntu 12.04 LTS

    - by Jack'
    To start an application with the menu enabled both in the application and in the panel, the following command has to be used:

        APPMENU_DISPLAY_BOTH=1 appname

    I recently discovered that this only works for non-GNOME applications on Ubuntu 12.04 LTS, i.e. if you replace appname with applications such as gnome-terminal, gedit, evince, empathy, evolution, rhythmbox, nautilus, etc., only global menus will be displayed. However, if you start, for example, gimp or inkscape using APPMENU_DISPLAY_BOTH, both global and local menus will be shown. The question is: why is APPMENU_DISPLAY_BOTH not taken into account when starting such GNOME applications?

    P.S. I know how to disable global menus in order to get the local ones (the UBUNTU_MENUPROXY trick, removing the appmenu-gtk/qt packages, removing indicator-appmenu, etc.). Thanks for the help!

    Read the article

  • Why is this bypassing the SUDO password?

    - by John Isaacks
    I have a bash script I am using to automate an SVN checkout. The contents of the file were:

        #!/bin/bash
        cd /var/www-cake
        sudo svn checkout file:///usr/local/svn/bash_repo/repo/

    Then when I double-click the file it would ask me what to do, I would click the button "Run In Terminal", and then a terminal would pop up and ask me for the SUDO password. I would enter it, the script would execute and the terminal would close. I wanted to give some sort of indication that the script ran successfully, so I edited my file to look like:

        #!/bin/bash
        cd /var/www-cake
        sudo svn checkout file:///usr/local/svn/bash_repo/repo/
        echo "Head revision has been pushed to live server"

    I expected the terminal to now stay open and tell me the message afterwards. To my surprise, it now opens and immediately closes. The script does execute and I no longer have to put in the SUDO password. Is this right? I do not understand why this is happening; it seems like a security issue.
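    One detail worth knowing here (an assumption about the cause, not something stated in the question): sudo caches a successful authentication for a few minutes, so a second run inside that window will not prompt again. A quick way to see the caching in action:

        sudo -v                                     # authenticate and refresh the cached timestamp
        sudo -n true && echo "credentials cached"   # -n fails instead of prompting if the cache has expired
        sudo -K                                     # drop the cache; the next sudo will ask for the password again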

    Read the article

  • RPi and Java Embedded GPIO: Connecting LEDs

    - by hinkmond
    Next, we need some low-level peripherals to connect to the Raspberry Pi GPIO header. So, we'll do what's called a "Fry's Run" in Silicon Valley, which means we go shop at the local Fry's Electronics store for parts. In this case, we'll need some breadboard jumper wires (blue wires in the photo), some LEDs, and some resistors (for the RPi GPIO, 150 ohms - 300 ohms would work for the 3.3V output of the GPIO ports). And, if you want to do other projects, you might as well buy a breadboard, which is a development board with lots of holes in it. Ask a Fry's clerk for help. Or, better yet, ask the customer standing next to you in the electronics components aisle for help. (Might be faster.) So, go to your local hobby electronics store, or go to Fry's if you have one close by, and come back here to the next blog post to see how to hook these parts up. Hinkmond

    Read the article

  • Android application Database Framework

    - by Marek Sebera
    When creating a mobile (especially Android) application, I usually run into a similar pattern of working with data. Usually I need to fetch some remote data (covered by an authorization process) into a local cache, and on the next request:

        Check networking
        Check presence of cache file
        Check version of cache file (if networking)
        Get new version and save cache (if networking and file not in cache, or outdated)

    The data store is no-SQL, JSON document-based (and yes, I know about the CouchDB Android version, but it doesn't fit my needs yet). The process of authorizing to the data source and the code for checking the version of the local cache are adapted to the application, but the other code (handling the network, saving the cache, handling exceptions, ...) is always the same. Is there any data store helper I can use which provides the functions described above?

    Read the article

  • Code review process when using GIT as a repository?

    - by Sid
    What is the best process for code review when using Git?

    Current process:
    We have a Git server with a master branch to which everyone commits
    Devs work off the local master mirror or a local feature branch
    Devs commit to the server's master branch
    Devs request code review on the last commit

    Problem: any bugs found in code review are already in master by the time they are caught. Worse, usually someone has burnt a few hours trying to figure out what happened...

    So, we would like:
    To do code review BEFORE delivery into 'master'
    To have a process that works with a global team (no over-the-shoulder reviews!)
    Something that doesn't require an individual dev to be at his desk, or his machine to be powered up, so someone else can remote in (remove the human dependency; devs go home in different timezones)

    We use TortoiseGit for a visual representation of the list of files changed, diffing files, etc. Some of us drop into a Git shell when the GUI isn't enough, but ideally we'd like the workflow to be simple and GUI based (I want the tool to lift any burden, not my devs).
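    One pre-merge arrangement that fits these constraints (a sketch only; the branch and remote names are invented): developers publish feature branches instead of committing to master, a reviewer fetches and inspects the branch from wherever they are, and only the reviewer (or an integrator) merges it.

        git checkout -b feature/issue-123          # dev works on a short-lived feature branch
        git push origin feature/issue-123          # publish the branch for review instead of touching master

        git fetch origin                           # reviewer, possibly in another timezone
        git diff origin/master...origin/feature/issue-123   # exactly the changes under review

        git checkout master                        # after approval, integrate
        git merge --no-ff origin/feature/issue-123
        git push origin master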

    Read the article

  • MAMP Pro Installed On Mavericks 10.9

    - by cnps
    I have MAMP Pro installed. I'm well aware that with the advanced features of MAMP Pro I can set up different types of hosts, but I wanted to know why my previous method does not work within Mavericks. This was my usual working method in 10.8, but now with 10.9 it's a headache. The previous method was:

    Write a custom address into the hosts file:

        # Virtual Hosts
        127.0.0.1 nameofsite.local

    Go to /Applications/MAMP/conf/apache, open the httpd.conf file, scroll to the bottom and then add (the VirtualHost tags were stripped by the page; this is the usual MAMP layout for these directives):

        NameVirtualHost *
        <VirtualHost *>
            DocumentRoot "/Applications/MAMP/htdocs"
            ServerName localhost
        </VirtualHost>
        <VirtualHost *>
            DocumentRoot "/Users/Klimt/Sites/siteoffolder"
            ServerName clientA.local
        </VirtualHost>

    Make sure the ports are set to 80, 443, 3306, then restart MAMP Pro. Usually I can then type the URL in the address bar and it's gold from there. Any help?

    Read the article

  • Which shopping cart is better to run an online grocery shop: PrestaShop or nopCommerce?

    - by Bigmunk
    I have been researching open source software for an online grocery shop project. I have now narrowed my search down to the .NET-based nopCommerce and the PHP-based PrestaShop shopping carts. My plan is to acquire an open source shopping cart and hire a local developer to customize it to our local needs and requirements. I'm now wondering whether I should have a developer start the whole project from scratch, or use open source software such as PrestaShop or nopCommerce which can then be customized. Note that my store will have thousands of products and services, so I want something that can handle 5,000 products or more. Thanks for your thoughts and advice in advance.

    Read the article

  • Basic Ubuntu FTP Server

    - by JPrescottSanders
    I would like to set up a basic FTP server on my Ubuntu Server install. I have been playing with vsftpd, but am having issues getting the server to allow me to create directories and copy files. I have set the system to allow local users, but it appears that doesn't mean I get access to create directories. This may be an instance where I need to be better grounded in Ubuntu server setup in order to configure this FTP server adequately. The end goal is to be able to move files from my local dev folder into my www folder for deployment. Directories need to be able to move as well. Any help would be greatly appreciated.
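    A minimal sketch of the vsftpd settings that let local users upload and create directories (assumes the stock /etc/vsftpd.conf shipped by Ubuntu, where writing is disabled by default):

        sudo sed -i -e 's/^#\?local_enable=.*/local_enable=YES/' \
                    -e 's/^#\?write_enable=.*/write_enable=YES/' /etc/vsftpd.conf
        sudo service vsftpd restart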

    Read the article

  • Where are Nagios 3 Config Files in Ubuntu 12.04?

    - by Aaron James
    I just installed Nagios3 via Synaptic. The package and its dependencies all installed fine and I can log in using a web browser; however, I'd like to add hosts now, and according to the official Nagios documentation the config files should be in the /usr/local/nagios/* directory. When I go to /usr/local it's not there. I can't seem to find these config files anywhere. I'm not sure what I did wrong. I'm running Xubuntu 12.04 64-bit. Any help at all would be greatly appreciated, thank you!
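    The official docs describe a source install; the Ubuntu/Debian package uses its own layout. A sketch of where to look, assuming the stock nagios3 package:

        ls /etc/nagios3/                # nagios.cfg, cgi.cfg, resource.cfg live here
        ls /etc/nagios3/conf.d/         # extra host and service definitions go in this directory
        sudo service nagios3 restart    # pick up new definitions after editing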

    Read the article

  • What is the reason that cron can't run automatically?

    - by Jingqiang Zhang
    OS: Ubuntu 12.04. I want to use the Backup and Whenever gems to automatically back up my database. When I connect to the server by ssh as an added user and run backup perform -t my_backup, it works well. But the cron entry:

        0 22 * * * /bin/bash -l -c 'backup perform -t my_backup'

    doesn't run at 22:00. When I use cat /etc/crontab to check cron's config file, it is:

        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

        # m h dom mon dow user  command
        17 *    * * *   root    cd / && run-parts --report /etc/cron.hourly
        25 6    * * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
        47 6    * * 7   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
        52 6    1 * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )

    /bin/bash and /bin/sh are different. What's the reason? What should I do?
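    When it isn't obvious why a cron job dies, redirecting its output usually reveals the cause; the short PATH in cron's environment is a frequent culprit when a gem-installed command such as backup only exists in the login shell's environment. A sketch (the log path is just an example):

        0 22 * * * /bin/bash -l -c 'backup perform -t my_backup' >> /tmp/backup_cron.log 2>&1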

    Read the article

  • Zenoss Setup for Windows Servers

    - by Jay Fox
    Recently I was saddled with standing up Zenoss for our enterprise. We're running about 1200 servers, so manually touching each box was not an option. We use LANDesk for a lot of automated installs and patching - more about that later. The steps below may not necessarily have to be completed in this order - it's just the way I did it.

    STEP ONE:
    Set up a standard AD user. We want to do this so there's minimal security exposure. Call the account whatever you want - "domain/zenoss" for our examples.

    STEP TWO:
    Make the following local groups accessible by your zenoss account:
    Distributed COM Users
    Performance Monitor Users
    Event Log Readers (which doesn't exist on pre-2008 machines)

    Here's the Powershell script I used to set up access to these local groups:

        # Created to add Active Directory account to local groups
        # Must be run from elevated prompt, with permissions on the remote machine(s).
        # Create a txt file containing the names of the machines that need the account added, one per line.
        # Script will process machines line by line.
        foreach($i in (gc c:\tmp\computers.txt))
        {
            # Add the user to the first group
            $objUser=[ADSI]("WinNT://domain/zenoss")
            $objGroup=[ADSI]("WinNT://$i/Distributed COM Users")
            $objGroup.PSBase.Invoke("Add",$objUser.PSBase.Path)
            # Add the user to the second group
            $objUser=[ADSI]("WinNT://domain/zenoss")
            $objGroup=[ADSI]("WinNT://$i/Performance Monitor Users")
            $objGroup.PSBase.Invoke("Add",$objUser.PSBase.Path)
            # Add the user to the third group - Group doesn't exist on < Server 2008
            #$objUser=[ADSI]("WinNT://domain/zenoss")
            #$objGroup=[ADSI]("WinNT://$i/Event Log Readers")
            #$objGroup.PSBase.Invoke("Add",$objUser.PSBase.Path)
        }

    STEP THREE:
    Set up security on the machines' namespace so our domain/zenoss account can access it. The default namespace for Zenoss is: root/cimv2

    Here's the Powershell script:

        #Grant account defined below (line 11) access to WMI Namespace
        #Has to be run as account with permissions on remote machine
        function get-sid
        {
            Param ($DSIdentity)
            $ID = new-object System.Security.Principal.NTAccount($DSIdentity)
            return $ID.Translate( [System.Security.Principal.SecurityIdentifier] ).toString()
        }
        $sid = get-sid "domain\zenoss"
        $SDDL = "A;;CCWP;;;$sid"
        $DCOMSDDL = "A;;CCDCRP;;;$sid"
        $computers = Get-Content "c:\tmp\computers.txt"
        foreach ($strcomputer in $computers)
        {
            $Reg = [WMIClass]"\\$strcomputer\root\default:StdRegProv"
            $DCOM = $Reg.GetBinaryValue(2147483650,"software\microsoft\ole","MachineLaunchRestriction").uValue
            $security = Get-WmiObject -ComputerName $strcomputer -Namespace root/cimv2 -Class __SystemSecurity
            $converter = new-object system.management.ManagementClass Win32_SecurityDescriptorHelper
            $binarySD = @($null)
            $result = $security.PsBase.InvokeMethod("GetSD",$binarySD)
            $outsddl = $converter.BinarySDToSDDL($binarySD[0])
            $outDCOMSDDL = $converter.BinarySDToSDDL($DCOM)
            $newSDDL = $outsddl.SDDL += "(" + $SDDL + ")"
            $newDCOMSDDL = $outDCOMSDDL.SDDL += "(" + $DCOMSDDL + ")"
            $WMIbinarySD = $converter.SDDLToBinarySD($newSDDL)
            $WMIconvertedPermissions = ,$WMIbinarySD.BinarySD
            $DCOMbinarySD = $converter.SDDLToBinarySD($newDCOMSDDL)
            $DCOMconvertedPermissions = ,$DCOMbinarySD.BinarySD
            $result = $security.PsBase.InvokeMethod("SetSD",$WMIconvertedPermissions)
            $result = $Reg.SetBinaryValue(2147483650,"software\microsoft\ole","MachineLaunchRestriction", $DCOMbinarySD.binarySD)
        }

    STEP FOUR:
    Get the SID for our zenoss account. Powershell:

        #Provide AD User get SID
        $objUser = New-Object System.Security.Principal.NTAccount("domain", "zenoss")
        $strSID = $objUser.Translate([System.Security.Principal.SecurityIdentifier])
        $strSID.Value

    STEP FIVE:
    Modify the Service Control Manager to allow access to the zenoss AD account. This command can be run from an elevated command line, or through Powershell:

        sc sdset scmanager "D:(A;;CC;;;AU)(A;;CCLCRPRC;;;IU)(A;;CCLCRPRC;;;SU)(A;;CCLCRPWPRC;;;SY)(A;;KA;;;BA)(A;;CCLCRPRC;;;PUT_YOUR_SID_HERE_FROM STEP_FOUR)S:(AU;FA;KA;;;WD)(AU;OIIOFA;GA;;;WD)"

    In step two the script plows through a txt file, processing each computer listed on each line. The other scripts I ran on each machine using LANDesk; you can probably edit those scripts to process a text file as well.

    That's what got me off the ground monitoring the machines using Zenoss. Hopefully this is helpful for you. Watch the line breaks when you copy the scripts.

    Read the article

  • System user authentication via web interface [closed]

    - by donodarazao
    Background: We have one pretty slow and expensive satellite Internet connection that is shared in a network with 5-50 users. To limit traffic, users shall pay a certain sum of money per hour. Routing and traffic accounting on a per-user basis is done by an openSUSE 10.3 server. Login is done via pppoe, and for each connection, username, bytes_sent, bytes_rcvd, start_time, end_time, etc. are written into a mysql database. Now it was decided that we want to change from time-based to volume-based pricing. As the original developer who installed the system a couple of years ago isn't available, I'm trying to do the changes. Although I'm absolutely new to all this, there is some progress. However, there's one point where I'm absolutely stuck. Up to now, only administrators can access connection details and billing information via a web interface. But as volume-based prices are less transparent to users than time-based prices, it is essential that users themselves can check their connections and how much they cost via the web interface. For this, we need some kind of user authentication.

    Actual question: how to develop such a user authentication? Every user has a Linux system user account. With this user name and password, the connection to the pppoe-server is made by the client machines. I thought about two possible ways to authenticate users.

    First possibility: users type username and password into a form, which is then somehow checked. We already have the ability to change passwords via the web interface. Here are parts of the code. Part of the Perl script the homepage is linked to:

        #!/usr/bin/perl
        use CGI;
        use CGI::Carp qw(fatalsToBrowser);
        use lib '../lib';
        use own_perl_module;

        my @error;
        my $data;

        $query = new CGI;
        $username = $query->param('username') || '';
        $oldpasswd = $query->param('oldpasswd') || '';
        $passwd = $query->param('passwd') || '';
        $passwd2 = $query->param('passwd2') || '';

        own_perl_module::connect();

        if ($query->param('submit')) {
            my $benutzer = own_perl_module::select_benutzer(username => $username) or push @error, "user not exists";
            push @error, "your password?!?" unless $passwd;
            unless (@error) {
                own_perl_module::update_benutzer($benutzer->{id}, { oldpasswd => $oldpasswd, passwd => $passwd, passwd2 => $passwd2 }, error => \@error) and push @error, "Password changed.";
            }
        }

    Here's part of the sub update_benutzer in the own_perl_module:

        if ($dat->{passwd} ne '') {
            my $username = $dat->{username} || $select->{username};
            my $system = "./chpasswd.pl '$username' '$dat->{passwd}'" . (defined($dat->{oldpasswd}) ? " '$dat->{oldpasswd}'" : undef);
            my $answer = `$system`;
            if ($? != 0) {
                chomp($answer);
                push @$error, $answer || "error changing password ($?)";

    Here's chpasswd.pl:

        #!/usr/bin/perl
        use FileHandle;
        use IPC::Open3;

        local $username = shift;
        local $passwd = shift;
        local $oldpasswd = shift;

        local $chat = {
            'Old Password: $' => sub { print POUT "$oldpasswd\n"; },
            'New password: $' => sub { print POUT "$passwd\n"; },
            'Re-enter new password: $' => sub { print POUT "$passwd\n"; },
            '(.*)\n$' => sub { print "$1\n"; exit 1; }
        };

        local $/ = \1;

        my $command;
        if (defined($oldpasswd)) {
            $command = "sudo -u '$username' /usr/bin/passwd";
        } else {
            $command = "sudo /usr/bin/passwd '$username'";
        }

        $pid = open3(\*POUT, \*PIN, \*PERR, $command) or die;

        my $buffer;
        LOOP: while($_ = <PERR>) {
            $buffer .= $_;
            foreach (keys(%$chat)) {
                if ($buffer =~ /$_/i) {
                    $buffer = undef;
                    &{$chat->{$_}};
                }
            }
        }
        exit;

    Could this somehow be adjusted to verify users, without changing user passwords?

    The second possibility I see: all pppoe connections are logged in the mysql database. If I could somehow retrieve the username (or uid) of the user connected by pppoe, this could be used to authenticate users. Users could only check their internet connections and costs when they are online (and thus paying money), but this could be tolerated. Here's a line of the script that inserts connections into the database:

        my $username = $ENV{PEERNAME};

    I thought it would be easy to use this variable, but $username always seems to be empty in test scripts (print $username). Any idea how to retrieve the user connected to the pppoe server? Sorry for the long question! Any help would be very much appreciated. :)

    Read the article

  • "Backup Intervals" in rsnapshot.conf?

    - by Patrick
    A simple question about rsnapshot. In order to perform daily backups I'm going to add lines to cron in my Ubuntu. Then, why do I also have these lines in the rsnapshot.conf?

        #########################################
        # BACKUP INTERVALS                      #
        # Must be unique and in ascending order #
        # i.e. hourly, daily, weekly, etc.      #
        #########################################
        interval  hourly  6
        interval  daily   7
        interval  weekly  4
        #interval monthly 3

    If I use cron, should I disable them? Thanks.

    P.S. I've just realized that in the crontab I still have "hourly" and "daily". Should I then uncomment only the one I use in the crontab? And what's the point of specifying hourly if it is already specified in cron? I'm a bit confused.

        # crontab -e
        0 */4 * * *   /usr/local/bin/rsnapshot hourly
        30 23 * * *   /usr/local/bin/rsnapshot daily

    Read the article

  • how to uninstall sphinx server

    - by misterjinx
    I installed Sphinx server from source yesterday and didn't think about configuring it properly, so I left the default installation directory (/usr/local). Today I realized that it created all of its directories inside that directory, instead of creating its own (as I wrongly thought). So today I started the installation again and explicitly specified the location where to install. After that I removed the previous directories from /usr/local that were created by the first installation (bin, etc, var). So I thought of giving it a test: I created the .conf file and wanted to run the indexer. But now the indexer always tries to find the .conf file in the old location (and obviously does not find anything), and when I specify where to find the .conf file it gives me errors for each of the configuration settings present. I know I did something wrong when I deleted those files manually, and that's why I'm asking you if you have any ideas how to correctly uninstall Sphinx. Thank you.
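    Two quick checks that usually narrow this down (a sketch; /opt/sphinx stands in for whatever --prefix was given to the second install):

        which indexer                      # make sure the indexer on the PATH is the newly installed binary
        /opt/sphinx/bin/indexer --config /opt/sphinx/etc/sphinx.conf --all   # bypass the compiled-in default config path

    If the old build tree is still around, running "sudo make uninstall" from it (when the generated Makefile provides that target) is the cleanest way to remove the remains of the first install.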

    Read the article

  • Duplicate domain names - .net and .com Create separate pages, or redirect?

    - by guisasso
    From an SEO point of view: this website has a good amount of traffic for a local business, but also ships some merchandise. While the .net domain (registered first) is associated with the local business (Google Places, Maps, etc...), the .com domain only redirects to the .net domain. Is it good, bad or okay to create a different page for the .com domain, for example, that would be pretty simple but would link to the 10 different categories of products that this company sells? I know links are good, so there's that, but what else is good, or bad? Thanks in advance!

    Read the article

  • Joomla Backend running slow on localhost

    - by boothe
    I made a local backup of a Joomla site a few months ago to test changes before updating the live site. Everything worked fine. Today I checked the local version after a while, but when I open the backend (/administrator) it takes a while until the site is loaded. I tried out different things and accidentally disconnected my network connection. After that, everything loads as fast as before. But when I reconnect the network connection the problem reappears. I am running Joomla 1.5.14 on XAMPP 1.7.0.

    Read the article

  • A little gem from MPN&ndash;FREE online course on Architectural Guidance for Migrating Applications to Windows Azure Platform

    - by Eric Nelson
    I know a lot of technical people who work in partners (ISVs, System Integrators etc). I know that virtually none of them would think of going to the Microsoft Partner Network (MPN) learning portal to find some deep and high quality technical content. Instead they would head to MSDN, Channel 9, msdev.com etc. I am one of those people :-) Hence imagine my surprise when I stumbled upon this little gem: Architectural Guidance for Migrating Applications to Windows Azure Platform (your company and hence your live id need to be a member of MPN – which is free to join). This is first class stuff – and represents about 4 hours which is really 8 if you stop and ponder :)

    Course Structure

    The course is divided into eight modules. Each module explores a different factor that needs to be considered as part of the migration process.

    Module 1: Introduction: This section provides an introduction to the training course, highlighting the values of the Windows Azure Platform for developers.

    Module 2: Dynamic Environment: This section goes into detail about the dynamic environment of the Windows Azure Platform. This session will explain the difference between current development states and the Windows Azure Platform environment, detail the functions of roles, and highlight development considerations to be aware of when working with the Windows Azure Platform.

    Module 3: Local State: This session details the local state of the Windows Azure Platform. This section details the different types of storage within the Windows Azure Platform (Blobs, Tables, Queues, and SQL Azure). The training will provide technical guidance on local storage usage, how to write to blobs, how to effectively use table storage, and other authorization methods.

    Module 4: Latency and Timeouts: This session goes into detail explaining the considerations surrounding latency, timeouts and how to assess an IT portfolio.

    Module 5: Transactions and Bandwidth: This session details the performance metrics surrounding transactions and bandwidth in the Windows Azure Platform environment. This session will detail the transactions and bandwidth costs involved with the Windows Azure Platform and mitigation techniques that can be used to properly manage those costs.

    Module 6: Authentication and Authorization: This session details authentication and authorization protocols within the Windows Azure Platform. This session will detail information around web methods of authorization, web identification, Access Control benefits, and a walkthrough of Windows Identity Foundation.

    Module 7: Data Sensitivity: This session details data considerations that users and developers will experience when placing data into the cloud. This section of the training highlights these concerns, and details the strategies that developers can take to increase the security of their data in the cloud.

    Module 8: Summary: Provides an overall review of the course.

    Read the article

  • Should I sacrifice code succinctness to ensure the narrowest variable scope? [duplicate]

    - by David Scholefield
    This question already has an answer here: Is the usage of internal scope blocks within a function bad style? (3 answers)

    In many languages (e.g. both Perl and Java - the two languages I work with most) it is possible to narrow the scope of local variables by declaring them within a block. Although it adds extra code length (the opening and closing block braces) and possibly reduces readability, should I create blocks purely to narrow the scope of variables to the statements that use them and to uphold the principle of narrowest scope, or does this sacrifice succinctness and readability just to unnecessarily uphold an agreed 'best practice' principle? I usually declare local variables at the start of a function/method to aid readability, but I could instead forgo that and create blocks throughout the function, declaring the variables within those blocks to narrow their scope.

    Read the article

  • RAID1: can't replace faulty spare (marked again as 'faulty spare' within seconds)

    - by user212475
    I got a problem that I cannot solve: our fileserver runs XUbuntu and 3 RAID1s. One has had a problem since Monday: it consists of sdb and sdc. sdb was marked as faulty by mdadm for unknown reasons. I used --remove to remove it from the RAID and then --add to add it back. All was fine, re-syncing started but never got above 0%, and after a few seconds sdb was again marked as 'faulty spare' (and therefore the RAID degraded, but clean). So I saved the first 512 bytes of the old sdb to a file, bought a new HDD of the same size (4TB), shut down the computer, replaced sdb physically, switched the computer back on and wrote the 512 bytes back to the new drive to have the same partition info as the old drive (both are the same type, from the same company). But the new drive shows the same behaviour as the old: I can add it, re-syncing starts, and after a few seconds it is marked as 'faulty spare'. Here is exactly what I did:

        mdadm --remove /dev/md/1 /dev/sdb
        mdadm --detail /dev/md/1

    gives me:

        /dev/md/1:
            Version : 1.2
            Creation Time : Sat Jun 8 22:32:05 2013
            Raid Level : raid1
            Array Size : 3906887360 (3725.90 GiB 4000.65 GB)
            Used Dev Size : 3906887360 (3725.90 GiB 4000.65 GB)
            Raid Devices : 2
            Total Devices : 1
            Persistence : Superblock is persistent

            Update Time : Thu Nov 7 06:56:13 2013
            State : clean, degraded
            Active Devices : 1
            Working Devices : 1
            Failed Devices : 0
            Spare Devices : 0

            Name : File-Server:1 (local to host File-Server)
            UUID : 44ed561f:b733e946:e69820f4:aba9b223
            Events : 2424

            Number   Major   Minor   RaidDevice   State
            0        0       0       0            removed
            1        8       32      1            active sync   /dev/sdc

        mdadm --add /dev/md/1 /dev/sdb
        mdadm --detail /dev/md/1

    gives me:

            Version : 1.2
            Creation Time : Sat Jun 8 22:32:05 2013
            Raid Level : raid1
            Array Size : 3906887360 (3725.90 GiB 4000.65 GB)
            Used Dev Size : 3906887360 (3725.90 GiB 4000.65 GB)
            Raid Devices : 2
            Total Devices : 2
            Persistence : Superblock is persistent

            Update Time : Thu Nov 7 06:57:49 2013
            State : clean, degraded, recovering
            Active Devices : 1
            Working Devices : 1
            Failed Devices : 1
            Spare Devices : 0

            Rebuild Status : 0% complete

            Name : File-Server:1 (local to host File-Server)
            UUID : 44ed561f:b733e946:e69820f4:aba9b223
            Events : 2431

            Number   Major   Minor   RaidDevice   State
            2        8       16      0            faulty spare rebuilding   /dev/sdb
            1        8       32      1            active sync   /dev/sdc

    and after a few seconds:

        /dev/md/1:
            Version : 1.2
            Creation Time : Sat Jun 8 22:32:05 2013
            Raid Level : raid1
            Array Size : 3906887360 (3725.90 GiB 4000.65 GB)
            Used Dev Size : 3906887360 (3725.90 GiB 4000.65 GB)
            Raid Devices : 2
            Total Devices : 2
            Persistence : Superblock is persistent

            Update Time : Thu Nov 7 06:57:50 2013
            State : clean, degraded
            Active Devices : 1
            Working Devices : 1
            Failed Devices : 1
            Spare Devices : 0

            Name : File-Server:1 (local to host File-Server)
            UUID : 44ed561f:b733e946:e69820f4:aba9b223
            Events : 2436

            Number   Major   Minor   RaidDevice   State
            0        0       0       0            removed
            1        8       32      1            active sync   /dev/sdc
            2        8       16      -            faulty spare   /dev/sdb

    Same behaviour if I zero the superblock (mdadm --zero-superblock /dev/sdb) before adding sdb. I run all commands as root, and the system holds 3 more 4TB drives, i.e. the mainboard can handle them. The old hard drive was checked for errors using badblocks, but all is fine. Does anybody have any idea what the problem is?
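    When a freshly added member flips straight back to 'faulty spare' within seconds, the kernel log usually names the real cause (often a failing SATA port or cable rather than the disk itself). A sketch of the checks, assuming the same device names as above:

        dmesg | tail -n 50              # look for ata link resets or I/O errors mentioning sdb around the time of the --add
        sudo smartctl -a /dev/sdb       # SMART health of the new drive (smartmontools package)
        cat /proc/mdstat                # array state while the rebuild is being attempted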

    Read the article

  • How to use PostgreSQL on AWS - Ubuntu 11.10

    - by That1Guy
    I'm extremely new to cloud computing, Linux, and PostgreSQL, so if this is a stupid question, I apologize. I've managed to create an m1.large instance running Ubuntu 11.10, connect via PuTTY SSH, and install PostgreSQL (sudo apt-get install postgresql), but that is as far as I've gotten. My goal is to run several Python web-scraping scripts that I've written on this instance (so as not to eat up all of our bandwidth (smaller company at the moment)), insert the scraped data into a PostgreSQL table on the instance, and later retrieve that data to store on our local server (as I've heard AWS EBS is unreliable and I don't want to take chances). How can I configure PostgreSQL on my AWS instance? How can I access the data from my machine? I currently use pgAdmin III to manage PostgreSQL on our local server. Can I use this same interface to manage PostgreSQL on my AWS instance? Any suggestions, solutions, links, etc. are greatly appreciated. And again, if this is a dumb question, I apologize.
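    A minimal sketch of the server-side setup for remote access (paths assume the PostgreSQL 9.1 package that Ubuntu 11.10 ships; the network 203.0.113.0/24 is a placeholder for your office's public IP range):

        sudo sed -i "s/^#listen_addresses.*/listen_addresses = '*'/" /etc/postgresql/9.1/main/postgresql.conf
        echo "host  all  all  203.0.113.0/24  md5" | sudo tee -a /etc/postgresql/9.1/main/pg_hba.conf
        sudo service postgresql restart

    With TCP port 5432 also opened to that range in the instance's AWS security group, pgAdmin III can connect to the instance's public DNS name just as it does to the local server.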

    Read the article
