Search Results

Search found 8389 results on 336 pages for 'shared calendar'.


  • PHP pages working slow from time to time

    - by user1038179
    I have a VPS with a limit of 2 GB of RAM and 8 CPU cores. I have 5 sites on that VPS (one of them is just for testing, with no visitors except me). All 5 sites are image galleries, like wallpaper sites. Last week I noticed a problem on one site (the main domain, used for the name servers, and also the one with the most traffic and visitors). That site has two image galleries: one is an old static HTML gallery made a few years ago, and the other, the main one, is powered by the ZenPhoto CMS. I also have that same gallery CMS on another two sites on that same VPS (one running site and one testing-only site). On the other two sites I have a different PHP-driven gallery.

    The problem is that after some time (it varies from 10 minutes to a few hours after an Apache restart), loading of pages on the main site becomes very slow, or I get a 503 Service Temporarily Unavailable error, so the pages become unavailable. But that only affects the part with the new CMS gallery; the old part of the site with static HTML pages keeps working fast and just fine. The other two sites with the same CMS gallery, and the other two with the different PHP-driven gallery, also keep working fine and fast at the same time. I thought it must be something with the CMS on that main site, because the other sites were working nicely. Then I tried to open the contact and guest book pages on that main site, which are outside of that CMS but are also PHP pages, and they did not load either, yet those same contact PHP scripts were working on the other sites at the same time. So, when the site starts to hang, ONLY PHP-generated content stops working; like I said, the static pages keep working. And ONLY that one main site has problems.

    Then I need to restart Apache; after a restart everything works nicely and fast for some time, and then again just the PHP pages on the main site become slower. If I do not restart Apache, the slowness lasts for some time (several minutes or hours, depending on traffic), and during that time PHP-driven content loads very slowly or is unavailable on that site. After some time everything briefly starts to work and is fast again for a while, and then it repeats. In hours with more traffic the PHP content loads slowly or is unavailable; in hours with less traffic it is sometimes fast and sometimes a little slower than usual. And once again: only on that main site, and only the PHP-driven pages; the static pages are fast even in the busiest hours, and the other sites, even the ones with the same CMS, stay fast as well.

    Currently I have about 7,000 unique visitors per day on that site, but the site worked nicely even with 11,500 visitors per day. There are about 17,000 total visitors per day on the VPS across all sites (about 3 pages per unique visitor).
    When the site starts to slow down, I can sometimes see something like this in the Apache status:

    mod_fcgid status: Total FastCGI processes: 37
    Process: php5 (/usr/local/cpanel/cgi-sys/php5)
    Pid Active Idle Accesses State
    11300 39 28 7 Working
    11274 47 28 7 Working
    11296 40 29 3 Working
    11283 45 30 3 Working
    11304 36 31 1 Working
    11282 46 32 3 Working
    11292 42 33 1 Working
    11289 44 34 1 Working
    11305 35 35 0 Working
    11273 48 36 2 Working
    11280 47 39 1 Working
    10125 133 40 12 Exiting(communication error)
    11294 41 41 1 Exiting(communication error)
    11277 47 42 2 Exiting(communication error)
    11291 43 43 1 Exiting(communication error)
    10187 108 43 10 Exiting(communication error)
    10209 95 44 7 Exiting(communication error)
    10171 113 44 5 Exiting(communication error)
    11275 47 47 1 Exiting(communication error)
    10144 125 48 8 Exiting(communication error)
    10086 149 48 20 Exiting(communication error)
    10212 94 49 5 Exiting(communication error)
    10158 118 49 5 Exiting(communication error)
    10169 114 50 4 Exiting(communication error)
    10105 141 50 16 Exiting(communication error)
    10094 146 50 15 Exiting(communication error)
    10115 139 51 17 Exiting(communication error)
    10213 93 51 9 Exiting(communication error)
    10197 103 51 7 Exiting(communication error)
    Process: php5 (/usr/local/cpanel/cgi-sys/php5)
    Pid Active Idle Accesses State
    7983 1079 2 149 Ready
    7979 1079 11 151 Ready
    Process: php5 (/usr/local/cpanel/cgi-sys/php5)
    Pid Active Idle Accesses State
    7990 1066 0 57 Ready
    8001 1031 64 35 Ready
    7999 1032 94 29 Ready
    8000 1031 91 36 Ready
    8002 1029 34 52 Ready
    Process: php5 (/usr/local/cpanel/cgi-sys/php5)
    Pid Active Idle Accesses State
    7991 1064 29 115 Ready

    When it is working nicely there are no lines with "Exiting(communication error)". Active and Idle are the time active and the time since the last request, in seconds.

    Here is the system info:
    Total processors: 8
    Processor #1: Vendor GenuineIntel, Name Intel(R) Xeon(R) CPU E5440 @ 2.83GHz, Speed 88.320 MHz, Cache 6144 KB. All other seven are the same.
    System Information: Linux vps.nnnnnnnnnnnnnnnnn.nnn 2.6.18-028stab099.3 #1 SMP Wed Mar 7 15:20:22 MSK 2012 x86_64 x86_64 x86_64 GNU/Linux

    Current Memory Usage
    total used free shared buffers cached
    Mem: 8388608 882164 7506444 0 0 0
    -/+ buffers/cache: 882164 7506444
    Swap: 0 0 0
    Total: 8388608 882164 7506444

    Current Disk Usage
    Filesystem Size Used Avail Use% Mounted on
    /dev/vzfs 100G 34G 67G 34% /
    none

    System Details:
    Running on: Apache/2.2.22
    System info: (Unix) mod_ssl/2.2.22 OpenSSL/0.9.8e-fips-rhel5 DAV/2 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 mod_fcgid/2.3.6
    Powered by: PHP/5.3.10

    Current Configuration
    Default PHP Version (.php files): 5
    PHP 5 Handler: fcgi
    PHP 4 Handler: suphp
    Apache suEXEC: on
    Apache Ruid2: off

    Apache Configuration - the following settings have been saved:
    fileetag: All
    keepalive: On
    keepalivetimeout: 3
    maxclients: 150
    maxkeepaliverequests: 10
    maxrequestsperchild: 10000
    maxspareservers: 10
    minspareservers: 5
    root_options: ExecCGI, FollowSymLinks, Includes, IncludesNOEXEC, Indexes, MultiViews, SymLinksIfOwnerMatch
    serverlimit: 256
    serversignature: Off
    servertokens: Full
    sslciphersuite: ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:-LOW:-SSLv2:-EXP:!kEDH
    startservers: 5
    timeout: 30

    I hope I explained my problem clearly. Any help would be nice.
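    The "Exiting(communication error)" states in the mod_fcgid status, together with processes that have been active for 90-150 seconds, usually point at the fcgid process-management limits (idle/busy/IO timeouts, request caps) rather than at the CMS code itself. A sketch of the directives involved follows; the directive names are standard mod_fcgid 2.3.x options, but every value below is an illustrative guess, not taken from this server:

        # Illustrative mod_fcgid tuning only - adjust values to the VPS and measure again
        <IfModule mod_fcgid.c>
            FcgidMaxProcesses          120   # cap on PHP processes across all sites
            FcgidMaxProcessesPerClass   30   # per wrapper/vhost, so one site cannot exhaust the pool
            FcgidIdleTimeout            60   # reap idle processes sooner
            FcgidProcessLifeTime      3600   # recycle long-lived processes
            FcgidIOTimeout             120   # how long Apache waits for the PHP process to answer
            FcgidBusyTimeout           300   # grace period before a busy process is killed
            FcgidMaxRequestsPerProcess 500   # keep in sync with PHP_FCGI_MAX_REQUESTS in the wrapper
        </IfModule>

    If FcgidIOTimeout/FcgidBusyTimeout end up lower than the slowest PHP requests on the gallery (image resizing, for example), Apache can give up on those processes, which shows up as exactly this kind of communication error while the remaining slots queue up.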


  • How to Add, Edit and Display one to many relationship entities in ASP.Net MVC 2?

    - by Picflight
    I am looking for best practices conforming to the MVC design pattern. My Entities have the following relationship. tblPortal PortalId PrortalName tblPortalAlias AliasId PortalId HttpAlias Each Portal can have many PortalAlias. I want to Add a New Portal and then Add the associated PortalAlias. I am confused on how I should structure the Views and how I should present the Views to the user. I am looking for some sample code on how to accomplish this. My thoughts are first present the Portal View, let the user add the Portal. Then click the Edit link on the Portal List View and on the Portal Edit View let them Add the PortalAlias. If so, what should the Edit View look like? So far I have: Edit <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<MyProject.Mvc.Models.PortalFormViewModel>" %> <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server"> Edit </asp:Content> <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> <h2>Edit</h2> <% Html.RenderPartial("PortalForm", Model); %> <div> <%= Html.ActionLink("Back to List", "Index") %> </div> </asp:Content> PortalForm <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<MyProject.Mvc.Models.PortalFormViewModel>" %> <%= Html.ValidationSummary("Please correct the errors and try again.") %> <% using (Html.BeginForm()) {%> <%= Html.ValidationSummary(true) %> <fieldset> <legend>Fields</legend> <div class="editor-label"> <%= Html.LabelFor(model => model.Portal.PortalId) %> </div> <div class="editor-field"> <%= Html.TextBoxFor(model => model.Portal.PortalId) %> <%= Html.ValidationMessageFor(model => model.Portal.PortalId) %> </div> <div class="editor-label"> <%= Html.LabelFor(model => model.Portal.PortalName) %> </div> <div class="editor-field"> <%= Html.TextBoxFor(model => model.Portal.PortalName) %> <%= Html.ValidationMessageFor(model => model.Portal.PortalName) %> </div> <p> <input type="submit" value="Save" /> </p> </fieldset> <% } %> Alias<br /><%-- This display is for debug --%> <% foreach (var item in Model.PortalAlias) { %> <%= item.HTTPAlias %><br /> <% } %> PortalFormViewModel public class PortalFormViewModel { public Portal Portal { get; private set; } public IEnumerable<PortalAlias> PortalAlias { get; private set; } public PortalFormViewModel() { Portal = new Portal(); } public PortalFormViewModel(Portal portal) { Portal = portal; PortalAlias = portal.PortalAlias; } }
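    One common way to handle "add the PortalAlias rows on the Edit view" in MVC 2 is to keep posting them with indexed names and let the default model binder materialise them as a list. The sketch below is not the poster's code; "repository" and its methods are hypothetical placeholders:

        // Sketch only - "repository" and its methods are hypothetical.
        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Edit(int id, FormCollection form)
        {
            var portal = repository.GetPortal(id);       // existing Portal with its aliases
            UpdateModel(portal, "Portal");               // scalar Portal fields from the form

            var postedAliases = new List<PortalAlias>();
            UpdateModel(postedAliases, "PortalAlias");   // binds PortalAlias[0].HttpAlias, PortalAlias[1]..., etc.

            foreach (var alias in postedAliases)
            {
                alias.PortalId = portal.PortalId;        // keep the FK consistent
                portal.PortalAlias.Add(alias);           // existing rows would instead be matched on AliasId
            }

            repository.Save();
            return RedirectToAction("Index");
        }

    On the Edit view this assumes alias inputs named PortalAlias[0].HttpAlias, PortalAlias[1].HttpAlias and so on, which is the wire format the default binder expects for lists.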


  • How to install PyQt on Mac OS X 10.6.

    - by Jebagnanadas
    Hello all, I'm quite new to Mac OS X. When I tried to install PyQt on Mac OS X after installing Python 3.1, Qt 4.6.2 and SIP 4.10.1, I encountered the following error when I executed the $ python3 configure.py command:

    Determining the layout of your Qt installation...
    This is the GPL version of PyQt 4.7 (licensed under the GNU General Public License) for Python 3.1 on darwin.
    Type '2' to view the GPL v2 license. Type '3' to view the GPL v3 license. Type 'yes' to accept the terms of the license. Type 'no' to decline the terms of the license.
    Do you accept the terms of the license? yes
    Checking to see if the QtGui module should be built...
    Checking to see if the QtHelp module should be built...
    Checking to see if the QtMultimedia module should be built...
    Checking to see if the QtNetwork module should be built...
    Checking to see if the QtOpenGL module should be built...
    Checking to see if the QtScript module should be built...
    Checking to see if the QtScriptTools module should be built...
    Checking to see if the QtSql module should be built...
    Checking to see if the QtSvg module should be built...
    Checking to see if the QtTest module should be built...
    Checking to see if the QtWebKit module should be built...
    Checking to see if the QtXml module should be built...
    Checking to see if the QtXmlPatterns module should be built...
    Checking to see if the phonon module should be built...
    Checking to see if the QtAssistant module should be built...
    Checking to see if the QtDesigner module should be built...
    Qt v4.6.2 free edition is being used.
    Qt is built as a framework.
    SIP 4.10.1 is being used.
    The Qt header files are in /usr/include.
    The shared Qt libraries are in /Library/Frameworks.
    The Qt binaries are in /Developer/Tools/Qt.
    The Qt mkspecs directory is in /usr/local/Qt4.6.
    These PyQt modules will be built: QtCore.
    The PyQt Python package will be installed in /Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/site-packages.
    PyQt is being built with generated docstrings.
    PyQt is being built with 'protected' redefined as 'public'.
    The Designer plugin will be installed in /Developer/Applications/Qt/plugins/designer.
    The PyQt .sip files will be installed in /Library/Frameworks/Python.framework/Versions/3.1/share/sip/PyQt4.
    pyuic4, pyrcc4 and pylupdate4 will be installed in /Library/Frameworks/Python.framework/Versions/3.1/bin.
    Generating the C++ source for the QtCore module...
    sip: Usage: sip [-h] [-V] [-a file] [-b file] [-c dir] [-d file] [-e] [-g] [-I dir] [-j #] [-k] [-m file] [-o] [-p module] [-r] [-s suffix] [-t tag] [-w] [-x feature] [-z file] [file]
    Error: Unable to create the C++ code.

    Has anybody here installed PyQt on Mac OS X 10.6.2 successfully? Any help would be much appreciated. Thanks in advance.
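    The "sip: Usage: ..." output usually means configure.py ended up running a sip executable that does not understand the flags it was passed - typically an older sip found first on PATH, or one built against a different Python than the one running configure.py. A sketch of how that can be checked and SIP rebuilt for Python 3.1 (the source paths are illustrative placeholders):

        # Which sip will PyQt's configure.py find, and which version is it?
        which sip
        sip -V

        # Rebuild SIP with the same interpreter that will build PyQt (paths are placeholders)
        cd ~/Downloads/sip-4.10.1
        python3 configure.py
        make
        sudo make install

        # Then re-run PyQt's configure step with the same interpreter
        cd ~/Downloads/PyQt-mac-gpl-4.7
        python3 configure.py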


  • Am I going about this the right way?

    - by Psytronic
    Hey Guys, I'm starting a WPF project, and just finished the base of the UI, it seems very convoluted though, so I'm not sure if I've gone around laying it out in the right way. I don't want to get to start developing the back-end and realise that I've done the front wrong, and make life harder for myself. Coming from a background of <DIV's and CSS to style this is a lot different, and really want to get it right from the start. Essentially it's a one week calendar (7 days, Mon-Sunday, defaulting to the current week.) Which will eventually link up to a DB and if I have an appointment for something on this day it will show it in the relevant day. I've opted for a Grid rather than ListView because of the way it will work I will not be binding the results to a collection or anything along those lines. Rather I will be filling out a Combo box within the canvas for each day (yet to be placed in the code) for each event and on selection it will show me further details. XAML: <Window x:Class="WOW_Widget.Window1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:s="clr-namespace:System;assembly=mscorlib" xmlns:Extensions="clr-namespace:WOW_Widget" DataContext="{Binding RelativeSource={RelativeSource Self}}" Title="Window1" Height="239" Width="831" <Window.Resources <LinearGradientBrush x:Key="NormalBrush" StartPoint="0,0" EndPoint="0,1" <GradientBrush.GradientStops <GradientStopCollection <GradientStop Offset="1.0" Color="White"/ <GradientStop Offset="0.0" Color="LightSlateGray"/ </GradientStopCollection </GradientBrush.GradientStops </LinearGradientBrush <LinearGradientBrush x:Key="grdDayHeader" StartPoint="0,0" EndPoint="0,1" <GradientBrush.GradientStops <GradientStopCollection <GradientStop Offset="0.0" Color="Peru" / <GradientStop Offset="1.0" Color="White" / </GradientStopCollection </GradientBrush.GradientStops </LinearGradientBrush <LinearGradientBrush x:Key="grdToday" StartPoint="0,0" EndPoint="0,1" <GradientBrush.GradientStops <GradientStopCollection <GradientStop Offset="0.0" Color="LimeGreen"/ <GradientStop Offset="1.0" Color="DarkGreen" / </GradientStopCollection </GradientBrush.GradientStops </LinearGradientBrush <Style TargetType="{x:Type GridViewColumnHeader}" <Setter Property="Background" Value="Khaki" / </Style <Style x:Key="DayHeader" TargetType="{x:Type Label}" <Setter Property="Background" Value="{StaticResource grdDayHeader}" / <Setter Property="Width" Value="111" / <Setter Property="Height" Value="25" / <Setter Property="HorizontalContentAlignment" Value="Center" / </Style <Style x:Key="DayField" <Setter Property="Canvas.Width" Value="111" / <Setter Property="Canvas.Height" Value="60" / <Setter Property="Canvas.Background" Value="White" / </Style <Style x:Key="Today" <Setter Property="Canvas.Background" Value="{StaticResource grdToday}" / </Style <Style x:Key="CalendarColSpacer" <Setter Property="Canvas.Width" Value="1" / <Setter Property="Canvas.Background" Value="Black" / </Style <Style x:Key="CalendarRowSpacer" <Setter Property="Canvas.Height" Value="1" / <Setter Property="Canvas.Background" Value="Black" / </Style </Window.Resources <Grid Background="{StaticResource NormalBrush}" <Border BorderBrush="Black" BorderThickness="1" Width="785" Height="86" Margin="12,12,12,104" <Canvas Height="86" Width="785" VerticalAlignment="Top" <Grid <Grid.ColumnDefinitions <ColumnDefinition / <ColumnDefinition / <ColumnDefinition / <ColumnDefinition / <ColumnDefinition / <ColumnDefinition / <ColumnDefinition / 
<ColumnDefinition / <ColumnDefinition / <ColumnDefinition / <ColumnDefinition / <ColumnDefinition / <ColumnDefinition / </Grid.ColumnDefinitions <Grid.RowDefinitions <RowDefinition / <RowDefinition / <RowDefinition / </Grid.RowDefinitions <Label Grid.Column="0" Grid.Row="0" Content="Monday" Style="{StaticResource DayHeader}" / <Canvas Grid.Column="1" Grid.RowSpan="3" Grid.Row="0" Style="{StaticResource CalendarColSpacer}" / <Label Grid.Column="2" Grid.Row="0" Content="Tuesday" Style="{StaticResource DayHeader}" / <Canvas Grid.Column="3" Grid.RowSpan="3" Grid.Row="0" Style="{StaticResource CalendarColSpacer}" / <Label Grid.Column="4" Grid.Row="0" Content="Wednesday" Style="{StaticResource DayHeader}" / <Canvas Grid.Column="5" Grid.RowSpan="3" Grid.Row="0" Style="{StaticResource CalendarColSpacer}" / <Label Grid.Column="6" Grid.Row="0" Content="Thursday" Style="{StaticResource DayHeader}" / <Canvas Grid.Column="7" Grid.RowSpan="3" Grid.Row="0" Style="{StaticResource CalendarColSpacer}" / <Label Grid.Column="8" Grid.Row="0" Content="Friday" Style="{StaticResource DayHeader}" / <Canvas Grid.Column="9" Grid.RowSpan="3" Grid.Row="0" Style="{StaticResource CalendarColSpacer}" / <Label Grid.Column="10" Grid.Row="0" Content="Saturday" Style="{StaticResource DayHeader}" / <Canvas Grid.Column="11" Grid.RowSpan="3" Grid.Row="0" Style="{StaticResource CalendarColSpacer}" / <Label Grid.Column="12" Grid.Row="0" Content="Sunday" Style="{StaticResource DayHeader}" / <Canvas Grid.Column="0" Grid.ColumnSpan="13" Grid.Row="1" Style="{StaticResource CalendarRowSpacer}" / <Canvas Grid.Column="0" Grid.Row="2" Margin="0" Style="{StaticResource DayField}" <Label Name="lblMondayDate" / </Canvas <Canvas Grid.Column="2" Grid.Row="2" Margin="0" Style="{StaticResource DayField}" <Label Name="lblTuesdayDate" / </Canvas <Canvas Grid.Column="4" Grid.Row="2" Margin="0" Style="{StaticResource DayField}" <Label Name="lblWednesdayDate" / </Canvas <Canvas Grid.Column="6" Grid.Row="2" Margin="0" Style="{StaticResource DayField}" <Label Name="lblThursdayDate" / </Canvas <Canvas Grid.Column="8" Grid.Row="2" Margin="0" Style="{StaticResource DayField}" <Label Name="lblFridayDate" / </Canvas <Canvas Grid.Column="10" Grid.Row="2" Margin="0" Style="{StaticResource DayField}" <Label Name="lblSaturdayDate" / </Canvas <Canvas Grid.Column="12" Grid.Row="2" Margin="0" Style="{StaticResource DayField}" <Label Name="lblSundayDate" / </Canvas </Grid </Canvas </Border <Canvas Height="86" HorizontalAlignment="Right" Margin="0,0,12,12" Name="canvas1" VerticalAlignment="Bottom" Width="198"</Canvas </Grid </Window CS: public partial class Window1 : Window { private DateTime today = new DateTime(); private Label[] Dates = new Label[7]; public Window1() { DateTime start = today = DateTime.Now; int day = (int)today.DayOfWeek; while (day != 1) { start = start.Subtract(new TimeSpan(1, 0, 0, 0)); day--; } InitializeComponent(); Dates[0] = lblMondayDate; Dates[1] = lblTuesdayDate; Dates[2] = lblWednesdayDate; Dates[3] = lblThursdayDate; Dates[4] = lblFridayDate; Dates[5] = lblSaturdayDate; Dates[6] = lblSundayDate; FillWeek(start); } private void FillWeek(DateTime start) { for (int d = 0; d < Dates.Length; d++) { TimeSpan td = new TimeSpan(d, 0, 0, 0); DateTime _day = start.Add(td); if (_day.Date == today.Date) { Canvas dayCanvas = (Canvas)Dates[d].Parent; dayCanvas.Style = (Style)this.Resources["Today"]; } Dates[d].Content = (int)start.Add(td).Day; } } } Thanks for any tips you guys can give Psytronic
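    One small point independent of the Grid-versus-ListView question: the while loop that walks back to Monday never terminates when the app is started on a Sunday (day starts at 0 and only decreases), and it can be replaced with plain DayOfWeek arithmetic. A sketch of the equivalent, assuming the same FillWeek method as above:

        // Monday of the current week without the loop; DayOfWeek.Sunday == 0, Monday == 1
        DateTime today = DateTime.Now;
        int daysSinceMonday = ((int)today.DayOfWeek + 6) % 7;   // Monday -> 0, ..., Sunday -> 6
        DateTime weekStart = today.Date.AddDays(-daysSinceMonday);
        FillWeek(weekStart);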


  • Multiple "pages" in GWT with human friendly URLs

    - by Andreas Borglin
    Hi. I'm playing with a GWT/GAE project which will have three different "pages", although it is not really pages in a GWT sense. The top views (one for each page) will have completely different layouts, but some of the widgets will be shared. One of the pages is the main page which is loaded by the default url (http://www.site.com), but the other two needs additional URL information to differentiate the page type. They also need a name parameter, (like http://www.site.com/project/project-name. There are at least two solutions to this that I'm aware of. Use GWT history mechanism and let page type and parameters (such as project name) be part of the history token. Use servlets with url-mapping patterns (like /project/*) The first choice might seem obvious at first, but it has several drawbacks. First, a user should be able to easily remember and type URL directly to a project. It is hard to produce a human friendly URL with history tokens. Second, I'm using gwt-presenter and this approach would mean that we need to support subplaces in one token, which I'd rather avoid. Third, a user will typically stay at one page, so it makes more sense that the page information is part of the "static" URL. Using servlets solves all these problems, but also creates other ones. So my first questions is, what is the best solution here? If I would go for the servlet solution, new questions pop up. It might make sense to split the GWT app into three separate modules, each with an entry point. Each servlet that is mapped to a certain page would then simply forward the request to the GWT module that handles that page. Since a user typically stays at one page, the browser only needs to load the js for that page. Based on what I've read, this solution is not really recommended. I could also stick with one module, but then GWT needs to find out which page it should display. It could either query the server or parse the URL itself. If I stick with one GWT module, I need to keep the page information stored on server side. Naturally I thought about sessions, but I'm not sure if its a good idea to mix page information with user data. A session usually lives between user login and logout, but in this case it would need different behavior. Would it be bad practise to handle this via sessions? The one GWT module + servlet solution also leads to another problem. If a user goes from a project page to the main page, how will GWT know that this has happened? The app will not be reloaded, so it will be treated as a simple state change. It seems rather ineffecient to have to check page info for every state change. Anyone care to guide me out of the foggy darkness that surrounds me? :-)
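    For the history-token route, the token can still be kept fairly readable ("project/project-name") and parsed in one place with the stock com.google.gwt.user.client.History API. A sketch; the showMainPage/showProjectPage calls are placeholders for whatever presenter wiring already exists:

        import com.google.gwt.core.client.EntryPoint;
        import com.google.gwt.event.logical.shared.ValueChangeEvent;
        import com.google.gwt.event.logical.shared.ValueChangeHandler;
        import com.google.gwt.user.client.History;

        public class AppEntryPoint implements EntryPoint {
            public void onModuleLoad() {
                // Decide which "page" to show from the current history token.
                History.addValueChangeHandler(new ValueChangeHandler<String>() {
                    public void onValueChange(ValueChangeEvent<String> event) {
                        String token = event.getValue();   // e.g. "" or "project/project-name"
                        if (token.startsWith("project/")) {
                            showProjectPage(token.substring("project/".length()));
                        } else {
                            showMainPage();
                        }
                    }
                });
                History.fireCurrentHistoryState();   // also handle the token present in the URL at startup
            }

            private void showProjectPage(String name) { /* placeholder */ }
            private void showMainPage() { /* placeholder */ }
        }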


  • Apache mod_rewrite driving me mad

    - by WishCow
    The scenario I have a webhost that is shared among multiple sites, the directory layout looks like this: siteA/ - css/ - js/ - index.php siteB/ - css/ - js/ - index.php siteC/ . . . The DocumentRoot is at the top level, so, to access siteA, you type http://webhost/siteA in your browser, to access siteB, you type http://webhost/siteB, and so on. Now I have to deploy my own site, which was designed with having 3 VirtualHosts in mind, so my structure looks like this: siteD/ - sites/sitename.com/ - log/ - htdocs/ - index.php - sites/static.sitename.com - log/ - htdocs/ - css - js - sites/admin.sitename.com - log/ - htdocs/ - index.php As you see, the problem is that my index.php files are not at the top level directory, unlike the already existing sites on the webhost. Each VirtualHost should point to the corresponding htdocs/ folder: http://siteD.com -> siteD/sites/sitename.com/htdocs http://static.siteD.com -> siteD/sites/static.sitename.com/htdocs http://admin.siteD.com -> siteD/sites/admin.sitename.com/htdocs The problem I cannot have VirtualHosts on this host, so I have to emulate it somehow, possibly with mod_rewrite. The idea Have some predefined parts in all of the links on the site, that I can identify, and route accordingly to the correct file, with mod_rewrite. Examples: http://webhost/siteD/static/js/something.js -> siteD/sites/static.sitename.com/htdocs/js/something.js http://webhost/siteD/static/css/something.css -> siteD/sites/static.sitename.com/htdocs/css/something.css http://webhost/siteD/admin/something -> siteD/sites/admin.sitename.com/htdocs/index.php http://webhost/siteD/admin/sub/something -> siteD/sites/admin.sitename.com/htdocs/index.php http://webhost/siteD/something -> siteD/sites/sitename.com/htdocs/index.php http://webhost/siteD/sub/something -> siteD/sites/sitename.com/htdocs/index.php Anything that starts with http://url/sitename/admin/(.*) will get rewritten, to point to siteD/sites/admin.sitename.com/htdocs/index.php Anything that starts with http://url/sitename/static/(.*) will get rewritten, to point to siteD/sites/static.sitename.com/htdocs/$1 Anything that starts with http://url/sitename/(.*) AND did not have a match already from above, will get rewritten to point to siteD/sites/sitename.com/htdocs/index.php The solution Here is the .htaccess file that I've come up with: RewriteEngine On RewriteBase / RewriteCond %{REQUEST_URI} ^/siteD/static/(.*)$ [NC] RewriteRule ^siteD/static/(.*)$ siteD/sites/static/htdocs/$1 [L] RewriteCond %{REQUEST_URI} ^/siteD/admin/(.*)$ [NC] RewriteRule ^siteD/(.*)$ siteD/sites/admin/htdocs/index.php [L,QSA] So far, so good. It's all working. Now to add the last rule: RewriteCond %{REQUEST_URI} ^/siteD/(.*)$ [NC] RewriteRule ^siteD/(.*)$ siteD/sites/public/htdocs/index.php [L,QSA] And it's broken. The last rule catches everything, even the ones that have static/ or admin/ in them. Why? Shouldn't the [L] flag stop the rewriting process in the first two cases? Why is the third case evaluated? Is there a better way of solving this? I'm not sticking to rewritemod, anything is fine as long as it does not need access to server-level config. I don't have access to RewriteLog, or anything like that. Please help :(
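    About the [L] flag: in per-directory (.htaccess) context it only ends the current pass; Apache then internally redirects to the rewritten URL and runs the whole rule set again, so on the second pass the catch-all matches the already-mapped siteD/sites/... path. The usual workaround is a guard rule that leaves mapped paths alone; a sketch using the poster's paths (on Apache 2.4 the [END] flag is another option):

        RewriteEngine On
        RewriteBase /

        # Anything already mapped into siteD/sites/ is final - do not rewrite it again
        RewriteRule ^siteD/sites/ - [L]

        RewriteRule ^siteD/static/(.*)$  siteD/sites/static/htdocs/$1         [L]
        RewriteRule ^siteD/admin(/.*)?$  siteD/sites/admin/htdocs/index.php   [L,QSA]
        RewriteRule ^siteD/(.*)$         siteD/sites/public/htdocs/index.php  [L,QSA]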


  • DRBD Not syncing between my nodes

    - by Mike Curry
    Some version info: Operating system is Ubuntu 11.10, on EC2, kernel is 3.0.0-16-virtual and the application info is: Version: 8.3.11 (api:88) GIT-hash: 0de839cee13a4160eed6037c4bddd066645e23c5 build by buildd@allspice, 2011-07-05 19:51:07 Getting some strange errors in dmesg (seen below) as well, there is no replication happening. I have made my first node primary and its showing: drbd driver loaded OK; device status: version: 8.3.11 (api:88/proto:86-96) srcversion: DA5A13F16DE6553FC7CE9B2 m:res cs ro ds p mounted fstype 0:r0 StandAlone Primary/Unknown UpToDate/DUnknown r----s ext3 my secondary node is showing: drbd driver loaded OK; device status: version: 8.3.11 (api:88/proto:86-96) srcversion: DA5A13F16DE6553FC7CE9B2 m:res cs ro ds p mounted fstype 0:r0 StandAlone Secondary/Unknown Inconsistent/DUnknown r----s Showing /proc/drbd on the master shows: version: 8.3.11 (api:88/proto:86-96) srcversion: DA5A13F16DE6553FC7CE9B2 0: cs:StandAlone ro:Primary/Unknown ds:UpToDate/DUnknown r----s ns:0 nr:0 dw:4 dr:1073 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:262135964 Showing /proc/drbd on the slave shows that there is nothing being transfered... version: 8.3.11 (api:88/proto:86-96) srcversion: DA5A13F16DE6553FC7CE9B2 0: cs:StandAlone ro:Secondary/Unknown ds:Inconsistent/DUnknown r----s ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:262135964 Here is my config... resource r0 { protocol C; startup { wfc-timeout 15; degr-wfc-timeout 60; } net { cram-hmac-alg sha1; shared-secret "test123; } on drbd01 { device /dev/drbd0; disk /dev/xvdm; address 23.XX.XX.XX:7788; # blocked out ip meta-disk internal; } on drbd02 { device /dev/drbd0; disk /dev/xvdm; address 184.XX.XX.XX:7788; #blocked out ip meta-disk internal; } } I have run the following on the master: sudo drbdadm -- --overwrite-data-of-peer primary all There is no firewall between the systems. Here is the dmesg with some errors: [2285172.969955] drbd: initialized. Version: 8.3.11 (api:88/proto:86-96) [2285172.969960] drbd: srcversion: DA5A13F16DE6553FC7CE9B2 [2285172.969962] drbd: registered as block device major 147 [2285172.969965] drbd: minor_table @ 0xffff88000276ea00 [2285173.000952] block drbd0: Starting worker thread (from drbdsetup [1300]) [2285173.003971] block drbd0: disk( Diskless -> Attaching ) [2285173.006150] block drbd0: No usable activity log found. [2285173.006154] block drbd0: Method to ensure write ordering: flush [2285173.006158] block drbd0: max BIO size = 4096 [2285173.006165] block drbd0: drbd_bm_resize called with capacity == 524271928 [2285173.008512] block drbd0: resync bitmap: bits=65533991 words=1023969 pages=2000 [2285173.008518] block drbd0: size = 250 GB (262135964 KB) [2285173.079566] block drbd0: bitmap READ of 2000 pages took 17 jiffies [2285173.081189] block drbd0: recounting of set bits took additional 1 jiffies [2285173.081194] block drbd0: 250 GB (65533991 bits) marked out-of-sync by on disk bit-map. 
[2285173.081203] block drbd0: Suspended AL updates [2285173.081210] block drbd0: disk( Attaching -> UpToDate ) [2285173.081214] block drbd0: attached to UUIDs 1C1291D39584C1D1:0000000000000004:0000000000000000:0000000000000000 [2285173.095016] block drbd0: conn( StandAlone -> Unconnected ) [2285173.095046] block drbd0: Starting receiver thread (from drbd0_worker [1301]) [2285173.099297] block drbd0: receiver (re)started [2285173.099304] block drbd0: conn( Unconnected -> WFConnection ) [2285173.099330] block drbd0: bind before connect failed, err = -99 [2285173.099346] block drbd0: conn( WFConnection -> Disconnecting ) [2285173.295788] block drbd0: Discarding network configuration. [2285173.295815] block drbd0: Connection closed [2285173.295826] block drbd0: conn( Disconnecting -> StandAlone ) [2285173.295840] block drbd0: receiver terminated [2285173.295844] block drbd0: Terminating drbd0_receiver Edit: Reading some other similar issues, it was suggested to do a 'drbdadm dump all', so I figured it couldn't hurt. ubuntu@drbd01:~$ drbdadm dump all /etc/drbd.conf:19: in resource r0, on drbd01: IP 23.XX.XX.XX not found on this host. and on slave: root@drbd02:~# drbdadm dump all /etc/drbd.conf:25: in resource r0, on drbd02: IP 184.XX.XX.XX not found on this host. Strange it doesn't find its own ip, however, this is an Amazon EC2 system using an elastic IP... here are my ipconfigs for both... master: ubuntu@drbd01:~$ ifconfig eth0 Link encap:Ethernet HWaddr 22:00:0a:1c:27:11 inet addr:10.28.39.17 Bcast:10.28.39.63 Mask:255.255.255.192 inet6 addr: fe80::2000:aff:fe1c:2711/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1569 errors:0 dropped:0 overruns:0 frame:0 TX packets:1169 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:124409 (124.4 KB) TX bytes:213601 (213.6 KB) Interrupt:26 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) slave: root@drbd02:~# ifconfig eth0 Link encap:Ethernet HWaddr 12:31:3f:00:14:9d inet addr:10.160.27.107 Bcast:10.160.27.255 Mask:255.255.254.0 inet6 addr: fe80::1031:3fff:fe00:149d/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:915 errors:0 dropped:0 overruns:0 frame:0 TX packets:774 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:75381 (75.3 KB) TX bytes:109673 (109.6 KB) Interrupt:26 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
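    The repeated "bind before connect failed, err = -99" is EADDRNOTAVAIL: DRBD is being asked to bind the elastic/public IPs from the config, but on EC2 those are NATed and never actually appear on eth0, which is also why drbdadm dump complains that the IP is "not found on this host". A sketch of the resource using the instances' private addresses instead (taken from the ifconfig output above), assuming the two instances can reach each other on those addresses and port 7788 is open in the security groups:

        resource r0 {
          protocol C;
          startup {
            wfc-timeout 15;
            degr-wfc-timeout 60;
          }
          net {
            cram-hmac-alg sha1;
            shared-secret "test123";        # note the closing quote
          }
          on drbd01 {
            device    /dev/drbd0;
            disk      /dev/xvdm;
            address   10.28.39.17:7788;     # private IP of drbd01
            meta-disk internal;
          }
          on drbd02 {
            device    /dev/drbd0;
            disk      /dev/xvdm;
            address   10.160.27.107:7788;   # private IP of drbd02
            meta-disk internal;
          }
        }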


  • Missing audio and problems playing FLV video converted from 720p .mov file with FFMPEG

    - by undefined
    I have some .mov video files recorded from a JVC GC-FM1 HD video camera in 720p mode. I have FFMPEG running on a Linux box that I upload files to and have them encoded into FLV format. The video appears to be encoding ok but there is no audio in the resulting FLV file and when I play it back in Flash Player in a browser or on Adobe Media Player, the video pauses at the start. It appears that Adobe Media Player waits for the progress bar to reach the end of the video before starting the playback - i.e. the video will load, the picture pauses, the progress bar seeks to the end as if the video was playing then when it reaches the end the video picture starts. There is no audio on the video. I am noticing this in the video player I have built with Flash 8 using an FLVPlayback component and attached seekBar. The seek bar will start moving as if the video is playing but the picture remains paused. Here are some outputs from my FFMPEG log and the command I am using to encode the video - my FFMPEG command called from PHP - $cmd = 'ffmpeg -i ' . $sourcelocation.$filename.".".$fileext . ' -ab 96k -b 700k -ar 44100 -s ' . $target['width'] . 'x' . $target['height'] . ' -ac 1 -acodec libfaac ' . $destlocation.$filename.$ext_trans .' 2>&1'; and here is the output from my error log - FFmpeg version UNKNOWN, Copyright (c) 2000-2010 Fabrice Bellard, et al. built on Jan 22 2010 11:31:03 with gcc 4.1.2 20070925 (Red Hat 4.1.2-33) configuration: --prefix=/usr --enable-static --enable-shared --enable-gpl --enable-nonfree --enable-postproc --enable-avfilter --enable-avfilter-lavf --enable-libfaac --enable-libfaad --enable-libfaadbin --enable-libgsm --enable-libmp3lame --enable-libvorbis --enable-libx264 libavutil 50. 7. 0 / 50. 7. 0 libavcodec 52.48. 0 / 52.48. 0 libavformat 52.47. 0 / 52.47. 0 libavdevice 52. 2. 0 / 52. 2. 0 libavfilter 1.17. 0 / 1.17. 0 libswscale 0. 9. 0 / 0. 9. 0 libpostproc 51. 2. 0 / 51. 2. 0 Seems stream 0 codec frame rate differs from container frame rate: 119.88 (120000/1001) -> 59.94 (60000/1001) Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'uploads/video/60974_v1.mov': Metadata: major_brand : qt minor_version : 0 compatible_brands: qt comment : JVC GC-FM1 comment-eng : JVC GC-FM1 Duration: 00:00:30.41, start: 0.000000, bitrate: 4158 kb/s Stream #0.0(eng): Video: h264, yuv420p, 640x480 [PAR 1:1 DAR 4:3], 4017 kb/s, 59.94 fps, 59.94 tbr, 90k tbn, 119.88 tbc Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16, 128 kb/s Output #0, rawvideo, to 'uploads/video/60974_v1.jpg': Stream #0.0(eng): Video: mjpeg, yuvj420p, 320x240 [PAR 1:1 DAR 4:3], q=2-31, 200 kb/s, 90k tbn, 59.94 tbc Stream mapping: Stream #0.0 -> #0.0 Press [q] to stop encoding [h264 @ 0x8e67930]B picture before any references, skipping [h264 @ 0x8e67930]decode_slice_header error [h264 @ 0x8e67930]no frame! Error while decoding stream #0.0 [h264 @ 0x8e67930]B picture before any references, skipping [h264 @ 0x8e67930]decode_slice_header error [h264 @ 0x8e67930]no frame! Error while decoding stream #0.0 frame= 1 fps= 0 q=3.8 Lsize= 15kB time=0.02 bitrate=7271.4kbits/s dup=482 drop=0 video:15kB audio:0kB global headers:0kB muxing overhead 0.000000% Which are the important errors here - B picture before any references, skipping? decode_slice_header error? no frame? or Seems stream 0 codec frame rate differs from container frame rate: 119.88 (120000/1001) - 59.94 (60000/1001) Any advice welcome, thanks
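    One thing worth noting: the log pasted above is from a thumbnail pass (the output is a single-frame mjpeg written to 60974_v1.jpg), so it cannot show what happens to the audio in the FLV pass; that command's own output would need checking. As a reference point, a sketch of an explicit FLV encode in the same spirit as the PHP-built command, using MP3 audio, which is the most conservative audio codec for the FLV container (paths, size and bitrates are placeholders):

        # Illustrative only - input/output paths, size and bitrates are placeholders
        ffmpeg -i input.mov \
               -f flv \
               -vcodec flv -b 700k -s 640x360 \
               -acodec libmp3lame -ab 96k -ar 44100 -ac 1 \
               output.flv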


  • ASP.NET MVC 2 - Saving child entities on form submit

    - by Justin
    Hey, I'm using ASP.NET MVC 2 and am struggling with saving child entities. I have an existing Invoice entity (which I create on a separate form) and then I have a LogHours view that I'd like to use to save InvoiceLog's, which are child entities of Invoice. Here's the view: <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<TothSolutions.Data.Invoice>" %> <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server"> Log Hours </asp:Content> <asp:Content ID="Content3" ContentPlaceHolderID="HeadContent" runat="server"> <script type="text/javascript"> $(document).ready(function () { $("#InvoiceLogs_0__Description").focus(); }); </script> </asp:Content> <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> <h2>Log Hours</h2> <% using (Html.BeginForm("SaveHours", "Invoices")) {%> <%: Html.ValidationSummary(true) %> <fieldset> <legend>Fields</legend> <table> <tr> <th>Date</th> <th>Description</th> <th>Hours</th> </tr> <% int index = 0; foreach (var log in Model.InvoiceLogs) { %> <tr> <td><%: log.LogDate.ToShortDateString() %></td> <td><%: Html.TextBox("InvoiceLogs[" + index + "].Description")%></td> <td><%: Html.TextBox("InvoiceLogs[" + index + "].Hours")%></td> <td>Hours</td> </tr> <% index++; } %> </table> <p> <%: Html.Hidden("InvoiceID") %> <%: Html.Hidden("CreateDate") %> <input type="submit" value="Save" /> </p> </fieldset> <% } %> <div> <%: Html.ActionLink("Back to List", "Index") %> </div> </asp:Content> And here's the controller code: //GET: /Secure/Invoices/LogHours/ public ActionResult LogHours(int id) { var invoice = DataContext.InvoiceData.Get(id); if (invoice == null) { throw new Exception("Invoice not found with id: " + id); } return View(invoice); } //POST: /Secure/Invoices/SaveHours/ [AcceptVerbs(HttpVerbs.Post)] public ActionResult SaveHours([Bind(Exclude = "InvoiceLogs")]Invoice invoice) { TryUpdateModel(invoice.InvoiceLogs, "InvoiceLogs"); invoice.UpdateDate = DateTime.Now; invoice.DeveloperID = DeveloperID; //attaching existing invoice. DataContext.InvoiceData.Attach(invoice); //save changes. DataContext.SaveChanges(); //redirect to invoice list. return RedirectToAction("Index"); } And the data access code: public static void Attach(Invoice invoice) { var i = new Invoice { InvoiceID = invoice.InvoiceID }; db.Invoices.Attach(i); db.Invoices.ApplyCurrentValues(invoice); } In the SaveHours action, it properly sets the values of the InvoiceLog entities after I call TryUpdateModel but when it does SaveChanges it doesn't update the database with the new values. Also, if you manually update the values of the InvoiceLog entries in the database and then go to this page it doesn't populate the textboxes so it's clearly not binding correctly. Thanks, Justin
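    A minimal sketch of one way to make the child rows persist, assuming DataContext.InvoiceData wraps an EF4 ObjectContext with an InvoiceLogs set (the set and key property names below are guesses based on the code above): attach a stub per log by key, then apply the posted values, which is the same pattern Attach() already uses for the Invoice itself.

        // Sketch - set and key property names (InvoiceLogs, InvoiceLogID) are assumptions.
        public static void AttachLogs(IEnumerable<InvoiceLog> logs)
        {
            foreach (var log in logs)
            {
                var stub = new InvoiceLog { InvoiceLogID = log.InvoiceLogID };
                db.InvoiceLogs.Attach(stub);             // attach by key only
                db.InvoiceLogs.ApplyCurrentValues(log);  // overwrite with the posted values
            }
        }

    For the posted values to include the keys in the first place, the form also needs hidden fields per row (for example InvoiceLogs[0].InvoiceLogID and InvoiceLogs[0].LogDate) next to the Description and Hours textboxes; otherwise the binder creates logs with empty keys and nothing matches an existing row.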


  • Login loop in Snow Leopard

    - by hgpc
    I can't get out of a login loop of a particular admin user. After entering the password the login screen is shown again after about a minute. Other users work fine. It started happening after a simple reboot. Can you please help me? Thank you! Tried to no avail: Change the password Remove the password Repair disk (no errors) Boot in safe mode Reinstall Snow Leopard and updating to 10.6.6 Remove content of ~/Library/Caches Removed content of ~/Library/Preferences Replaced /etc/authorization with Install DVD copy The system.log mentions a crash report. I'm including both below. system.log Jan 8 02:43:30 loginwindow218: Login Window - Returned from Security Agent Jan 8 02:43:30 loginwindow218: USER_PROCESS: 218 console Jan 8 02:44:42 kernel[0]: Jan 8 02:44:43: --- last message repeated 1 time --- Jan 8 02:44:43 com.apple.launchd[1] (com.apple.loginwindow218): Job appears to have crashed: Bus error Jan 8 02:44:43 com.apple.UserEventAgent-LoginWindow223: ALF error: cannot find useragent 1102 Jan 8 02:44:43 com.apple.UserEventAgent-LoginWindow223: plugin.UserEventAgentFactory: called with typeID=FC86416D-6164-2070-726F-70735C216EC0 Jan 8 02:44:43 /System/Library/CoreServices/loginwindow.app/Contents/MacOS/loginwindow233: Login Window Application Started Jan 8 02:44:43 SecurityAgent228: CGSShutdownServerConnections: Detaching application from window server Jan 8 02:44:43 com.apple.ReportCrash.Root232: 2011-01-08 02:44:43.936 ReportCrash232:2903 Saved crash report for loginwindow218 version ??? (???) to /Library/Logs/DiagnosticReports/loginwindow_2011-01-08-024443_localhost.crash Jan 8 02:44:44 SecurityAgent228: MIG: server died: CGSReleaseShmem : Cannot release shared memory Jan 8 02:44:44 SecurityAgent228: kCGErrorFailure: Set a breakpoint @ CGErrorBreakpoint() to catch errors as they are logged. Jan 8 02:44:44 SecurityAgent228: CGSDisplayServerShutdown: Detaching display subsystem from window server Jan 8 02:44:44 SecurityAgent228: HIToolbox: received notification of WindowServer event port death. Jan 8 02:44:44 SecurityAgent228: port matched the WindowServer port created in BindCGSToRunLoop Jan 8 02:44:44 loginwindow233: Login Window Started Security Agent Jan 8 02:44:44 WindowServer234: kCGErrorFailure: Set a breakpoint @ CGErrorBreakpoint() to catch errors as they are logged. Jan 8 02:44:44 com.apple.WindowServer234: Sat Jan 8 02:44:44 .local WindowServer234 <Error>: kCGErrorFailure: Set a breakpoint @ CGErrorBreakpoint() to catch errors as they are logged. Jan 8 02:44:54 SecurityAgent243: NSSecureTextFieldCell detected a field editor ((null)) that is not a NSTextView subclass designed to work with the cell. Ignoring... Crash report Process: loginwindow 218 Path: /System/Library/CoreServices/loginwindow.app/Contents/MacOS/loginwindow Identifier: loginwindow Version: ??? (???) 
Code Type: X86-64 (Native) Parent Process: launchd [1] Date/Time: 2011-01-08 02:44:42.748 +0100 OS Version: Mac OS X 10.6.6 (10J567) Report Version: 6 Exception Type: EXC_BAD_ACCESS (SIGBUS) Exception Codes: 0x000000000000000a, 0x000000010075b000 Crashed Thread: 0 Dispatch queue: com.apple.main-thread Thread 0 Crashed: Dispatch queue: com.apple.main-thread 0 com.apple.security 0x00007fff801c6e8b Security::ReadSection::at(unsigned int) const + 25 1 com.apple.security 0x00007fff801c632f Security::DbVersion::open() + 123 2 com.apple.security 0x00007fff801c5e41 Security::DbVersion::DbVersion(Security::AppleDatabase const&, Security::RefPointer<Security::AtomicBufferedFile> const&) + 179 3 com.apple.security 0x00007fff801c594e Security::DbModifier::getDbVersion(bool) + 330 4 com.apple.security 0x00007fff801c57f5 Security::DbModifier::openDatabase() + 33 5 com.apple.security 0x00007fff801c5439 Security::Database::_dbOpen(Security::DatabaseSession&, unsigned int, Security::AccessCredentials const*, void const*) + 221 6 com.apple.security 0x00007fff801c4841 Security::DatabaseManager::dbOpen(Security::DatabaseSession&, Security::DbName const&, unsigned int, Security::AccessCredentials const*, void const*) + 77 7 com.apple.security 0x00007fff801c4723 Security::DatabaseSession::DbOpen(char const*, cssm_net_address const*, unsigned int, Security::AccessCredentials const*, void const*, long&) + 285 8 com.apple.security 0x00007fff801d8414 cssm_DbOpen(long, char const*, cssm_net_address const*, unsigned int, cssm_access_credentials const*, void const*, long*) + 108 9 com.apple.security 0x00007fff801d7fba CSSM_DL_DbOpen + 106 10 com.apple.security 0x00007fff801d62f6 Security::CssmClient::DbImpl::open() + 162 11 com.apple.security 0x00007fff801d8977 SSDatabaseImpl::open(Security::DLDbIdentifier const&) + 53 12 com.apple.security 0x00007fff801d8715 SSDLSession::DbOpen(char const*, cssm_net_address const*, unsigned int, Security::AccessCredentials const*, void const*, long&) + 263 13 com.apple.security 0x00007fff801d8414 cssm_DbOpen(long, char const*, cssm_net_address const*, unsigned int, cssm_access_credentials const*, void const*, long*) + 108 14 com.apple.security 0x00007fff801d7fba CSSM_DL_DbOpen + 106 15 com.apple.security 0x00007fff801d62f6 Security::CssmClient::DbImpl::open() + 162 16 com.apple.security 0x00007fff802fa786 Security::CssmClient::DbImpl::unlock(cssm_data const&) + 28 17 com.apple.security 0x00007fff80275b5d Security::KeychainCore::KeychainImpl::unlock(Security::CssmData const&) + 89 18 com.apple.security 0x00007fff80291a06 Security::KeychainCore::StorageManager::login(unsigned int, void const*, unsigned int, void const*) + 3336 19 com.apple.security 0x00007fff802854d3 SecKeychainLogin + 91 20 com.apple.loginwindow 0x000000010000dfc5 0x100000000 + 57285 21 com.apple.loginwindow 0x000000010000cfb4 0x100000000 + 53172 22 com.apple.Foundation 0x00007fff8721e44f __NSThreadPerformPerform + 219 23 com.apple.CoreFoundation 0x00007fff82627401 __CFRunLoopDoSources0 + 1361 24 com.apple.CoreFoundation 0x00007fff826255f9 __CFRunLoopRun + 873 25 com.apple.CoreFoundation 0x00007fff82624dbf CFRunLoopRunSpecific + 575 26 com.apple.HIToolbox 0x00007fff8444493a RunCurrentEventLoopInMode + 333 27 com.apple.HIToolbox 0x00007fff8444473f ReceiveNextEventCommon + 310 28 com.apple.HIToolbox 0x00007fff844445f8 BlockUntilNextEventMatchingListInMode + 59 29 com.apple.AppKit 0x00007fff80b01e64 _DPSNextEvent + 718 30 com.apple.AppKit 0x00007fff80b017a9 -NSApplication nextEventMatchingMask:untilDate:inMode:dequeue: + 
155 31 com.apple.AppKit 0x00007fff80ac748b -NSApplication run + 395 32 com.apple.loginwindow 0x0000000100004b16 0x100000000 + 19222 33 com.apple.loginwindow 0x0000000100004580 0x100000000 + 17792 Thread 1: Dispatch queue: com.apple.libdispatch-manager 0 libSystem.B.dylib 0x00007fff8755216a kevent + 10 1 libSystem.B.dylib 0x00007fff8755403d _dispatch_mgr_invoke + 154 2 libSystem.B.dylib 0x00007fff87553d14 _dispatch_queue_invoke + 185 3 libSystem.B.dylib 0x00007fff8755383e _dispatch_worker_thread2 + 252 4 libSystem.B.dylib 0x00007fff87553168 _pthread_wqthread + 353 5 libSystem.B.dylib 0x00007fff87553005 start_wqthread + 13 Thread 0 crashed with X86 Thread State (64-bit): rax: 0x000000010075b000 rbx: 0x00007fff5fbfd990 rcx: 0x00007fff875439da rdx: 0x0000000000000000 rdi: 0x00007fff5fbfd990 rsi: 0x0000000000000000 rbp: 0x00007fff5fbfd5d0 rsp: 0x00007fff5fbfd5d0 r8: 0x0000000000000007 r9: 0x0000000000000000 r10: 0x00007fff8753beda r11: 0x0000000000000202 r12: 0x0000000100133e78 r13: 0x00007fff5fbfda50 r14: 0x00007fff5fbfda50 r15: 0x00007fff5fbfdaa0 rip: 0x00007fff801c6e8b rfl: 0x0000000000010287 cr2: 0x000000010075b000
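    The crash is inside the Security framework while loginwindow opens that user's keychain (SecKeychainLogin down to DbOpen in the backtrace), so a corrupted login keychain is one plausible suspect. A sketch of moving it aside from another admin account or single-user mode so a fresh keychain gets created at the next login ("brokenuser" is a placeholder, and this does discard that user's saved passwords):

        # Run as root from another account or single-user mode; "brokenuser" is a placeholder
        cd /Users/brokenuser/Library
        mv Keychains Keychains.broken    # a new default keychain is created at the next login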


  • AppDomain dependencies across directories

    - by strager
    I am creating a plugin system and I am creating one AppDomain per plugin. Each plugin has its own directory with its main assemblies and references. The main assemblies will be loaded by my plugin loader, in addition to my interface assemblies (so the plugin can interact with the application). Creating the AppDomain: this.appDomain = AppDomain.CreateDomain("AppDomain", null, new AppDomainSetup { ApplicationBase = pluginPath, PrivateBinPath = pluginPath, }); Loading the assemblies: this.appDomain.Load(myInterfaceAssembly.GetName(true)); var assemblies = new List<Assembly>(); foreach (var assemblyName in this.assemblyNames) { assemblies.Add(this.appDomain.Load(assemblyName)); } The format of assemblyName is the assembly's filename without ".dll". The problem is that AppDomain.Load(assemblyName) throws an exception: Could not load file or assembly '[[assemblyName]], Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. The system cannot find the file specified. All of the dependencies of [[assemblyName]] are: Inside the directory pluginPath, The myInterfaceAssembly which is already loaded, or In the GAC (e.g. mscorelib). Clearly I'm not doing something right. I have tried: Creating an object using this.appDomain.CreateInstanceAndUnwrap inheriting from MarshalByRefObject with a LoadAssembly method to load the assembly. I get an exception saying that the current assembly (containing the proxy class) could not be loaded (file not found, as above), even if I manually call this.appDomain.Load(Assembly.GetExecutingAssembly().GetName(true)). Attaching an AssemblyResolve handler to this.appDomain. I'm met with the same exception as in (1), and manually loading doesn't help. Recursively loading assemblies by loading their dependencies into this.appDomain first. This doesn't work, but I doubt my code is correct: private static void LoadAssemblyInto(AssemblyName assemblyName, AppDomain appDomain) { var assembly = Assembly.Load(assemblyName); foreach (var referenceName in assembly.GetReferencedAssemblies()) { if (!referenceName.FullName.StartsWith("MyProject")) { continue; } var loadedAssemblies = appDomain.GetAssemblies(); if (loadedAssemblies.Any((asm) => asm.FullName == referenceName.FullName)) { continue; } LoadAssemblyInto(referenceName, appDomain); } appDomain.Load(assembly.GetName(true)); } How can I load my plugin assembly with its dependencies in that plugin's directory while also loading some assemblies in the current directory? Note: The assemblies a plugin may (probably will) reference are already loaded in the current domain. This can be shared across domains (performance benefit? simplicity?) if required. Fusion log: *** Assembly Binder Log Entry (12/24/2010 @ 10:46:40 AM) *** The operation failed. Bind result: hr = 0x80070002. The system cannot find the file specified. Assembly manager loaded from: C:\Windows\Microsoft.NET\Framework\v4.0.30319\clr.dll Running under executable C:\MyProject\bin\Debug\MyProject.vshost.exe --- A detailed error log follows. LOG: Start binding of native image vshost32, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a, processorArchitecture=x86. WRN: No matching native image found. LOG: Bind to native image assembly did not succeed. Use IL image. LOG: IL assembly loaded from C:\MyProject\bin\Debug\MyProject.vshost.exe
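    Since Fusion only probes ApplicationBase plus PrivateBinPath directories underneath it, a common arrangement for this layout is to keep ApplicationBase at the host's own directory (where the interface assemblies live) and point PrivateBinPath at the plugin folder, which works as long as each plugin folder sits somewhere below the host directory. A sketch under that assumption ("plugins\MyPlugin" is a placeholder):

        // Sketch - assumes plugin folders live under the host application's base directory,
        // e.g. <appbase>\plugins\MyPlugin\. PrivateBinPath must be relative to ApplicationBase.
        var setup = new AppDomainSetup
        {
            ApplicationBase = AppDomain.CurrentDomain.BaseDirectory,   // interface assemblies resolve here
            PrivateBinPath  = @"plugins\MyPlugin"                      // the plugin and its references resolve here
        };
        var pluginDomain = AppDomain.CreateDomain("MyPlugin", null, setup);

    If the plugin folders cannot live under the host directory, the usual alternative is a small bootstrap assembly copied into (or resolvable from) each plugin folder whose only job is to hook AssemblyResolve inside the new domain before anything else loads.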


  • Mysql performance problem & Failed DIMM

    - by murdoch
    Hi I have a dedicated mysql database server which has been having some performance problems recently, under normal load the server will be running fine, then suddenly out of the blue the performance will fall off a cliff. The server isn't using the swap file and there is 12GB of RAM in the server, more than enough for its needs. After contacting my hosting comapnies support they have discovered that there is a failed 2GB DIMM in the server and have scheduled to replace it tomorow morning. My question is could a failed DIMM result in the performance problems I am seeing or is this just coincidence? My worry is that they will replace the ram tomorrow but the problems will persist and I will still be lost of explanations so I am just trying to think ahead. The reason I ask is that there is plenty of RAM in the server, more than required and simply missing 2GB should be a problem, so if this failed DIMM is causing these performance problems then the OS must be trying to access the failed DIMM and slowing down as a result. Does that sound like a credible explanation? This is what DELLs omreport program says about the RAM, notice one dimm is "Critical" Memory Information Health : Critical Memory Operating Mode Fail Over State : Inactive Memory Operating Mode Configuration : Optimizer Attributes of Memory Array(s) Attributes : Location Memory Array 1 : System Board or Motherboard Attributes : Use Memory Array 1 : System Memory Attributes : Installed Capacity Memory Array 1 : 12288 MB Attributes : Maximum Capacity Memory Array 1 : 196608 MB Attributes : Slots Available Memory Array 1 : 18 Attributes : Slots Used Memory Array 1 : 6 Attributes : ECC Type Memory Array 1 : Multibit ECC Total of Memory Array(s) Attributes : Total Installed Capacity Value : 12288 MB Attributes : Total Installed Capacity Available to the OS Value : 12004 MB Attributes : Total Maximum Capacity Value : 196608 MB Details of Memory Array 1 Index : 0 Status : Ok Connector Name : DIMM_A1 Type : DDR3-Registered Size : 2048 MB Index : 1 Status : Ok Connector Name : DIMM_A2 Type : DDR3-Registered Size : 2048 MB Index : 2 Status : Ok Connector Name : DIMM_A3 Type : DDR3-Registered Size : 2048 MB Index : 3 Status : Critical Connector Name : DIMM_B1 Type : DDR3-Registered Size : 2048 MB Index : 4 Status : Ok Connector Name : DIMM_B2 Type : DDR3-Registered Size : 2048 MB Index : 5 Status : Ok Connector Name : DIMM_B3 Type : DDR3-Registered Size : 2048 MB the command free -m shows this, the server seems to be using more than 10GB of ram which would suggest it is trying to use the DIMM total used free shared buffers cached Mem: 12004 10766 1238 0 384 4809 -/+ buffers/cache: 5572 6432 Swap: 2047 0 2047 iostat output while problem is occuring avg-cpu: %user %nice %system %iowait %steal %idle 52.82 0.00 11.01 0.00 0.00 36.17 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 47.00 0.00 576.00 0 576 sda1 0.00 0.00 0.00 0 0 sda2 1.00 0.00 32.00 0 32 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 46.00 0.00 544.00 0 544 avg-cpu: %user %nice %system %iowait %steal %idle 53.12 0.00 7.81 0.00 0.00 39.06 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 49.00 0.00 592.00 0 592 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 49.00 0.00 592.00 0 592 avg-cpu: %user %nice %system %iowait %steal %idle 56.09 0.00 7.43 0.37 0.00 36.10 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 232.00 0.00 64520.00 0 64520 sda1 0.00 0.00 0.00 0 0 sda2 159.00 0.00 63728.00 0 63728 sda3 
0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 73.00 0.00 792.00 0 792 avg-cpu: %user %nice %system %iowait %steal %idle 52.18 0.00 9.24 0.06 0.00 38.51 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 49.00 0.00 600.00 0 600 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 49.00 0.00 600.00 0 600 avg-cpu: %user %nice %system %iowait %steal %idle 54.82 0.00 8.64 0.00 0.00 36.55 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 100.00 0.00 2168.00 0 2168 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 100.00 0.00 2168.00 0 2168 avg-cpu: %user %nice %system %iowait %steal %idle 54.78 0.00 6.75 0.00 0.00 38.48 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 84.00 0.00 896.00 0 896 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 84.00 0.00 896.00 0 896 avg-cpu: %user %nice %system %iowait %steal %idle 54.34 0.00 7.31 0.00 0.00 38.35 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 81.00 0.00 840.00 0 840 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 81.00 0.00 840.00 0 840 avg-cpu: %user %nice %system %iowait %steal %idle 55.18 0.00 5.81 0.44 0.00 38.58 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 317.00 0.00 105632.00 0 105632 sda1 0.00 0.00 0.00 0 0 sda2 224.00 0.00 104672.00 0 104672 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 93.00 0.00 960.00 0 960 avg-cpu: %user %nice %system %iowait %steal %idle 55.38 0.00 7.63 0.00 0.00 36.98 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 74.00 0.00 800.00 0 800 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 74.00 0.00 800.00 0 800 avg-cpu: %user %nice %system %iowait %steal %idle 56.43 0.00 7.80 0.00 0.00 35.77 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 72.00 0.00 784.00 0 784 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 72.00 0.00 784.00 0 784 avg-cpu: %user %nice %system %iowait %steal %idle 54.87 0.00 6.49 0.00 0.00 38.64 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 80.20 0.00 855.45 0 864 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 80.20 0.00 855.45 0 864 avg-cpu: %user %nice %system %iowait %steal %idle 57.22 0.00 5.69 0.00 0.00 37.09 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 33.00 0.00 432.00 0 432 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 33.00 0.00 432.00 0 432 avg-cpu: %user %nice %system %iowait %steal %idle 56.03 0.00 7.93 0.00 0.00 36.04 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 41.00 0.00 560.00 0 560 sda1 0.00 0.00 0.00 0 0 sda2 2.00 0.00 88.00 0 88 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 39.00 0.00 472.00 0 472 avg-cpu: %user %nice %system %iowait %steal %idle 55.78 0.00 5.13 0.00 0.00 39.09 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 29.00 0.00 392.00 0 392 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 29.00 0.00 392.00 0 392 avg-cpu: %user %nice %system %iowait %steal %idle 53.68 0.00 8.30 0.06 0.00 37.95 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 78.00 0.00 4280.00 0 4280 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 78.00 0.00 4280.00 0 4280


  • Credentials can not be delegated - Alfresco Share

    - by leftcase
    I've hit a brick wall configuring Alfresco 4.0.d on Redhat 6. I'm using Kerberos authentication, it seems to be working normally, and single sign on is working on the main alfresco app itself. I've been through the configuration steps to get the share app working, but try as I may, I keep getting this error in catalina.out each time a browser accesses http://server:8080/share along with a 'Windows Security' password box. WARN [site.servlet.KerberosSessionSetupPrivilegedAction] credentials can not be delegated! Here's what I've done so far: Using AD users and computers, selected the alfrescohttp account, and selected 'trust this user for delegation to any service (Kerberos only). Copied /opt/alfresco-4.0.d/tomcat/shared/classes/alfresco/web-extension/share-config-custom.xml.sample to share-config-custom.xml and edited like this: <config evaluator="string-compare" condition="Kerberos" replace="true"> <kerberos> <password>*****</password> <realm>MYDOMAIN.CO.UK</realm> <endpoint-spn>HTTP/[email protected]</endpoint-spn> <config-entry>ShareHTTP</config-entry> </kerberos> </config> <config evaluator="string-compare" condition="Remote"> <remote> <keystore> <path>alfresco/web-extension/alfresco-system.p12</path> <type>pkcs12</type> <password>alfresco-system</password> </keystore> <connector> <id>alfrescoCookie</id> <name>Alfresco Connector</name> <description>Connects to an Alfresco instance using cookie-based authentication</description> <class>org.springframework.extensions.webscripts.connector.AlfrescoConnector</class> </connector> <endpoint> <id>alfresco</id> <name>Alfresco - user access</name> <description>Access to Alfresco Repository WebScripts that require user authentication</description> <connector-id>alfrescoCookie</connector-id> <endpoint-url>http://localhost:8080/alfresco/wcs</endpoint-url> <identity>user</identity> <external-auth>true</external-auth> </endpoint> </remote> </config> Setup the /etc/krb5.conf file like this: [logging] default = FILE:/var/log/krb5libs.log kdc = FILE:/var/log/krb5kdc.log admin_server = FILE:/var/log/kadmind.log [libdefaults] default_realm = MYDOMAIN.CO.UK default_tkt_enctypes = rc4-hmac default_tgs_enctypes = rc4-hmac forwardable = true proxiable = true [realms] MYDOMAIN.CO.UK = { kdc = mydc.mydomain.co.uk admin_server = mydc.mydomain.co.uk } [domain_realm] .mydc.mydomain.co.uk = MYDOMAIN.CO.UK mydc.mydomain.co.uk = MYDOMAIN.CO.UK /opt/alfresco-4.0.d/java/jre/lib/security/java.login.config is configured like this: Alfresco { com.sun.security.auth.module.Krb5LoginModule sufficient; }; AlfrescoCIFS { com.sun.security.auth.module.Krb5LoginModule required storeKey=true useKeyTab=true keyTab="/etc/alfrescocifs.keytab" principal="cifs/server.mydomain.co.uk"; }; AlfrescoHTTP { com.sun.security.auth.module.Krb5LoginModule required storeKey=true useKeyTab=true keyTab="/etc/alfrescohttp.keytab" principal="HTTP/server.mydomain.co.uk"; }; com.sun.net.ssl.client { com.sun.security.auth.module.Krb5LoginModule sufficient; }; other { com.sun.security.auth.module.Krb5LoginModule sufficient; }; ShareHTTP { com.sun.security.auth.module.Krb5LoginModule required storeKey=true useKeyTab=true keyTab="/etc/alfrescohttp.keytab" principal="HTTP/server.mydomain.co.uk"; }; And finally, the following settings in alfresco-global.conf authentication.chain=kerberos1:kerberos,alfrescoNtlm1:alfrescoNtlm kerberos.authentication.real=MYDOMAIN.CO.UK kerberos.authentication.user.configEntryName=Alfresco kerberos.authentication.cifs.configEntryName=AlfrescoCIFS 
kerberos.authentication.http.configEntryName=AlfrescoHTTP kerberos.authentication.cifs.password=****** kerberos.authentication.http.password=***** kerberos.authentication.defaultAdministratorUserNames=administrator ntlm.authentication.sso.enabled=true As I say, I've hit a brick wall with this and I'd really appreciate any help you can give me! This question is also posted on the Alfresco forum, but I wondered if any folk here on serverfault have come across similar implementation challenges?
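    A quick way to narrow this down, since the warning is specifically about delegation, is to confirm that tickets issued in the realm are forwardable at all and that the HTTP SPN really sits on the account that was trusted for delegation. A small diagnostic sketch (the realm, user and account names are placeholders taken from the question):

        # On a client in the realm: request an explicitly forwardable TGT and check its flags.
        kinit -f someuser@MYDOMAIN.CO.UK
        klist -f                      # the flags column should include 'F' (forwardable)

        # On a Windows host joined to the domain: list the SPNs registered on the delegation account.
        setspn -L alfrescohttp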

    Read the article

  • SOAP call with query on result (SSRS, SharePoint)

    - by Erik404
    Hi! I created a report in VS using a shared data source which is connected to a SharePoint list. In the report I created a dataset with a SOAP call to the data source so I get the result from the SharePoint list in a table. This is the SOAP call:

        <Query> <SoapAction>http://schemas.microsoft.com/sharepoint/soap/GetListItems</SoapAction> <Method Namespace="http://schemas.microsoft.com/sharepoint/soap/" Name="GetListItems"> <Parameters> <Parameter Name="listName"> <DefaultValue>{BD8D39B7-FA0B-491D-AC6F-EC9B0978E0CE}</DefaultValue> </Parameter> <Parameter Name="viewName"> <DefaultValue>{E2168426-804F-4836-9BE4-DC5F8D08A54F}</DefaultValue> </Parameter> <Parameter Name="rowLimit"> <DefaultValue>9999</DefaultValue> </Parameter> </Parameters> </Method> <ElementPath IgnoreNamespaces="True">*</ElementPath> </Query>

    This works fine: I have a result which I can show in a report, but I want the ability to select a parameter to filter the result on. I have created a parameter, and when I preview the report I see the drop-down box which I can use to make a selection from the Title field. When I do this it still shows the first record, so obviously it doesn't work yet (DUH!), because I need to create a query somewhere, but I have no idea where. I tried to include

        <Where> <Eq> <FieldRef Name="ows_Title" /> <Value Type="Text">testValue</Value> </Eq> </Where>

    in the SOAP request, but it didn't work... I've searched the web but couldn't find any similar problems... kinda stuck now... any thoughts on this?

    EDIT: Here's the query I used according to the blog post Alex Angas linked.

        <Query> <SoapAction>http://schemas.microsoft.com/sharepoint/soap/GetListItems</SoapAction> <Method Namespace="http://schemas.microsoft.com/sharepoint/soap/" Name="GetListItems"> <queryOptions></queryOptions> <query><Query> <Where> <Eq> <FieldRef Name="ows_Title"/> <Value Type="Text">someValue</Value> </Eq> </Where> </Query></query> <Parameters> <Parameter Name="listName"> <DefaultValue>{BD8D39B7-FA0B-491D-AC6F-EC9B0978E0CE}</DefaultValue> </Parameter> <Parameter Name="viewName"> <DefaultValue>{E2168426-804F-4836-9BE4-DC5F8D08A54F}</DefaultValue> </Parameter> <Parameter Name="rowLimit"> <DefaultValue>9999</DefaultValue> </Parameter> </Parameters> </Method> <ElementPath IgnoreNamespaces="True">*</ElementPath> </Query>

    I tried to put the new query statement into the existing one in every possible way, but it doesn't work at all. I don't get an error, so the code is valid, but I still get an unfiltered list in return... pulling my hair out here!
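    For reference, GetListItems takes its CAML filter through the "query" parameter rather than inline in the method element, so one hedged variant worth trying is to pass the <Where> block as a parameter default alongside listName/viewName. Whether the internal field name should be Title or ows_Title, and whether the CAML can sit unescaped inside DefaultValue, are assumptions to verify:

        <Parameter Name="query">
          <DefaultValue>
            <Query>
              <Where>
                <Eq>
                  <FieldRef Name="Title"/>
                  <Value Type="Text">someValue</Value>
                </Eq>
              </Where>
            </Query>
          </DefaultValue>
        </Parameter>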

    Read the article

  • Curl Certificate Error when Using RVM to install Ruby 1.9.2

    - by Will Dennis
    RVM is running into a certificate error when trying to download ruby 1.9.2. It looks like curl is having a certificate issue but I am not sure how to bypass it. NAy help would be great. Thanks so much, I have included the exact error info below. $ rvm install 1.9.2 Installing Ruby from source to: /Users/willdennis/.rvm/rubies/ruby-1.9.2-p180, this may take a while depending on your cpu(s)... ruby-1.9.2-p180 - #fetching ERROR: Error running 'bunzip2 '/Users/willdennis/.rvm/archives/ruby-1.9.2-p180.tar.bz2'', please read /Users/willdennis/.rvm/log/ruby-1.9.2-p180/extract.log ruby-1.9.2-p180 - #extracting ruby-1.9.2-p180 to /Users/willdennis/.rvm/src/ruby-1.9.2-p180 ruby-1.9.2-p180 - #extracted to /Users/willdennis/.rvm/src/ruby-1.9.2-p180 Fetching yaml-0.1.3.tar.gz to /Users/willdennis/.rvm/archives curl: (60) SSL certificate problem, verify that the CA cert is OK. Details: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed More details here: http://curl.haxx.se/docs/sslcerts.html curl performs SSL certificate verification by default, using a "bundle" of Certificate Authority (CA) public keys (CA certs). The default bundle is named curl-ca-bundle.crt; you can specify an alternate file using the --cacert option. If this HTTPS server uses a certificate signed by a CA represented in the bundle, the certificate verification probably failed due to a problem with the certificate (it might be expired, or the name might not match the domain name in the URL). If you'd like to turn off curl's verification of the certificate, use the -k (or --insecure) option. ERROR: There was an error, please check /Users/willdennis/.rvm/log/ruby-1.9.2-p180/*.log. Next we'll try to fetch via http. Trying http:// URL instead. curl: (60) SSL certificate problem, verify that the CA cert is OK. Details: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed More details here: http://curl.haxx.se/docs/sslcerts.html curl performs SSL certificate verification by default, using a "bundle" of Certificate Authority (CA) public keys (CA certs). The default bundle is named curl-ca-bundle.crt; you can specify an alternate file using the --cacert option. If this HTTPS server uses a certificate signed by a CA represented in the bundle, the certificate verification probably failed due to a problem with the certificate (it might be expired, or the name might not match the domain name in the URL). If you'd like to turn off curl's verification of the certificate, use the -k (or --insecure) option. ERROR: There was an error, please check /Users/willdennis/.rvm/log/ruby-1.9.2-p180/*.log Extracting yaml-0.1.3.tar.gz to /Users/willdennis/.rvm/src ERROR: Error running 'tar zxf /Users/willdennis/.rvm/archives/yaml-0.1.3.tar.gz -C /Users/willdennis/.rvm/src --no-same-owner', please read /Users/willdennis/.rvm/log/ruby-1.9.2-p180/yaml/extract.log /Users/willdennis/.rvm/scripts/functions/packages: line 55: cd: /Users/willdennis/.rvm/src/yaml-0.1.3: No such file or directory Configuring yaml in /Users/willdennis/.rvm/src/yaml-0.1.3. ERROR: Error running ' ./configure --prefix="/Users/willdennis/.rvm/usr" ', please read /Users/willdennis/.rvm/log/ruby-1.9.2-p180/yaml/configure.log Compiling yaml in /Users/willdennis/.rvm/src/yaml-0.1.3. 
ERROR: Error running '/usr/bin/make ', please read /Users/willdennis/.rvm/log/ruby-1.9.2-p180/yaml/make.log Installing yaml to /Users/willdennis/.rvm/usr ERROR: Error running '/usr/bin/make install', please read /Users/willdennis/.rvm/log/ruby-1.9.2-p180/yaml/make.install.log ruby-1.9.2-p180 - #configuring ERROR: Error running ' ./configure --prefix=/Users/willdennis/.rvm/rubies/ruby-1.9.2-p180 --enable-shared --disable-install-doc --with-libyaml-dir=/Users/willdennis/.rvm/usr ', please read /Users/willdennis/.rvm/log/ruby-1.9.2-p180/configure.log ERROR: There has been an error while running configure. Halting the installation. Will
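    Since the failure is curl rejecting the certificate chain rather than RVM itself, a reasonable first step is to reproduce the download by hand and point curl at a CA bundle explicitly; -k disables verification entirely, so treat it as a last resort. The URL below stands in for the yaml-0.1.3.tar.gz address shown in the RVM log, and the bundle path is only an example:

        # Point curl at an explicit CA bundle (adjust the path to your system):
        curl --cacert /usr/share/curl/curl-ca-bundle.crt -O <yaml-0.1.3.tar.gz URL from the log>

        # Or, accepting the risk, skip verification for this one download:
        curl -k -O <yaml-0.1.3.tar.gz URL from the log>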

    Read the article

  • Installing nGinX Reverse Proxy on CentOS 5

    - by heavymark
    I'm trying to install nGinX as a reverse proxy on CentOS 5 with apache. The instructions to do this are here: http://wiki.mediatemple.net/w/(dv):Configure_nginx_as_reverse_proxy_web_server Note- in the instructions, for the url to get nginx I'm using the following: http://nginx.org/download/nginx-1.0.10.tar.gz Now here is my problem. After installing the required packages and running .configure I get the following: checking for OS + Linux 2.6.18-028stab094.3 x86_64 checking for C compiler ... found + using GNU C compiler + gcc version: 4.1.2 20080704 (Red Hat 4.1.2-51) checking for gcc -pipe switch ... found checking for gcc builtin atomic operations ... found checking for C99 variadic macros ... found checking for gcc variadic macros ... found checking for unistd.h ... found checking for inttypes.h ... found checking for limits.h ... found checking for sys/filio.h ... not found checking for sys/param.h ... found checking for sys/mount.h ... found checking for sys/statvfs.h ... found checking for crypt.h ... found checking for Linux specific features checking for epoll ... found checking for sendfile() ... found checking for sendfile64() ... found checking for sys/prctl.h ... found checking for prctl(PR_SET_DUMPABLE) ... found checking for sched_setaffinity() ... found checking for crypt_r() ... found checking for sys/vfs.h ... found checking for nobody group ... found checking for poll() ... found checking for /dev/poll ... not found checking for kqueue ... not found checking for crypt() ... not found checking for crypt() in libcrypt ... found checking for F_READAHEAD ... not found checking for posix_fadvise() ... found checking for O_DIRECT ... found checking for F_NOCACHE ... not found checking for directio() ... not found checking for statfs() ... found checking for statvfs() ... found checking for dlopen() ... not found checking for dlopen() in libdl ... found checking for sched_yield() ... found checking for SO_SETFIB ... not found checking for SO_ACCEPTFILTER ... not found checking for TCP_DEFER_ACCEPT ... found checking for accept4() ... not found checking for int size ... 4 bytes checking for long size ... 8 bytes checking for long long size ... 8 bytes checking for void * size ... 8 bytes checking for uint64_t ... found checking for sig_atomic_t ... found checking for sig_atomic_t size ... 4 bytes checking for socklen_t ... found checking for in_addr_t ... found checking for in_port_t ... found checking for rlim_t ... found checking for uintptr_t ... uintptr_t found checking for system endianess ... little endianess checking for size_t size ... 8 bytes checking for off_t size ... 8 bytes checking for time_t size ... 8 bytes checking for setproctitle() ... not found checking for pread() ... found checking for pwrite() ... found checking for sys_nerr ... found checking for localtime_r() ... found checking for posix_memalign() ... found checking for memalign() ... found checking for mmap(MAP_ANON|MAP_SHARED) ... found checking for mmap("/dev/zero", MAP_SHARED) ... found checking for System V shared memory ... found checking for POSIX semaphores ... not found checking for POSIX semaphores in libpthread ... found checking for struct msghdr.msg_control ... found checking for ioctl(FIONBIO) ... found checking for struct tm.tm_gmtoff ... found checking for struct dirent.d_namlen ... not found checking for struct dirent.d_type ... found checking for PCRE library ... found checking for system md library ... not found checking for system md5 library ... 
not found checking for OpenSSL md5 crypto library ... found checking for sha1 in system md library ... not found checking for OpenSSL sha1 crypto library ... found checking for zlib library ... found creating objs/Makefile Configuration summary + using system PCRE library + OpenSSL library is not used + md5: using system crypto library + sha1: using system crypto library + using system zlib library nginx path prefix: "/usr/local/nginx" nginx binary file: "/usr/local/nginx/sbin/nginx" nginx configuration prefix: "/usr/local/nginx/conf" nginx configuration file: "/usr/local/nginx/conf/nginx.conf" nginx pid file: "/usr/local/nginx/logs/nginx.pid" nginx error log file: "/usr/local/nginx/logs/error.log" nginx http access log file: "/usr/local/nginx/logs/access.log" nginx http client request body temporary files: "client_body_temp" nginx http proxy temporary files: "proxy_temp" nginx http fastcgi temporary files: "fastcgi_temp" nginx http uwsgi temporary files: "uwsgi_temp" nginx http scgi temporary files: "scgi_temp" It says if you get errors to stop and make sure packages are installed. I didn't get errors but as you can see I got several "not founds". Are those considered errors? If so how do I resolve that. And as noted in the link, I cannot install through yum, because it wont work with plesk then. Thanks!
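    Those "not found" lines are feature probes, not failures: ./configure checks for optional OS facilities (kqueue, /dev/poll, accept4 and so on) and records what it can use; the configuration summary plus the generated objs/Makefile are what indicate success, so it is fine to continue with make. For the reverse-proxy part itself, a minimal hedged server block, assuming Apache has been moved to listen on 127.0.0.1:8080 and with a placeholder domain:

        server {
            listen 80;
            server_name example.com;                        # placeholder domain

            location / {
                proxy_pass http://127.0.0.1:8080;           # Apache assumed on port 8080
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }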

    Read the article

  • ASP.net roles and Projects

    - by Zyphrax
    EDIT - Rewrote my original question to give a bit more information Background info At my work I'm working on a ASP.Net web application for our customers. In our implementation we use technologies like Forms authentication with MembershipProviders and RoleProviders. All went well until I ran into some difficulties with configuring the roles, because the roles aren't system-wide, but related to the customer accounts and projects. I can't name our exact setup/formula, because I think our company wouldn't approve that... What's a customer / project? Our company provides management information for our customers on a yearly (or other interval) basis. In our systems a customer/contract consists of: one Account: information about the Company per Account, one or more Products: the bundle of management information we'll provide per Product, one or more Measurements: a period of time, in which we gather and report the data Extranet site setup Eventually we want all customers to be able to access their management information with our online system. The extranet consists of two sites: Company site: provides an overview of Account information and the Products Measurement site: after selecting a Measurement, detailed information on that period of time The measurement site is the most interesting part of the extranet. We will create submodules for new overviews, reports, managing and maintaining resources that are important for the research. Our Visual Studio solution consists of a number of projects. One web application named Portal for the basis. The sites and modules are virtual directories within that application (makes it easier to share MasterPages among things). What kind of roles? The following users (read: roles) will be using the system: Admins: development users :) (not customer related, full access) Employees: employees of our company (not customer related, full access) Customer SuperUser: top level managers (full access to their account/measurement) Customer ContactPerson: primary contact (full access to their measurement(s)) Customer Manager: a department manager (limited access, specific data of a measurement) What about ASP.Net users? The system will have many ASP.Net users, let's focus on the customer users: Users are not shared between Accounts SuperUser X automatically has access to all (and new) measurements User Y could be Primary contact for Measurement 1, but have no role for Measurement 2 User Y could be Primary contact for Measurement 1, but have a Manager role for Measurement 2 The department managers are many individual users (per Measurement), if Manager Z had a login for Measurement 1, we would like to use that login again if he participates in Measurement 2. URL structure These are typical urls in our application: http://host/login - the login screen http://host/project - the account/product overview screen (measurement selection) http://host/project/1000 - measurement (id:1000) details http://host/project/1000/planning - planning overview (for primary contact/superuser) http://host/project/1000/reports - report downloads (manager department X can only access report X) We will also create a document url, where you can request a specific document by it's GUID. The system will have to check if the user has rights to the document. The document is related to a Measurement, the User or specific roles have specific rights to the document. What's the problem? (finally ;)) Roles aren't enough to determine what a user is allowed to see/access/download a specific item. 
It's not enough to say that a certain navigation item is accessible to Managers. When the user requests Measurement 1000, we have to check that the user not only has a Manager role, but a Manager role for Measurement 1000. Summarized: How can we limit users to their accounts/measurements? (remember superusers see all measurements, some managers only specific measurements) How can we apply roles at a product/measurement level? (user X could be primarycontact for measurement 1, but just a manager for measurement 2) How can we limit manager access to the reports screen and only to their department's reports? All with the magic of asp.net classes, perhaps with a custom roleprovider implementation. Similar Stackoverflow question/problem http://stackoverflow.com/questions/1367483/asp-net-how-to-manage-users-with-different-types-of-roles
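    One common shape for this is to keep the ASP.NET roles coarse (Employee, SuperUser, ContactPerson, Manager) and do the per-measurement check in a custom authorization filter that looks the (user, measurement) pair up in your own tables. A hedged C# sketch; the attribute, the MeasurementRoles repository and the "measurementId" route value are illustrative names, not part of the built-in membership/role APIs:

        using System.Web;
        using System.Web.Mvc;

        // Hypothetical lookup; in practice this would query the Account/Measurement tables.
        public static class MeasurementRoles
        {
            public static bool IsInRole(string user, int measurementId, string role)
            {
                /* query your own schema here */
                return false;
            }
        }

        // Illustrative per-measurement authorization filter.
        public class MeasurementAuthorizeAttribute : AuthorizeAttribute
        {
            // Role required for the requested measurement, e.g. "Manager".
            public string RequiredRole { get; set; }

            protected override bool AuthorizeCore(HttpContextBase httpContext)
            {
                if (!base.AuthorizeCore(httpContext))
                    return false;

                var routeValues = httpContext.Request.RequestContext.RouteData.Values;
                object rawId;
                if (!routeValues.TryGetValue("measurementId", out rawId))
                    return false;           // no measurement in the URL -> deny by default

                int measurementId = int.Parse(rawId.ToString());
                string user = httpContext.User.Identity.Name;

                return MeasurementRoles.IsInRole(user, measurementId, RequiredRole);
            }
        }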

    Read the article

  • ASP.NET MVC view with multiple updatable parts - how?

    - by DerDres
    I have started doing asp.net mvc programming and like it more everyday. Most of the examples I have seen use separate views for viewing and editing details of a specific entity. E.g. - table of music albums linking to separate 'detail' and 'update' views [Action] | Title | Artist detail, update | Uuuh Baby | Barry White detail, update | Mr Mojo | Barry White With mvc how can I achieve a design where the R and the U (CRUD) are represented in a single view, and furthermore where the user can edit separate parts of the view, thus limiting the amount of data the user can edit before saving? Example mockup - editing album detials: I have achieved such a design with ajax calls, but Im curious how to do this without ajax. Parts of my own take on this can be seen below. I use a flag (enum EditCode) indicating which part of the view, if any, that has to render a form. Is such a design in accordance with the framework, could it be done more elegantly? AlbumController public class AlbumController : Controller { public ActionResult Index() { var albumDetails = from ManageVM in state.AlbumState.ToList() select ManageVM.Value.Detail; return View(albumDetails); } public ActionResult Manage(int albumId, EditCode editCode) { (state.AlbumState[albumId] as ManageVM).EditCode = (EditCode)editCode; ViewData["albumId"] = albumId; return View(state.AlbumState[albumId]); } [HttpGet] public ActionResult Edit(int albumId, EditCode editCode) { return RedirectToAction("Manage", new { albumId = albumId, editCode = editCode }); } // edit album details [HttpPost] public ActionResult EditDetail(int albumId, Detail details) { (state.AlbumState[albumId] as ManageVM).Detail = details; return RedirectToAction("Manage", new { albumId = albumId, editCode = EditCode.NoEdit });// zero being standard } // edit album thought [HttpPost] public ActionResult EditThoughts(int albumId, List<Thought> thoughts) { (state.AlbumState[albumId] as ManageVM).Thoughts = thoughts; return RedirectToAction("Manage", new { albumId = albumId, editCode = EditCode.NoEdit });// zero being standard } Flag - EditCode public enum EditCode { NoEdit, Details, Genres, Thoughts } Mangae view <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<MvcApplication1.Controllers.ManageVM>" %> <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server"> Manage </asp:Content> <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> <h2>Manage</h2> <% if(Model.EditCode == MvcApplication1.Controllers.EditCode.Details) {%> <% Html.RenderPartial("_EditDetails", Model.Detail); %> <% }else{%> <% Html.RenderPartial("_ShowDetails", Model.Detail); %> <% } %> <hr /> <% if(Model.EditCode == MvcApplication1.Controllers.EditCode.Thoughts) {%> <% Html.RenderPartial("_EditThoughts", Model.Thoughts); %> <% }else{%> <% Html.RenderPartial("_ShowThoughts", Model.Thoughts); %> <% } %>
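    The approach above is in line with the framework; the missing piece is just that each editable partial wraps its fields in a form that posts to the matching Edit* action, which then redirects back to Manage with EditCode.NoEdit so the section renders read-only again. A hedged sketch of what _EditDetails.ascx could look like; the Title/Artist properties, the "Album" controller name and the namespace of Detail are assumptions:

        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<MvcApplication1.Controllers.Detail>" %>
        <%-- Sketch only: field names and controller name are assumptions. --%>
        <% using (Html.BeginForm("EditDetail", "Album")) { %>
            <%= Html.Hidden("albumId", ViewData["albumId"]) %>
            <label>Title</label>  <%= Html.TextBox("Title", Model.Title) %>
            <label>Artist</label> <%= Html.TextBox("Artist", Model.Artist) %>
            <input type="submit" value="Save" />
        <% } %>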

    Read the article

  • Python bindings for a vala library

    - by celil
    I am trying to create python bindings to a vala library using the following IBM tutorial as a reference. My initial directory has the following two files: test.vala using GLib; namespace Test { public class Test : Object { public int sum(int x, int y) { return x + y; } } } test.override %% headers #include <Python.h> #include "pygobject.h" #include "test.h" %% modulename test %% import gobject.GObject as PyGObject_Type %% ignore-glob *_get_type %% and try to build the python module source test_wrap.c using the following code build.sh #/usr/bin/env bash valac test.vala -CH test.h python /usr/share/pygobject/2.0/codegen/h2def.py test.h > test.defs pygobject-codegen-2.0 -o test.override -p test test.defs > test_wrap.c However, the last command fails with an error $ ./build.sh Traceback (most recent call last): File "/usr/share/pygobject/2.0/codegen/codegen.py", line 1720, in <module> sys.exit(main(sys.argv)) File "/usr/share/pygobject/2.0/codegen/codegen.py", line 1672, in main o = override.Overrides(arg) File "/usr/share/pygobject/2.0/codegen/override.py", line 52, in __init__ self.handle_file(filename) File "/usr/share/pygobject/2.0/codegen/override.py", line 84, in handle_file self.__parse_override(buf, startline, filename) File "/usr/share/pygobject/2.0/codegen/override.py", line 96, in __parse_override command = words[0] IndexError: list index out of range Is this a bug in pygobject, or is something wrong with my setup? What is the best way to call code written in vala from python? EDIT: Removing the extra line fixed the current problem, but now as I proceed to build the python module, I am facing another problem. Adding the following C file to the existing two in the directory: test_module.c #include <Python.h> void test_register_classes (PyObject *d); extern PyMethodDef test_functions[]; DL_EXPORT(void) inittest(void) { PyObject *m, *d; init_pygobject(); m = Py_InitModule("test", test_functions); d = PyModule_GetDict(m); test_register_classes(d); if (PyErr_Occurred ()) { Py_FatalError ("can't initialise module test"); } } and building with the following script build.sh #/usr/bin/env bash valac test.vala -CH test.h python /usr/share/pygobject/2.0/codegen/h2def.py test.h > test.defs pygobject-codegen-2.0 -o test.override -p test test.defs > test_wrap.c CFLAGS="`pkg-config --cflags pygobject-2.0` -I/usr/include/python2.6/ -I." LDFLAGS="`pkg-config --libs pygobject-2.0`" gcc $CFLAGS -fPIC -c test.c gcc $CFLAGS -fPIC -c test_wrap.c gcc $CFLAGS -fPIC -c test_module.c gcc $LDFLAGS -shared test.o test_wrap.o test_module.o -o test.so python -c 'import test; exit()' results in an error: $ ./build.sh ***INFO*** The coverage of global functions is 100.00% (1/1) ***INFO*** The coverage of methods is 100.00% (1/1) ***INFO*** There are no declared virtual proxies. ***INFO*** There are no declared virtual accessors. ***INFO*** There are no declared interface proxies. Traceback (most recent call last): File "<string>", line 1, in <module> ImportError: ./test.so: undefined symbol: init_pygobject Where is the init_pygobject symbol defined? What have I missed linking to?
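    On the second error: init_pygobject() is not a symbol exported by any library, it is a macro supplied by the pygobject headers, so test_module.c has to include <pygobject.h> just like the generated test_wrap.c does via the %%headers section. A hedged sketch of the corrected module source:

        /* test_module.c -- init_pygobject() comes from <pygobject.h> as a macro,
           so that header must be included here as well. */
        #include <Python.h>
        #include <pygobject.h>

        void test_register_classes (PyObject *d);
        extern PyMethodDef test_functions[];

        DL_EXPORT(void) inittest(void)
        {
            PyObject *m, *d;

            init_pygobject();                    /* now expands at compile time */
            m = Py_InitModule("test", test_functions);
            d = PyModule_GetDict(m);
            test_register_classes(d);
            if (PyErr_Occurred())
                Py_FatalError("can't initialise module test");
        }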

    Read the article

  • I am getting errors when using Rcpp attributes with RcppArmadillo code

    - by howard123
    I am trying to create a package with RcppArmadillo. The code uses the new attributes methodology of Rcpp. The sourceCpp works fine and compiles the code, but when I build a package I get errors when I use RcppArmadillo code. Without the RcppArmadillo code and using regulare C++, I do not get these errors. The C++ code (it is essentially the fastLm sample code) is: // [[Rcpp::depends(RcppArmadillo)]] #include <Rcpp.h> #include <RcppArmadillo.h> using namespace Rcpp; // [[Rcpp::depends(RcppArmadillo)]] #include <RcppArmadillo.h> // [[Rcpp::export]] List fastLm(NumericVector yr, NumericMatrix Xr) { int n = Xr.nrow(), k = Xr.ncol(); arma::mat X(Xr.begin(), n, k, false); arma::colvec y(yr.begin(), yr.size(), false); arma::colvec coef = arma::solve(X, y); arma::colvec resid = y - X*coef; double sig2 = arma::as_scalar(arma::trans(resid)*resid/(n-k)); arma::colvec stderrest = arma::sqrt( sig2 * arma::diagvec( arma::inv(arma::trans(X)*X)) ); return List::create(Named("coefficients") = coef, Named("stderr") = stderrest); } Here is the compilation error, after I execute "R Rcpp::compileAttributes() * Updated src/RcppExports.cpp == Rcmd.exe INSTALL --no-multiarch NewPackage * installing to library 'C:/Users/Howard/Documents/R/win-library/2.15' * installing *source* package 'NewPackage' ... ** libs g++ -m64 -I"C:/R/R-2-15-2/include" -DNDEBUG -I"C:/Users/Howard/Documents/R/win-library/2.15/Rcpp/include" -I"C:/Users/Howard/Documents/R/win-library/2.15/RcppArmadillo/include" -I"d:/RCompile/CRANpkg/extralibs64/local/include" -O2 -Wall -mtune=core2 -c RcppExports.cpp -o RcppExports.o g++ -m64 -I"C:/R/R-2-15-2/include" -DNDEBUG -I"C:/Users/Howard/Documents/R/win-library/2.15/Rcpp/include" -I"C:/Users/Howard/Documents/R/win-library/2.15/RcppArmadillo/include" -I"d:/RCompile/CRANpkg/extralibs64/local/include" -O2 -Wall -mtune=core2 -c test_arma3.cpp -o test_arma3.o g++ -m64 -shared -s -static-libgcc -o NewPackage.dll tmp.def RcppExports.o test_arma3.o C:/Users/Howard/Documents/R/win-library/2.15/Rcpp/lib/x64/libRcpp.a -Ld:/RCompile/CRANpkg/extralibs64/local/lib/x64 -Ld:/RCompile/CRANpkg/extralibs64/local/lib -LC:/R/R-2-15-2/bin/x64 -lR test_arma3.o:test_arma3.cpp:(.text+0xae4): undefined reference to `dgemm_' test_arma3.o:test_arma3.cpp:(.text+0x19db): undefined reference to `dgemm_' test_arma3.o:test_arma3.cpp:(.text+0x1b0c): undefined reference to `dgemv_' test_arma3.o:test_arma3.cpp:(.text$_ZN4arma6auxlib8solve_odIdNS_3MatIdEEEEbRNS2_IT_EES6_RKNS_4BaseIS4_T0_EE[_ZN4arma6auxlib8solve_odIdNS_3MatIdEEEEbRNS2_IT_EES6_RKNS_4BaseIS4_T0_EE]+0x702): undefined reference to `dgels_' test_arma3.o:test_arma3.cpp:(.text$_ZN4arma6auxlib8solve_udIdNS_3MatIdEEEEbRNS2_IT_EES6_RKNS_4BaseIS4_T0_EE[_ZN4arma6auxlib8solve_udIdNS_3MatIdEEEEbRNS2_IT_EES6_RKNS_4BaseIS4_T0_EE]+0x51c): undefined reference to `dgels_' test_arma3.o:test_arma3.cpp:(.text$_ZN4arma6auxlib10det_lapackIdEET_RKNS_3MatIS2_EEb[_ZN4arma6auxlib10det_lapackIdEET_RKNS_3MatIS2_EEb]+0x14b): undefined reference to `dgetrf_' test_arma3.o:test_arma3.cpp:(.text$_ZN4arma6auxlib5solveIdNS_3MatIdEEEEbRNS2_IT_EES6_RKNS_4BaseIS4_T0_EEb[_ZN4arma6auxlib5solveIdNS_3MatIdEEEEbRNS2_IT_EES6_RKNS_4BaseIS4_T0_EEb]+0x375): undefined reference to `dgesv_' test_arma3.o:test_arma3.cpp:(.text$_ZN4arma4gemvILb1ELb0ELb0EE15apply_blas_typeIdEEvPT_RKNS_3MatIS3_EEPKS3_S3_S3_[_ZN4arma4gemvILb1ELb0ELb0EE15apply_blas_typeIdEEvPT_RKNS_3MatIS3_EEPKS3_S3_S3_]+0x17d): undefined reference to `dgemv_' 
test_arma3.o:test_arma3.cpp:(.text$_ZN4arma27glue_times_redirect2_helperILb1EE5applyINS_2OpINS_3MatIdEENS_9op_htransEEES5_EEvRNS4_INT_9elem_typeEEERKNS_4GlueIS8_T0_NS_10glue_timesEEE[_ZN4arma27glue_times_redirect2_helperILb1EE5applyINS_2OpINS_3MatIdEENS_9op_htransEEES5_EEvRNS4_INT_9elem_typeEEERKNS_4GlueIS8_T0_NS_10glue_timesEEE]+0x37a): undefined reference to `dgemm_' test_arma3.o:test_arma3.cpp:(.text$_ZN4arma10op_diagvec5applyINS_2OpINS_4GlueINS2_INS_3MatIdEENS_9op_htransEEES5_NS_10glue_timesEEENS_6op_invEEEEEvRNS4_INT_9elem_typeEEERKNS2_ISC_S0_EE[_ZN4arma10op_diagvec5applyINS_2OpINS_4GlueINS2_INS_3MatIdEENS_9op_htransEEES5_NS_10glue_timesEEENS_6op_invEEEEEvRNS4_INT_9elem_typeEEERKNS2_ISC_S0_EE]+0x2c1): undefined reference to `dgetrf_' test_arma3.o:test_arma3.cpp:(.text$_ZN4arma10op_diagvec5applyINS_2OpINS_4GlueINS2_INS_3MatIdEENS_9op_htransEEES5_NS_10glue_timesEEENS_6op_invEEEEEvRNS4_INT_9elem_typeEEERKNS2_ISC_S0_EE[_ZN4arma10op_diagvec5applyINS_2OpINS_4GlueINS2_INS_3MatIdEENS_9op_htransEEES5_NS_10glue_timesEEENS_6op_invEEEEEvRNS4_INT_9elem_typeEEERKNS2_ISC_S0_EE]+0x322): undefined reference to `dgetri_' test_arma3.o:test_arma3.cpp:(.text$_ZN4arma10op_diagvec5applyINS_2OpINS_4GlueINS2_INS_3MatIdEENS_9op_htransEEES5_NS_10glue_timesEEENS_6op_invEEEEEvRNS4_INT_9elem_typeEEERKNS2_ISC_S0_EE[_ZN4arma10op_diagvec5applyINS_2OpINS_4GlueINS2_INS_3MatIdEENS_9op_htransEEES5_NS_10glue_timesEEENS_6op_invEEEEEvRNS4_INT_9elem_typeEEERKNS2_ISC_S0_EE]+0x398): undefined reference to `dgetri_' test_arma3.o:test_arma3.cpp:(.text$_ZN4arma10op_diagvec5applyINS_2OpINS_4GlueINS2_INS_3MatIdEENS_9op_htransEEES5_NS_10glue_timesEEENS_6op_invEEEEEvRNS4_INT_9elem_typeEEERKNS2_ISC_S0_EE[_ZN4arma10op_diagvec5applyINS_2OpINS_4GlueINS2_INS_3MatIdEENS_9op_htransEEES5_NS_10glue_timesEEENS_6op_invEEEEEvRNS4_INT_9elem_typeEEERKNS2_ISC_S0_EE]+0x775): undefined reference to `dgetrf_' test_arma3.o:test_arma3.cpp:(.text$_ZN4arma10op_diagvec5applyINS_2OpINS_4GlueINS2_INS_3MatIdEENS_9op_htransEEES5_NS_10glue_timesEEENS_6op_invEEEEEvRNS4_INT_9elem_typeEEERKNS2_ISC_S0_EE[_ZN4arma10op_diagvec5applyINS_2OpINS_4GlueINS2_INS_3MatIdEENS_9op_htransEEES5_NS_10glue_timesEEENS_6op_invEEEEEvRNS4_INT_9elem_typeEEERKNS2_ISC_S0_EE]+0x7d6): undefined reference to `dgetri_' test_arma3.o:test_arma3.cpp:(.text$_ZN4arma10op_diagvec5applyINS_2OpINS_4GlueINS2_INS_3MatIdEENS_9op_htransEEES5_NS_10glue_timesEEENS_6op_invEEEEEvRNS4_INT_9elem_typeEEERKNS2_ISC_S0_EE[_ZN4arma10op_diagvec5applyINS_2OpINS_4GlueINS2_INS_3MatIdEENS_9op_htransEEES5_NS_10glue_timesEEENS_6op_invEEEEEvRNS4_INT_9elem_typeEEERKNS2_ISC_S0_EE]+0x892): undefined reference to `dgetri_' collect2: ld returned 1 exit status ERROR: compilation failed for package 'NewPackage' * removing 'C:/Users/Howard/Documents/R/win-library/2.15/NewPackage' * restoring previous 'C:/Users/Howard/Documents/R/win-library/2.15/NewPackage' Exited with status 1.
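    The undefined dgemm_/dgesv_/dgels_ symbols are LAPACK/BLAS routines: sourceCpp() adds those link flags for you, but a package build only gets them if src/Makevars (and src/Makevars.win on Windows) asks for them. A hedged sketch along the lines of the RcppArmadillo skeleton package:

        ## src/Makevars (sketch) -- link against R's LAPACK, BLAS and Fortran runtime
        ## so the Armadillo-generated calls (dgemm_, dgesv_, dgels_, ...) resolve at link time.
        PKG_LIBS = $(shell "$(R_HOME)/bin/Rscript" -e "Rcpp:::LdFlags()") \
                   $(LAPACK_LIBS) $(BLAS_LIBS) $(FLIBS)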

    Read the article

  • Linux networking crash: best steps to find out the cause?

    - by Aron Rotteveel
    One of our Linux (CentOS) servers was unreachable last night. The server was not reachable in any way except for the remote console. After logging in with the remote console, it turned out I could not ping any outside hosts either. A simple service network restart solved the issue, but I am still wondering what could have caused this. My log files seem to indicate no error at all (except for the various daemons that need a network connection and failed after the network failure). Are there any additional steps I can take to find out the cause of this problem? EDIT: this just happened again. The server was completely unresponsive until I issued a networking service restart. Any advise is welcome. Could this be caused by a faulty hardware component? As per Madhatters request, here are some excerpts from the log at the time (the network crashed at 20:13): /var/log/messages: Dec 2 20:01:05 graviton kernel: Firewall: *TCP_IN Blocked* IN=eth0 OUT= MAC=<stripped> SRC=<stripped> DST=<stripped> LEN=40 TOS=0x00 PREC=0x00 TTL=101 ID=256 PROTO=TCP SPT=6000 DPT=3306 WINDOW=16384 RES=0x00 SYN URGP=0 Dec 2 20:01:05 graviton kernel: Firewall: *TCP_IN Blocked* IN=eth0 OUT= MAC=<stripped> SRC=<stripped> DST=<stripped> LEN=40 TOS=0x00 PREC=0x00 TTL=100 ID=256 PROTO=TCP SPT=6000 DPT=3306 WINDOW=16384 RES=0x00 SYN URGP=0 Dec 2 20:01:05 graviton kernel: Firewall: *TCP_IN Blocked* IN=eth0 OUT= MAC=<stripped> SRC=<stripped> DST=<stripped> LEN=40 TOS=0x00 PREC=0x00 TTL=101 ID=256 PROTO=TCP SPT=6000 DPT=3306 WINDOW=16384 RES=0x00 SYN URGP=0 Dec 2 20:13:34 graviton junglediskserver: Connection to gateway failed: xGatewayTransport - Connection to gateway failed. The first three messages are simple responses to iptables rules I have set up through the LFD firewall. The last message indicates that JungleDisk, which I use for backups can no longer connect to the gateway. Apart from this, there are no interesting messages around this time. EDIT 4 dec: as per Mattdm's request, here is the output of ethtool eth0: (Please not that these are the settings that currently work. If things go wrong again, I will be sure to post this again if necessary. 
Settings for eth0: Supported ports: [ TP ] Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full Supports auto-negotiation: Yes Advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full Advertised auto-negotiation: Yes Speed: 1000Mb/s Duplex: Full Port: Twisted Pair PHYAD: 1 Transceiver: internal Auto-negotiation: on Supports Wake-on: g Wake-on: d Link detected: yes As per Joris' request, here is also the output of route -n: aron@graviton [~]# route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface xx.xx.xx.58 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.42 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.43 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.41 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.46 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.47 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.44 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.45 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.50 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.51 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.48 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.49 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.54 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.52 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.53 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.0 0.0.0.0 255.255.255.192 U 0 0 0 eth0 xx.xx.xx.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0 0.0.0.0 xx.xx.xx.62 0.0.0.0 UG 0 0 0 eth0 The bottom xx.62 is my gateway. EDIT december 28th: the problem occurred again and I got the chance to compare some of the outputs of the above tests. What I found out is that arp -an returns an incomplete MAC address for my gateway (which is not under my control; the server is in a shared rack): During failure: ? (xx.xx.xx.62) at <incomplete> on eth0 After service network restart: ? (xx.xx.xx.62) at 00:00:0C:9F:F0:30 [ether] on eth0 Is this something I can fix or is it time for me to contact the data centre?
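    Given the <incomplete> ARP entry, a short checklist worth running from the remote console the next time it happens, before issuing service network restart (all standard tools; xx.xx.xx.62 is the gateway from the question):

        arp -an                        # is the gateway entry <incomplete> again?
        ip -s link show eth0           # RX/TX error and drop counters on the NIC
        ethtool eth0                   # link still detected? speed/duplex unchanged?
        dmesg | tail -n 50             # driver resets, link flaps, NIC errors
        arping -I eth0 xx.xx.xx.62     # does the gateway answer ARP requests at all?

    If arping gets no reply while the link is up and the error counters stay at zero, the fault is most likely on the switch or gateway side, which is a stronger case to raise with the data centre.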

    Read the article

  • MySQL 5.1.49 freezing every two days

    - by maximus
    Hi all, our mysql system is "freezing" every two days. By "freezing" i mean the following: it doesn't respond to ping we can't login with SSH we don't get any answer from MySQL there is no entry in the error logs! neither from linux neither from MySQL. we have already changed to a completely new hardware, we have the same problem, so it's definitely not a hardware problem. we do not have any other software installed except a firewall (iptables rule) we can restart the server from another server using rsyslog (www.rsyslog.com)(software reset) Could someone help me, by giving me some pointers what could i do to figure out the problem? I have included every detail about our settings. Thank you in advance for your help. Max. Our system parameters and settings: System-Memory: 12GB Processor: Intel 7-920 Quadcore Operating system: Debian 5 (lenny) 64bit MySQL 5.1.49 Databases: (a) a small phpbb forum (b) a 6GB database 3 tables with about 15 million rows my.cnf # # The MySQL database server configuration file. # # You can copy this to one of: # - "/etc/mysql/my.cnf" to set global options, # - "~/.my.cnf" to set user-specific options. # # One can use all long options that the program supports. # Run program with --help to get a list of available options and with # --print-defaults to see which it would actually understand and use. # # For explanations see # http://dev.mysql.com/doc/mysql/en/server-system-variables.html # This will be passed to all mysql clients # It has been reported that passwords should be enclosed with ticks/quotes # escpecially if they contain "#" chars... # Remember to edit /etc/mysql/debian.cnf when changing the socket location. [client] port = 3306 socket = /var/run/mysqld/mysqld.sock # Here is entries for some specific programs # The following values assume you have at least 32M ram # This was formally known as [safe_mysqld]. Both versions are currently parsed. [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] # # * Basic Settings # user = mysql pid-file = /var/run/mysqld/mysqld.pid socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp language = /usr/share/mysql/english skip-external-locking # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. bind-address = our-ip-address # # * Fine Tuning # key_buffer = 16M max_allowed_packet = 16M thread_stack = 256K thread_cache_size = 32 max_connections = 300 table_cache = 2048 #thread_concurrency = 4 # Used for InnoDB tables recommended to 50%-80% available memory innodb_buffer_pool_size = 6G # 20MB sometimes larger innodb_additional_mem_pool_size = 20M # 8M-16M is good for most situations innodb_log_buffer_size = 8M # Disable XA support because we do not use it innodb-support-xa = 0 # 1 is default wich is 100% secure but 2 offers better performance innodb_flush_log_at_trx_commit = 1 innodb_flush_method = O_DIRECT #innodb_thread_concurency = 8 # Recommended 64M - 512M depending on server size innodb_log_file_size = 512M # One file per table innodb_file_per_table # # * Query Cache Configuration # query_cache_limit = 1M query_cache_size = 16M #query_cache_type = 1 #query_cache_min_res_unit= 2K #join_buffer_size = 1M # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. # As of 5.1 you can enable the log at runtime! #general_log_file = /var/log/mysql/mysql.log #general_log = 1 # # Error logging goes to syslog. 
This is a Debian improvement :) # # Here you can see queries with especially long duration log_slow_queries = /var/log/mysql/mysql-slow.log long_query_time = 2 log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. #server-id = 1 log_bin = /var/log/mysql/mysql-bin.log # WARNING: Using expire_logs_days without bin_log crashes the server! See README.Debian! expire_logs_days = 10 max_binlog_size = 100M #binlog_do_db = include_database_name #binlog_ignore_db = include_database_name # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. # Read the manual for more InnoDB related options. There are many! # * InnoDB plugin # As of MySQL 5.1.38, the InnoDB plugin from Oracle is included in the MySQL source code. # It has many improvements and better performances than the built-in InnoDB storage engine. # Please read http://www.innodb.com/products/innodb_plugin/ for more information. # Uncommenting the two following lines to use the InnoDB plugin. ignore_builtin_innodb plugin-load=innodb=ha_innodb_plugin.so # # * Security Features # # Read the manual, too, if you want chroot! # chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". # # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem [mysqldump] quick quote-names max_allowed_packet = 16M [mysql] #no-auto-rehash # faster start of mysql but no tab completition [isamchk] key_buffer = 16M # # * NDB Cluster # # See /usr/share/doc/mysql-server-*/README.Debian for more information. # # The following configuration is read by the NDB Data Nodes (ndbd processes) # not from the NDB Management Nodes (ndb_mgmd processes). # # [MYSQL_CLUSTER] # ndb-connectstring=127.0.0.1 # # * IMPORTANT: Additional settings that can override those from this file! # !includedir /etc/mysql/conf.d/ UPDATE After installing sysstat and configuring it to collect data after every minute i have the following datas. I used sar to generate the following output: The log-file is too big so coudn't enter it here but uploaded to box.net. The link is http://www.box.net/shared/xc6rh7qqob SECOND UPDATE We started a ping command in the background, and that solved the problem. Now the server does work since more then a week. We still don't know what's the problem.
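    Because the whole box stops answering ping and SSH, this looks more like a kernel, NIC or memory event than a MySQL problem, and since nothing reaches the local logs it helps to get data off the machine or out of the sysstat archives. A hedged sketch; the sa12 file name, the time window and the 192.168.0.10 log host are placeholders:

        # Pull memory and load figures for the minutes before a freeze from the sysstat archives:
        sar -r -f /var/log/sysstat/sa12 -s 10:00:00 -e 10:30:00    # memory usage
        sar -q -f /var/log/sysstat/sa12 -s 10:00:00 -e 10:30:00    # run queue / load average

        # Send kernel messages to another host so a panic or NIC reset is not lost:
        modprobe netconsole netconsole=@/eth0,514@192.168.0.10/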

    Read the article

  • FFmpeg audio doesn't work in converted videos

    - by Juddy Swaft
    NOTICE: when i convert videos via terminal and download them from ftp into pc the audio works fine. I use: if($ext == "avi" && $convert_avi == true) { $convert_source = _VIDEOS_DIR_PATH.$new_name; $conv_name = substr(md5($file['name'].rand(1,888)), 2, 10).".mp4"; $converted_file = _VIDEOS_DIR_PATH.$conv_name; $ffmpeg_command = 'ffmpeg -i '.$convert_source.' -acodec libmp3lame -vcodec libx264 -s 1280x720 -ar 44100 -async 44100 -r 29.970 -ac 2 -qscale 5 '.$converted_file; echo exec($ffmpeg_command); $sql = "UPDATE pm_temp SET url = '".$conv_name."' WHERE url = '".$new_name."' LIMIT 1"; $result = @mysql_query($sql); unlink($convert_source); } This code to convert avi to mp4 ffmpeg concole output: root@1tb:~# ffmpeg -i sample.avi -acodec libmp3lame -vcodec libx264 -s 1280x720 -ar 44100 -async 44100 -r 29.970 -ac 2 -qscale 5 goodsample.mp4 ffmpeg version 0.7.15, Copyright (c) 2000-2013 the FFmpeg developers built on Feb 22 2013 07:18:58 with gcc 4.4.5 configuration: --enable-libdc1394 --prefix=/usr --extra-cflags='-Wall -g ' --cc='ccache cc' --enable-shared --enable-libmp3lame --enable-gpl --enable-libvorbis --enable-pthreads --enable-libfaac --enable-libxvid --enable-postproc --enable-x11grab --enable-libgsm --enable-libtheora --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libx264 --enable-libspeex --enable-nonfree --disable-stripping --enable-avfilter --enable-libdirac --disable-decoder=libdirac --enable-libfreetype --enable-libschroedinger --disable-encoder=libschroedinger - s libavutil 50. 43. 0 / 50. 43. 0 libavcodec 52.123. 0 / 52.123. 0 libavformat 52.111. 0 / 52.111. 0 libavdevice 52. 5. 0 / 52. 5. 0 libavfilter 1. 80. 0 / 1. 80. 0 libswscale 0. 14. 1 / 0. 14. 1 libpostproc 51. 2. 0 / 51. 2. 0 [mp3 @ 0x191d4100] Header missing [mpeg4 @ 0x191d1dc0] Invalid and inefficient vfw-avi packed B frames detected Input #0, avi, from 'sample.avi': Metadata: encoder : VirtualDubMod 1.5.10.2 (build 2540/release) Duration: 00:01:01.81, start: 0.000000, bitrate: 1194 kb/s Stream #0.0: Video: mpeg4, yuv420p, 640x352 [PAR 1:1 DAR 20:11], 23.98 tbr, Stream #0.1: Audio: mp3, 48000 Hz, stereo, s16, 128 kb/s [buffer @ 0x191d1c80] w:640 h:352 pixfmt:yuv420p tb:1/1000000 sar:1/1 sws_param: [scale @ 0x191d6880] w:640 h:352 fmt:yuv420p -> w:1280 h:720 fmt:yuv420p flags:0 [libx264 @ 0x191ce5a0] Default settings detected, using medium profile [libx264 @ 0x191ce5a0] using SAR=45/44 [libx264 @ 0x191ce5a0] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle S [libx264 @ 0x191ce5a0] profile High, level 3.1 [libx264 @ 0x191ce5a0] 264 - core 118 - H.264/MPEG-4 AVC codec - Copyleft 2003-2 6 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_off 1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_l Output #0, mp4, to 'goodsample.mp4': Metadata: encoder : Lavf52.111.0 Stream #0.0: Video: libx264, yuv420p, 1280x720 [PAR 45:44 DAR 20:11], q=2-31 Stream #0.1: Audio: libmp3lame, 44100 Hz, stereo, s16, 64 kb/s Stream mapping: Stream #0.0 -> #0.0 Stream #0.1 -> #0.1 Press [q] to stop, [?] 
for help [mp3 @ 0x191d4100] Header missing Error while decoding stream #0.1 [mpeg4 @ 0x191d1dc0] Invalid and inefficient vfw-avi packed B frames detected [mp3 @ 0x191d4100] incomplete frame 9467kB time=00:01:00.32 bitrate=1285.5kbits/ Error while decoding stream #0.1 frame= 1852 fps= 20 q=29.0 Lsize= 9652kB time=00:01:01.72 bitrate=1280.9kbits video:9121kB audio:483kB global headers:0kB muxing overhead 0.499688% frame I:11 Avg QP:16.78 size: 51456 [libx264 @ 0x191ce5a0] frame P:784 Avg QP:20.81 size: 8954 [libx264 @ 0x191ce5a0] frame B:1057 Avg QP:26.06 size: 1659 [libx264 @ 0x191ce5a0] consecutive B-frames: 22.0% 3.1% 7.5% 67.4% [libx264 @ 0x191ce5a0] mb I I16..4: 31.1% 59.8% 9.1% [libx264 @ 0x191ce5a0] mb P I16..4: 1.8% 2.6% 0.2% P16..4: 24.3% 7.0% 4.0 [libx264 @ 0x191ce5a0] mb B I16..4: 0.1% 0.1% 0.0% B16..8: 22.7% 0.8% 0.2 [libx264 @ 0x191ce5a0] 8x8 transform intra:57.0% inter:72.6% [libx264 @ 0x191ce5a0] coded y,uvDC,uvAC intra: 44.4% 33.3% 10.3% inter: 7.6% 5. [libx264 @ 0x191ce5a0] i16 v,h,dc,p: 68% 14% 8% 10% [libx264 @ 0x191ce5a0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 21% 14% 27% 5% 7% 7% 6 [libx264 @ 0x191ce5a0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 28% 14% 14% 6% 10% 9% 7 [libx264 @ 0x191ce5a0] i8c dc,h,v,p: 67% 13% 17% 3% [libx264 @ 0x191ce5a0] Weighted P-Frames: Y:1.9% UV:0.4% [libx264 @ 0x191ce5a0] ref P L0: 62.2% 12.8% 10.3% 14.5% 0.2% [libx264 @ 0x191ce5a0] ref B L0: 88.1% 5.5% 6.4% [libx264 @ 0x191ce5a0] ref B L1: 95.7% 4.3% [libx264 @ 0x191ce5a0] kb/s:1209.03 I know there is couple errors tough, but i dont know hot to fix it. Also i would be very thankfull if someone can help reduce video size but is not main problem video weights as original avi but sill.
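    The "Header missing" / "Error while decoding stream #0.1" lines mean this (rather old, 0.7-series) ffmpeg build is stumbling while decoding the source MP3 track, which is the most likely reason the converted file ends up silent. Two hedged things to try with the same build (file names are placeholders): copy the audio stream untouched, or re-encode it with libfaac, which the configure line shows is compiled in.

        # Keep the original MP3 track instead of decoding/re-encoding it:
        ffmpeg -i sample.avi -vcodec libx264 -s 1280x720 -qscale 5 -acodec copy out.mp4

        # Or re-encode the audio to AAC:
        ffmpeg -i sample.avi -vcodec libx264 -s 1280x720 -qscale 5 -acodec libfaac -ab 128k out.mp4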

    Read the article

  • Server Memory with Magento

    - by Mohamed Elgharabawy
    I have a cloud server with the following specifications: 2vCPUs 4G RAM 160GB Disk Space Network 400Mb/s System Image: Ubuntu 12.04 LTS I am only running Magento CE 1.7.0.2 on this server. Nothing else. Usually, the server has a loading time of 4-5 seconds. Recently, this has dropped to over 30 seconds and sometimes the server just goes away and I get HTTP error reports to my email stating that HTTP requests took more than 20000ms. Running top command and sorting them returns the following: top - 15:29:07 up 3:40, 1 user, load average: 28.59, 25.95, 22.91 Tasks: 112 total, 30 running, 82 sleeping, 0 stopped, 0 zombie Cpu(s): 90.2%us, 9.3%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.3%si, 0.2%st PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 31901 www-data 20 0 360m 71m 5840 R 7 1.8 1:39.51 apache2 32084 www-data 20 0 362m 72m 5548 R 7 1.8 1:31.56 apache2 32089 www-data 20 0 348m 59m 5660 R 7 1.5 1:41.74 apache2 32295 www-data 20 0 343m 54m 5532 R 7 1.4 2:00.78 apache2 32303 www-data 20 0 354m 65m 5260 R 7 1.6 1:38.76 apache2 32304 www-data 20 0 346m 56m 5544 R 7 1.4 1:41.26 apache2 32305 www-data 20 0 348m 59m 5640 R 7 1.5 1:50.11 apache2 32291 www-data 20 0 358m 69m 5256 R 6 1.7 1:44.26 apache2 32517 www-data 20 0 345m 56m 5532 R 6 1.4 1:45.56 apache2 30473 www-data 20 0 355m 66m 5680 R 6 1.7 2:00.05 apache2 32093 www-data 20 0 352m 63m 5848 R 6 1.6 1:53.23 apache2 32302 www-data 20 0 345m 56m 5512 R 6 1.4 1:55.87 apache2 32433 www-data 20 0 346m 57m 5500 S 6 1.4 1:31.58 apache2 32638 www-data 20 0 354m 65m 5508 R 6 1.6 1:36.59 apache2 32230 www-data 20 0 347m 57m 5524 R 6 1.4 1:33.96 apache2 32231 www-data 20 0 355m 66m 5512 R 6 1.7 1:37.47 apache2 32233 www-data 20 0 354m 64m 6032 R 6 1.6 1:59.74 apache2 32300 www-data 20 0 355m 66m 5672 R 6 1.7 1:43.76 apache2 32510 www-data 20 0 347m 58m 5512 R 6 1.5 1:42.54 apache2 32521 www-data 20 0 348m 59m 5508 R 6 1.5 1:47.99 apache2 32639 www-data 20 0 344m 55m 5512 R 6 1.4 1:34.25 apache2 32083 www-data 20 0 345m 56m 5696 R 5 1.4 1:59.42 apache2 32085 www-data 20 0 347m 58m 5692 R 5 1.5 1:42.29 apache2 32293 www-data 20 0 353m 64m 5676 R 5 1.6 1:52.73 apache2 32301 www-data 20 0 348m 59m 5564 R 5 1.5 1:49.63 apache2 32528 www-data 20 0 351m 62m 5520 R 5 1.6 1:36.11 apache2 31523 mysql 20 0 3460m 576m 8288 S 5 14.4 2:06.91 mysqld 32002 www-data 20 0 345m 55m 5512 R 5 1.4 2:01.88 apache2 32080 www-data 20 0 357m 68m 5512 S 5 1.7 1:31.30 apache2 32163 www-data 20 0 347m 58m 5512 S 5 1.5 1:58.68 apache2 32509 www-data 20 0 345m 56m 5504 R 5 1.4 1:49.54 apache2 32306 www-data 20 0 358m 68m 5504 S 4 1.7 1:53.29 apache2 32165 www-data 20 0 344m 55m 5524 S 4 1.4 1:40.71 apache2 32640 www-data 20 0 345m 56m 5528 R 4 1.4 1:36.49 apache2 31888 www-data 20 0 359m 70m 5664 R 4 1.8 1:57.07 apache2 32511 www-data 20 0 357m 67m 5512 S 3 1.7 1:47.00 apache2 32054 www-data 20 0 357m 68m 5660 S 2 1.7 1:53.10 apache2 1 root 20 0 24452 2276 1232 S 0 0.1 0:01.58 init Moreover, running free -m returns the following: total used free shared buffers cached Mem: 4003 3919 83 0 118 901 -/+ buffers/cache: 2899 1103 Swap: 0 0 0 To investigate this further, I have installed apache buddy, it recommeneded that I need to reduce the maxclient connections. Which I did. I also installed MysqlTuner and it suggests that I need to set my innodb_buffer_pool_size to = 3.0G. However, I cannot do that, since the whole memory is 4G. 
Here is the output from apache buddy: ### GENERAL REPORT ### Settings considered for this report: Your server's physical RAM: 4003MB Apache's MaxClients directive: 40 Apache MPM Model: prefork Largest Apache process (by memory): 73.77MB [ OK ] Your MaxClients setting is within an acceptable range. Max potential memory usage: 2950.8 MB Percentage of RAM allocated to Apache 73.72 % And this is the output of MySQLTuner: -------- Performance Metrics ------------------------------------------------- [--] Up for: 47m 22s (675K q [237.552 qps], 12K conn, TX: 1B, RX: 300M) [--] Reads / Writes: 45% / 55% [--] Total buffers: 2.1G global + 2.7M per thread (151 max threads) [OK] Maximum possible memory usage: 2.5G (64% of installed RAM) [OK] Slow queries: 0% (0/675K) [OK] Highest usage of available connections: 26% (40/151) [OK] Key buffer size / total MyISAM indexes: 36.0M/18.7M [OK] Key buffer hit rate: 100.0% (245K cached / 105 reads) [OK] Query cache efficiency: 92.5% (500K cached / 541K selects) [!!] Query cache prunes per day: 302886 [OK] Sorts requiring temporary tables: 0% (1 temp sorts / 15K sorts) [!!] Joins performed without indexes: 12135 [OK] Temporary tables created on disk: 25% (8K on disk / 32K total) [OK] Thread cache hit rate: 90% (1K created / 12K connections) [!!] Table cache hit rate: 17% (400 open / 2K opened) [OK] Open file limit used: 12% (123/1K) [OK] Table locks acquired immediately: 100% (196K immediate / 196K locks) [!!] InnoDB buffer pool / data size: 2.0G/3.5G [OK] InnoDB log waits: 0 -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance MySQL started within last 24 hours - recommendations may be inaccurate Enable the slow query log to troubleshoot bad queries Adjust your join queries to always utilize indexes Increase table_cache gradually to avoid file descriptor limits Read this before increasing table_cache over 64: http://bit.ly/1mi7c4C Variables to adjust: query_cache_size ( 64M) join_buffer_size ( 128.0K, or always use indexes with joins) table_cache ( 400) innodb_buffer_pool_size (= 3G) Last but not least, the server still has more than 60% of free disk space. Now, based on the above, I have few questions: Are these numbers normal? Do they make sense? Do I need to upgrade the server? If I don't need to upgrade and my configuration is not correct, how do I optimize it?
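    The core issue is that Apache and MySQL are each sized as if they owned the whole 4 GB: MaxClients 40 at roughly 74 MB per child is about 2.9 GB of potential Apache use, and MySQLTuner reports up to about 2.5 GB for MySQL, so under load the two together overcommit physical RAM. A hedged budget sketch; the concrete numbers are assumptions to adjust, not a tuned recommendation:

        # Rough split for a 4 GB box running only Apache (prefork) + MySQL:
        #   OS / page cache reserve                          ~0.5 GB
        #   Apache: MaxClients x 74 MB  -> 20 x 74 MB        ~1.5 GB
        #   InnoDB buffer pool                               ~1.5 GB
        #
        # /etc/mysql/my.cnf  (assumed value)
        innodb_buffer_pool_size = 1536M
        #
        # Apache prefork settings  (assumed values)
        ServerLimit  20
        MaxClients   20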

    Read the article

  • puppet master REST API returns 403 when running under Passenger but works when the master runs from the command line

    - by Anadi Misra
    I am using the standard auth.conf provided in puppet install for the puppet master which is running through passenger under Nginx. However for most of the catalog, files and certitifcate request I get a 403 response. ### Authenticated paths - these apply only when the client ### has a valid certificate and is thus authenticated # allow nodes to retrieve their own catalog path ~ ^/catalog/([^/]+)$ method find allow $1 # allow nodes to retrieve their own node definition path ~ ^/node/([^/]+)$ method find allow $1 # allow all nodes to access the certificates services path ~ ^/certificate_revocation_list/ca method find allow * # allow all nodes to store their reports path /report method save allow * # unconditionally allow access to all file services # which means in practice that fileserver.conf will # still be used path /file allow * ### Unauthenticated ACL, for clients for which the current master doesn't ### have a valid certificate; we allow authenticated users, too, because ### there isn't a great harm in letting that request through. # allow access to the master CA path /certificate/ca auth any method find allow * path /certificate/ auth any method find allow * path /certificate_request auth any method find, save allow * path /facts auth any method find, search allow * # this one is not stricly necessary, but it has the merit # of showing the default policy, which is deny everything else path / auth any Puppet master however does not seems to be following this as I get this error on client [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com [sudo] password for amisr1: Starting Puppet client version 3.0.1 Warning: Unable to fetch my node definition, but the agent run will continue: Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110 Info: Retrieving plugin Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110 Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110 Using cached catalog Error: Could not retrieve catalog; skipping run Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110 and the server logs show XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? 
HTTP/1.1" 403 93 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby" thefile server conf file is as follows (and goin by what they say on puppet site, It is better to regulate access in auth.conf for reaching file server and then allow file server to server all) [files] path /apps/puppet/files allow * [private] path /apps/puppet/private/%H allow * [modules] allow * I am using server and client version 3 Nginx has been compiled using the following options nginx version: nginx/1.3.9 built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) TLS SNI support enabled configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/ and the standard nginx puppet master conf server { ssl on; listen 8140 ssl; server_name _; passenger_enabled on; passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn; passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify; passenger_min_instances 5; access_log logs/puppet_access.log; error_log logs/puppet_error.log; root /apps/nginx/html/rack/public; ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem; ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem; ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem; ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem; ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA; ssl_prefer_server_ciphers on; ssl_verify_client optional; ssl_verify_depth 1; ssl_session_cache shared:SSL:128m; ssl_session_timeout 5m; } Puppet is picking up the correct settings from the files mentioned because config print command points to /etc/puppet [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf async_storeconfigs = false authconfig = /etc/puppet/namespaceauth.conf autosign = /etc/puppet/autosign.conf catalog_cache_terminus = store_configs confdir = /etc/puppet config = /etc/puppet/puppet.conf config_file_name = puppet.conf config_version = "" configprint = all configtimeout = 120 dblocation = /var/lib/puppet/state/clientconfigs.sqlite3 deviceconfig = /etc/puppet/device.conf fileserverconfig = /etc/puppet/fileserver.conf genconfig = false hiera_config = /etc/puppet/hiera.yaml localconfig = /var/lib/puppet/state/localconfig name = config rest_authconfig = /etc/puppet/auth.conf storeconfigs = true storeconfigs_backend = puppetdb tagmap = /etc/puppet/tagmail.conf thin_storeconfigs = false I checked the firewall rules on this VM; 80, 443, 8140, 3000 are allowed. Do I still have to tweak any specifics to auth.conf for getting this to work?

    Read the article

< Previous Page | 317 318 319 320 321 322 323 324 325 326 327 328  | Next Page >